problem_id (stringlengths 18-21) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-54) | prompt (stringlengths 1.28k-64.2k) | golden_diff (stringlengths 166-811) | verification_info (stringlengths 604-118k) |
---|---|---|---|---|---|---|
gh_patches_debug_0 | rasdani/github-patches | git_diff | conda-forge__conda-smithy-864 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Autogenerated README.md missing final newline
## The Problem
As I've confirmed is the case on multiple repos here, including our own ``spyder-feedstock`` and ``spyder-kernels-feedstock`` as well as two arbitrary conda-forge repos I checked conda-forge, the last line in README.md lacks a terminating newline (LF/``x0D``), and is thus ill-formed. I'd be happy to submit a PR to fix it since I imagine it is probably pretty trivial, if someone more knowlegable than me can let me know how to approach it.
## Proposed Solutions
A naive hack would seem to be just writing an additional ``\n`` [here](https://github.com/conda-forge/conda-smithy/blob/855f23bb96efb1cbdbdc5e60dfb9bbdd3e142d31/conda_smithy/configure_feedstock.py#L718), but editing the [template ](https://github.com/conda-forge/conda-smithy/blob/master/conda_smithy/templates/README.md.tmpl) would seem to make far more sense. However, the template *has* a trailing newline and hasn't been edited in a while, so not sure what's going onβis it not writing the last one; is it getting stripped, or what?
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_smithy/vendored/__init__.py`
Content:
```
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_smithy/vendored/__init__.py b/conda_smithy/vendored/__init__.py
--- a/conda_smithy/vendored/__init__.py
+++ b/conda_smithy/vendored/__init__.py
@@ -0,0 +1 @@
+
| {"golden_diff": "diff --git a/conda_smithy/vendored/__init__.py b/conda_smithy/vendored/__init__.py\n--- a/conda_smithy/vendored/__init__.py\n+++ b/conda_smithy/vendored/__init__.py\n@@ -0,0 +1 @@\n+\n", "issue": "Autogenerated README.md missing final newline\n## The Problem\r\n\r\nAs I've confirmed is the case on multiple repos here, including our own ``spyder-feedstock`` and ``spyder-kernels-feedstock`` as well as two arbitrary conda-forge repos I checked conda-forge, the last line in README.md lacks a terminating newline (LF/``x0D``), and is thus ill-formed. I'd be happy to submit a PR to fix it since I imagine it is probably pretty trivial, if someone more knowlegable than me can let me know how to approach it. \r\n\r\n## Proposed Solutions\r\n\r\nA naive hack would seem to be just writing an additional ``\\n`` [here](https://github.com/conda-forge/conda-smithy/blob/855f23bb96efb1cbdbdc5e60dfb9bbdd3e142d31/conda_smithy/configure_feedstock.py#L718), but editing the [template ](https://github.com/conda-forge/conda-smithy/blob/master/conda_smithy/templates/README.md.tmpl) would seem to make far more sense. However, the template *has* a trailing newline and hasn't been edited in a while, so not sure what's going on\u2014is it not writing the last one; is it getting stripped, or what?\r\n\r\nThanks!\n", "before_files": [{"content": "", "path": "conda_smithy/vendored/__init__.py"}], "after_files": [{"content": "\n", "path": "conda_smithy/vendored/__init__.py"}]} |
gh_patches_debug_1 | rasdani/github-patches | git_diff | microsoft__AzureTRE-1754 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release version 0.3
## Description
As a TRE developer
I want to release current code base as version 0.3
So that people can use a more stable version going forward
## Acceptance criteria
- [ ] All core apps are bumped to 0.3
- [ ] All bundles are bumped to 0.3
- [ ] A tag is created
- [ ] A release is created
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api_app/_version.py`
Content:
```
1 __version__ = "0.2.28"
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/api_app/_version.py b/api_app/_version.py
--- a/api_app/_version.py
+++ b/api_app/_version.py
@@ -1 +1 @@
-__version__ = "0.2.28"
+__version__ = "0.3.0"
| {"golden_diff": "diff --git a/api_app/_version.py b/api_app/_version.py\n--- a/api_app/_version.py\n+++ b/api_app/_version.py\n@@ -1 +1 @@\n-__version__ = \"0.2.28\"\n+__version__ = \"0.3.0\"\n", "issue": "Release version 0.3\n## Description\r\n\r\nAs a TRE developer \r\nI want to release current code base as version 0.3\r\nSo that people can use a more stable version going forward\r\n\r\n## Acceptance criteria\r\n\r\n- [ ] All core apps are bumped to 0.3\r\n- [ ] All bundles are bumped to 0.3\r\n- [ ] A tag is created\r\n- [ ] A release is created\r\n\n", "before_files": [{"content": "__version__ = \"0.2.28\"\n", "path": "api_app/_version.py"}], "after_files": [{"content": "__version__ = \"0.3.0\"\n", "path": "api_app/_version.py"}]} |
gh_patches_debug_2 | rasdani/github-patches | git_diff | OpenEnergyPlatform__oeplatform-1475 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scenario bundles: Output datasets render weirdly
## Description of the issue
I added an output dataset for the WAM scenario for this factsheet: https://openenergy-platform.org/scenario-bundles/id/95a65aca-6915-b64a-cac7-3831c12885b4

It reads wrongly and shows more than only the title of the dataset, i.e. it should only be rendered as: Rahmendaten fΓΌr den Projektionsbericht 2023 (Datentabelle) - as it does for the WEM scenario (this was already existing before the new release).
## Steps to Reproduce
1. Add a dataset to a scenario
2.
3.
## Ideas of solution
Describe possible ideas for solution and evaluate advantages and disadvantages.
## Context and Environment
* Version used:
* Operating system:
* Environment setup and (python) version:
## Workflow checklist
- [ ] I am aware of the workflow in [CONTRIBUTING.md](https://github.com/OpenEnergyPlatform/oeplatform/blob/develop/CONTRIBUTING.md)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `oeplatform/__init__.py`
Content:
```
1 __version__ = "0.16.1"
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/oeplatform/__init__.py b/oeplatform/__init__.py
--- a/oeplatform/__init__.py
+++ b/oeplatform/__init__.py
@@ -1 +1 @@
-__version__ = "0.16.1"
+__version__ = "0.16.2"
| {"golden_diff": "diff --git a/oeplatform/__init__.py b/oeplatform/__init__.py\n--- a/oeplatform/__init__.py\n+++ b/oeplatform/__init__.py\n@@ -1 +1 @@\n-__version__ = \"0.16.1\"\n+__version__ = \"0.16.2\"\n", "issue": "Scenario bundles: Output datasets render weirdly\n## Description of the issue\r\n\r\nI added an output dataset for the WAM scenario for this factsheet: https://openenergy-platform.org/scenario-bundles/id/95a65aca-6915-b64a-cac7-3831c12885b4\r\n\r\n\r\n\r\nIt reads wrongly and shows more than only the title of the dataset, i.e. it should only be rendered as: Rahmendaten f\u00fcr den Projektionsbericht 2023 (Datentabelle) - as it does for the WEM scenario (this was already existing before the new release). \r\n\r\n\r\n## Steps to Reproduce\r\n1. Add a dataset to a scenario\r\n2.\r\n3.\r\n\r\n## Ideas of solution\r\n\r\nDescribe possible ideas for solution and evaluate advantages and disadvantages.\r\n\r\n## Context and Environment\r\n* Version used: \r\n* Operating system: \r\n* Environment setup and (python) version: \r\n\r\n## Workflow checklist\r\n- [ ] I am aware of the workflow in [CONTRIBUTING.md](https://github.com/OpenEnergyPlatform/oeplatform/blob/develop/CONTRIBUTING.md)\r\n\n", "before_files": [{"content": "__version__ = \"0.16.1\"\n", "path": "oeplatform/__init__.py"}], "after_files": [{"content": "__version__ = \"0.16.2\"\n", "path": "oeplatform/__init__.py"}]} |
gh_patches_debug_3 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1655 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Main Nav changes to accomodate "Feedback" button
Updating the action based on discussions:
1. Remove `Contact` from main nav.
2. Between `About` and `Submit Data`, add a button `Feedback`.
3. While you are there, change `Submit Data` to `Share Data` (there's a later issue for that which this will close)
Button style should be the same as the "Follow Us" button here, except gray, not blue: http://docs.hdx.rwlabs.org/get-involved/
Note that the megaphone icon shown below will not be used. No icon on the button.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py`
Content:
```
1 hdx_version = 'v0.4.5'
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version = 'v0.4.5'
+hdx_version = 'v0.4.6'
| {"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version = 'v0.4.5'\n+hdx_version = 'v0.4.6'\n", "issue": "Main Nav changes to accomodate \"Feedback\" button\nUpdating the action based on discussions:\n1. Remove `Contact` from main nav. \n2. Between `About` and `Submit Data`, add a button `Feedback`. \n3. While you are there, change `Submit Data` to `Share Data` (there's a later issue for that which this will close)\n\nButton style should be the same as the \"Follow Us\" button here, except gray, not blue: http://docs.hdx.rwlabs.org/get-involved/\n\nNote that the megaphone icon shown below will not be used. No icon on the button.\n\n", "before_files": [{"content": "hdx_version = 'v0.4.5'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}], "after_files": [{"content": "hdx_version = 'v0.4.6'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]} |
gh_patches_debug_4 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-2076 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Login page: change spacing on left panel
The spacing in the left panel is odd. Change to something like the below:

Note, this will stay in backlog for now as we may want to revise this page to align with the Frog design.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py`
Content:
```
1 hdx_version = 'v0.5.13'
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version = 'v0.5.13'
+hdx_version = 'v0.5.15'
| {"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version = 'v0.5.13'\n+hdx_version = 'v0.5.15'\n", "issue": "Login page: change spacing on left panel \nThe spacing in the left panel is odd. Change to something like the below: \n\n\n\nNote, this will stay in backlog for now as we may want to revise this page to align with the Frog design.\n\n", "before_files": [{"content": "hdx_version = 'v0.5.13'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}], "after_files": [{"content": "hdx_version = 'v0.5.15'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]} |
gh_patches_debug_5 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-1463 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No module named 'elasticdl.python.elasticdl.layers' on master
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/elasticdl/python/master/main.py", line 28, in <module>
from elasticdl.python.elasticdl.layers.embedding import Embedding
ModuleNotFoundError: No module named 'elasticdl.python.elasticdl.layers'
```
Seems `layers` directory is not installed to `/usr/local/lib/python3.7/site-packages/elasticdl-develop-py3.7.egg/elasticdl/python/elasticdl` after running `python setup.py install`
Steps to reproduce:
1. In a Python Docker container, clone ElasticDL and run `python setup.py install`
1. remove the cloned source
1. execute a demo job by: `elasticdl train ...`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/elasticdl/__init__.py`
Content:
```
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/python/elasticdl/__init__.py b/elasticdl/python/elasticdl/__init__.py
--- a/elasticdl/python/elasticdl/__init__.py
+++ b/elasticdl/python/elasticdl/__init__.py
@@ -0,0 +1 @@
+from elasticdl.python.elasticdl import layers # noqa: F401
| {"golden_diff": "diff --git a/elasticdl/python/elasticdl/__init__.py b/elasticdl/python/elasticdl/__init__.py\n--- a/elasticdl/python/elasticdl/__init__.py\n+++ b/elasticdl/python/elasticdl/__init__.py\n@@ -0,0 +1 @@\n+from elasticdl.python.elasticdl import layers # noqa: F401\n", "issue": "No module named 'elasticdl.python.elasticdl.layers' on master\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/local/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/elasticdl/python/master/main.py\", line 28, in <module>\r\n from elasticdl.python.elasticdl.layers.embedding import Embedding\r\nModuleNotFoundError: No module named 'elasticdl.python.elasticdl.layers'\r\n```\r\n\r\nSeems `layers` directory is not installed to `/usr/local/lib/python3.7/site-packages/elasticdl-develop-py3.7.egg/elasticdl/python/elasticdl` after running `python setup.py install`\r\n\r\nSteps to reproduce:\r\n\r\n1. In a Python Docker container, clone ElasticDL and run `python setup.py install`\r\n1. remove the cloned source\r\n1. execute a demo job by: `elasticdl train ...`\n", "before_files": [{"content": "", "path": "elasticdl/python/elasticdl/__init__.py"}], "after_files": [{"content": "from elasticdl.python.elasticdl import layers # noqa: F401\n", "path": "elasticdl/python/elasticdl/__init__.py"}]} |
gh_patches_debug_6 | rasdani/github-patches | git_diff | codespell-project__codespell-2626 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`python setup.py check` β `twine check`
Because `setup.py ...` is deprecated, we need an alternative to `setup.py check` such as `twine`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python
2
3 from setuptools import setup
4
5 if __name__ == "__main__":
6 setup()
7
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
deleted file mode 100755
--- a/setup.py
+++ /dev/null
@@ -1,6 +0,0 @@
-#! /usr/bin/env python
-
-from setuptools import setup
-
-if __name__ == "__main__":
- setup()
| {"golden_diff": "diff --git a/setup.py b/setup.py\ndeleted file mode 100755\n--- a/setup.py\n+++ /dev/null\n@@ -1,6 +0,0 @@\n-#! /usr/bin/env python\n-\n-from setuptools import setup\n-\n-if __name__ == \"__main__\":\n- setup()\n", "issue": "`python setup.py check` \u2192 `twine check`\nBecause `setup.py ...` is deprecated, we need an alternative to `setup.py check` such as `twine`.\n", "before_files": [{"content": "#! /usr/bin/env python\n\nfrom setuptools import setup\n\nif __name__ == \"__main__\":\n setup()\n", "path": "setup.py"}], "after_files": [{"content": null, "path": "setup.py"}]} |
gh_patches_debug_7 | rasdani/github-patches | git_diff | django__channels-1860 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Current version on Pypi is incompatible with Django 4.0+
The current version available on Pypi, channels-3.0.4 still use the providing_args keyword for signals.
Since this keyword has been removed from Django 4.0+ (December 2021), you cannot use channels with an up-to-date Django.
Is a version 3.0.5 planned for Pypi?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `channels/__init__.py`
Content:
```
1 __version__ = "3.0.4"
2
3 try:
4 import django
5
6 if django.VERSION < (3, 2):
7 default_app_config = "channels.apps.ChannelsConfig"
8 except ModuleNotFoundError:
9 pass
10
11 DEFAULT_CHANNEL_LAYER = "default"
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/channels/__init__.py b/channels/__init__.py
--- a/channels/__init__.py
+++ b/channels/__init__.py
@@ -1,4 +1,4 @@
-__version__ = "3.0.4"
+__version__ = "3.0.5"
try:
import django
| {"golden_diff": "diff --git a/channels/__init__.py b/channels/__init__.py\n--- a/channels/__init__.py\n+++ b/channels/__init__.py\n@@ -1,4 +1,4 @@\n-__version__ = \"3.0.4\"\n+__version__ = \"3.0.5\"\n \n try:\n import django\n", "issue": "Current version on Pypi is incompatible with Django 4.0+\nThe current version available on Pypi, channels-3.0.4 still use the providing_args keyword for signals.\r\nSince this keyword has been removed from Django 4.0+ (December 2021), you cannot use channels with an up-to-date Django.\r\n\r\nIs a version 3.0.5 planned for Pypi? \n", "before_files": [{"content": "__version__ = \"3.0.4\"\n\ntry:\n import django\n\n if django.VERSION < (3, 2):\n default_app_config = \"channels.apps.ChannelsConfig\"\nexcept ModuleNotFoundError:\n pass\n\nDEFAULT_CHANNEL_LAYER = \"default\"\n", "path": "channels/__init__.py"}], "after_files": [{"content": "__version__ = \"3.0.5\"\n\ntry:\n import django\n\n if django.VERSION < (3, 2):\n default_app_config = \"channels.apps.ChannelsConfig\"\nexcept ModuleNotFoundError:\n pass\n\nDEFAULT_CHANNEL_LAYER = \"default\"\n", "path": "channels/__init__.py"}]} |
gh_patches_debug_8 | rasdani/github-patches | git_diff | MongoEngine__mongoengine-2224 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
New release
Hi,
When is coming new release, because I can't update to mongodb 4.2 because of this: https://github.com/MongoEngine/mongoengine/pull/2160/commits/47f8a126ca167cb8fe020e3cc5604b155dfcdebc.
Thanks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mongoengine/__init__.py`
Content:
```
1 # Import submodules so that we can expose their __all__
2 from mongoengine import connection
3 from mongoengine import document
4 from mongoengine import errors
5 from mongoengine import fields
6 from mongoengine import queryset
7 from mongoengine import signals
8
9 # Import everything from each submodule so that it can be accessed via
10 # mongoengine, e.g. instead of `from mongoengine.connection import connect`,
11 # users can simply use `from mongoengine import connect`, or even
12 # `from mongoengine import *` and then `connect('testdb')`.
13 from mongoengine.connection import *
14 from mongoengine.document import *
15 from mongoengine.errors import *
16 from mongoengine.fields import *
17 from mongoengine.queryset import *
18 from mongoengine.signals import *
19
20
21 __all__ = (
22 list(document.__all__)
23 + list(fields.__all__)
24 + list(connection.__all__)
25 + list(queryset.__all__)
26 + list(signals.__all__)
27 + list(errors.__all__)
28 )
29
30
31 VERSION = (0, 18, 2)
32
33
34 def get_version():
35 """Return the VERSION as a string.
36
37 For example, if `VERSION == (0, 10, 7)`, return '0.10.7'.
38 """
39 return ".".join(map(str, VERSION))
40
41
42 __version__ = get_version()
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mongoengine/__init__.py b/mongoengine/__init__.py
--- a/mongoengine/__init__.py
+++ b/mongoengine/__init__.py
@@ -28,7 +28,7 @@
)
-VERSION = (0, 18, 2)
+VERSION = (0, 19, 0)
def get_version():
| {"golden_diff": "diff --git a/mongoengine/__init__.py b/mongoengine/__init__.py\n--- a/mongoengine/__init__.py\n+++ b/mongoengine/__init__.py\n@@ -28,7 +28,7 @@\n )\n \n \n-VERSION = (0, 18, 2)\n+VERSION = (0, 19, 0)\n \n \n def get_version():\n", "issue": "New release\nHi,\r\n\r\nWhen is coming new release, because I can't update to mongodb 4.2 because of this: https://github.com/MongoEngine/mongoengine/pull/2160/commits/47f8a126ca167cb8fe020e3cc5604b155dfcdebc.\r\n\r\nThanks\n", "before_files": [{"content": "# Import submodules so that we can expose their __all__\nfrom mongoengine import connection\nfrom mongoengine import document\nfrom mongoengine import errors\nfrom mongoengine import fields\nfrom mongoengine import queryset\nfrom mongoengine import signals\n\n# Import everything from each submodule so that it can be accessed via\n# mongoengine, e.g. instead of `from mongoengine.connection import connect`,\n# users can simply use `from mongoengine import connect`, or even\n# `from mongoengine import *` and then `connect('testdb')`.\nfrom mongoengine.connection import *\nfrom mongoengine.document import *\nfrom mongoengine.errors import *\nfrom mongoengine.fields import *\nfrom mongoengine.queryset import *\nfrom mongoengine.signals import *\n\n\n__all__ = (\n list(document.__all__)\n + list(fields.__all__)\n + list(connection.__all__)\n + list(queryset.__all__)\n + list(signals.__all__)\n + list(errors.__all__)\n)\n\n\nVERSION = (0, 18, 2)\n\n\ndef get_version():\n \"\"\"Return the VERSION as a string.\n\n For example, if `VERSION == (0, 10, 7)`, return '0.10.7'.\n \"\"\"\n return \".\".join(map(str, VERSION))\n\n\n__version__ = get_version()\n", "path": "mongoengine/__init__.py"}], "after_files": [{"content": "# Import submodules so that we can expose their __all__\nfrom mongoengine import connection\nfrom mongoengine import document\nfrom mongoengine import errors\nfrom mongoengine import fields\nfrom mongoengine import queryset\nfrom mongoengine import signals\n\n# Import everything from each submodule so that it can be accessed via\n# mongoengine, e.g. instead of `from mongoengine.connection import connect`,\n# users can simply use `from mongoengine import connect`, or even\n# `from mongoengine import *` and then `connect('testdb')`.\nfrom mongoengine.connection import *\nfrom mongoengine.document import *\nfrom mongoengine.errors import *\nfrom mongoengine.fields import *\nfrom mongoengine.queryset import *\nfrom mongoengine.signals import *\n\n\n__all__ = (\n list(document.__all__)\n + list(fields.__all__)\n + list(connection.__all__)\n + list(queryset.__all__)\n + list(signals.__all__)\n + list(errors.__all__)\n)\n\n\nVERSION = (0, 19, 0)\n\n\ndef get_version():\n \"\"\"Return the VERSION as a string.\n\n For example, if `VERSION == (0, 10, 7)`, return '0.10.7'.\n \"\"\"\n return \".\".join(map(str, VERSION))\n\n\n__version__ = get_version()\n", "path": "mongoengine/__init__.py"}]} |
gh_patches_debug_9 | rasdani/github-patches | git_diff | scikit-image__scikit-image-6307 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Undefined names in Python code found with flake8
## Description
## Way to reproduce
[flake8](http://flake8.pycqa.org) testing of https://github.com/scikit-image/scikit-image on Python 3.7.1
$ __flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics__
```
./skimage/measure/mc_meta/createluts.py:139:18: F821 undefined name 'luts'
for a in dir(luts):
^
./doc/ext/notebook_doc.py:1:1: F822 undefined name 'python_to_notebook' in __all__
__all__ = ['python_to_notebook', 'Notebook']
^
1 F821 undefined name 'luts'
1 F822 undefined name 'python_to_notebook' in __all__
2
```
__E901,E999,F821,F822,F823__ are the "_showstopper_" [flake8](http://flake8.pycqa.org) issues that can halt the runtime with a SyntaxError, NameError, etc. These 5 are different from most other flake8 issues which are merely "style violations" -- useful for readability but they do not effect runtime safety.
* F821: undefined name `name`
* F822: undefined name `name` in `__all__`
* F823: local variable name referenced before assignment
* E901: SyntaxError or IndentationError
* E999: SyntaxError -- failed to compile a file into an Abstract Syntax Tree
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doc/ext/notebook_doc.py`
Content:
```
1 __all__ = ['python_to_notebook', 'Notebook']
2
3 import json
4 import copy
5 import warnings
6
7
8 # Skeleton notebook in JSON format
9 skeleton_nb = """{
10 "metadata": {
11 "name":""
12 },
13 "nbformat": 3,
14 "nbformat_minor": 0,
15 "worksheets": [
16 {
17 "cells": [
18 {
19 "cell_type": "code",
20 "collapsed": false,
21 "input": [
22 "%matplotlib inline"
23 ],
24 "language": "python",
25 "metadata": {},
26 "outputs": []
27 }
28 ],
29 "metadata": {}
30 }
31 ]
32 }"""
33
34
35 class Notebook(object):
36 """
37 Notebook object for building an IPython notebook cell-by-cell.
38 """
39
40 def __init__(self):
41 # cell type code
42 self.cell_code = {
43 'cell_type': 'code',
44 'collapsed': False,
45 'input': [
46 '# Code Goes Here'
47 ],
48 'language': 'python',
49 'metadata': {},
50 'outputs': []
51 }
52
53 # cell type markdown
54 self.cell_md = {
55 'cell_type': 'markdown',
56 'metadata': {},
57 'source': [
58 'Markdown Goes Here'
59 ]
60 }
61
62 self.template = json.loads(skeleton_nb)
63 self.cell_type = {'input': self.cell_code, 'source': self.cell_md}
64 self.valuetype_to_celltype = {'code': 'input', 'markdown': 'source'}
65
66 def add_cell(self, value, cell_type='code'):
67 """Add a notebook cell.
68
69 Parameters
70 ----------
71 value : str
72 Cell content.
73 cell_type : {'code', 'markdown'}
74 Type of content (default is 'code').
75
76 """
77 if cell_type in ['markdown', 'code']:
78 key = self.valuetype_to_celltype[cell_type]
79 cells = self.template['worksheets'][0]['cells']
80 cells.append(copy.deepcopy(self.cell_type[key]))
81 # assign value to the last cell
82 cells[-1][key] = value
83 else:
84 warnings.warn('Ignoring unsupported cell type (%s)' % cell_type)
85
86 def json(self):
87 """Return a JSON representation of the notebook.
88
89 Returns
90 -------
91 str
92 JSON notebook.
93
94 """
95 return json.dumps(self.template, indent=2)
96
97
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/doc/ext/notebook_doc.py b/doc/ext/notebook_doc.py
--- a/doc/ext/notebook_doc.py
+++ b/doc/ext/notebook_doc.py
@@ -1,4 +1,4 @@
-__all__ = ['python_to_notebook', 'Notebook']
+__all__ = ['Notebook']
import json
import copy
| {"golden_diff": "diff --git a/doc/ext/notebook_doc.py b/doc/ext/notebook_doc.py\n--- a/doc/ext/notebook_doc.py\n+++ b/doc/ext/notebook_doc.py\n@@ -1,4 +1,4 @@\n-__all__ = ['python_to_notebook', 'Notebook']\n+__all__ = ['Notebook']\n \n import json\n import copy\n", "issue": "Undefined names in Python code found with flake8\n## Description\r\n\r\n\r\n## Way to reproduce\r\n[flake8](http://flake8.pycqa.org) testing of https://github.com/scikit-image/scikit-image on Python 3.7.1\r\n\r\n$ __flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics__\r\n```\r\n./skimage/measure/mc_meta/createluts.py:139:18: F821 undefined name 'luts'\r\n for a in dir(luts):\r\n ^\r\n./doc/ext/notebook_doc.py:1:1: F822 undefined name 'python_to_notebook' in __all__\r\n__all__ = ['python_to_notebook', 'Notebook']\r\n^\r\n1 F821 undefined name 'luts'\r\n1 F822 undefined name 'python_to_notebook' in __all__\r\n2\r\n```\r\n__E901,E999,F821,F822,F823__ are the \"_showstopper_\" [flake8](http://flake8.pycqa.org) issues that can halt the runtime with a SyntaxError, NameError, etc. These 5 are different from most other flake8 issues which are merely \"style violations\" -- useful for readability but they do not effect runtime safety.\r\n* F821: undefined name `name`\r\n* F822: undefined name `name` in `__all__`\r\n* F823: local variable name referenced before assignment\r\n* E901: SyntaxError or IndentationError\r\n* E999: SyntaxError -- failed to compile a file into an Abstract Syntax Tree\r\n\n", "before_files": [{"content": "__all__ = ['python_to_notebook', 'Notebook']\n\nimport json\nimport copy\nimport warnings\n\n\n# Skeleton notebook in JSON format\nskeleton_nb = \"\"\"{\n \"metadata\": {\n \"name\":\"\"\n },\n \"nbformat\": 3,\n \"nbformat_minor\": 0,\n \"worksheets\": [\n {\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"collapsed\": false,\n \"input\": [\n \"%matplotlib inline\"\n ],\n \"language\": \"python\",\n \"metadata\": {},\n \"outputs\": []\n }\n ],\n \"metadata\": {}\n }\n ]\n}\"\"\"\n\n\nclass Notebook(object):\n \"\"\"\n Notebook object for building an IPython notebook cell-by-cell.\n \"\"\"\n\n def __init__(self):\n # cell type code\n self.cell_code = {\n 'cell_type': 'code',\n 'collapsed': False,\n 'input': [\n '# Code Goes Here'\n ],\n 'language': 'python',\n 'metadata': {},\n 'outputs': []\n }\n\n # cell type markdown\n self.cell_md = {\n 'cell_type': 'markdown',\n 'metadata': {},\n 'source': [\n 'Markdown Goes Here'\n ]\n }\n\n self.template = json.loads(skeleton_nb)\n self.cell_type = {'input': self.cell_code, 'source': self.cell_md}\n self.valuetype_to_celltype = {'code': 'input', 'markdown': 'source'}\n\n def add_cell(self, value, cell_type='code'):\n \"\"\"Add a notebook cell.\n\n Parameters\n ----------\n value : str\n Cell content.\n cell_type : {'code', 'markdown'}\n Type of content (default is 'code').\n\n \"\"\"\n if cell_type in ['markdown', 'code']:\n key = self.valuetype_to_celltype[cell_type]\n cells = self.template['worksheets'][0]['cells']\n cells.append(copy.deepcopy(self.cell_type[key]))\n # assign value to the last cell\n cells[-1][key] = value\n else:\n warnings.warn('Ignoring unsupported cell type (%s)' % cell_type)\n\n def json(self):\n \"\"\"Return a JSON representation of the notebook.\n\n Returns\n -------\n str\n JSON notebook.\n\n \"\"\"\n return json.dumps(self.template, indent=2)\n\n\n", "path": "doc/ext/notebook_doc.py"}], "after_files": [{"content": "__all__ = ['Notebook']\n\nimport json\nimport copy\nimport warnings\n\n\n# Skeleton notebook in JSON 
format\nskeleton_nb = \"\"\"{\n \"metadata\": {\n \"name\":\"\"\n },\n \"nbformat\": 3,\n \"nbformat_minor\": 0,\n \"worksheets\": [\n {\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"collapsed\": false,\n \"input\": [\n \"%matplotlib inline\"\n ],\n \"language\": \"python\",\n \"metadata\": {},\n \"outputs\": []\n }\n ],\n \"metadata\": {}\n }\n ]\n}\"\"\"\n\n\nclass Notebook(object):\n \"\"\"\n Notebook object for building an IPython notebook cell-by-cell.\n \"\"\"\n\n def __init__(self):\n # cell type code\n self.cell_code = {\n 'cell_type': 'code',\n 'collapsed': False,\n 'input': [\n '# Code Goes Here'\n ],\n 'language': 'python',\n 'metadata': {},\n 'outputs': []\n }\n\n # cell type markdown\n self.cell_md = {\n 'cell_type': 'markdown',\n 'metadata': {},\n 'source': [\n 'Markdown Goes Here'\n ]\n }\n\n self.template = json.loads(skeleton_nb)\n self.cell_type = {'input': self.cell_code, 'source': self.cell_md}\n self.valuetype_to_celltype = {'code': 'input', 'markdown': 'source'}\n\n def add_cell(self, value, cell_type='code'):\n \"\"\"Add a notebook cell.\n\n Parameters\n ----------\n value : str\n Cell content.\n cell_type : {'code', 'markdown'}\n Type of content (default is 'code').\n\n \"\"\"\n if cell_type in ['markdown', 'code']:\n key = self.valuetype_to_celltype[cell_type]\n cells = self.template['worksheets'][0]['cells']\n cells.append(copy.deepcopy(self.cell_type[key]))\n # assign value to the last cell\n cells[-1][key] = value\n else:\n warnings.warn('Ignoring unsupported cell type (%s)' % cell_type)\n\n def json(self):\n \"\"\"Return a JSON representation of the notebook.\n\n Returns\n -------\n str\n JSON notebook.\n\n \"\"\"\n return json.dumps(self.template, indent=2)\n\n\n", "path": "doc/ext/notebook_doc.py"}]} |
gh_patches_debug_10 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-1452 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Run Flake8 lint on RHEL6
Currently, flake8 is run only on RHEL7 and 8 and not on RHEL6. According to [the documentation](http://flake8.pycqa.org/en/latest/#installation) it is necessary to run flake8 with the exact Python version that is used. Thus to be sure that the syntax is ok even for the older Python version, we have to run in to RHEL6 too.
Tackled in #1251.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 from setuptools import setup, find_packages
3
4 __here__ = os.path.dirname(os.path.abspath(__file__))
5
6 package_info = dict.fromkeys(["RELEASE", "COMMIT", "VERSION", "NAME"])
7
8 for name in package_info:
9 with open(os.path.join(__here__, "insights", name)) as f:
10 package_info[name] = f.read().strip()
11
12 entry_points = {
13 'console_scripts': [
14 'insights-run = insights:main',
15 'insights-info = insights.tools.query:main',
16 'gen_api = insights.tools.generate_api_config:main',
17 'insights-perf = insights.tools.perf:main',
18 'client = insights.client:run',
19 'mangle = insights.util.mangle:main'
20 ]
21 }
22
23 runtime = set([
24 'pyyaml>=3.10,<=3.13',
25 'six',
26 ])
27
28
29 def maybe_require(pkg):
30 try:
31 __import__(pkg)
32 except ImportError:
33 runtime.add(pkg)
34
35
36 maybe_require("importlib")
37 maybe_require("argparse")
38
39
40 client = set([
41 'requests',
42 'pyOpenSSL',
43 ])
44
45 develop = set([
46 'futures==3.0.5',
47 'requests==2.13.0',
48 'wheel',
49 ])
50
51 docs = set([
52 'Sphinx==1.7.9',
53 'nbsphinx==0.3.1',
54 'sphinx_rtd_theme',
55 'ipython<6',
56 'colorama',
57 ])
58
59 testing = set([
60 'coverage==4.3.4',
61 'pytest==3.0.6',
62 'pytest-cov==2.4.0',
63 'mock==2.0.0',
64 ])
65
66 linting = set([
67 'flake8==3.3.0',
68 ])
69
70 optional = set([
71 'jinja2',
72 'python-cjson',
73 'python-logstash',
74 'python-statsd',
75 'watchdog',
76 ])
77
78 if __name__ == "__main__":
79 # allows for runtime modification of rpm name
80 name = os.environ.get("INSIGHTS_CORE_NAME", package_info["NAME"])
81
82 setup(
83 name=name,
84 version=package_info["VERSION"],
85 description="Insights Core is a data collection and analysis framework",
86 long_description=open("README.rst").read(),
87 url="https://github.com/redhatinsights/insights-core",
88 author="Red Hat, Inc.",
89 author_email="[email protected]",
90 packages=find_packages(),
91 install_requires=list(runtime),
92 package_data={'': ['LICENSE']},
93 license='Apache 2.0',
94 extras_require={
95 'develop': list(runtime | develop | client | docs | linting | testing),
96 'client': list(runtime | client),
97 'optional': list(optional),
98 'docs': list(docs),
99 'linting': list(linting | client),
100 'testing': list(testing | client)
101 },
102 classifiers=[
103 'Development Status :: 5 - Production/Stable',
104 'Intended Audience :: Developers',
105 'Natural Language :: English',
106 'License :: OSI Approved :: Apache Software License',
107 'Programming Language :: Python',
108 'Programming Language :: Python :: 2.6',
109 'Programming Language :: Python :: 2.7',
110 'Programming Language :: Python :: 3.3',
111 'Programming Language :: Python :: 3.4',
112 'Programming Language :: Python :: 3.5',
113 'Programming Language :: Python :: 3.6'
114 ],
115 entry_points=entry_points,
116 include_package_data=True
117 )
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -64,7 +64,7 @@
])
linting = set([
- 'flake8==3.3.0',
+ 'flake8==2.6.2',
])
optional = set([
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -64,7 +64,7 @@\n ])\n \n linting = set([\n- 'flake8==3.3.0',\n+ 'flake8==2.6.2',\n ])\n \n optional = set([\n", "issue": "Run Flake8 lint on RHEL6\nCurrently, flake8 is run only on RHEL7 and 8 and not on RHEL6. According to [the documentation](http://flake8.pycqa.org/en/latest/#installation) it is necessary to run flake8 with the exact Python version that is used. Thus to be sure that the syntax is ok even for the older Python version, we have to run in to RHEL6 too.\r\n\r\nTackled in #1251.\n", "before_files": [{"content": "import os\nfrom setuptools import setup, find_packages\n\n__here__ = os.path.dirname(os.path.abspath(__file__))\n\npackage_info = dict.fromkeys([\"RELEASE\", \"COMMIT\", \"VERSION\", \"NAME\"])\n\nfor name in package_info:\n with open(os.path.join(__here__, \"insights\", name)) as f:\n package_info[name] = f.read().strip()\n\nentry_points = {\n 'console_scripts': [\n 'insights-run = insights:main',\n 'insights-info = insights.tools.query:main',\n 'gen_api = insights.tools.generate_api_config:main',\n 'insights-perf = insights.tools.perf:main',\n 'client = insights.client:run',\n 'mangle = insights.util.mangle:main'\n ]\n}\n\nruntime = set([\n 'pyyaml>=3.10,<=3.13',\n 'six',\n])\n\n\ndef maybe_require(pkg):\n try:\n __import__(pkg)\n except ImportError:\n runtime.add(pkg)\n\n\nmaybe_require(\"importlib\")\nmaybe_require(\"argparse\")\n\n\nclient = set([\n 'requests',\n 'pyOpenSSL',\n])\n\ndevelop = set([\n 'futures==3.0.5',\n 'requests==2.13.0',\n 'wheel',\n])\n\ndocs = set([\n 'Sphinx==1.7.9',\n 'nbsphinx==0.3.1',\n 'sphinx_rtd_theme',\n 'ipython<6',\n 'colorama',\n])\n\ntesting = set([\n 'coverage==4.3.4',\n 'pytest==3.0.6',\n 'pytest-cov==2.4.0',\n 'mock==2.0.0',\n])\n\nlinting = set([\n 'flake8==3.3.0',\n])\n\noptional = set([\n 'jinja2',\n 'python-cjson',\n 'python-logstash',\n 'python-statsd',\n 'watchdog',\n])\n\nif __name__ == \"__main__\":\n # allows for runtime modification of rpm name\n name = os.environ.get(\"INSIGHTS_CORE_NAME\", package_info[\"NAME\"])\n\n setup(\n name=name,\n version=package_info[\"VERSION\"],\n description=\"Insights Core is a data collection and analysis framework\",\n long_description=open(\"README.rst\").read(),\n url=\"https://github.com/redhatinsights/insights-core\",\n author=\"Red Hat, Inc.\",\n author_email=\"[email protected]\",\n packages=find_packages(),\n install_requires=list(runtime),\n package_data={'': ['LICENSE']},\n license='Apache 2.0',\n extras_require={\n 'develop': list(runtime | develop | client | docs | linting | testing),\n 'client': list(runtime | client),\n 'optional': list(optional),\n 'docs': list(docs),\n 'linting': list(linting | client),\n 'testing': list(testing | client)\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6'\n ],\n entry_points=entry_points,\n include_package_data=True\n )\n", "path": "setup.py"}], "after_files": [{"content": "import os\nfrom setuptools import setup, find_packages\n\n__here__ = os.path.dirname(os.path.abspath(__file__))\n\npackage_info = 
dict.fromkeys([\"RELEASE\", \"COMMIT\", \"VERSION\", \"NAME\"])\n\nfor name in package_info:\n with open(os.path.join(__here__, \"insights\", name)) as f:\n package_info[name] = f.read().strip()\n\nentry_points = {\n 'console_scripts': [\n 'insights-run = insights:main',\n 'insights-info = insights.tools.query:main',\n 'gen_api = insights.tools.generate_api_config:main',\n 'insights-perf = insights.tools.perf:main',\n 'client = insights.client:run',\n 'mangle = insights.util.mangle:main'\n ]\n}\n\nruntime = set([\n 'pyyaml>=3.10,<=3.13',\n 'six',\n])\n\n\ndef maybe_require(pkg):\n try:\n __import__(pkg)\n except ImportError:\n runtime.add(pkg)\n\n\nmaybe_require(\"importlib\")\nmaybe_require(\"argparse\")\n\n\nclient = set([\n 'requests',\n 'pyOpenSSL',\n])\n\ndevelop = set([\n 'futures==3.0.5',\n 'requests==2.13.0',\n 'wheel',\n])\n\ndocs = set([\n 'Sphinx==1.7.9',\n 'nbsphinx==0.3.1',\n 'sphinx_rtd_theme',\n 'ipython<6',\n 'colorama',\n])\n\ntesting = set([\n 'coverage==4.3.4',\n 'pytest==3.0.6',\n 'pytest-cov==2.4.0',\n 'mock==2.0.0',\n])\n\nlinting = set([\n 'flake8==2.6.2',\n])\n\noptional = set([\n 'jinja2',\n 'python-cjson',\n 'python-logstash',\n 'python-statsd',\n 'watchdog',\n])\n\nif __name__ == \"__main__\":\n # allows for runtime modification of rpm name\n name = os.environ.get(\"INSIGHTS_CORE_NAME\", package_info[\"NAME\"])\n\n setup(\n name=name,\n version=package_info[\"VERSION\"],\n description=\"Insights Core is a data collection and analysis framework\",\n long_description=open(\"README.rst\").read(),\n url=\"https://github.com/redhatinsights/insights-core\",\n author=\"Red Hat, Inc.\",\n author_email=\"[email protected]\",\n packages=find_packages(),\n install_requires=list(runtime),\n package_data={'': ['LICENSE']},\n license='Apache 2.0',\n extras_require={\n 'develop': list(runtime | develop | client | docs | linting | testing),\n 'client': list(runtime | client),\n 'optional': list(optional),\n 'docs': list(docs),\n 'linting': list(linting | client),\n 'testing': list(testing | client)\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6'\n ],\n entry_points=entry_points,\n include_package_data=True\n )\n", "path": "setup.py"}]} |
gh_patches_debug_11 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-2289 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
InsecureRequestWarning log spam
Scrape logs for https sites are spammed with this INFO-level message on every HTTPS request:
```
/opt/openstates/venv-pupa/lib/python3.5/site-packages/urllib3/connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
```
I'm looking for advice about what should be done. My inclination is to quell the warnings altogether, because I _suspect_ that stale state certs are frequent enough to not want to bother with verification. I believe (but have not tested) that this can be done in openstates with
```py
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
```
If we want to verify certs, it probably requires changes somewhere up the stack.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/__init__.py`
Content:
```
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openstates/__init__.py b/openstates/__init__.py
--- a/openstates/__init__.py
+++ b/openstates/__init__.py
@@ -0,0 +1,4 @@
+import urllib3
+
+# Quell InsecureRequestWarning: Unverified HTTPS request warnings
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
| {"golden_diff": "diff --git a/openstates/__init__.py b/openstates/__init__.py\n--- a/openstates/__init__.py\n+++ b/openstates/__init__.py\n@@ -0,0 +1,4 @@\n+import urllib3\n+\n+# Quell InsecureRequestWarning: Unverified HTTPS request warnings\n+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)\n", "issue": "InsecureRequestWarning log spam\nScrape logs for https sites are spammed with this INFO-level message on every HTTPS request:\r\n```\r\n/opt/openstates/venv-pupa/lib/python3.5/site-packages/urllib3/connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\r\n```\r\n\r\nI'm looking for advice about what should be done. My inclination is to quell the warnings altogether, because I _suspect_ that stale state certs are frequent enough to not want to bother with verification. I believe (but have not tested) that this can be done in openstates with\r\n\r\n```py\r\nimport urllib3\r\nurllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)\r\n```\r\n\r\nIf we want to verify certs, it probably requires changes somewhere up the stack.\r\n\n", "before_files": [{"content": "", "path": "openstates/__init__.py"}], "after_files": [{"content": "import urllib3\n\n# Quell InsecureRequestWarning: Unverified HTTPS request warnings\nurllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)\n", "path": "openstates/__init__.py"}]} |
gh_patches_debug_12 | rasdani/github-patches | git_diff | jazzband__django-oauth-toolkit-1126 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fix(tasks): fix error caused by relative import
## Description of the Change
Running `oauth2_provider.tasks.clear_tokens` results in an error e.g.:
```python
>>> from oauth2_provider.tasks import clear_tokens
>>> clear_tokens()
Traceback (most recent call last):
File "[python3.9]/code.py", line 90, in runcode
exec(code, self.locals)
File "<console>", line 1, in <module>
File "[site-packages]/celery/local.py", line 188, in __call__
return self._get_current_object()(*a, **kw)
File "[site-packages]/celery/app/task.py", line 392, in __call__
return self.run(*args, **kwargs)
File "[site-packages]/oauth2_provider/tasks.py", line 6, in clear_tokens
from ...models import clear_expired # noqa
ImportError: attempted relative import beyond top-level package
```
This update fixes the import path.
## Checklist
<!-- Replace '[ ]' with '[x]' to indicate that the checklist item is completed. -->
<!-- You can check the boxes now or later by just clicking on them. -->
- [x] PR only contains one change (considered splitting up PR)
- [ ] unit-test added
- [ ] documentation updated
- [ ] `CHANGELOG.md` updated (only for user relevant changes)
- [ ] author name in `AUTHORS`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `oauth2_provider/tasks.py`
Content:
```
1 from celery import shared_task
2
3
4 @shared_task
5 def clear_tokens():
6 from ...models import clear_expired # noqa
7
8 clear_expired()
9
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/oauth2_provider/tasks.py b/oauth2_provider/tasks.py
deleted file mode 100644
--- a/oauth2_provider/tasks.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from celery import shared_task
-
-
-@shared_task
-def clear_tokens():
- from ...models import clear_expired # noqa
-
- clear_expired()
| {"golden_diff": "diff --git a/oauth2_provider/tasks.py b/oauth2_provider/tasks.py\ndeleted file mode 100644\n--- a/oauth2_provider/tasks.py\n+++ /dev/null\n@@ -1,8 +0,0 @@\n-from celery import shared_task\n-\n-\n-@shared_task\n-def clear_tokens():\n- from ...models import clear_expired # noqa\n-\n- clear_expired()\n", "issue": "fix(tasks): fix error caused by relative import\n## Description of the Change\r\n\r\nRunning `oauth2_provider.tasks.clear_tokens` results in an error e.g.:\r\n```python\r\n>>> from oauth2_provider.tasks import clear_tokens\r\n>>> clear_tokens()\r\nTraceback (most recent call last):\r\n File \"[python3.9]/code.py\", line 90, in runcode\r\n exec(code, self.locals)\r\n File \"<console>\", line 1, in <module>\r\n File \"[site-packages]/celery/local.py\", line 188, in __call__\r\n return self._get_current_object()(*a, **kw)\r\n File \"[site-packages]/celery/app/task.py\", line 392, in __call__\r\n return self.run(*args, **kwargs)\r\n File \"[site-packages]/oauth2_provider/tasks.py\", line 6, in clear_tokens\r\n from ...models import clear_expired # noqa\r\nImportError: attempted relative import beyond top-level package\r\n```\r\n\r\nThis update fixes the import path.\r\n\r\n## Checklist\r\n\r\n<!-- Replace '[ ]' with '[x]' to indicate that the checklist item is completed. -->\r\n<!-- You can check the boxes now or later by just clicking on them. -->\r\n\r\n- [x] PR only contains one change (considered splitting up PR)\r\n- [ ] unit-test added\r\n- [ ] documentation updated\r\n- [ ] `CHANGELOG.md` updated (only for user relevant changes)\r\n- [ ] author name in `AUTHORS`\r\n\n", "before_files": [{"content": "from celery import shared_task\n\n\n@shared_task\ndef clear_tokens():\n from ...models import clear_expired # noqa\n\n clear_expired()\n", "path": "oauth2_provider/tasks.py"}], "after_files": [{"content": null, "path": "oauth2_provider/tasks.py"}]} |
gh_patches_debug_13 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3190 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 0.1.3
## 2023-08-16
```[tasklist]
### Tasks
- [x] Cut 0.1.3 release branch, freeze code
- [x] Update version number in all places in the new branch
- [x] Make an image from the branch with tag `0.1.3`, push to Dockerhub
- [x] Test installation with the new image
- [x] Test upgrade
- [x] Smoke testing application
- [x] Stability of the newly released items
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/__init__.py`
Content:
```
1 default_app_config = 'mathesar.apps.MathesarConfig'
2
3 __version__ = "0.1.2"
4
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mathesar/__init__.py b/mathesar/__init__.py
--- a/mathesar/__init__.py
+++ b/mathesar/__init__.py
@@ -1,3 +1,3 @@
default_app_config = 'mathesar.apps.MathesarConfig'
-__version__ = "0.1.2"
+__version__ = "0.1.3"
| {"golden_diff": "diff --git a/mathesar/__init__.py b/mathesar/__init__.py\n--- a/mathesar/__init__.py\n+++ b/mathesar/__init__.py\n@@ -1,3 +1,3 @@\n default_app_config = 'mathesar.apps.MathesarConfig'\n \n-__version__ = \"0.1.2\"\n+__version__ = \"0.1.3\"\n", "issue": "Release 0.1.3\n## 2023-08-16\r\n```[tasklist]\r\n### Tasks\r\n- [x] Cut 0.1.3 release branch, freeze code\r\n- [x] Update version number in all places in the new branch\r\n- [x] Make an image from the branch with tag `0.1.3`, push to Dockerhub\r\n- [x] Test installation with the new image\r\n- [x] Test upgrade\r\n- [x] Smoke testing application\r\n- [x] Stability of the newly released items\r\n```\r\n\n", "before_files": [{"content": "default_app_config = 'mathesar.apps.MathesarConfig'\n\n__version__ = \"0.1.2\"\n", "path": "mathesar/__init__.py"}], "after_files": [{"content": "default_app_config = 'mathesar.apps.MathesarConfig'\n\n__version__ = \"0.1.3\"\n", "path": "mathesar/__init__.py"}]} |
gh_patches_debug_14 | rasdani/github-patches | git_diff | magenta__magenta-1079 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error in running Onsets and Frames Colab Notebook
Hi @cghawthorne
I am using your [Colab notebook](https://colab.research.google.com/notebook#fileId=/v2/external/notebooks/magenta/onsets_frames_transcription/onsets_frames_transcription.ipynb) to test your model but it stopped working a week ago.
Error on the inference section:
UnknownError: exceptions.AttributeError: 'module' object has no attribute 'logamplitude'
[[Node: wav_to_spec = PyFunc[Tin=[DT_STRING], Tout=[DT_FLOAT], token="pyfunc_1"](transform_wav_data_op)]]
[[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?], [?,?,88], [?,?,88], [?], [?], [?,?,88], [?,?,229,1]], output_types=[DT_STRING, DT_FLOAT, DT_FLOAT, DT_INT32, DT_STRING, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](Iterator)]]
Thanks,
Bardia
--- END ISSUE ---
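For context on the error above: it is consistent with the librosa 0.6 API change that removed `logamplitude`. A hedged sketch of the old call and its usual replacement, assuming that rename is indeed the cause here:

```python
import numpy as np
import librosa

# Toy power spectrogram, just to have something to convert.
signal = np.random.randn(22050).astype(np.float32)
S = np.abs(librosa.stft(signal)) ** 2

# Pre-0.6 librosa (what the failing wav_to_spec code presumably called):
#   log_S = librosa.logamplitude(S, ref_power=np.max)
# librosa >= 0.6 removed logamplitude; the equivalent call is:
log_S = librosa.power_to_db(S, ref=np.max)
```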
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `magenta/version.py`
Content:
```
1 # Copyright 2016 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 r"""Separate file for storing the current version of Magenta.
15
16 Stored in a separate file so that setup.py can reference the version without
17 pulling in all the dependencies in __init__.py.
18 """
19
20 __version__ = '0.3.5'
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/magenta/version.py b/magenta/version.py
--- a/magenta/version.py
+++ b/magenta/version.py
@@ -17,4 +17,4 @@
pulling in all the dependencies in __init__.py.
"""
-__version__ = '0.3.5'
+__version__ = '0.3.6'
| {"golden_diff": "diff --git a/magenta/version.py b/magenta/version.py\n--- a/magenta/version.py\n+++ b/magenta/version.py\n@@ -17,4 +17,4 @@\n pulling in all the dependencies in __init__.py.\n \"\"\"\n \n-__version__ = '0.3.5'\n+__version__ = '0.3.6'\n", "issue": "Error in running Onsets and Frames Colab Notebook\nHi @cghawthorne\r\nI am using your [Colab notebook](https://colab.research.google.com/notebook#fileId=/v2/external/notebooks/magenta/onsets_frames_transcription/onsets_frames_transcription.ipynb) to test your model but it stopped working a week ago.\r\n\r\nError on the inference section:\r\nUnknownError: exceptions.AttributeError: 'module' object has no attribute 'logamplitude'\r\n\t [[Node: wav_to_spec = PyFunc[Tin=[DT_STRING], Tout=[DT_FLOAT], token=\"pyfunc_1\"](transform_wav_data_op)]]\r\n\t [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?], [?,?,88], [?,?,88], [?], [?], [?,?,88], [?,?,229,1]], output_types=[DT_STRING, DT_FLOAT, DT_FLOAT, DT_INT32, DT_STRING, DT_FLOAT, DT_FLOAT], _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"](Iterator)]]\r\n\r\nThanks,\r\nBardia\r\n\r\n\n", "before_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nr\"\"\"Separate file for storing the current version of Magenta.\n\nStored in a separate file so that setup.py can reference the version without\npulling in all the dependencies in __init__.py.\n\"\"\"\n\n__version__ = '0.3.5'\n", "path": "magenta/version.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nr\"\"\"Separate file for storing the current version of Magenta.\n\nStored in a separate file so that setup.py can reference the version without\npulling in all the dependencies in __init__.py.\n\"\"\"\n\n__version__ = '0.3.6'\n", "path": "magenta/version.py"}]} |
gh_patches_debug_15 | rasdani/github-patches | git_diff | Anselmoo__spectrafit-715 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature]: Add python 3.11 support
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Missing Feature
Add python 3.11 support
### Possible Solution
_No response_
### Anything else?
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spectrafit/__init__.py`
Content:
```
1 """SpectraFit, fast command line tool for fitting data."""
2 __version__ = "0.16.6"
3
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py
--- a/spectrafit/__init__.py
+++ b/spectrafit/__init__.py
@@ -1,2 +1,2 @@
"""SpectraFit, fast command line tool for fitting data."""
-__version__ = "0.16.6"
+__version__ = "0.16.7"
| {"golden_diff": "diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py\n--- a/spectrafit/__init__.py\n+++ b/spectrafit/__init__.py\n@@ -1,2 +1,2 @@\n \"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n-__version__ = \"0.16.6\"\n+__version__ = \"0.16.7\"\n", "issue": "[Feature]: Add python 3.11 support\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Missing Feature\n\nAdd python 3.11 support\n\n### Possible Solution\n\n_No response_\n\n### Anything else?\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n", "before_files": [{"content": "\"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n__version__ = \"0.16.6\"\n", "path": "spectrafit/__init__.py"}], "after_files": [{"content": "\"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n__version__ = \"0.16.7\"\n", "path": "spectrafit/__init__.py"}]} |
gh_patches_debug_16 | rasdani/github-patches | git_diff | typeddjango__django-stubs-1429 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bump mypy from 1.1.1 to 1.2.0
Bumps [mypy](https://github.com/python/mypy) from 1.1.1 to 1.2.0.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/python/mypy/commit/4f47dfb64dff920c237e7c8c58f8efba57cf57cf"><code>4f47dfb</code></a> Promote version to 1.2.0 and drop +dev from the version</li>
<li><a href="https://github.com/python/mypy/commit/06aa182b4973ea122c9f536855a31234d75b93b9"><code>06aa182</code></a> [dataclass_transform] support implicit default for "init" parameter in field ...</li>
<li><a href="https://github.com/python/mypy/commit/7beaec2e4a1c7891b044b45e538a472dbe86f240"><code>7beaec2</code></a> Support descriptors in dataclass transform (<a href="https://redirect.github.com/python/mypy/issues/15006">#15006</a>)</li>
<li><a href="https://github.com/python/mypy/commit/a7a995a0409b623b941a1e2f882792abed45fddf"><code>a7a995a</code></a> Multiple inheritance considers callable objects as subtypes of functions (<a href="https://redirect.github.com/python/mypy/issues/14">#14</a>...</li>
<li><a href="https://github.com/python/mypy/commit/7f2a5b5bf7dca35402390f2ff30c35c23b4085d4"><code>7f2a5b5</code></a> [dataclass_transform] fix deserialization for frozen_default</li>
<li><a href="https://github.com/python/mypy/commit/bfa9eacedb0554e1a6fe9245dbd5ccdbbc555fae"><code>bfa9eac</code></a> [mypyc] Be stricter about function prototypes (<a href="https://redirect.github.com/python/mypy/issues/14942">#14942</a>)</li>
<li><a href="https://github.com/python/mypy/commit/4e6d68322774d5f7c15d5067613fc851b4640d3e"><code>4e6d683</code></a> [mypyc] Document native floats and integers (<a href="https://redirect.github.com/python/mypy/issues/14927">#14927</a>)</li>
<li><a href="https://github.com/python/mypy/commit/aa2679b6b0bbbffcb454081a81346c0a82804e52"><code>aa2679b</code></a> [mypyc] Fixes to float to int conversion (<a href="https://redirect.github.com/python/mypy/issues/14936">#14936</a>)</li>
<li><a href="https://github.com/python/mypy/commit/9944d5fc6ae29a862bfab980a42a9bfae89ee5c0"><code>9944d5f</code></a> [mypyc] Support iterating over a TypedDict (<a href="https://redirect.github.com/python/mypy/issues/14747">#14747</a>)</li>
<li><a href="https://github.com/python/mypy/commit/1a8ea6187474fcc5896cf4b7f47074673e07ad42"><code>1a8ea61</code></a> [mypyc] Avoid boxing/unboxing when coercing between tuple types (<a href="https://redirect.github.com/python/mypy/issues/14899">#14899</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/python/mypy/compare/v1.1.1...v1.2.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>
--- END ISSUE ---
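For illustration, the reason such a mypy bump touches `setup.py` at all is the `compatible-mypy` extra pinned there. A small hedged sketch of how that version specifier behaves (uses the `packaging` library; the range is taken from the file below):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# The extras pin has to track each mypy minor release.
compatible = SpecifierSet(">=1.2.0,<1.3")

for candidate in ("1.1.1", "1.2.0", "1.3.0"):
    print(candidate, Version(candidate) in compatible)
# -> 1.1.1 False, 1.2.0 True, 1.3.0 False
```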
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 from typing import List
3
4 from setuptools import find_packages, setup
5
6
7 def find_stub_files(name: str) -> List[str]:
8 result = []
9 for root, _dirs, files in os.walk(name):
10 for file in files:
11 if file.endswith(".pyi"):
12 if os.path.sep in root:
13 sub_root = root.split(os.path.sep, 1)[-1]
14 file = os.path.join(sub_root, file)
15 result.append(file)
16 return result
17
18
19 with open("README.md") as f:
20 readme = f.read()
21
22 dependencies = [
23 "mypy>=0.980",
24 "django",
25 "django-stubs-ext>=0.8.0",
26 "tomli; python_version < '3.11'",
27 # Types:
28 "typing-extensions",
29 "types-pytz",
30 "types-PyYAML",
31 ]
32
33 extras_require = {
34 "compatible-mypy": ["mypy>=1.1.1,<1.2"],
35 }
36
37 setup(
38 name="django-stubs",
39 version="1.16.0",
40 description="Mypy stubs for Django",
41 long_description=readme,
42 long_description_content_type="text/markdown",
43 license="MIT",
44 url="https://github.com/typeddjango/django-stubs",
45 author="Maksim Kurnikov",
46 author_email="[email protected]",
47 maintainer="Marti Raudsepp",
48 maintainer_email="[email protected]",
49 py_modules=[],
50 python_requires=">=3.7",
51 install_requires=dependencies,
52 extras_require=extras_require,
53 packages=["django-stubs", *find_packages(exclude=["scripts"])],
54 package_data={
55 "django-stubs": find_stub_files("django-stubs"),
56 "mypy_django_plugin": ["py.typed"],
57 },
58 classifiers=[
59 "License :: OSI Approved :: MIT License",
60 "Operating System :: OS Independent",
61 "Programming Language :: Python :: 3.7",
62 "Programming Language :: Python :: 3.8",
63 "Programming Language :: Python :: 3.9",
64 "Programming Language :: Python :: 3.10",
65 "Programming Language :: Python :: 3.11",
66 "Typing :: Typed",
67 "Framework :: Django",
68 "Framework :: Django :: 2.2",
69 "Framework :: Django :: 3.0",
70 "Framework :: Django :: 3.1",
71 "Framework :: Django :: 3.2",
72 "Framework :: Django :: 4.0",
73 "Framework :: Django :: 4.1",
74 ],
75 project_urls={
76 "Release notes": "https://github.com/typeddjango/django-stubs/releases",
77 },
78 )
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -31,7 +31,7 @@
]
extras_require = {
- "compatible-mypy": ["mypy>=1.1.1,<1.2"],
+ "compatible-mypy": ["mypy>=1.2.0,<1.3"],
}
setup(
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -31,7 +31,7 @@\n ]\n \n extras_require = {\n- \"compatible-mypy\": [\"mypy>=1.1.1,<1.2\"],\n+ \"compatible-mypy\": [\"mypy>=1.2.0,<1.3\"],\n }\n \n setup(\n", "issue": "Bump mypy from 1.1.1 to 1.2.0\nBumps [mypy](https://github.com/python/mypy) from 1.1.1 to 1.2.0.\n<details>\n<summary>Commits</summary>\n<ul>\n<li><a href=\"https://github.com/python/mypy/commit/4f47dfb64dff920c237e7c8c58f8efba57cf57cf\"><code>4f47dfb</code></a> Promote version to 1.2.0 and drop +dev from the version</li>\n<li><a href=\"https://github.com/python/mypy/commit/06aa182b4973ea122c9f536855a31234d75b93b9\"><code>06aa182</code></a> [dataclass_transform] support implicit default for "init" parameter in field ...</li>\n<li><a href=\"https://github.com/python/mypy/commit/7beaec2e4a1c7891b044b45e538a472dbe86f240\"><code>7beaec2</code></a> Support descriptors in dataclass transform (<a href=\"https://redirect.github.com/python/mypy/issues/15006\">#15006</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/a7a995a0409b623b941a1e2f882792abed45fddf\"><code>a7a995a</code></a> Multiple inheritance considers callable objects as subtypes of functions (<a href=\"https://redirect.github.com/python/mypy/issues/14\">#14</a>...</li>\n<li><a href=\"https://github.com/python/mypy/commit/7f2a5b5bf7dca35402390f2ff30c35c23b4085d4\"><code>7f2a5b5</code></a> [dataclass_transform] fix deserialization for frozen_default</li>\n<li><a href=\"https://github.com/python/mypy/commit/bfa9eacedb0554e1a6fe9245dbd5ccdbbc555fae\"><code>bfa9eac</code></a> [mypyc] Be stricter about function prototypes (<a href=\"https://redirect.github.com/python/mypy/issues/14942\">#14942</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/4e6d68322774d5f7c15d5067613fc851b4640d3e\"><code>4e6d683</code></a> [mypyc] Document native floats and integers (<a href=\"https://redirect.github.com/python/mypy/issues/14927\">#14927</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/aa2679b6b0bbbffcb454081a81346c0a82804e52\"><code>aa2679b</code></a> [mypyc] Fixes to float to int conversion (<a href=\"https://redirect.github.com/python/mypy/issues/14936\">#14936</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/9944d5fc6ae29a862bfab980a42a9bfae89ee5c0\"><code>9944d5f</code></a> [mypyc] Support iterating over a TypedDict (<a href=\"https://redirect.github.com/python/mypy/issues/14747\">#14747</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/1a8ea6187474fcc5896cf4b7f47074673e07ad42\"><code>1a8ea61</code></a> [mypyc] Avoid boxing/unboxing when coercing between tuple types (<a href=\"https://redirect.github.com/python/mypy/issues/14899\">#14899</a>)</li>\n<li>Additional commits viewable in <a href=\"https://github.com/python/mypy/compare/v1.1.1...v1.2.0\">compare view</a></li>\n</ul>\n</details>\n<br />\n\n\n[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. 
You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n<details>\n<summary>Dependabot commands and options</summary>\n<br />\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n\n\n</details>\n", "before_files": [{"content": "import os\nfrom typing import List\n\nfrom setuptools import find_packages, setup\n\n\ndef find_stub_files(name: str) -> List[str]:\n result = []\n for root, _dirs, files in os.walk(name):\n for file in files:\n if file.endswith(\".pyi\"):\n if os.path.sep in root:\n sub_root = root.split(os.path.sep, 1)[-1]\n file = os.path.join(sub_root, file)\n result.append(file)\n return result\n\n\nwith open(\"README.md\") as f:\n readme = f.read()\n\ndependencies = [\n \"mypy>=0.980\",\n \"django\",\n \"django-stubs-ext>=0.8.0\",\n \"tomli; python_version < '3.11'\",\n # Types:\n \"typing-extensions\",\n \"types-pytz\",\n \"types-PyYAML\",\n]\n\nextras_require = {\n \"compatible-mypy\": [\"mypy>=1.1.1,<1.2\"],\n}\n\nsetup(\n name=\"django-stubs\",\n version=\"1.16.0\",\n description=\"Mypy stubs for Django\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n license=\"MIT\",\n url=\"https://github.com/typeddjango/django-stubs\",\n author=\"Maksim Kurnikov\",\n author_email=\"[email protected]\",\n maintainer=\"Marti Raudsepp\",\n maintainer_email=\"[email protected]\",\n py_modules=[],\n python_requires=\">=3.7\",\n install_requires=dependencies,\n extras_require=extras_require,\n packages=[\"django-stubs\", *find_packages(exclude=[\"scripts\"])],\n package_data={\n \"django-stubs\": find_stub_files(\"django-stubs\"),\n \"mypy_django_plugin\": [\"py.typed\"],\n },\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Typing :: Typed\",\n \"Framework :: Django\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Django :: 3.2\",\n \"Framework :: Django :: 4.0\",\n \"Framework :: Django :: 4.1\",\n ],\n project_urls={\n \"Release notes\": \"https://github.com/typeddjango/django-stubs/releases\",\n 
},\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nfrom typing import List\n\nfrom setuptools import find_packages, setup\n\n\ndef find_stub_files(name: str) -> List[str]:\n result = []\n for root, _dirs, files in os.walk(name):\n for file in files:\n if file.endswith(\".pyi\"):\n if os.path.sep in root:\n sub_root = root.split(os.path.sep, 1)[-1]\n file = os.path.join(sub_root, file)\n result.append(file)\n return result\n\n\nwith open(\"README.md\") as f:\n readme = f.read()\n\ndependencies = [\n \"mypy>=0.980\",\n \"django\",\n \"django-stubs-ext>=0.8.0\",\n \"tomli; python_version < '3.11'\",\n # Types:\n \"typing-extensions\",\n \"types-pytz\",\n \"types-PyYAML\",\n]\n\nextras_require = {\n \"compatible-mypy\": [\"mypy>=1.2.0,<1.3\"],\n}\n\nsetup(\n name=\"django-stubs\",\n version=\"1.16.0\",\n description=\"Mypy stubs for Django\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n license=\"MIT\",\n url=\"https://github.com/typeddjango/django-stubs\",\n author=\"Maksim Kurnikov\",\n author_email=\"[email protected]\",\n maintainer=\"Marti Raudsepp\",\n maintainer_email=\"[email protected]\",\n py_modules=[],\n python_requires=\">=3.7\",\n install_requires=dependencies,\n extras_require=extras_require,\n packages=[\"django-stubs\", *find_packages(exclude=[\"scripts\"])],\n package_data={\n \"django-stubs\": find_stub_files(\"django-stubs\"),\n \"mypy_django_plugin\": [\"py.typed\"],\n },\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Typing :: Typed\",\n \"Framework :: Django\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Django :: 3.2\",\n \"Framework :: Django :: 4.0\",\n \"Framework :: Django :: 4.1\",\n ],\n project_urls={\n \"Release notes\": \"https://github.com/typeddjango/django-stubs/releases\",\n },\n)\n", "path": "setup.py"}]} |
gh_patches_debug_17 | rasdani/github-patches | git_diff | typeddjango__django-stubs-1496 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bump mypy from 1.2.0 to 1.3.0
Bumps [mypy](https://github.com/python/mypy) from 1.2.0 to 1.3.0.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/python/mypy/commit/9df39ab1801369cb49467fa52080df9c42377384"><code>9df39ab</code></a> set version to 1.3.0</li>
<li><a href="https://github.com/python/mypy/commit/c1464a9ea61fe9c350b61c1989d98bbc33d74982"><code>c1464a9</code></a> Revert "Fix disappearing errors when re-running dmypy check (<a href="https://redirect.github.com/python/mypy/issues/14835">#14835</a>)" (<a href="https://redirect.github.com/python/mypy/issues/15179">#15179</a>)</li>
<li><a href="https://github.com/python/mypy/commit/d887e9c0d090694b66b5fa20ac249b3d749a8518"><code>d887e9c</code></a> Fix performance in union subtyping (<a href="https://redirect.github.com/python/mypy/issues/15104">#15104</a>)</li>
<li><a href="https://github.com/python/mypy/commit/320b883ada83375f1e6929b4703b741d3c4813ce"><code>320b883</code></a> Typeshed cherry-pick: stdlib/xml: fix return types for toxml/toprettyxml meth...</li>
<li><a href="https://github.com/python/mypy/commit/6a68049e903dba7bbcff5a530b63731535f8d5f7"><code>6a68049</code></a> Fix sys.platform when cross-compiling with emscripten (<a href="https://redirect.github.com/python/mypy/issues/14888">#14888</a>)</li>
<li><a href="https://github.com/python/mypy/commit/3d9661c91d5dfaf3ae0d3ca5624867cdf449da77"><code>3d9661c</code></a> Fix bounded self types in override incompatibility checking (<a href="https://redirect.github.com/python/mypy/issues/15045">#15045</a>)</li>
<li><a href="https://github.com/python/mypy/commit/0799a8ab0dc8deed8d2e0ec34b1aab2fe39ebd96"><code>0799a8a</code></a> [mypyc] Fix unions of bools and ints (<a href="https://redirect.github.com/python/mypy/issues/15066">#15066</a>)</li>
<li><a href="https://github.com/python/mypy/commit/4276308be01ea498d946a79554b4a10b1cf13ccb"><code>4276308</code></a> (π) update black to 23.3.0 (<a href="https://redirect.github.com/python/mypy/issues/15059">#15059</a>)</li>
<li><a href="https://github.com/python/mypy/commit/14493660eadf35553a3cecb746704b58a401c68d"><code>1449366</code></a> Allow objects matching <code>SupportsKeysAndGetItem</code> to be unpacked (<a href="https://redirect.github.com/python/mypy/issues/14990">#14990</a>)</li>
<li><a href="https://github.com/python/mypy/commit/69c774e6d6fa92aea8f32cd0e045e8a34a0f7215"><code>69c774e</code></a> Sync typeshed (<a href="https://redirect.github.com/python/mypy/issues/15055">#15055</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/python/mypy/compare/v1.2.0...v1.3.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 from typing import List
4
5 from setuptools import find_packages, setup
6
7
8 def find_stub_files(name: str) -> List[str]:
9 result = []
10 for root, _dirs, files in os.walk(name):
11 for file in files:
12 if file.endswith(".pyi"):
13 if os.path.sep in root:
14 sub_root = root.split(os.path.sep, 1)[-1]
15 file = os.path.join(sub_root, file)
16 result.append(file)
17 return result
18
19
20 with open("README.md") as f:
21 readme = f.read()
22
23 dependencies = [
24 "mypy>=1.0.0",
25 "django",
26 "django-stubs-ext>=4.2.0",
27 "tomli; python_version < '3.11'",
28 # Types:
29 "typing-extensions",
30 "types-pytz",
31 "types-PyYAML",
32 ]
33
34 extras_require = {
35 "compatible-mypy": ["mypy>=1.2.0,<1.3"],
36 }
37
38 setup(
39 name="django-stubs",
40 version="4.2.0",
41 description="Mypy stubs for Django",
42 long_description=readme,
43 long_description_content_type="text/markdown",
44 license="MIT",
45 license_files=["LICENSE.md"],
46 url="https://github.com/typeddjango/django-stubs",
47 author="Maksim Kurnikov",
48 author_email="[email protected]",
49 maintainer="Marti Raudsepp",
50 maintainer_email="[email protected]",
51 py_modules=[],
52 python_requires=">=3.8",
53 install_requires=dependencies,
54 extras_require=extras_require,
55 packages=["django-stubs", *find_packages(exclude=["scripts"])],
56 package_data={
57 "django-stubs": find_stub_files("django-stubs"),
58 "mypy_django_plugin": ["py.typed"],
59 },
60 classifiers=[
61 "License :: OSI Approved :: MIT License",
62 "Operating System :: OS Independent",
63 "Programming Language :: Python :: 3.8",
64 "Programming Language :: Python :: 3.9",
65 "Programming Language :: Python :: 3.10",
66 "Programming Language :: Python :: 3.11",
67 "Typing :: Typed",
68 "Framework :: Django",
69 "Framework :: Django :: 2.2",
70 "Framework :: Django :: 3.0",
71 "Framework :: Django :: 3.1",
72 "Framework :: Django :: 3.2",
73 "Framework :: Django :: 4.1",
74 "Framework :: Django :: 4.2",
75 ],
76 project_urls={
77 "Release notes": "https://github.com/typeddjango/django-stubs/releases",
78 },
79 )
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
old mode 100755
new mode 100644
--- a/setup.py
+++ b/setup.py
@@ -32,7 +32,7 @@
]
extras_require = {
- "compatible-mypy": ["mypy>=1.2.0,<1.3"],
+ "compatible-mypy": ["mypy>=1.3.0,<1.4"],
}
setup(
| {"golden_diff": "diff --git a/setup.py b/setup.py\nold mode 100755\nnew mode 100644\n--- a/setup.py\n+++ b/setup.py\n@@ -32,7 +32,7 @@\n ]\n \n extras_require = {\n- \"compatible-mypy\": [\"mypy>=1.2.0,<1.3\"],\n+ \"compatible-mypy\": [\"mypy>=1.3.0,<1.4\"],\n }\n \n setup(\n", "issue": "Bump mypy from 1.2.0 to 1.3.0\nBumps [mypy](https://github.com/python/mypy) from 1.2.0 to 1.3.0.\n<details>\n<summary>Commits</summary>\n<ul>\n<li><a href=\"https://github.com/python/mypy/commit/9df39ab1801369cb49467fa52080df9c42377384\"><code>9df39ab</code></a> set version to 1.3.0</li>\n<li><a href=\"https://github.com/python/mypy/commit/c1464a9ea61fe9c350b61c1989d98bbc33d74982\"><code>c1464a9</code></a> Revert "Fix disappearing errors when re-running dmypy check (<a href=\"https://redirect.github.com/python/mypy/issues/14835\">#14835</a>)" (<a href=\"https://redirect.github.com/python/mypy/issues/15179\">#15179</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/d887e9c0d090694b66b5fa20ac249b3d749a8518\"><code>d887e9c</code></a> Fix performance in union subtyping (<a href=\"https://redirect.github.com/python/mypy/issues/15104\">#15104</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/320b883ada83375f1e6929b4703b741d3c4813ce\"><code>320b883</code></a> Typeshed cherry-pick: stdlib/xml: fix return types for toxml/toprettyxml meth...</li>\n<li><a href=\"https://github.com/python/mypy/commit/6a68049e903dba7bbcff5a530b63731535f8d5f7\"><code>6a68049</code></a> Fix sys.platform when cross-compiling with emscripten (<a href=\"https://redirect.github.com/python/mypy/issues/14888\">#14888</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/3d9661c91d5dfaf3ae0d3ca5624867cdf449da77\"><code>3d9661c</code></a> Fix bounded self types in override incompatibility checking (<a href=\"https://redirect.github.com/python/mypy/issues/15045\">#15045</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/0799a8ab0dc8deed8d2e0ec34b1aab2fe39ebd96\"><code>0799a8a</code></a> [mypyc] Fix unions of bools and ints (<a href=\"https://redirect.github.com/python/mypy/issues/15066\">#15066</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/4276308be01ea498d946a79554b4a10b1cf13ccb\"><code>4276308</code></a> (\ud83c\udf81) update black to 23.3.0 (<a href=\"https://redirect.github.com/python/mypy/issues/15059\">#15059</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/14493660eadf35553a3cecb746704b58a401c68d\"><code>1449366</code></a> Allow objects matching <code>SupportsKeysAndGetItem</code> to be unpacked (<a href=\"https://redirect.github.com/python/mypy/issues/14990\">#14990</a>)</li>\n<li><a href=\"https://github.com/python/mypy/commit/69c774e6d6fa92aea8f32cd0e045e8a34a0f7215\"><code>69c774e</code></a> Sync typeshed (<a href=\"https://redirect.github.com/python/mypy/issues/15055\">#15055</a>)</li>\n<li>Additional commits viewable in <a href=\"https://github.com/python/mypy/compare/v1.2.0...v1.3.0\">compare view</a></li>\n</ul>\n</details>\n<br />\n\n\n[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. 
You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n<details>\n<summary>Dependabot commands and options</summary>\n<br />\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\n\n\n</details>\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nfrom typing import List\n\nfrom setuptools import find_packages, setup\n\n\ndef find_stub_files(name: str) -> List[str]:\n result = []\n for root, _dirs, files in os.walk(name):\n for file in files:\n if file.endswith(\".pyi\"):\n if os.path.sep in root:\n sub_root = root.split(os.path.sep, 1)[-1]\n file = os.path.join(sub_root, file)\n result.append(file)\n return result\n\n\nwith open(\"README.md\") as f:\n readme = f.read()\n\ndependencies = [\n \"mypy>=1.0.0\",\n \"django\",\n \"django-stubs-ext>=4.2.0\",\n \"tomli; python_version < '3.11'\",\n # Types:\n \"typing-extensions\",\n \"types-pytz\",\n \"types-PyYAML\",\n]\n\nextras_require = {\n \"compatible-mypy\": [\"mypy>=1.2.0,<1.3\"],\n}\n\nsetup(\n name=\"django-stubs\",\n version=\"4.2.0\",\n description=\"Mypy stubs for Django\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n license=\"MIT\",\n license_files=[\"LICENSE.md\"],\n url=\"https://github.com/typeddjango/django-stubs\",\n author=\"Maksim Kurnikov\",\n author_email=\"[email protected]\",\n maintainer=\"Marti Raudsepp\",\n maintainer_email=\"[email protected]\",\n py_modules=[],\n python_requires=\">=3.8\",\n install_requires=dependencies,\n extras_require=extras_require,\n packages=[\"django-stubs\", *find_packages(exclude=[\"scripts\"])],\n package_data={\n \"django-stubs\": find_stub_files(\"django-stubs\"),\n \"mypy_django_plugin\": [\"py.typed\"],\n },\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Typing :: Typed\",\n \"Framework :: Django\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Django :: 3.2\",\n \"Framework :: Django :: 4.1\",\n \"Framework :: Django :: 4.2\",\n ],\n project_urls={\n \"Release notes\": 
\"https://github.com/typeddjango/django-stubs/releases\",\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nfrom typing import List\n\nfrom setuptools import find_packages, setup\n\n\ndef find_stub_files(name: str) -> List[str]:\n result = []\n for root, _dirs, files in os.walk(name):\n for file in files:\n if file.endswith(\".pyi\"):\n if os.path.sep in root:\n sub_root = root.split(os.path.sep, 1)[-1]\n file = os.path.join(sub_root, file)\n result.append(file)\n return result\n\n\nwith open(\"README.md\") as f:\n readme = f.read()\n\ndependencies = [\n \"mypy>=1.0.0\",\n \"django\",\n \"django-stubs-ext>=4.2.0\",\n \"tomli; python_version < '3.11'\",\n # Types:\n \"typing-extensions\",\n \"types-pytz\",\n \"types-PyYAML\",\n]\n\nextras_require = {\n \"compatible-mypy\": [\"mypy>=1.3.0,<1.4\"],\n}\n\nsetup(\n name=\"django-stubs\",\n version=\"4.2.0\",\n description=\"Mypy stubs for Django\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n license=\"MIT\",\n license_files=[\"LICENSE.md\"],\n url=\"https://github.com/typeddjango/django-stubs\",\n author=\"Maksim Kurnikov\",\n author_email=\"[email protected]\",\n maintainer=\"Marti Raudsepp\",\n maintainer_email=\"[email protected]\",\n py_modules=[],\n python_requires=\">=3.8\",\n install_requires=dependencies,\n extras_require=extras_require,\n packages=[\"django-stubs\", *find_packages(exclude=[\"scripts\"])],\n package_data={\n \"django-stubs\": find_stub_files(\"django-stubs\"),\n \"mypy_django_plugin\": [\"py.typed\"],\n },\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Typing :: Typed\",\n \"Framework :: Django\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Django :: 3.2\",\n \"Framework :: Django :: 4.1\",\n \"Framework :: Django :: 4.2\",\n ],\n project_urls={\n \"Release notes\": \"https://github.com/typeddjango/django-stubs/releases\",\n },\n)\n", "path": "setup.py"}]} |
gh_patches_debug_18 | rasdani/github-patches | git_diff | freqtrade__freqtrade-5487 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hyperoptable parameter type: CategoricalParameter is not returning correctly.
## Describe your environment
* Operating system: MacOS 11.2.3 (20D91)
 * Python Version: using the version shipped with freqtradeorg/freqtrade:stable (Image ID 73a48178c043)
 * CCXT version: using the version shipped with freqtradeorg/freqtrade:stable (Image ID 73a48178c043)
* Freqtrade Version: freqtrade 2021.4
Note: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.
## Describe the problem:
Hi! It appears the Hyperoptable parameter type: `CategoricalParameter` is not returning correctly.
If I run the example as per the Freqtrade Docs [here](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters), namely setting a `CategoricalParameter` like so:
```
buy_rsi_enabled = CategoricalParameter([True, False]),
```
...then when running the Hyperopt tool there is an error in the `populate_buy_trend` as below:
```
if self.buy_adx_enabled.value:
AttributeError: 'tuple' object has no attribute 'value'
```
It would appear that the `CategoricalParameter` is not actually returning one of the categories (even a default) but instead returning a Python Tuple.
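For illustration, the trailing comma in the snippet quoted above is enough on its own to produce this behaviour in plain Python, since it wraps the right-hand side in a one-element tuple. The class below is a minimal stand-in, not freqtrade's real implementation:

```python
class CategoricalParameter:
    """Minimal stand-in just to demonstrate the tuple pitfall."""

    def __init__(self, categories, default=None):
        self.value = default if default is not None else categories[0]


# With the trailing comma the attribute becomes a 1-tuple, so `.value` fails:
buy_rsi_enabled = CategoricalParameter([True, False]),
print(type(buy_rsi_enabled))  # <class 'tuple'>

# Without the comma, `.value` resolves as expected:
buy_rsi_enabled = CategoricalParameter([True, False])
print(buy_rsi_enabled.value)  # True
```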
### Steps to reproduce:
1. Follow the example in the [Docs](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters)
### Observed Results:
* What happened? There was an AttributeError: 'tuple' object has no attribute 'value'.
* What did you expect to happen? The 'value' property to exist and be set to either True or False
### Relevant code exceptions or logs
Note: Please copy/paste text of the messages, no screenshots of logs please.
```
2021-05-02 09:48:02,421 - freqtrade - ERROR - Fatal exception!
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py", line 431, in _process_worker
r = call_item()
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py", line 285, in __call__
return self.fn(*self.args, **self.kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 595, in __call__
return self.func(*args, **kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py", line 262, in __call__
return [func(*args, **kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py", line 262, in <listcomp>
return [func(*args, **kwargs)
File "/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/cloudpickle_wrapper.py", line 38, in __call__
return self._obj(*args, **kwargs)
File "/freqtrade/freqtrade/optimize/hyperopt.py", line 288, in generate_optimizer
backtesting_results = self.backtesting.backtest(
File "/freqtrade/freqtrade/optimize/backtesting.py", line 352, in backtest
data: Dict = self._get_ohlcv_as_lists(processed)
File "/freqtrade/freqtrade/optimize/backtesting.py", line 196, in _get_ohlcv_as_lists
self.strategy.advise_buy(pair_data, {'pair': pair}), {'pair': pair})[headers].copy()
File "/freqtrade/freqtrade/optimize/hyperopt_auto.py", line 31, in populate_buy_trend
return self.strategy.populate_buy_trend(dataframe, metadata)
File "/freqtrade/user_data/strategies/Strategy004.py", line 149, in populate_buy_trend
if self.buy_adx_enabled.value:
AttributeError: 'tuple' object has no attribute 'value'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/__init__.py`
Content:
```
1 """ Freqtrade bot """
2 __version__ = 'develop'
3
4 if __version__ == 'develop':
5
6 try:
7 import subprocess
8
9 __version__ = 'develop-' + subprocess.check_output(
10 ['git', 'log', '--format="%h"', '-n 1'],
11 stderr=subprocess.DEVNULL).decode("utf-8").rstrip().strip('"')
12
13 # from datetime import datetime
14 # last_release = subprocess.check_output(
15 # ['git', 'tag']
16 # ).decode('utf-8').split()[-1].split(".")
17 # # Releases are in the format "2020.1" - we increment the latest version for dev.
18 # prefix = f"{last_release[0]}.{int(last_release[1]) + 1}"
19 # dev_version = int(datetime.now().timestamp() // 1000)
20 # __version__ = f"{prefix}.dev{dev_version}"
21
22 # subprocess.check_output(
23 # ['git', 'log', '--format="%h"', '-n 1'],
24 # stderr=subprocess.DEVNULL).decode("utf-8").rstrip().strip('"')
25 except Exception:
26 # git not available, ignore
27 try:
28 # Try Fallback to freqtrade_commit file (created by CI while building docker image)
29 from pathlib import Path
30 versionfile = Path('./freqtrade_commit')
31 if versionfile.is_file():
32 __version__ = f"docker-{versionfile.read_text()[:8]}"
33 except Exception:
34 pass
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/freqtrade/__init__.py b/freqtrade/__init__.py
--- a/freqtrade/__init__.py
+++ b/freqtrade/__init__.py
@@ -1,5 +1,5 @@
""" Freqtrade bot """
-__version__ = 'develop'
+__version__ = '2021.8'
if __version__ == 'develop':
| {"golden_diff": "diff --git a/freqtrade/__init__.py b/freqtrade/__init__.py\n--- a/freqtrade/__init__.py\n+++ b/freqtrade/__init__.py\n@@ -1,5 +1,5 @@\n \"\"\" Freqtrade bot \"\"\"\n-__version__ = 'develop'\n+__version__ = '2021.8'\n \n if __version__ == 'develop':\n", "issue": "Hyperoptable parameter type: CategoricalParameter is not returning correctly.\n## Describe your environment\r\n\r\n * Operating system: MacOS 11.2.3 (20D91)\r\n * Python Version: using the version shiped freqtradeorg/freqtrade:stable (Image ID 73a48178c043)\r\n * CCXT version: using the version shiped freqtradeorg/freqtrade:stable (Image ID 73a48178c043)\r\n * Freqtrade Version: freqtrade 2021.4\r\n \r\nNote: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.\r\n\r\n## Describe the problem:\r\n\r\nHi! It appears the Hyperoptable parameter type: `CategoricalParameter` is not returning correctly.\r\n\r\nIf I run the example as per the Freqtrade Docs [here](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters), namely setting a `CategoricalParameter` like so:\r\n\r\n```\r\nbuy_rsi_enabled = CategoricalParameter([True, False]),\r\n```\r\n\r\n...then when running the Hyperopt tool there is an error in the `populate_buy_trend` as below:\r\n\r\n```\r\nif self.buy_adx_enabled.value:\r\nAttributeError: 'tuple' object has no attribute 'value'\r\n```\r\n\r\nIt would appear that the `CategoricalParameter` is not actually returning one of the categories (even a default) but instead returning a Python Tuple.\r\n\r\n### Steps to reproduce:\r\n\r\n 1. Follow the example in the [Docs](https://www.freqtrade.io/en/stable/hyperopt/#hyperoptable-parameters)\r\n \r\n### Observed Results:\r\n\r\n * What happened? There was an AttributeError: 'tuple' object has no attribute 'value'. \r\n * What did you expect to happen? 
The 'value' property to exist and be set to either True or False\r\n\r\n### Relevant code exceptions or logs\r\n\r\nNote: Please copy/paste text of the messages, no screenshots of logs please.\r\n\r\n ```\r\n2021-05-02 09:48:02,421 - freqtrade - ERROR - Fatal exception!\r\njoblib.externals.loky.process_executor._RemoteTraceback:\r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py\", line 431, in _process_worker\r\n r = call_item()\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/process_executor.py\", line 285, in __call__\r\n return self.fn(*self.args, **self.kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/_parallel_backends.py\", line 595, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py\", line 262, in __call__\r\n return [func(*args, **kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/parallel.py\", line 262, in <listcomp>\r\n return [func(*args, **kwargs)\r\n File \"/home/ftuser/.local/lib/python3.9/site-packages/joblib/externals/loky/cloudpickle_wrapper.py\", line 38, in __call__\r\n return self._obj(*args, **kwargs)\r\n File \"/freqtrade/freqtrade/optimize/hyperopt.py\", line 288, in generate_optimizer\r\n backtesting_results = self.backtesting.backtest(\r\n File \"/freqtrade/freqtrade/optimize/backtesting.py\", line 352, in backtest\r\n data: Dict = self._get_ohlcv_as_lists(processed)\r\n File \"/freqtrade/freqtrade/optimize/backtesting.py\", line 196, in _get_ohlcv_as_lists\r\n self.strategy.advise_buy(pair_data, {'pair': pair}), {'pair': pair})[headers].copy()\r\n File \"/freqtrade/freqtrade/optimize/hyperopt_auto.py\", line 31, in populate_buy_trend\r\n return self.strategy.populate_buy_trend(dataframe, metadata)\r\n File \"/freqtrade/user_data/strategies/Strategy004.py\", line 149, in populate_buy_trend\r\n if self.buy_adx_enabled.value:\r\nAttributeError: 'tuple' object has no attribute 'value'\r\n ```\r\n\n", "before_files": [{"content": "\"\"\" Freqtrade bot \"\"\"\n__version__ = 'develop'\n\nif __version__ == 'develop':\n\n try:\n import subprocess\n\n __version__ = 'develop-' + subprocess.check_output(\n ['git', 'log', '--format=\"%h\"', '-n 1'],\n stderr=subprocess.DEVNULL).decode(\"utf-8\").rstrip().strip('\"')\n\n # from datetime import datetime\n # last_release = subprocess.check_output(\n # ['git', 'tag']\n # ).decode('utf-8').split()[-1].split(\".\")\n # # Releases are in the format \"2020.1\" - we increment the latest version for dev.\n # prefix = f\"{last_release[0]}.{int(last_release[1]) + 1}\"\n # dev_version = int(datetime.now().timestamp() // 1000)\n # __version__ = f\"{prefix}.dev{dev_version}\"\n\n # subprocess.check_output(\n # ['git', 'log', '--format=\"%h\"', '-n 1'],\n # stderr=subprocess.DEVNULL).decode(\"utf-8\").rstrip().strip('\"')\n except Exception:\n # git not available, ignore\n try:\n # Try Fallback to freqtrade_commit file (created by CI while building docker image)\n from pathlib import Path\n versionfile = Path('./freqtrade_commit')\n if versionfile.is_file():\n __version__ = f\"docker-{versionfile.read_text()[:8]}\"\n except Exception:\n pass\n", "path": "freqtrade/__init__.py"}], "after_files": [{"content": "\"\"\" Freqtrade bot \"\"\"\n__version__ = '2021.8'\n\nif __version__ == 'develop':\n\n try:\n import subprocess\n\n __version__ = 'develop-' + subprocess.check_output(\n ['git', 
'log', '--format=\"%h\"', '-n 1'],\n stderr=subprocess.DEVNULL).decode(\"utf-8\").rstrip().strip('\"')\n\n # from datetime import datetime\n # last_release = subprocess.check_output(\n # ['git', 'tag']\n # ).decode('utf-8').split()[-1].split(\".\")\n # # Releases are in the format \"2020.1\" - we increment the latest version for dev.\n # prefix = f\"{last_release[0]}.{int(last_release[1]) + 1}\"\n # dev_version = int(datetime.now().timestamp() // 1000)\n # __version__ = f\"{prefix}.dev{dev_version}\"\n\n # subprocess.check_output(\n # ['git', 'log', '--format=\"%h\"', '-n 1'],\n # stderr=subprocess.DEVNULL).decode(\"utf-8\").rstrip().strip('\"')\n except Exception:\n # git not available, ignore\n try:\n # Try Fallback to freqtrade_commit file (created by CI while building docker image)\n from pathlib import Path\n versionfile = Path('./freqtrade_commit')\n if versionfile.is_file():\n __version__ = f\"docker-{versionfile.read_text()[:8]}\"\n except Exception:\n pass\n", "path": "freqtrade/__init__.py"}]} |
gh_patches_debug_19 | rasdani/github-patches | git_diff | dynaconf__dynaconf-1010 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] TypeError for older versions of HVAC in read_secret_version method
**Describe the bug**
A combination of newer versions of Dynaconf with older versions of HVAC results in an incompatible mix of expected vs. available arguments. Specifically, you can get the following traceback.
```python
109 try:
110 if obj.VAULT_KV_VERSION_FOR_DYNACONF == 2:
--> 111 data = client.secrets.kv.v2.read_secret_version(
112 path,
113 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,
114 raise_on_deleted_version=True, # keep default behavior
115 )
116 else:
117 data = client.secrets.kv.read_secret(
118 "data/" + path,
119 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,
120 )
TypeError: KvV2.read_secret_version() got an unexpected keyword argument 'raise_on_deleted_version'
```
The PR introducing this feature was included in HVAC 1.1.0: https://github.com/hvac/hvac/pull/907
**To Reproduce**
Steps to reproduce the behavior:
1. Have a version of HVAC older than 1.1.0
2. Trigger a vault version read
--- END ISSUE ---
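For orientation before the repository files: the sketch below shows one way a caller could tolerate both hvac generations. It is a minimal illustration built only from the call visible in the traceback above (`raise_on_deleted_version` exists only in hvac >= 1.1.0), not code taken from dynaconf itself; the fix actually chosen for this record is the dependency pin in the diff at the end.
```python
import hvac


def read_kv2_secret(client: hvac.Client, path: str, mount_point: str) -> dict:
    """Read a KV v2 secret while tolerating hvac releases older than 1.1.0."""
    try:
        # hvac >= 1.1.0 understands raise_on_deleted_version.
        return client.secrets.kv.v2.read_secret_version(
            path, mount_point=mount_point, raise_on_deleted_version=True,
        )
    except TypeError:
        # Older hvac rejects the keyword (as in the traceback above), so call
        # without it and accept that release's default behaviour.  A stricter
        # variant would compare hvac.__version__ against "1.1.0" instead.
        return client.secrets.kv.v2.read_secret_version(
            path, mount_point=mount_point,
        )
```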
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from __future__ import annotations
2
3 import os
4
5 from setuptools import find_packages
6 from setuptools import setup
7
8
9 def read(*names, **kwargs):
10 """Read a file."""
11 content = ""
12 with open(
13 os.path.join(os.path.dirname(__file__), *names),
14 encoding=kwargs.get("encoding", "utf8"),
15 ) as open_file:
16 content = open_file.read().strip()
17 return content
18
19
20 test_requirements = [
21 "pytest",
22 "pytest-cov",
23 "pytest-xdist",
24 "pytest-mock",
25 "flake8",
26 "pep8-naming",
27 "flake8-debugger",
28 "flake8-print",
29 "flake8-todo",
30 "radon",
31 "flask>=0.12",
32 "django",
33 "python-dotenv",
34 "toml",
35 "redis",
36 "hvac",
37 "configobj",
38 ]
39
40
41 setup(
42 name="dynaconf",
43 version=read("dynaconf", "VERSION"),
44 url="https://github.com/dynaconf/dynaconf",
45 license="MIT",
46 license_files=["LICENSE", "vendor_licenses/*"],
47 author="Bruno Rocha",
48 author_email="[email protected]",
49 description="The dynamic configurator for your Python Project",
50 long_description=read("README.md"),
51 long_description_content_type="text/markdown",
52 packages=find_packages(
53 exclude=[
54 "tests",
55 "tests.*",
56 "tests_functional",
57 "tests_functional.*",
58 "docs",
59 "legacy_docs",
60 "legacy_docs.*",
61 "docs.*",
62 "build",
63 "build.*",
64 "dynaconf.vendor_src",
65 "dynaconf/vendor_src",
66 "dynaconf.vendor_src.*",
67 "dynaconf/vendor_src/*",
68 ]
69 ),
70 include_package_data=True,
71 zip_safe=False,
72 platforms="any",
73 tests_require=test_requirements,
74 extras_require={
75 "redis": ["redis"],
76 "vault": ["hvac"],
77 "yaml": ["ruamel.yaml"],
78 "toml": ["toml"],
79 "ini": ["configobj"],
80 "configobj": ["configobj"],
81 "all": ["redis", "ruamel.yaml", "configobj", "hvac"],
82 "test": test_requirements,
83 },
84 python_requires=">=3.8",
85 entry_points={"console_scripts": ["dynaconf=dynaconf.cli:main"]},
86 setup_requires=["setuptools>=38.6.0"],
87 classifiers=[
88 "Development Status :: 5 - Production/Stable",
89 "Framework :: Django",
90 "Framework :: Flask",
91 "Intended Audience :: Developers",
92 "License :: OSI Approved :: MIT License",
93 "Natural Language :: English",
94 "Operating System :: OS Independent",
95 "Programming Language :: Python",
96 "Programming Language :: Python :: 3",
97 "Programming Language :: Python :: 3 :: Only",
98 "Programming Language :: Python :: 3.8",
99 "Programming Language :: Python :: 3.9",
100 "Programming Language :: Python :: 3.10",
101 "Programming Language :: Python :: 3.11",
102 "Topic :: Utilities",
103 "Topic :: Software Development :: Libraries",
104 "Topic :: Software Development :: Libraries :: Python Modules",
105 ],
106 )
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -33,7 +33,7 @@
"python-dotenv",
"toml",
"redis",
- "hvac",
+ "hvac>=1.1.0",
"configobj",
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -33,7 +33,7 @@\n \"python-dotenv\",\n \"toml\",\n \"redis\",\n- \"hvac\",\n+ \"hvac>=1.1.0\",\n \"configobj\",\n ]\n", "issue": "[bug] TypeError for older versions of HVAC in read_secret_version method\n**Describe the bug**\r\nA combination of newer versions of Dynaconf with older versions of HVAC result in an incompatible mix of expected vs available arguments. Specifically you can get the following traceback.\r\n\r\n```python\r\n 109 try:\r\n 110 if obj.VAULT_KV_VERSION_FOR_DYNACONF == 2:\r\n--> 111 data = client.secrets.kv.v2.read_secret_version(\r\n 112 path,\r\n 113 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,\r\n 114 raise_on_deleted_version=True, # keep default behavior\r\n 115 )\r\n 116 else:\r\n 117 data = client.secrets.kv.read_secret(\r\n 118 \"data/\" + path,\r\n 119 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,\r\n 120 )\r\n\r\nTypeError: KvV2.read_secret_version() got an unexpected keyword argument 'raise_on_deleted_version'\r\n```\r\n\r\nThe PR introducing this feature was included in HVAC 1.1.0: https://github.com/hvac/hvac/pull/907 \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. Have a version of HVAC older than 1.1.0\r\n2. Trigger a vault version read\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\ndef read(*names, **kwargs):\n \"\"\"Read a file.\"\"\"\n content = \"\"\n with open(\n os.path.join(os.path.dirname(__file__), *names),\n encoding=kwargs.get(\"encoding\", \"utf8\"),\n ) as open_file:\n content = open_file.read().strip()\n return content\n\n\ntest_requirements = [\n \"pytest\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"pytest-mock\",\n \"flake8\",\n \"pep8-naming\",\n \"flake8-debugger\",\n \"flake8-print\",\n \"flake8-todo\",\n \"radon\",\n \"flask>=0.12\",\n \"django\",\n \"python-dotenv\",\n \"toml\",\n \"redis\",\n \"hvac\",\n \"configobj\",\n]\n\n\nsetup(\n name=\"dynaconf\",\n version=read(\"dynaconf\", \"VERSION\"),\n url=\"https://github.com/dynaconf/dynaconf\",\n license=\"MIT\",\n license_files=[\"LICENSE\", \"vendor_licenses/*\"],\n author=\"Bruno Rocha\",\n author_email=\"[email protected]\",\n description=\"The dynamic configurator for your Python Project\",\n long_description=read(\"README.md\"),\n long_description_content_type=\"text/markdown\",\n packages=find_packages(\n exclude=[\n \"tests\",\n \"tests.*\",\n \"tests_functional\",\n \"tests_functional.*\",\n \"docs\",\n \"legacy_docs\",\n \"legacy_docs.*\",\n \"docs.*\",\n \"build\",\n \"build.*\",\n \"dynaconf.vendor_src\",\n \"dynaconf/vendor_src\",\n \"dynaconf.vendor_src.*\",\n \"dynaconf/vendor_src/*\",\n ]\n ),\n include_package_data=True,\n zip_safe=False,\n platforms=\"any\",\n tests_require=test_requirements,\n extras_require={\n \"redis\": [\"redis\"],\n \"vault\": [\"hvac\"],\n \"yaml\": [\"ruamel.yaml\"],\n \"toml\": [\"toml\"],\n \"ini\": [\"configobj\"],\n \"configobj\": [\"configobj\"],\n \"all\": [\"redis\", \"ruamel.yaml\", \"configobj\", \"hvac\"],\n \"test\": test_requirements,\n },\n python_requires=\">=3.8\",\n entry_points={\"console_scripts\": [\"dynaconf=dynaconf.cli:main\"]},\n setup_requires=[\"setuptools>=38.6.0\"],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Framework :: Django\",\n \"Framework :: Flask\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural 
Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Topic :: Utilities\",\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\ndef read(*names, **kwargs):\n \"\"\"Read a file.\"\"\"\n content = \"\"\n with open(\n os.path.join(os.path.dirname(__file__), *names),\n encoding=kwargs.get(\"encoding\", \"utf8\"),\n ) as open_file:\n content = open_file.read().strip()\n return content\n\n\ntest_requirements = [\n \"pytest\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"pytest-mock\",\n \"flake8\",\n \"pep8-naming\",\n \"flake8-debugger\",\n \"flake8-print\",\n \"flake8-todo\",\n \"radon\",\n \"flask>=0.12\",\n \"django\",\n \"python-dotenv\",\n \"toml\",\n \"redis\",\n \"hvac>=1.1.0\",\n \"configobj\",\n]\n\n\nsetup(\n name=\"dynaconf\",\n version=read(\"dynaconf\", \"VERSION\"),\n url=\"https://github.com/dynaconf/dynaconf\",\n license=\"MIT\",\n license_files=[\"LICENSE\", \"vendor_licenses/*\"],\n author=\"Bruno Rocha\",\n author_email=\"[email protected]\",\n description=\"The dynamic configurator for your Python Project\",\n long_description=read(\"README.md\"),\n long_description_content_type=\"text/markdown\",\n packages=find_packages(\n exclude=[\n \"tests\",\n \"tests.*\",\n \"tests_functional\",\n \"tests_functional.*\",\n \"docs\",\n \"legacy_docs\",\n \"legacy_docs.*\",\n \"docs.*\",\n \"build\",\n \"build.*\",\n \"dynaconf.vendor_src\",\n \"dynaconf/vendor_src\",\n \"dynaconf.vendor_src.*\",\n \"dynaconf/vendor_src/*\",\n ]\n ),\n include_package_data=True,\n zip_safe=False,\n platforms=\"any\",\n tests_require=test_requirements,\n extras_require={\n \"redis\": [\"redis\"],\n \"vault\": [\"hvac\"],\n \"yaml\": [\"ruamel.yaml\"],\n \"toml\": [\"toml\"],\n \"ini\": [\"configobj\"],\n \"configobj\": [\"configobj\"],\n \"all\": [\"redis\", \"ruamel.yaml\", \"configobj\", \"hvac\"],\n \"test\": test_requirements,\n },\n python_requires=\">=3.8\",\n entry_points={\"console_scripts\": [\"dynaconf=dynaconf.cli:main\"]},\n setup_requires=[\"setuptools>=38.6.0\"],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Framework :: Django\",\n \"Framework :: Flask\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Topic :: Utilities\",\n \"Topic :: Software Development :: Libraries\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n)\n", "path": "setup.py"}]} |
gh_patches_debug_20 | rasdani/github-patches | git_diff | kedro-org__kedro-2345 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release Kedro `0.18.5`
### Description
Release Kedro `0.18.5`, which will contain lots of new features for configuration. The release depends on the following tickets being finished:
- [x] BLOCKER: https://github.com/kedro-org/kedro/issues/2255
- [x] #1909 (Docs)
- [x] #2148
- [x] #2170
- [x] #2225
Initially we wanted to include the below issues as well, but the implementation turned out to be trickier than expected, so we'll take more time to investigate a solution and won't let it block the release.
- [x] #2146
- [x] #2212
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/__init__.py`
Content:
```
1 """Kedro is a framework that makes it easy to build robust and scalable
2 data pipelines by providing uniform project templates, data abstraction,
3 configuration and pipeline assembly.
4 """
5
6 __version__ = "0.18.4"
7
8
9 import logging
10
11 logging.getLogger(__name__).addHandler(logging.NullHandler())
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kedro/__init__.py b/kedro/__init__.py
--- a/kedro/__init__.py
+++ b/kedro/__init__.py
@@ -3,7 +3,7 @@
configuration and pipeline assembly.
"""
-__version__ = "0.18.4"
+__version__ = "0.18.5"
import logging
| {"golden_diff": "diff --git a/kedro/__init__.py b/kedro/__init__.py\n--- a/kedro/__init__.py\n+++ b/kedro/__init__.py\n@@ -3,7 +3,7 @@\n configuration and pipeline assembly.\n \"\"\"\n \n-__version__ = \"0.18.4\"\n+__version__ = \"0.18.5\"\n \n \n import logging\n", "issue": "Release Kedro `0.18.5`\n### Description\r\n\r\nRelease Kedro `0.18.5` which will contain lots of new features for configuration. The release depends on the following tickets to be finished:\r\n\r\n- [x] BLOCKER: https://github.com/kedro-org/kedro/issues/2255\r\n- [x] #1909 (Docs)\r\n- [x] #2148 \r\n- [x] #2170\r\n- [x] #2225 \r\n\r\nInitially we wanted to include the below issues as well, but the implementation turned out to be trickier than expected, so we'll take more time to investigate a solution and won't let it block the release.\r\n- [x] #2146 \r\n- [x] #2212 \r\n\n", "before_files": [{"content": "\"\"\"Kedro is a framework that makes it easy to build robust and scalable\ndata pipelines by providing uniform project templates, data abstraction,\nconfiguration and pipeline assembly.\n\"\"\"\n\n__version__ = \"0.18.4\"\n\n\nimport logging\n\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n", "path": "kedro/__init__.py"}], "after_files": [{"content": "\"\"\"Kedro is a framework that makes it easy to build robust and scalable\ndata pipelines by providing uniform project templates, data abstraction,\nconfiguration and pipeline assembly.\n\"\"\"\n\n__version__ = \"0.18.5\"\n\n\nimport logging\n\nlogging.getLogger(__name__).addHandler(logging.NullHandler())\n", "path": "kedro/__init__.py"}]} |
gh_patches_debug_21 | rasdani/github-patches | git_diff | wright-group__WrightTools-168 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Contours in mpl_2D seem to have a shape problem.
I'm running the following script
```
import WrightTools as wt
p = '000.data'
d = wt.data.from_PyCMDS(p)
d.signal_ratio.clip(zmin=-4, zmax=4)
d.signal_ratio.znull = 0
d.signal_ratio.signed = True
d.signal_ratio._update()
art = wt.artists.mpl_2D(d)
art.plot(channel='signal_ratio', contours=9)
```
using the dataset found in `'Google Drive/MX2/CMDS/2017-08-07/p-CMDS-p/0ps 51 [w2, w1]'`. I get the following error.
```
Traceback (most recent call last):
File "<ipython-input-98-92c093c4abb1>", line 1, in <module>
runfile('/Users/darienmorrow/Google Drive/MX2/CMDS/2017-08-07/p-CMDS-p/0ps 51 [w2, w1]/workup.py', wdir='/Users/darienmorrow/Google Drive/MX2/CMDS/2017-08-07/p-CMDS-p/0ps 51 [w2, w1]')
File "/Users/darienmorrow/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "/Users/darienmorrow/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/Users/darienmorrow/Google Drive/MX2/CMDS/2017-08-07/p-CMDS-p/0ps 51 [w2, w1]/workup.py", line 15, in <module>
art.plot(channel='signal_ratio', contours=9, contours_local=False)
File "/Users/darienmorrow/source/WrightTools/WrightTools/artists.py", line 1858, in plot
subplot_main.contour(X, Y, zi, contours_levels, colors='k')
File "/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/__init__.py", line 1892, in inner
return func(ax, *args, **kwargs)
File "/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/axes/_axes.py", line 5819, in contour
contours = mcontour.QuadContourSet(self, *args, **kwargs)
File "/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/contour.py", line 864, in __init__
self._process_args(*args, **kwargs)
File "/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/contour.py", line 1429, in _process_args
x, y, z = self._contour_args(args, kwargs)
File "/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/contour.py", line 1508, in _contour_args
x, y, z = self._check_xyz(args[:3], kwargs)
File "/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/contour.py", line 1566, in _check_xyz
"{0} instead of {1}.".format(x.shape, z.shape))
TypeError: Shape of x does not match that of z: found (52, 52) instead of (51, 51).
```
--- END ISSUE ---
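The (52, 52) vs (51, 51) mismatch is the classic symptom of coordinate arrays that hold pcolormesh-style cell *edges* while the channel holds one value per cell; `contour` wants coordinates with the same shape as the data. A minimal sketch of the workaround is shown below, assuming that reading of the traceback (the `X`, `Y`, `zi`, `subplot_main` names are taken from it, not from the file that follows):
```python
def edges_to_centers(X, Y):
    # Average the four surrounding edge points of each cell to get its
    # centre, turning (n+1, n+1) edge grids into (n, n) centre grids.
    Xc = 0.25 * (X[:-1, :-1] + X[1:, :-1] + X[:-1, 1:] + X[1:, 1:])
    Yc = 0.25 * (Y[:-1, :-1] + Y[1:, :-1] + Y[:-1, 1:] + Y[1:, 1:])
    return Xc, Yc


# Inside the artist, the contour call would then match zi's shape:
# Xc, Yc = edges_to_centers(X, Y)
# subplot_main.contour(Xc, Yc, zi, contours_levels, colors='k')
```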
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/rRaman.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Resonance Raman
4 ==========
5
6 A Resonance Raman plot.
7 """
8
9 import WrightTools as wt
10 from WrightTools import datasets
11
12 p = datasets.BrunoldrRaman.LDS821_514nm_80mW
13 data = wt.data.from_BrunoldrRaman(p)
14 trash_pixels = 56
15 data = data.split(0, 843.0)[1]
16
17 data.convert('wn', verbose=False)
18
19 artist = wt.artists.mpl_1D(data)
20 d = artist.plot()
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/rRaman.py b/examples/rRaman.py
--- a/examples/rRaman.py
+++ b/examples/rRaman.py
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
"""
Resonance Raman
-==========
+===============
A Resonance Raman plot.
"""
| {"golden_diff": "diff --git a/examples/rRaman.py b/examples/rRaman.py\n--- a/examples/rRaman.py\n+++ b/examples/rRaman.py\n@@ -1,7 +1,7 @@\n # -*- coding: utf-8 -*-\n \"\"\"\n Resonance Raman\n-==========\n+===============\n \n A Resonance Raman plot.\n \"\"\"\n", "issue": "Contours in mpl_2D seem to have shape problem. \nI'm running the following script\r\n```\r\nimport WrightTools as wt\r\n\r\np = '000.data'\r\nd = wt.data.from_PyCMDS(p)\r\nd.signal_ratio.clip(zmin=-4, zmax=4)\r\nd.signal_ratio.znull = 0\r\nd.signal_ratio.signed = True\r\nd.signal_ratio._update()\r\n\r\nart = wt.artists.mpl_2D(d)\r\nart.plot(channel='signal_ratio', contours=9)\r\n```\r\nusing the dataset found in `'Google Drive/MX2/CMDS/2017-08-07/p-CMDS-p/0ps 51 [w2, w1]'`. I get the following error.\r\n```\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-98-92c093c4abb1>\", line 1, in <module>\r\n runfile('/Users/darienmorrow/Google Drive/MX2/CMDS/2017-08-07/p-CMDS-p/0ps 51 [w2, w1]/workup.py', wdir='/Users/darienmorrow/Google Drive/MX2/CMDS/2017-08-07/p-CMDS-p/0ps 51 [w2, w1]')\r\n\r\n File \"/Users/darienmorrow/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py\", line 866, in runfile\r\n execfile(filename, namespace)\r\n\r\n File \"/Users/darienmorrow/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py\", line 102, in execfile\r\n exec(compile(f.read(), filename, 'exec'), namespace)\r\n\r\n File \"/Users/darienmorrow/Google Drive/MX2/CMDS/2017-08-07/p-CMDS-p/0ps 51 [w2, w1]/workup.py\", line 15, in <module>\r\n art.plot(channel='signal_ratio', contours=9, contours_local=False)\r\n\r\n File \"/Users/darienmorrow/source/WrightTools/WrightTools/artists.py\", line 1858, in plot\r\n subplot_main.contour(X, Y, zi, contours_levels, colors='k')\r\n\r\n File \"/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/__init__.py\", line 1892, in inner\r\n return func(ax, *args, **kwargs)\r\n\r\n File \"/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/axes/_axes.py\", line 5819, in contour\r\n contours = mcontour.QuadContourSet(self, *args, **kwargs)\r\n\r\n File \"/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/contour.py\", line 864, in __init__\r\n self._process_args(*args, **kwargs)\r\n\r\n File \"/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/contour.py\", line 1429, in _process_args\r\n x, y, z = self._contour_args(args, kwargs)\r\n\r\n File \"/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/contour.py\", line 1508, in _contour_args\r\n x, y, z = self._check_xyz(args[:3], kwargs)\r\n\r\n File \"/Users/darienmorrow/anaconda/lib/python3.6/site-packages/matplotlib/contour.py\", line 1566, in _check_xyz\r\n \"{0} instead of {1}.\".format(x.shape, z.shape))\r\n\r\nTypeError: Shape of x does not match that of z: found (52, 52) instead of (51, 51).\r\n```\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nResonance Raman\n==========\n\nA Resonance Raman plot.\n\"\"\"\n\nimport WrightTools as wt\nfrom WrightTools import datasets\n\np = datasets.BrunoldrRaman.LDS821_514nm_80mW\ndata = wt.data.from_BrunoldrRaman(p)\ntrash_pixels = 56\ndata = data.split(0, 843.0)[1]\n\ndata.convert('wn', verbose=False)\n\nartist = wt.artists.mpl_1D(data)\nd = artist.plot()\n", "path": "examples/rRaman.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nResonance Raman\n===============\n\nA Resonance Raman plot.\n\"\"\"\n\nimport WrightTools as wt\nfrom 
WrightTools import datasets\n\np = datasets.BrunoldrRaman.LDS821_514nm_80mW\ndata = wt.data.from_BrunoldrRaman(p)\ntrash_pixels = 56\ndata = data.split(0, 843.0)[1]\n\ndata.convert('wn', verbose=False)\n\nartist = wt.artists.mpl_1D(data)\nd = artist.plot()\n", "path": "examples/rRaman.py"}]} |
gh_patches_debug_22 | rasdani/github-patches | git_diff | SciTools__cartopy-2079 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OSGB tests fail without datum transformation grids available
### Description
Currently, tests use conda-forge, which is on Proj 8.0.1, but Fedora Rawhide is on Proj 8.1.1. With that version of Proj, a few tests fail now.
#### Traceback
```
___________________________ TestCRS.test_osgb[True] ____________________________
self = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd2cd1d50>, approx = True
@pytest.mark.parametrize('approx', [True, False])
def test_osgb(self, approx):
> self._check_osgb(ccrs.OSGB(approx=approx))
../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd2cd1d50>
osgb = <Projected CRS: +proj=tmerc +datum=OSGB36 +ellps=airy +lon_0=-2 +l ...>
Name: unknown
Axis Info [cartesian]:
- E[east]...ion:
- name: unknown
- method: Transverse Mercator
Datum: OSGB 1936
- Ellipsoid: Airy 1830
- Prime Meridian: Greenwich
def _check_osgb(self, osgb):
ll = ccrs.Geodetic()
# results obtained by streetmap.co.uk.
lat, lon = np.array([50.462023, -3.478831], dtype=np.double)
east, north = np.array([295132.1, 63512.6], dtype=np.double)
# note the handling of precision here...
> assert_arr_almost_eq(np.array(osgb.transform_point(lon, lat, ll)),
np.array([east, north]),
1)
E AssertionError:
E Arrays are not almost equal to 1 decimals
E
E Mismatched elements: 2 / 2 (100%)
E Max absolute difference: 1.62307515
E Max relative difference: 2.55551679e-05
E x: array([295131., 63511.])
E y: array([295132.1, 63512.6])
../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:56: AssertionError
___________________________ TestCRS.test_osgb[False] ___________________________
self = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd14d8550>, approx = False
@pytest.mark.parametrize('approx', [True, False])
def test_osgb(self, approx):
> self._check_osgb(ccrs.OSGB(approx=approx))
../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd14d8550>
osgb = <Projected CRS: +proj=tmerc +datum=OSGB36 +ellps=airy +lon_0=-2 +l ...>
Name: unknown
Axis Info [cartesian]:
- E[east]...ion:
- name: unknown
- method: Transverse Mercator
Datum: OSGB 1936
- Ellipsoid: Airy 1830
- Prime Meridian: Greenwich
def _check_osgb(self, osgb):
ll = ccrs.Geodetic()
# results obtained by streetmap.co.uk.
lat, lon = np.array([50.462023, -3.478831], dtype=np.double)
east, north = np.array([295132.1, 63512.6], dtype=np.double)
# note the handling of precision here...
> assert_arr_almost_eq(np.array(osgb.transform_point(lon, lat, ll)),
np.array([east, north]),
1)
E AssertionError:
E Arrays are not almost equal to 1 decimals
E
E Mismatched elements: 2 / 2 (100%)
E Max absolute difference: 1.62307537
E Max relative difference: 2.55551713e-05
E x: array([295131., 63511.])
E y: array([295132.1, 63512.6])
../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:56: AssertionError
______________________________ TestCRS.test_epsg _______________________________
self = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd2c35ae0>
def test_epsg(self):
uk = ccrs.epsg(27700)
assert uk.epsg_code == 27700
assert_almost_equal(uk.x_limits, (-104009.357, 688806.007), decimal=3)
assert_almost_equal(uk.y_limits, (-8908.37, 1256558.45), decimal=2)
assert_almost_equal(uk.threshold, 7928.15, decimal=2)
> self._check_osgb(uk)
../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:81:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd2c35ae0>
osgb = _EPSGProjection(27700)
def _check_osgb(self, osgb):
ll = ccrs.Geodetic()
# results obtained by streetmap.co.uk.
lat, lon = np.array([50.462023, -3.478831], dtype=np.double)
east, north = np.array([295132.1, 63512.6], dtype=np.double)
# note the handling of precision here...
> assert_arr_almost_eq(np.array(osgb.transform_point(lon, lat, ll)),
np.array([east, north]),
1)
E AssertionError:
E Arrays are not almost equal to 1 decimals
E
E Mismatched elements: 2 / 2 (100%)
E Max absolute difference: 1.62307537
E Max relative difference: 2.55551713e-05
E x: array([295131., 63511.])
E y: array([295132.1, 63512.6])
../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:56: AssertionError
```
The differences are rather small, but I did not see anything obvious that might have been the cause in Proj.
<details>
<summary>Full environment definition</summary>
### Operating system
Fedora Rawhide
### Cartopy version
0.20.0
</details>
--- END ISSUE ---
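A quick way to confirm the missing-grid reading in the issue title is to ask PROJ, through pyproj, which WGS84 -> OSGB operations it can actually instantiate. This is a sketch assuming pyproj >= 3 is available next to cartopy; the ~1.6 m offsets above are about the size of the difference between the gridded OSTN15 transformation and the Helmert fallback PROJ uses when the grid file is absent.
```python
from pyproj import network
from pyproj.transformer import TransformerGroup

# EPSG:4326 (WGS84 lon/lat) -> EPSG:27700 (OSGB / British National Grid).
group = TransformerGroup(4326, 27700)
print("best operation available:", group.best_available)
for op in group.unavailable_operations:
    print("unavailable (grid not found):", op.name)

# One option is to let PROJ fetch grids on demand from its CDN
# (alternatively, install the relevant proj-data package or run projsync).
network.set_network_enabled(True)
```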
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright Cartopy Contributors
2 #
3 # This file is part of Cartopy and is released under the LGPL license.
4 # See COPYING and COPYING.LESSER in the root of the repository for full
5 # licensing details.
6
7 # NOTE: This file must remain Python 2 compatible for the foreseeable future,
8 # to ensure that we error out properly for people with outdated setuptools
9 # and/or pip.
10 import sys
11
12 PYTHON_MIN_VERSION = (3, 8)
13
14 if sys.version_info < PYTHON_MIN_VERSION:
15 error = """
16 Beginning with Cartopy 0.21, Python {} or above is required.
17 You are using Python {}.
18
19 This may be due to an out of date pip.
20
21 Make sure you have pip >= 9.0.1.
22 """.format('.'.join(str(n) for n in PYTHON_MIN_VERSION),
23 '.'.join(str(n) for n in sys.version_info[:3]))
24 sys.exit(error)
25
26
27 import os
28 import shutil
29 import subprocess
30 import warnings
31 from collections import defaultdict
32 from sysconfig import get_config_var
33
34 from setuptools import Extension, find_packages, setup
35
36 """
37 Distribution definition for Cartopy.
38
39 """
40
41 # The existence of a PKG-INFO directory is enough to tell us whether this is a
42 # source installation or not (sdist).
43 HERE = os.path.dirname(__file__)
44 IS_SDIST = os.path.exists(os.path.join(HERE, 'PKG-INFO'))
45 FORCE_CYTHON = os.environ.get('FORCE_CYTHON', False)
46
47 if not IS_SDIST or FORCE_CYTHON:
48 import Cython
49 if Cython.__version__ < '0.29':
50 raise ImportError(
51 "Cython 0.29+ is required to install cartopy from source.")
52
53 from Cython.Distutils import build_ext as cy_build_ext
54
55
56 try:
57 import numpy as np
58 except ImportError:
59 raise ImportError('NumPy 1.19+ is required to install cartopy.')
60
61
62 # Please keep in sync with INSTALL file.
63 GEOS_MIN_VERSION = (3, 7, 2)
64
65
66 def file_walk_relative(top, remove=''):
67 """
68 Return a generator of files from the top of the tree, removing
69 the given prefix from the root/file result.
70
71 """
72 top = top.replace('/', os.path.sep)
73 remove = remove.replace('/', os.path.sep)
74 for root, dirs, files in os.walk(top):
75 for file in files:
76 yield os.path.join(root, file).replace(remove, '')
77
78
79 # Dependency checks
80 # =================
81
82 # GEOS
83 try:
84 geos_version = subprocess.check_output(['geos-config', '--version'])
85 geos_version = tuple(int(v) for v in geos_version.split(b'.')
86 if 'dev' not in str(v))
87 geos_includes = subprocess.check_output(['geos-config', '--includes'])
88 geos_clibs = subprocess.check_output(['geos-config', '--clibs'])
89 except (OSError, ValueError, subprocess.CalledProcessError):
90 warnings.warn(
91 'Unable to determine GEOS version. Ensure you have %s or later '
92 'installed, or installation may fail.' % (
93 '.'.join(str(v) for v in GEOS_MIN_VERSION), ))
94
95 geos_includes = []
96 geos_library_dirs = []
97 geos_libraries = ['geos_c']
98 else:
99 if geos_version < GEOS_MIN_VERSION:
100 print('GEOS version %s is installed, but cartopy requires at least '
101 'version %s.' % ('.'.join(str(v) for v in geos_version),
102 '.'.join(str(v) for v in GEOS_MIN_VERSION)),
103 file=sys.stderr)
104 exit(1)
105
106 geos_includes = geos_includes.decode().split()
107 geos_libraries = []
108 geos_library_dirs = []
109 for entry in geos_clibs.decode().split():
110 if entry.startswith('-L'):
111 geos_library_dirs.append(entry[2:])
112 elif entry.startswith('-l'):
113 geos_libraries.append(entry[2:])
114
115
116 # Python dependencies
117 extras_require = {}
118 for name in os.listdir(os.path.join(HERE, 'requirements')):
119 with open(os.path.join(HERE, 'requirements', name)) as fh:
120 section, ext = os.path.splitext(name)
121 extras_require[section] = []
122 for line in fh:
123 if line.startswith('#'):
124 pass
125 elif line.startswith('-'):
126 pass
127 else:
128 extras_require[section].append(line.strip())
129 install_requires = extras_require.pop('default')
130 tests_require = extras_require.get('tests', [])
131
132 # General extension paths
133 if sys.platform.startswith('win'):
134 def get_config_var(name):
135 return '.'
136 include_dir = get_config_var('INCLUDEDIR')
137 library_dir = get_config_var('LIBDIR')
138 extra_extension_args = defaultdict(list)
139 if not sys.platform.startswith('win'):
140 extra_extension_args["runtime_library_dirs"].append(
141 get_config_var('LIBDIR')
142 )
143
144 # Description
145 # ===========
146 with open(os.path.join(HERE, 'README.md')) as fh:
147 description = ''.join(fh.readlines())
148
149
150 cython_coverage_enabled = os.environ.get('CYTHON_COVERAGE', None)
151 if cython_coverage_enabled:
152 extra_extension_args["define_macros"].append(
153 ('CYTHON_TRACE_NOGIL', '1')
154 )
155
156 extensions = [
157 Extension(
158 'cartopy.trace',
159 ['lib/cartopy/trace.pyx'],
160 include_dirs=([include_dir, './lib/cartopy', np.get_include()] +
161 geos_includes),
162 libraries=geos_libraries,
163 library_dirs=[library_dir] + geos_library_dirs,
164 language='c++',
165 **extra_extension_args),
166 ]
167
168
169 if cython_coverage_enabled:
170 # We need to explicitly cythonize the extension in order
171 # to control the Cython compiler_directives.
172 from Cython.Build import cythonize
173
174 directives = {'linetrace': True,
175 'binding': True}
176 extensions = cythonize(extensions, compiler_directives=directives)
177
178
179 def decythonize(extensions, **_ignore):
180 # Remove pyx sources from extensions.
181 # Note: even if there are changes to the pyx files, they will be ignored.
182 for extension in extensions:
183 sources = []
184 for sfile in extension.sources:
185 path, ext = os.path.splitext(sfile)
186 if ext in ('.pyx',):
187 if extension.language == 'c++':
188 ext = '.cpp'
189 else:
190 ext = '.c'
191 sfile = path + ext
192 sources.append(sfile)
193 extension.sources[:] = sources
194 return extensions
195
196
197 if IS_SDIST and not FORCE_CYTHON:
198 extensions = decythonize(extensions)
199 cmdclass = {}
200 else:
201 cmdclass = {'build_ext': cy_build_ext}
202
203
204 # Main setup
205 # ==========
206 setup(
207 name='Cartopy',
208 url='https://scitools.org.uk/cartopy/docs/latest/',
209 download_url='https://github.com/SciTools/cartopy',
210 author='UK Met Office',
211 description='A cartographic python library with Matplotlib support for '
212 'visualisation',
213 long_description=description,
214 long_description_content_type='text/markdown',
215 license="LGPLv3",
216 keywords="cartography map transform projection proj proj.4 geos shapely "
217 "shapefile",
218
219 install_requires=install_requires,
220 extras_require=extras_require,
221 tests_require=tests_require,
222
223 use_scm_version={
224 'write_to': 'lib/cartopy/_version.py',
225 },
226
227 packages=find_packages("lib"),
228 package_dir={'': 'lib'},
229 package_data={'cartopy': list(file_walk_relative('lib/cartopy/tests/'
230 'mpl/baseline_images/',
231 remove='lib/cartopy/')) +
232 list(file_walk_relative('lib/cartopy/data/raster',
233 remove='lib/cartopy/')) +
234 list(file_walk_relative('lib/cartopy/data/netcdf',
235 remove='lib/cartopy/')) +
236 list(file_walk_relative('lib/cartopy/data/'
237 'shapefiles/gshhs',
238 remove='lib/cartopy/')) +
239 list(file_walk_relative('lib/cartopy/tests/lakes_shapefile',
240 remove='lib/cartopy/')) +
241 ['io/srtm.npz']},
242
243 scripts=['tools/cartopy_feature_download.py'],
244 ext_modules=extensions,
245 cmdclass=cmdclass,
246 python_requires='>=' + '.'.join(str(n) for n in PYTHON_MIN_VERSION),
247 classifiers=[
248 'Development Status :: 4 - Beta',
249 'Framework :: Matplotlib',
250 'License :: OSI Approved :: GNU Lesser General Public License v3 '
251 'or later (LGPLv3+)',
252 'Operating System :: MacOS :: MacOS X',
253 'Operating System :: Microsoft :: Windows',
254 'Operating System :: POSIX',
255 'Operating System :: POSIX :: AIX',
256 'Operating System :: POSIX :: Linux',
257 'Programming Language :: C++',
258 'Programming Language :: Python',
259 'Programming Language :: Python :: 3',
260 'Programming Language :: Python :: 3.8',
261 'Programming Language :: Python :: 3.9',
262 'Programming Language :: Python :: 3.10',
263 'Programming Language :: Python :: 3 :: Only',
264 'Topic :: Scientific/Engineering',
265 'Topic :: Scientific/Engineering :: GIS',
266 'Topic :: Scientific/Engineering :: Visualization',
267 ],
268 )
269
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -25,7 +25,6 @@
import os
-import shutil
import subprocess
import warnings
from collections import defaultdict
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -25,7 +25,6 @@\n \n \n import os\n-import shutil\n import subprocess\n import warnings\n from collections import defaultdict\n", "issue": "OSGB tests fail without datum transformation grids available\n### Description\r\n\r\nCurrently, tests use conda-forge, which is on 8.0.1, but Fedora Rawhide is on 8.1.1. With that version of Proj, a few tests fail now.\r\n\r\n#### Traceback \r\n\r\n```\r\n___________________________ TestCRS.test_osgb[True] ____________________________\r\n\r\nself = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd2cd1d50>, approx = True\r\n\r\n @pytest.mark.parametrize('approx', [True, False])\r\n def test_osgb(self, approx):\r\n> self._check_osgb(ccrs.OSGB(approx=approx))\r\n\r\n../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:73: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd2cd1d50>\r\nosgb = <Projected CRS: +proj=tmerc +datum=OSGB36 +ellps=airy +lon_0=-2 +l ...>\r\nName: unknown\r\nAxis Info [cartesian]:\r\n- E[east]...ion:\r\n- name: unknown\r\n- method: Transverse Mercator\r\nDatum: OSGB 1936\r\n- Ellipsoid: Airy 1830\r\n- Prime Meridian: Greenwich\r\n\r\n\r\n def _check_osgb(self, osgb):\r\n ll = ccrs.Geodetic()\r\n \r\n # results obtained by streetmap.co.uk.\r\n lat, lon = np.array([50.462023, -3.478831], dtype=np.double)\r\n east, north = np.array([295132.1, 63512.6], dtype=np.double)\r\n \r\n # note the handling of precision here...\r\n> assert_arr_almost_eq(np.array(osgb.transform_point(lon, lat, ll)),\r\n np.array([east, north]),\r\n 1)\r\nE AssertionError: \r\nE Arrays are not almost equal to 1 decimals\r\nE \r\nE Mismatched elements: 2 / 2 (100%)\r\nE Max absolute difference: 1.62307515\r\nE Max relative difference: 2.55551679e-05\r\nE x: array([295131., 63511.])\r\nE y: array([295132.1, 63512.6])\r\n\r\n../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:56: AssertionError\r\n___________________________ TestCRS.test_osgb[False] ___________________________\r\n\r\nself = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd14d8550>, approx = False\r\n\r\n @pytest.mark.parametrize('approx', [True, False])\r\n def test_osgb(self, approx):\r\n> self._check_osgb(ccrs.OSGB(approx=approx))\r\n\r\n../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:73: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd14d8550>\r\nosgb = <Projected CRS: +proj=tmerc +datum=OSGB36 +ellps=airy +lon_0=-2 +l ...>\r\nName: unknown\r\nAxis Info [cartesian]:\r\n- E[east]...ion:\r\n- name: unknown\r\n- method: Transverse Mercator\r\nDatum: OSGB 1936\r\n- Ellipsoid: Airy 1830\r\n- Prime Meridian: Greenwich\r\n\r\n\r\n def _check_osgb(self, osgb):\r\n ll = ccrs.Geodetic()\r\n \r\n # results obtained by streetmap.co.uk.\r\n lat, lon = np.array([50.462023, -3.478831], dtype=np.double)\r\n east, north = np.array([295132.1, 63512.6], dtype=np.double)\r\n \r\n # note the handling of precision here...\r\n> assert_arr_almost_eq(np.array(osgb.transform_point(lon, lat, ll)),\r\n np.array([east, north]),\r\n 1)\r\nE AssertionError: \r\nE Arrays are not almost equal to 1 decimals\r\nE \r\nE Mismatched elements: 2 / 2 (100%)\r\nE Max absolute 
difference: 1.62307537\r\nE Max relative difference: 2.55551713e-05\r\nE x: array([295131., 63511.])\r\nE y: array([295132.1, 63512.6])\r\n\r\n../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:56: AssertionError\r\n______________________________ TestCRS.test_epsg _______________________________\r\n\r\nself = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd2c35ae0>\r\n\r\n def test_epsg(self):\r\n uk = ccrs.epsg(27700)\r\n assert uk.epsg_code == 27700\r\n assert_almost_equal(uk.x_limits, (-104009.357, 688806.007), decimal=3)\r\n assert_almost_equal(uk.y_limits, (-8908.37, 1256558.45), decimal=2)\r\n assert_almost_equal(uk.threshold, 7928.15, decimal=2)\r\n> self._check_osgb(uk)\r\n\r\n../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:81: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <cartopy.tests.test_crs.TestCRS object at 0x7f0fd2c35ae0>\r\nosgb = _EPSGProjection(27700)\r\n\r\n def _check_osgb(self, osgb):\r\n ll = ccrs.Geodetic()\r\n \r\n # results obtained by streetmap.co.uk.\r\n lat, lon = np.array([50.462023, -3.478831], dtype=np.double)\r\n east, north = np.array([295132.1, 63512.6], dtype=np.double)\r\n \r\n # note the handling of precision here...\r\n> assert_arr_almost_eq(np.array(osgb.transform_point(lon, lat, ll)),\r\n np.array([east, north]),\r\n 1)\r\nE AssertionError: \r\nE Arrays are not almost equal to 1 decimals\r\nE \r\nE Mismatched elements: 2 / 2 (100%)\r\nE Max absolute difference: 1.62307537\r\nE Max relative difference: 2.55551713e-05\r\nE x: array([295131., 63511.])\r\nE y: array([295132.1, 63512.6])\r\n\r\n../../BUILDROOT/python-cartopy-0.20.0-1.fc36.x86_64/usr/lib64/python3.10/site-packages/cartopy/tests/test_crs.py:56: AssertionError\r\n```\r\n\r\nThe differences are rather small, but I did not see anything obvious that might have been the cause in Proj.\r\n\r\n<details>\r\n <summary>Full environment definition</summary>\r\n\r\n### Operating system\r\nFedora Rawhide\r\n\r\n### Cartopy version\r\n0.20.0\r\n</details>\r\n\n", "before_files": [{"content": "# Copyright Cartopy Contributors\n#\n# This file is part of Cartopy and is released under the LGPL license.\n# See COPYING and COPYING.LESSER in the root of the repository for full\n# licensing details.\n\n# NOTE: This file must remain Python 2 compatible for the foreseeable future,\n# to ensure that we error out properly for people with outdated setuptools\n# and/or pip.\nimport sys\n\nPYTHON_MIN_VERSION = (3, 8)\n\nif sys.version_info < PYTHON_MIN_VERSION:\n error = \"\"\"\nBeginning with Cartopy 0.21, Python {} or above is required.\nYou are using Python {}.\n\nThis may be due to an out of date pip.\n\nMake sure you have pip >= 9.0.1.\n\"\"\".format('.'.join(str(n) for n in PYTHON_MIN_VERSION),\n '.'.join(str(n) for n in sys.version_info[:3]))\n sys.exit(error)\n\n\nimport os\nimport shutil\nimport subprocess\nimport warnings\nfrom collections import defaultdict\nfrom sysconfig import get_config_var\n\nfrom setuptools import Extension, find_packages, setup\n\n\"\"\"\nDistribution definition for Cartopy.\n\n\"\"\"\n\n# The existence of a PKG-INFO directory is enough to tell us whether this is a\n# source installation or not (sdist).\nHERE = os.path.dirname(__file__)\nIS_SDIST = os.path.exists(os.path.join(HERE, 'PKG-INFO'))\nFORCE_CYTHON = os.environ.get('FORCE_CYTHON', False)\n\nif not IS_SDIST or FORCE_CYTHON:\n import Cython\n if 
Cython.__version__ < '0.29':\n raise ImportError(\n \"Cython 0.29+ is required to install cartopy from source.\")\n\n from Cython.Distutils import build_ext as cy_build_ext\n\n\ntry:\n import numpy as np\nexcept ImportError:\n raise ImportError('NumPy 1.19+ is required to install cartopy.')\n\n\n# Please keep in sync with INSTALL file.\nGEOS_MIN_VERSION = (3, 7, 2)\n\n\ndef file_walk_relative(top, remove=''):\n \"\"\"\n Return a generator of files from the top of the tree, removing\n the given prefix from the root/file result.\n\n \"\"\"\n top = top.replace('/', os.path.sep)\n remove = remove.replace('/', os.path.sep)\n for root, dirs, files in os.walk(top):\n for file in files:\n yield os.path.join(root, file).replace(remove, '')\n\n\n# Dependency checks\n# =================\n\n# GEOS\ntry:\n geos_version = subprocess.check_output(['geos-config', '--version'])\n geos_version = tuple(int(v) for v in geos_version.split(b'.')\n if 'dev' not in str(v))\n geos_includes = subprocess.check_output(['geos-config', '--includes'])\n geos_clibs = subprocess.check_output(['geos-config', '--clibs'])\nexcept (OSError, ValueError, subprocess.CalledProcessError):\n warnings.warn(\n 'Unable to determine GEOS version. Ensure you have %s or later '\n 'installed, or installation may fail.' % (\n '.'.join(str(v) for v in GEOS_MIN_VERSION), ))\n\n geos_includes = []\n geos_library_dirs = []\n geos_libraries = ['geos_c']\nelse:\n if geos_version < GEOS_MIN_VERSION:\n print('GEOS version %s is installed, but cartopy requires at least '\n 'version %s.' % ('.'.join(str(v) for v in geos_version),\n '.'.join(str(v) for v in GEOS_MIN_VERSION)),\n file=sys.stderr)\n exit(1)\n\n geos_includes = geos_includes.decode().split()\n geos_libraries = []\n geos_library_dirs = []\n for entry in geos_clibs.decode().split():\n if entry.startswith('-L'):\n geos_library_dirs.append(entry[2:])\n elif entry.startswith('-l'):\n geos_libraries.append(entry[2:])\n\n\n# Python dependencies\nextras_require = {}\nfor name in os.listdir(os.path.join(HERE, 'requirements')):\n with open(os.path.join(HERE, 'requirements', name)) as fh:\n section, ext = os.path.splitext(name)\n extras_require[section] = []\n for line in fh:\n if line.startswith('#'):\n pass\n elif line.startswith('-'):\n pass\n else:\n extras_require[section].append(line.strip())\ninstall_requires = extras_require.pop('default')\ntests_require = extras_require.get('tests', [])\n\n# General extension paths\nif sys.platform.startswith('win'):\n def get_config_var(name):\n return '.'\ninclude_dir = get_config_var('INCLUDEDIR')\nlibrary_dir = get_config_var('LIBDIR')\nextra_extension_args = defaultdict(list)\nif not sys.platform.startswith('win'):\n extra_extension_args[\"runtime_library_dirs\"].append(\n get_config_var('LIBDIR')\n )\n\n# Description\n# ===========\nwith open(os.path.join(HERE, 'README.md')) as fh:\n description = ''.join(fh.readlines())\n\n\ncython_coverage_enabled = os.environ.get('CYTHON_COVERAGE', None)\nif cython_coverage_enabled:\n extra_extension_args[\"define_macros\"].append(\n ('CYTHON_TRACE_NOGIL', '1')\n )\n\nextensions = [\n Extension(\n 'cartopy.trace',\n ['lib/cartopy/trace.pyx'],\n include_dirs=([include_dir, './lib/cartopy', np.get_include()] +\n geos_includes),\n libraries=geos_libraries,\n library_dirs=[library_dir] + geos_library_dirs,\n language='c++',\n **extra_extension_args),\n]\n\n\nif cython_coverage_enabled:\n # We need to explicitly cythonize the extension in order\n # to control the Cython compiler_directives.\n from Cython.Build import 
cythonize\n\n directives = {'linetrace': True,\n 'binding': True}\n extensions = cythonize(extensions, compiler_directives=directives)\n\n\ndef decythonize(extensions, **_ignore):\n # Remove pyx sources from extensions.\n # Note: even if there are changes to the pyx files, they will be ignored.\n for extension in extensions:\n sources = []\n for sfile in extension.sources:\n path, ext = os.path.splitext(sfile)\n if ext in ('.pyx',):\n if extension.language == 'c++':\n ext = '.cpp'\n else:\n ext = '.c'\n sfile = path + ext\n sources.append(sfile)\n extension.sources[:] = sources\n return extensions\n\n\nif IS_SDIST and not FORCE_CYTHON:\n extensions = decythonize(extensions)\n cmdclass = {}\nelse:\n cmdclass = {'build_ext': cy_build_ext}\n\n\n# Main setup\n# ==========\nsetup(\n name='Cartopy',\n url='https://scitools.org.uk/cartopy/docs/latest/',\n download_url='https://github.com/SciTools/cartopy',\n author='UK Met Office',\n description='A cartographic python library with Matplotlib support for '\n 'visualisation',\n long_description=description,\n long_description_content_type='text/markdown',\n license=\"LGPLv3\",\n keywords=\"cartography map transform projection proj proj.4 geos shapely \"\n \"shapefile\",\n\n install_requires=install_requires,\n extras_require=extras_require,\n tests_require=tests_require,\n\n use_scm_version={\n 'write_to': 'lib/cartopy/_version.py',\n },\n\n packages=find_packages(\"lib\"),\n package_dir={'': 'lib'},\n package_data={'cartopy': list(file_walk_relative('lib/cartopy/tests/'\n 'mpl/baseline_images/',\n remove='lib/cartopy/')) +\n list(file_walk_relative('lib/cartopy/data/raster',\n remove='lib/cartopy/')) +\n list(file_walk_relative('lib/cartopy/data/netcdf',\n remove='lib/cartopy/')) +\n list(file_walk_relative('lib/cartopy/data/'\n 'shapefiles/gshhs',\n remove='lib/cartopy/')) +\n list(file_walk_relative('lib/cartopy/tests/lakes_shapefile',\n remove='lib/cartopy/')) +\n ['io/srtm.npz']},\n\n scripts=['tools/cartopy_feature_download.py'],\n ext_modules=extensions,\n cmdclass=cmdclass,\n python_requires='>=' + '.'.join(str(n) for n in PYTHON_MIN_VERSION),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Framework :: Matplotlib',\n 'License :: OSI Approved :: GNU Lesser General Public License v3 '\n 'or later (LGPLv3+)',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: POSIX :: AIX',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: C++',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: GIS',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright Cartopy Contributors\n#\n# This file is part of Cartopy and is released under the LGPL license.\n# See COPYING and COPYING.LESSER in the root of the repository for full\n# licensing details.\n\n# NOTE: This file must remain Python 2 compatible for the foreseeable future,\n# to ensure that we error out properly for people with outdated setuptools\n# and/or pip.\nimport sys\n\nPYTHON_MIN_VERSION = (3, 8)\n\nif sys.version_info < PYTHON_MIN_VERSION:\n error = \"\"\"\nBeginning with Cartopy 0.21, Python {} or above is required.\nYou are using Python 
{}.\n\nThis may be due to an out of date pip.\n\nMake sure you have pip >= 9.0.1.\n\"\"\".format('.'.join(str(n) for n in PYTHON_MIN_VERSION),\n '.'.join(str(n) for n in sys.version_info[:3]))\n sys.exit(error)\n\n\nimport os\nimport subprocess\nimport warnings\nfrom collections import defaultdict\nfrom sysconfig import get_config_var\n\nfrom setuptools import Extension, find_packages, setup\n\n\"\"\"\nDistribution definition for Cartopy.\n\n\"\"\"\n\n# The existence of a PKG-INFO directory is enough to tell us whether this is a\n# source installation or not (sdist).\nHERE = os.path.dirname(__file__)\nIS_SDIST = os.path.exists(os.path.join(HERE, 'PKG-INFO'))\nFORCE_CYTHON = os.environ.get('FORCE_CYTHON', False)\n\nif not IS_SDIST or FORCE_CYTHON:\n import Cython\n if Cython.__version__ < '0.29':\n raise ImportError(\n \"Cython 0.29+ is required to install cartopy from source.\")\n\n from Cython.Distutils import build_ext as cy_build_ext\n\n\ntry:\n import numpy as np\nexcept ImportError:\n raise ImportError('NumPy 1.19+ is required to install cartopy.')\n\n\n# Please keep in sync with INSTALL file.\nGEOS_MIN_VERSION = (3, 7, 2)\n\n\ndef file_walk_relative(top, remove=''):\n \"\"\"\n Return a generator of files from the top of the tree, removing\n the given prefix from the root/file result.\n\n \"\"\"\n top = top.replace('/', os.path.sep)\n remove = remove.replace('/', os.path.sep)\n for root, dirs, files in os.walk(top):\n for file in files:\n yield os.path.join(root, file).replace(remove, '')\n\n\n# Dependency checks\n# =================\n\n# GEOS\ntry:\n geos_version = subprocess.check_output(['geos-config', '--version'])\n geos_version = tuple(int(v) for v in geos_version.split(b'.')\n if 'dev' not in str(v))\n geos_includes = subprocess.check_output(['geos-config', '--includes'])\n geos_clibs = subprocess.check_output(['geos-config', '--clibs'])\nexcept (OSError, ValueError, subprocess.CalledProcessError):\n warnings.warn(\n 'Unable to determine GEOS version. Ensure you have %s or later '\n 'installed, or installation may fail.' % (\n '.'.join(str(v) for v in GEOS_MIN_VERSION), ))\n\n geos_includes = []\n geos_library_dirs = []\n geos_libraries = ['geos_c']\nelse:\n if geos_version < GEOS_MIN_VERSION:\n print('GEOS version %s is installed, but cartopy requires at least '\n 'version %s.' 
% ('.'.join(str(v) for v in geos_version),\n '.'.join(str(v) for v in GEOS_MIN_VERSION)),\n file=sys.stderr)\n exit(1)\n\n geos_includes = geos_includes.decode().split()\n geos_libraries = []\n geos_library_dirs = []\n for entry in geos_clibs.decode().split():\n if entry.startswith('-L'):\n geos_library_dirs.append(entry[2:])\n elif entry.startswith('-l'):\n geos_libraries.append(entry[2:])\n\n\n# Python dependencies\nextras_require = {}\nfor name in os.listdir(os.path.join(HERE, 'requirements')):\n with open(os.path.join(HERE, 'requirements', name)) as fh:\n section, ext = os.path.splitext(name)\n extras_require[section] = []\n for line in fh:\n if line.startswith('#'):\n pass\n elif line.startswith('-'):\n pass\n else:\n extras_require[section].append(line.strip())\ninstall_requires = extras_require.pop('default')\ntests_require = extras_require.get('tests', [])\n\n# General extension paths\nif sys.platform.startswith('win'):\n def get_config_var(name):\n return '.'\ninclude_dir = get_config_var('INCLUDEDIR')\nlibrary_dir = get_config_var('LIBDIR')\nextra_extension_args = defaultdict(list)\nif not sys.platform.startswith('win'):\n extra_extension_args[\"runtime_library_dirs\"].append(\n get_config_var('LIBDIR')\n )\n\n# Description\n# ===========\nwith open(os.path.join(HERE, 'README.md')) as fh:\n description = ''.join(fh.readlines())\n\n\ncython_coverage_enabled = os.environ.get('CYTHON_COVERAGE', None)\nif cython_coverage_enabled:\n extra_extension_args[\"define_macros\"].append(\n ('CYTHON_TRACE_NOGIL', '1')\n )\n\nextensions = [\n Extension(\n 'cartopy.trace',\n ['lib/cartopy/trace.pyx'],\n include_dirs=([include_dir, './lib/cartopy', np.get_include()] +\n geos_includes),\n libraries=geos_libraries,\n library_dirs=[library_dir] + geos_library_dirs,\n language='c++',\n **extra_extension_args),\n]\n\n\nif cython_coverage_enabled:\n # We need to explicitly cythonize the extension in order\n # to control the Cython compiler_directives.\n from Cython.Build import cythonize\n\n directives = {'linetrace': True,\n 'binding': True}\n extensions = cythonize(extensions, compiler_directives=directives)\n\n\ndef decythonize(extensions, **_ignore):\n # Remove pyx sources from extensions.\n # Note: even if there are changes to the pyx files, they will be ignored.\n for extension in extensions:\n sources = []\n for sfile in extension.sources:\n path, ext = os.path.splitext(sfile)\n if ext in ('.pyx',):\n if extension.language == 'c++':\n ext = '.cpp'\n else:\n ext = '.c'\n sfile = path + ext\n sources.append(sfile)\n extension.sources[:] = sources\n return extensions\n\n\nif IS_SDIST and not FORCE_CYTHON:\n extensions = decythonize(extensions)\n cmdclass = {}\nelse:\n cmdclass = {'build_ext': cy_build_ext}\n\n\n# Main setup\n# ==========\nsetup(\n name='Cartopy',\n url='https://scitools.org.uk/cartopy/docs/latest/',\n download_url='https://github.com/SciTools/cartopy',\n author='UK Met Office',\n description='A cartographic python library with Matplotlib support for '\n 'visualisation',\n long_description=description,\n long_description_content_type='text/markdown',\n license=\"LGPLv3\",\n keywords=\"cartography map transform projection proj proj.4 geos shapely \"\n \"shapefile\",\n\n install_requires=install_requires,\n extras_require=extras_require,\n tests_require=tests_require,\n\n use_scm_version={\n 'write_to': 'lib/cartopy/_version.py',\n },\n\n packages=find_packages(\"lib\"),\n package_dir={'': 'lib'},\n package_data={'cartopy': list(file_walk_relative('lib/cartopy/tests/'\n 
'mpl/baseline_images/',\n remove='lib/cartopy/')) +\n list(file_walk_relative('lib/cartopy/data/raster',\n remove='lib/cartopy/')) +\n list(file_walk_relative('lib/cartopy/data/netcdf',\n remove='lib/cartopy/')) +\n list(file_walk_relative('lib/cartopy/data/'\n 'shapefiles/gshhs',\n remove='lib/cartopy/')) +\n list(file_walk_relative('lib/cartopy/tests/lakes_shapefile',\n remove='lib/cartopy/')) +\n ['io/srtm.npz']},\n\n scripts=['tools/cartopy_feature_download.py'],\n ext_modules=extensions,\n cmdclass=cmdclass,\n python_requires='>=' + '.'.join(str(n) for n in PYTHON_MIN_VERSION),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Framework :: Matplotlib',\n 'License :: OSI Approved :: GNU Lesser General Public License v3 '\n 'or later (LGPLv3+)',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: POSIX :: AIX',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: C++',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: GIS',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n)\n", "path": "setup.py"}]} |
gh_patches_debug_23 | rasdani/github-patches | git_diff | scipy__scipy-11657 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add wrappers for ?pttrf, ?pttrs
See [NumFOCUS Small Development Grant Proposal 2019 Round 3 proposal](https://www.dropbox.com/s/i24h3j2osol56uf/Small%20Development%20Grant%20-%20LAPACK%20%20%28Condensed%29.pdf?dl=0) for background.
Add wrapper for [?pttrf](http://www.netlib.org/lapack/explore-html/d0/d2f/group__double_p_tcomputational_gad408508a4fb3810c23125995dc83ccc1.html)/[?pttrs](http://www.netlib.org/lapack/explore-html/d0/d2f/group__double_p_tcomputational_gaf3cb531de6ceb79732d438ad3b66132a.html).
These routines are similar to those in gh-11123, except that they are tailored for matrices that are symmetric positive definite.
Suggested signatures:
```
! d, e, info = pttrf(d, e, overwrite_d=0, overwrite_e=0)
```
```
! x, info = pttrs(d, e, b, overwrite_b=0)
```
Test idea:
- Generate random tridiagonal symmetric positive definite (SPD) matrix `A`. The fact that "a Hermitian strictly diagonally dominant matrix with real positive diagonal entries is positive definite" can be used to generate one. For instance, in the real case, if all the diagonal elements are >2 and the off-diagonals are <1, the matrix will be PD.
- Decompose `A` with `?pttrf`
- Multiply factors from `?pttrf` and compare with original `A`
- Generate random solution `x`
- Generate `b` from `A@x`
- Solve using `?pttrs`
- Compare solution from `?pttrs` against known `x`
Example code for generating random tridiagonal SPD `A` and testing properties:
```
import numpy as np
import scipy.linalg
n = 10
d = np.random.rand(n)+2
e = np.random.rand(n-1)
A = np.diag(e,-1) + np.diag(e, 1) + np.diag(d)
L, Q = scipy.linalg.eig(A)
print(np.all(L>0))
print(np.allclose(Q @ Q.T, np.eye(n)))
print(np.allclose(Q*L @ Q.T, A)) # same as Q@np.diag(L)@Q.T
```
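Continuing from the snippet above, a rough sketch of the remaining test steps (factor, reconstruct, solve, compare) could look like the following once the proposed wrappers exist. The `pttrf`/`pttrs` lookups through `get_lapack_funcs`, the return shapes, and the helper variable names are assumptions based on the suggested signatures; LAPACK's `?pttrf` documents its output `d`/`e` as the diagonal of `D` and the subdiagonal of the unit bidiagonal `L` in `A = L D L^T`.
```
from scipy.linalg import get_lapack_funcs
# Assumption: these lookups only succeed once the proposed wrappers are added
pttrf, pttrs = get_lapack_funcs(('pttrf', 'pttrs'), (d, e))

d_fact, e_fact, info = pttrf(d, e)                          # factor A = L @ D @ L.T
L_fact = np.eye(n) + np.diag(e_fact, -1)                    # unit lower bidiagonal factor L
print(np.allclose(L_fact @ np.diag(d_fact) @ L_fact.T, A))  # factors reproduce A

x = np.random.rand(n)                                       # known solution
b = A @ x
x_sol, info = pttrs(d_fact, e_fact, b)                      # solve A @ x_sol = b from the factors
print(np.allclose(x_sol, x))
```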
Also test for singular matrix, non-spd matrix, and incorrect/incompatible array sizes.
Also: implement all examples from the [NAG manual](https://www.nag.com/numeric/fl/nagdoc_latest/html/f07/f07conts.html) as additional tests.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scipy/linalg/lapack.py`
Content:
```
1 """
2 Low-level LAPACK functions (:mod:`scipy.linalg.lapack`)
3 =======================================================
4
5 This module contains low-level functions from the LAPACK library.
6
7 The `*gegv` family of routines have been removed from LAPACK 3.6.0
8 and have been deprecated in SciPy 0.17.0. They will be removed in
9 a future release.
10
11 .. versionadded:: 0.12.0
12
13 .. note::
14
15 The common ``overwrite_<>`` option in many routines, allows the
16 input arrays to be overwritten to avoid extra memory allocation.
17 However this requires the array to satisfy two conditions
18 which are memory order and the data type to match exactly the
19 order and the type expected by the routine.
20
21 As an example, if you pass a double precision float array to any
22 ``S....`` routine which expects single precision arguments, f2py
23 will create an intermediate array to match the argument types and
24 overwriting will be performed on that intermediate array.
25
26 Similarly, if a C-contiguous array is passed, f2py will pass a
27 FORTRAN-contiguous array internally. Please make sure that these
28 details are satisfied. More information can be found in the f2py
29 documentation.
30
31 .. warning::
32
33 These functions do little to no error checking.
34 It is possible to cause crashes by mis-using them,
35 so prefer using the higher-level routines in `scipy.linalg`.
36
37 Finding functions
38 -----------------
39
40 .. autosummary::
41 :toctree: generated/
42
43 get_lapack_funcs
44
45 All functions
46 -------------
47
48 .. autosummary::
49 :toctree: generated/
50
51
52 sgbsv
53 dgbsv
54 cgbsv
55 zgbsv
56
57 sgbtrf
58 dgbtrf
59 cgbtrf
60 zgbtrf
61
62 sgbtrs
63 dgbtrs
64 cgbtrs
65 zgbtrs
66
67 sgebal
68 dgebal
69 cgebal
70 zgebal
71
72 sgecon
73 dgecon
74 cgecon
75 zgecon
76
77 sgeequ
78 dgeequ
79 cgeequ
80 zgeequ
81
82 sgeequb
83 dgeequb
84 cgeequb
85 zgeequb
86
87 sgees
88 dgees
89 cgees
90 zgees
91
92 sgeev
93 dgeev
94 cgeev
95 zgeev
96
97 sgeev_lwork
98 dgeev_lwork
99 cgeev_lwork
100 zgeev_lwork
101
102 sgegv
103 dgegv
104 cgegv
105 zgegv
106
107 sgehrd
108 dgehrd
109 cgehrd
110 zgehrd
111
112 sgehrd_lwork
113 dgehrd_lwork
114 cgehrd_lwork
115 zgehrd_lwork
116
117 sgels
118 dgels
119 cgels
120 zgels
121
122 sgels_lwork
123 dgels_lwork
124 cgels_lwork
125 zgels_lwork
126
127 sgelsd
128 dgelsd
129 cgelsd
130 zgelsd
131
132 sgelsd_lwork
133 dgelsd_lwork
134 cgelsd_lwork
135 zgelsd_lwork
136
137 sgelss
138 dgelss
139 cgelss
140 zgelss
141
142 sgelss_lwork
143 dgelss_lwork
144 cgelss_lwork
145 zgelss_lwork
146
147 sgelsy
148 dgelsy
149 cgelsy
150 zgelsy
151
152 sgelsy_lwork
153 dgelsy_lwork
154 cgelsy_lwork
155 zgelsy_lwork
156
157 sgeqp3
158 dgeqp3
159 cgeqp3
160 zgeqp3
161
162 sgeqrf
163 dgeqrf
164 cgeqrf
165 zgeqrf
166
167 sgeqrf_lwork
168 dgeqrf_lwork
169 cgeqrf_lwork
170 zgeqrf_lwork
171
172 sgeqrfp
173 dgeqrfp
174 cgeqrfp
175 zgeqrfp
176
177 sgeqrfp_lwork
178 dgeqrfp_lwork
179 cgeqrfp_lwork
180 zgeqrfp_lwork
181
182 sgerqf
183 dgerqf
184 cgerqf
185 zgerqf
186
187 sgesdd
188 dgesdd
189 cgesdd
190 zgesdd
191
192 sgesdd_lwork
193 dgesdd_lwork
194 cgesdd_lwork
195 zgesdd_lwork
196
197 sgesv
198 dgesv
199 cgesv
200 zgesv
201
202 sgesvd
203 dgesvd
204 cgesvd
205 zgesvd
206
207 sgesvd_lwork
208 dgesvd_lwork
209 cgesvd_lwork
210 zgesvd_lwork
211
212 sgesvx
213 dgesvx
214 cgesvx
215 zgesvx
216
217 sgetrf
218 dgetrf
219 cgetrf
220 zgetrf
221
222 sgetc2
223 dgetc2
224 cgetc2
225 zgetc2
226
227 sgetri
228 dgetri
229 cgetri
230 zgetri
231
232 sgetri_lwork
233 dgetri_lwork
234 cgetri_lwork
235 zgetri_lwork
236
237 sgetrs
238 dgetrs
239 cgetrs
240 zgetrs
241
242 sgesc2
243 dgesc2
244 cgesc2
245 zgesc2
246
247 sgges
248 dgges
249 cgges
250 zgges
251
252 sggev
253 dggev
254 cggev
255 zggev
256
257 sgglse
258 dgglse
259 cgglse
260 zgglse
261
262 sgglse_lwork
263 dgglse_lwork
264 cgglse_lwork
265 zgglse_lwork
266
267 sgtsv
268 dgtsv
269 cgtsv
270 zgtsv
271
272 chbevd
273 zhbevd
274
275 chbevx
276 zhbevx
277
278 checon
279 zhecon
280
281 cheequb
282 zheequb
283
284 cheev
285 zheev
286
287 cheev_lwork
288 zheev_lwork
289
290 cheevd
291 zheevd
292
293 cheevd_lwork
294 zheevd_lwork
295
296 cheevr
297 zheevr
298
299 cheevr_lwork
300 zheevr_lwork
301
302 cheevx
303 zheevx
304
305 cheevx_lwork
306 zheevx_lwork
307
308 chegst
309 zhegst
310
311 chegv
312 zhegv
313
314 chegv_lwork
315 zhegv_lwork
316
317 chegvd
318 zhegvd
319
320 chegvx
321 zhegvx
322
323 chegvx_lwork
324 zhegvx_lwork
325
326 chesv
327 zhesv
328
329 chesv_lwork
330 zhesv_lwork
331
332 chesvx
333 zhesvx
334
335 chesvx_lwork
336 zhesvx_lwork
337
338 chetrd
339 zhetrd
340
341 chetrd_lwork
342 zhetrd_lwork
343
344 chetrf
345 zhetrf
346
347 chetrf_lwork
348 zhetrf_lwork
349
350 chfrk
351 zhfrk
352
353 slamch
354 dlamch
355
356 slange
357 dlange
358 clange
359 zlange
360
361 slarf
362 dlarf
363 clarf
364 zlarf
365
366 slarfg
367 dlarfg
368 clarfg
369 zlarfg
370
371 slartg
372 dlartg
373 clartg
374 zlartg
375
376 slasd4
377 dlasd4
378
379 slaswp
380 dlaswp
381 claswp
382 zlaswp
383
384 slauum
385 dlauum
386 clauum
387 zlauum
388
389 sorghr
390 dorghr
391 sorghr_lwork
392 dorghr_lwork
393
394 sorgqr
395 dorgqr
396
397 sorgrq
398 dorgrq
399
400 sormqr
401 dormqr
402
403 sormrz
404 dormrz
405
406 sormrz_lwork
407 dormrz_lwork
408
409 spbsv
410 dpbsv
411 cpbsv
412 zpbsv
413
414 spbtrf
415 dpbtrf
416 cpbtrf
417 zpbtrf
418
419 spbtrs
420 dpbtrs
421 cpbtrs
422 zpbtrs
423
424 spftrf
425 dpftrf
426 cpftrf
427 zpftrf
428
429 spftri
430 dpftri
431 cpftri
432 zpftri
433
434 spftrs
435 dpftrs
436 cpftrs
437 zpftrs
438
439 spocon
440 dpocon
441 cpocon
442 zpocon
443
444 spstrf
445 dpstrf
446 cpstrf
447 zpstrf
448
449 spstf2
450 dpstf2
451 cpstf2
452 zpstf2
453
454 sposv
455 dposv
456 cposv
457 zposv
458
459 sposvx
460 dposvx
461 cposvx
462 zposvx
463
464 spotrf
465 dpotrf
466 cpotrf
467 zpotrf
468
469 spotri
470 dpotri
471 cpotri
472 zpotri
473
474 spotrs
475 dpotrs
476 cpotrs
477 zpotrs
478
479 sptsv
480 dptsv
481 cptsv
482 zptsv
483
484 crot
485 zrot
486
487 ssbev
488 dsbev
489
490 ssbevd
491 dsbevd
492
493 ssbevx
494 dsbevx
495
496 ssfrk
497 dsfrk
498
499 sstebz
500 dstebz
501
502 sstein
503 dstein
504
505 sstemr
506 dstemr
507
508 sstemr_lwork
509 dstemr_lwork
510
511 ssterf
512 dsterf
513
514 sstev
515 dstev
516
517 ssycon
518 dsycon
519 csycon
520 zsycon
521
522 ssyconv
523 dsyconv
524 csyconv
525 zsyconv
526
527 ssyequb
528 dsyequb
529 csyequb
530 zsyequb
531
532 ssyev
533 dsyev
534
535 ssyev_lwork
536 dsyev_lwork
537
538 ssyevd
539 dsyevd
540
541 ssyevd_lwork
542 dsyevd_lwork
543
544 ssyevr
545 dsyevr
546
547 ssyevr_lwork
548 dsyevr_lwork
549
550 ssyevx
551 dsyevx
552
553 ssyevx_lwork
554 dsyevx_lwork
555
556 ssygst
557 dsygst
558
559 ssygv
560 dsygv
561
562 ssygv_lwork
563 dsygv_lwork
564
565 ssygvd
566 dsygvd
567
568 ssygvx
569 dsygvx
570
571 ssygvx_lwork
572 dsygvx_lwork
573
574 ssysv
575 dsysv
576 csysv
577 zsysv
578
579 ssysv_lwork
580 dsysv_lwork
581 csysv_lwork
582 zsysv_lwork
583
584 ssysvx
585 dsysvx
586 csysvx
587 zsysvx
588
589 ssysvx_lwork
590 dsysvx_lwork
591 csysvx_lwork
592 zsysvx_lwork
593
594 ssytf2
595 dsytf2
596 csytf2
597 zsytf2
598
599 ssytrd
600 dsytrd
601
602 ssytrd_lwork
603 dsytrd_lwork
604
605 ssytrf
606 dsytrf
607 csytrf
608 zsytrf
609
610 ssytrf_lwork
611 dsytrf_lwork
612 csytrf_lwork
613 zsytrf_lwork
614
615 stbtrs
616 dtbtrs
617 ctbtrs
618 ztbtrs
619
620 stfsm
621 dtfsm
622 ctfsm
623 ztfsm
624
625 stfttp
626 dtfttp
627 ctfttp
628 ztfttp
629
630 stfttr
631 dtfttr
632 ctfttr
633 ztfttr
634
635 stgsen
636 dtgsen
637 ctgsen
638 ztgsen
639
640 stpttf
641 dtpttf
642 ctpttf
643 ztpttf
644
645 stpttr
646 dtpttr
647 ctpttr
648 ztpttr
649
650 strsyl
651 dtrsyl
652 ctrsyl
653 ztrsyl
654
655 strtri
656 dtrtri
657 ctrtri
658 ztrtri
659
660 strtrs
661 dtrtrs
662 ctrtrs
663 ztrtrs
664
665 strttf
666 dtrttf
667 ctrttf
668 ztrttf
669
670 strttp
671 dtrttp
672 ctrttp
673 ztrttp
674
675 stzrzf
676 dtzrzf
677 ctzrzf
678 ztzrzf
679
680 stzrzf_lwork
681 dtzrzf_lwork
682 ctzrzf_lwork
683 ztzrzf_lwork
684
685 cunghr
686 zunghr
687
688 cunghr_lwork
689 zunghr_lwork
690
691 cungqr
692 zungqr
693
694 cungrq
695 zungrq
696
697 cunmqr
698 zunmqr
699
700 sgeqrt
701 dgeqrt
702 cgeqrt
703 zgeqrt
704
705 sgemqrt
706 dgemqrt
707 cgemqrt
708 zgemqrt
709
710 sgttrf
711 dgttrf
712 cgttrf
713 zgttrf
714
715 sgttrs
716 dgttrs
717 cgttrs
718 zgttrs
719
720 stpqrt
721 dtpqrt
722 ctpqrt
723 ztpqrt
724
725 stpmqrt
726 dtpmqrt
727 ctpmqrt
728 ztpmqrt
729
730 cunmrz
731 zunmrz
732
733 cunmrz_lwork
734 zunmrz_lwork
735
736 ilaver
737
738 """
739 #
740 # Author: Pearu Peterson, March 2002
741 #
742
743 import numpy as _np
744 from .blas import _get_funcs, _memoize_get_funcs
745 from scipy.linalg import _flapack
746 from re import compile as regex_compile
747 try:
748 from scipy.linalg import _clapack
749 except ImportError:
750 _clapack = None
751
752 # Backward compatibility
753 from scipy._lib._util import DeprecatedImport as _DeprecatedImport
754 clapack = _DeprecatedImport("scipy.linalg.blas.clapack", "scipy.linalg.lapack")
755 flapack = _DeprecatedImport("scipy.linalg.blas.flapack", "scipy.linalg.lapack")
756
757 # Expose all functions (only flapack --- clapack is an implementation detail)
758 empty_module = None
759 from scipy.linalg._flapack import *
760 del empty_module
761
762 __all__ = ['get_lapack_funcs']
763
764 _dep_message = """The `*gegv` family of routines has been deprecated in
765 LAPACK 3.6.0 in favor of the `*ggev` family of routines.
766 The corresponding wrappers will be removed from SciPy in
767 a future release."""
768
769 cgegv = _np.deprecate(cgegv, old_name='cgegv', message=_dep_message)
770 dgegv = _np.deprecate(dgegv, old_name='dgegv', message=_dep_message)
771 sgegv = _np.deprecate(sgegv, old_name='sgegv', message=_dep_message)
772 zgegv = _np.deprecate(zgegv, old_name='zgegv', message=_dep_message)
773
774 # Modify _flapack in this scope so the deprecation warnings apply to
775 # functions returned by get_lapack_funcs.
776 _flapack.cgegv = cgegv
777 _flapack.dgegv = dgegv
778 _flapack.sgegv = sgegv
779 _flapack.zgegv = zgegv
780
781 # some convenience alias for complex functions
782 _lapack_alias = {
783 'corghr': 'cunghr', 'zorghr': 'zunghr',
784 'corghr_lwork': 'cunghr_lwork', 'zorghr_lwork': 'zunghr_lwork',
785 'corgqr': 'cungqr', 'zorgqr': 'zungqr',
786 'cormqr': 'cunmqr', 'zormqr': 'zunmqr',
787 'corgrq': 'cungrq', 'zorgrq': 'zungrq',
788 }
789
790
791 # Place guards against docstring rendering issues with special characters
792 p1 = regex_compile(r'with bounds (?P<b>.*?)( and (?P<s>.*?) storage){0,1}\n')
793 p2 = regex_compile(r'Default: (?P<d>.*?)\n')
794
795
796 def backtickrepl(m):
797 if m.group('s'):
798 return ('with bounds ``{}`` with ``{}`` storage\n'
799 ''.format(m.group('b'), m.group('s')))
800 else:
801 return 'with bounds ``{}``\n'.format(m.group('b'))
802
803
804 for routine in [ssyevr, dsyevr, cheevr, zheevr,
805 ssyevx, dsyevx, cheevx, zheevx,
806 ssygvd, dsygvd, chegvd, zhegvd]:
807 if routine.__doc__:
808 routine.__doc__ = p1.sub(backtickrepl, routine.__doc__)
809 routine.__doc__ = p2.sub('Default ``\\1``\n', routine.__doc__)
810 else:
811 continue
812
813 del regex_compile, p1, p2, backtickrepl
814
815
816 @_memoize_get_funcs
817 def get_lapack_funcs(names, arrays=(), dtype=None):
818 """Return available LAPACK function objects from names.
819
820 Arrays are used to determine the optimal prefix of LAPACK routines.
821
822 Parameters
823 ----------
824 names : str or sequence of str
825 Name(s) of LAPACK functions without type prefix.
826
827 arrays : sequence of ndarrays, optional
828 Arrays can be given to determine optimal prefix of LAPACK
829 routines. If not given, double-precision routines will be
830 used, otherwise the most generic type in arrays will be used.
831
832 dtype : str or dtype, optional
833 Data-type specifier. Not used if `arrays` is non-empty.
834
835 Returns
836 -------
837 funcs : list
838 List containing the found function(s).
839
840 Notes
841 -----
842 This routine automatically chooses between Fortran/C
843 interfaces. Fortran code is used whenever possible for arrays with
844 column major order. In all other cases, C code is preferred.
845
846 In LAPACK, the naming convention is that all functions start with a
847 type prefix, which depends on the type of the principal
848 matrix. These can be one of {'s', 'd', 'c', 'z'} for the NumPy
849 types {float32, float64, complex64, complex128} respectively, and
850 are stored in attribute ``typecode`` of the returned functions.
851
852 Examples
853 --------
854 Suppose we would like to use '?lange' routine which computes the selected
855 norm of an array. We pass our array in order to get the correct 'lange'
856 flavor.
857
858 >>> import scipy.linalg as LA
859 >>> a = np.random.rand(3,2)
860 >>> x_lange = LA.get_lapack_funcs('lange', (a,))
861 >>> x_lange.typecode
862 'd'
863 >>> x_lange = LA.get_lapack_funcs('lange',(a*1j,))
864 >>> x_lange.typecode
865 'z'
866
867 Several LAPACK routines work best when its internal WORK array has
868 the optimal size (big enough for fast computation and small enough to
869 avoid waste of memory). This size is determined also by a dedicated query
870 to the function which is often wrapped as a standalone function and
871 commonly denoted as ``###_lwork``. Below is an example for ``?sysv``
872
873 >>> import scipy.linalg as LA
874 >>> a = np.random.rand(1000,1000)
875 >>> b = np.random.rand(1000,1)*1j
876 >>> # We pick up zsysv and zsysv_lwork due to b array
877 ... xsysv, xlwork = LA.get_lapack_funcs(('sysv', 'sysv_lwork'), (a, b))
878 >>> opt_lwork, _ = xlwork(a.shape[0]) # returns a complex for 'z' prefix
879 >>> udut, ipiv, x, info = xsysv(a, b, lwork=int(opt_lwork.real))
880
881 """
882 return _get_funcs(names, arrays, dtype,
883 "LAPACK", _flapack, _clapack,
884 "flapack", "clapack", _lapack_alias)
885
886
887 _int32_max = _np.iinfo(_np.int32).max
888
889
890 def _compute_lwork(routine, *args, **kwargs):
891 """
892 Round floating-point lwork returned by lapack to integer.
893
894 Several LAPACK routines compute optimal values for LWORK, which
895 they return in a floating-point variable. However, for large
896 values of LWORK, single-precision floating point is not sufficient
897 to hold the exact value --- some LAPACK versions (<= 3.5.0 at
898 least) truncate the returned integer to single precision and in
899 some cases this can be smaller than the required value.
900
901 Examples
902 --------
903 >>> from scipy.linalg import lapack
904 >>> n = 5000
905 >>> s_r, s_lw = lapack.get_lapack_funcs(('sysvx', 'sysvx_lwork'))
906 >>> lwork = lapack._compute_lwork(s_lw, n)
907 >>> lwork
908 32000
909
910 """
911 dtype = getattr(routine, 'dtype', None)
912 ret = routine(*args, **kwargs)
913 if ret[-1] != 0:
914 raise ValueError("Internal work array size computation failed: "
915 "%d" % (ret[-1],))
916
917 if len(ret) == 2:
918 return _check_work_float(ret[0].real, dtype)
919 else:
920 return tuple(_check_work_float(x.real, dtype) for x in ret[:-1])
921
922
923 def _check_work_float(value, dtype):
924 """
925 Convert LAPACK-returned work array size float to integer,
926 carefully for single-precision types.
927 """
928
929 if dtype == _np.float32 or dtype == _np.complex64:
930 # Single-precision routine -- take next fp value to work
931 # around possible truncation in LAPACK code
932 value = _np.nextafter(value, _np.inf, dtype=_np.float32)
933
934 value = int(value)
935 if value < 0 or value > _int32_max:
936 raise ValueError("Too large work array required -- computation cannot "
937 "be performed with standard 32-bit LAPACK.")
938 return value
939
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scipy/linalg/lapack.py b/scipy/linalg/lapack.py
--- a/scipy/linalg/lapack.py
+++ b/scipy/linalg/lapack.py
@@ -481,6 +481,16 @@
cptsv
zptsv
+ spttrf
+ dpttrf
+ cpttrf
+ zpttrf
+
+ spttrs
+ dpttrs
+ cpttrs
+ zpttrs
+
crot
zrot
| {"golden_diff": "diff --git a/scipy/linalg/lapack.py b/scipy/linalg/lapack.py\n--- a/scipy/linalg/lapack.py\n+++ b/scipy/linalg/lapack.py\n@@ -481,6 +481,16 @@\n cptsv\n zptsv\n \n+ spttrf\n+ dpttrf\n+ cpttrf\n+ zpttrf\n+\n+ spttrs\n+ dpttrs\n+ cpttrs\n+ zpttrs\n+\n crot\n zrot\n", "issue": "Add wrappers for ?pttrf, ?pttrs\nSee [NumFOCUS Small Development Grant Proposal 2019 Round 3 proposal](https://www.dropbox.com/s/i24h3j2osol56uf/Small%20Development%20Grant%20-%20LAPACK%20%20%28Condensed%29.pdf?dl=0) for background.\r\n\r\nAdd wrapper for [?pttrf](http://www.netlib.org/lapack/explore-html/d0/d2f/group__double_p_tcomputational_gad408508a4fb3810c23125995dc83ccc1.html)/[?pttrs](http://www.netlib.org/lapack/explore-html/d0/d2f/group__double_p_tcomputational_gaf3cb531de6ceb79732d438ad3b66132a.html).\r\n\r\nThese routines are similar to those in gh-11123 except that these ones are tailored for matrices that are symmetric positive definite.\r\n\r\nSuggested signatures:\r\n```\r\n! d, e, info = pttrf(d, e, overwrite_d=0, overwrite_e=0)\r\n```\r\n```\r\n! x, info = pttrs(d, e, b, overwrite_b=0)\r\n```\r\n\r\nTest idea:\r\n\r\n- Generate random tridiagonal symmetric positive definite (SPD) matrix `A`. The fact that \"a Hermitian strictly diagonally dominant matrix with real positive diagonal entries is positive definite\" can be used to generate one. For instance, in the real case, if all the diagonal elements are >2 and the off-diagonals are <1, the matrix will be PD.\r\n- Decompose `A` with `?pttrf`\r\n- Multiply factors from `?pttrf` and compare with original `A`\r\n- Generate random solution `x`\r\n- Generate `b` from `A@x`\r\n- Solve using `?pttrs`\r\n- Compare solution from `?pttrs` against known `x`\r\n\r\nExample code for generating random tridiagonal SPD `A` and testing properties:\r\n```\r\nimport numpy as np\r\nimport scipy.linalg\r\nn = 10\r\nd = np.random.rand(n)+2\r\ne = np.random.rand(n-1)\r\nA = np.diag(e,-1) + np.diag(e, 1) + np.diag(d)\r\nL, Q = scipy.linalg.eig(A)\r\nprint(np.all(L>0))\r\nprint(np.allclose(Q @ Q.T, np.eye(n)))\r\nprint(np.allclose(Q*L @ Q.T, A)) # same as [email protected](L)@Q.T\r\n```\r\nAlso test for singular matrix, non-spd matrix, and incorrect/incompatible array sizes.\r\n\r\nAlso: implement all examples from the [NAG manual](https://www.nag.com/numeric/fl/nagdoc_latest/html/f07/f07conts.html) as additional tests.\n", "before_files": [{"content": "\"\"\"\nLow-level LAPACK functions (:mod:`scipy.linalg.lapack`)\n=======================================================\n\nThis module contains low-level functions from the LAPACK library.\n\nThe `*gegv` family of routines have been removed from LAPACK 3.6.0\nand have been deprecated in SciPy 0.17.0. They will be removed in\na future release.\n\n.. versionadded:: 0.12.0\n\n.. note::\n\n The common ``overwrite_<>`` option in many routines, allows the\n input arrays to be overwritten to avoid extra memory allocation.\n However this requires the array to satisfy two conditions\n which are memory order and the data type to match exactly the\n order and the type expected by the routine.\n\n As an example, if you pass a double precision float array to any\n ``S....`` routine which expects single precision arguments, f2py\n will create an intermediate array to match the argument types and\n overwriting will be performed on that intermediate array.\n\n Similarly, if a C-contiguous array is passed, f2py will pass a\n FORTRAN-contiguous array internally. Please make sure that these\n details are satisfied. 
More information can be found in the f2py\n documentation.\n\n.. warning::\n\n These functions do little to no error checking.\n It is possible to cause crashes by mis-using them,\n so prefer using the higher-level routines in `scipy.linalg`.\n\nFinding functions\n-----------------\n\n.. autosummary::\n :toctree: generated/\n\n get_lapack_funcs\n\nAll functions\n-------------\n\n.. autosummary::\n :toctree: generated/\n\n\n sgbsv\n dgbsv\n cgbsv\n zgbsv\n\n sgbtrf\n dgbtrf\n cgbtrf\n zgbtrf\n\n sgbtrs\n dgbtrs\n cgbtrs\n zgbtrs\n\n sgebal\n dgebal\n cgebal\n zgebal\n\n sgecon\n dgecon\n cgecon\n zgecon\n\n sgeequ\n dgeequ\n cgeequ\n zgeequ\n\n sgeequb\n dgeequb\n cgeequb\n zgeequb\n\n sgees\n dgees\n cgees\n zgees\n\n sgeev\n dgeev\n cgeev\n zgeev\n\n sgeev_lwork\n dgeev_lwork\n cgeev_lwork\n zgeev_lwork\n\n sgegv\n dgegv\n cgegv\n zgegv\n\n sgehrd\n dgehrd\n cgehrd\n zgehrd\n\n sgehrd_lwork\n dgehrd_lwork\n cgehrd_lwork\n zgehrd_lwork\n\n sgels\n dgels\n cgels\n zgels\n\n sgels_lwork\n dgels_lwork\n cgels_lwork\n zgels_lwork\n\n sgelsd\n dgelsd\n cgelsd\n zgelsd\n\n sgelsd_lwork\n dgelsd_lwork\n cgelsd_lwork\n zgelsd_lwork\n\n sgelss\n dgelss\n cgelss\n zgelss\n\n sgelss_lwork\n dgelss_lwork\n cgelss_lwork\n zgelss_lwork\n\n sgelsy\n dgelsy\n cgelsy\n zgelsy\n\n sgelsy_lwork\n dgelsy_lwork\n cgelsy_lwork\n zgelsy_lwork\n\n sgeqp3\n dgeqp3\n cgeqp3\n zgeqp3\n\n sgeqrf\n dgeqrf\n cgeqrf\n zgeqrf\n\n sgeqrf_lwork\n dgeqrf_lwork\n cgeqrf_lwork\n zgeqrf_lwork\n\n sgeqrfp\n dgeqrfp\n cgeqrfp\n zgeqrfp\n\n sgeqrfp_lwork\n dgeqrfp_lwork\n cgeqrfp_lwork\n zgeqrfp_lwork\n\n sgerqf\n dgerqf\n cgerqf\n zgerqf\n\n sgesdd\n dgesdd\n cgesdd\n zgesdd\n\n sgesdd_lwork\n dgesdd_lwork\n cgesdd_lwork\n zgesdd_lwork\n\n sgesv\n dgesv\n cgesv\n zgesv\n\n sgesvd\n dgesvd\n cgesvd\n zgesvd\n\n sgesvd_lwork\n dgesvd_lwork\n cgesvd_lwork\n zgesvd_lwork\n\n sgesvx\n dgesvx\n cgesvx\n zgesvx\n\n sgetrf\n dgetrf\n cgetrf\n zgetrf\n\n sgetc2\n dgetc2\n cgetc2\n zgetc2\n\n sgetri\n dgetri\n cgetri\n zgetri\n\n sgetri_lwork\n dgetri_lwork\n cgetri_lwork\n zgetri_lwork\n\n sgetrs\n dgetrs\n cgetrs\n zgetrs\n\n sgesc2\n dgesc2\n cgesc2\n zgesc2\n\n sgges\n dgges\n cgges\n zgges\n\n sggev\n dggev\n cggev\n zggev\n\n sgglse\n dgglse\n cgglse\n zgglse\n\n sgglse_lwork\n dgglse_lwork\n cgglse_lwork\n zgglse_lwork\n\n sgtsv\n dgtsv\n cgtsv\n zgtsv\n\n chbevd\n zhbevd\n\n chbevx\n zhbevx\n\n checon\n zhecon\n\n cheequb\n zheequb\n\n cheev\n zheev\n\n cheev_lwork\n zheev_lwork\n\n cheevd\n zheevd\n\n cheevd_lwork\n zheevd_lwork\n\n cheevr\n zheevr\n\n cheevr_lwork\n zheevr_lwork\n\n cheevx\n zheevx\n\n cheevx_lwork\n zheevx_lwork\n\n chegst\n zhegst\n\n chegv\n zhegv\n\n chegv_lwork\n zhegv_lwork\n\n chegvd\n zhegvd\n\n chegvx\n zhegvx\n\n chegvx_lwork\n zhegvx_lwork\n\n chesv\n zhesv\n\n chesv_lwork\n zhesv_lwork\n\n chesvx\n zhesvx\n\n chesvx_lwork\n zhesvx_lwork\n\n chetrd\n zhetrd\n\n chetrd_lwork\n zhetrd_lwork\n\n chetrf\n zhetrf\n\n chetrf_lwork\n zhetrf_lwork\n\n chfrk\n zhfrk\n\n slamch\n dlamch\n\n slange\n dlange\n clange\n zlange\n\n slarf\n dlarf\n clarf\n zlarf\n\n slarfg\n dlarfg\n clarfg\n zlarfg\n\n slartg\n dlartg\n clartg\n zlartg\n\n slasd4\n dlasd4\n\n slaswp\n dlaswp\n claswp\n zlaswp\n\n slauum\n dlauum\n clauum\n zlauum\n\n sorghr\n dorghr\n sorghr_lwork\n dorghr_lwork\n\n sorgqr\n dorgqr\n\n sorgrq\n dorgrq\n\n sormqr\n dormqr\n\n sormrz\n dormrz\n\n sormrz_lwork\n dormrz_lwork\n\n spbsv\n dpbsv\n cpbsv\n zpbsv\n\n spbtrf\n dpbtrf\n cpbtrf\n zpbtrf\n\n spbtrs\n dpbtrs\n cpbtrs\n zpbtrs\n\n spftrf\n 
dpftrf\n cpftrf\n zpftrf\n\n spftri\n dpftri\n cpftri\n zpftri\n\n spftrs\n dpftrs\n cpftrs\n zpftrs\n\n spocon\n dpocon\n cpocon\n zpocon\n\n spstrf\n dpstrf\n cpstrf\n zpstrf\n\n spstf2\n dpstf2\n cpstf2\n zpstf2\n\n sposv\n dposv\n cposv\n zposv\n\n sposvx\n dposvx\n cposvx\n zposvx\n\n spotrf\n dpotrf\n cpotrf\n zpotrf\n\n spotri\n dpotri\n cpotri\n zpotri\n\n spotrs\n dpotrs\n cpotrs\n zpotrs\n\n sptsv\n dptsv\n cptsv\n zptsv\n\n crot\n zrot\n\n ssbev\n dsbev\n\n ssbevd\n dsbevd\n\n ssbevx\n dsbevx\n\n ssfrk\n dsfrk\n\n sstebz\n dstebz\n\n sstein\n dstein\n\n sstemr\n dstemr\n\n sstemr_lwork\n dstemr_lwork\n\n ssterf\n dsterf\n\n sstev\n dstev\n\n ssycon\n dsycon\n csycon\n zsycon\n\n ssyconv\n dsyconv\n csyconv\n zsyconv\n\n ssyequb\n dsyequb\n csyequb\n zsyequb\n\n ssyev\n dsyev\n\n ssyev_lwork\n dsyev_lwork\n\n ssyevd\n dsyevd\n\n ssyevd_lwork\n dsyevd_lwork\n\n ssyevr\n dsyevr\n\n ssyevr_lwork\n dsyevr_lwork\n\n ssyevx\n dsyevx\n\n ssyevx_lwork\n dsyevx_lwork\n\n ssygst\n dsygst\n\n ssygv\n dsygv\n\n ssygv_lwork\n dsygv_lwork\n\n ssygvd\n dsygvd\n\n ssygvx\n dsygvx\n\n ssygvx_lwork\n dsygvx_lwork\n\n ssysv\n dsysv\n csysv\n zsysv\n\n ssysv_lwork\n dsysv_lwork\n csysv_lwork\n zsysv_lwork\n\n ssysvx\n dsysvx\n csysvx\n zsysvx\n\n ssysvx_lwork\n dsysvx_lwork\n csysvx_lwork\n zsysvx_lwork\n\n ssytf2\n dsytf2\n csytf2\n zsytf2\n\n ssytrd\n dsytrd\n\n ssytrd_lwork\n dsytrd_lwork\n\n ssytrf\n dsytrf\n csytrf\n zsytrf\n\n ssytrf_lwork\n dsytrf_lwork\n csytrf_lwork\n zsytrf_lwork\n\n stbtrs\n dtbtrs\n ctbtrs\n ztbtrs\n\n stfsm\n dtfsm\n ctfsm\n ztfsm\n\n stfttp\n dtfttp\n ctfttp\n ztfttp\n\n stfttr\n dtfttr\n ctfttr\n ztfttr\n\n stgsen\n dtgsen\n ctgsen\n ztgsen\n\n stpttf\n dtpttf\n ctpttf\n ztpttf\n\n stpttr\n dtpttr\n ctpttr\n ztpttr\n\n strsyl\n dtrsyl\n ctrsyl\n ztrsyl\n\n strtri\n dtrtri\n ctrtri\n ztrtri\n\n strtrs\n dtrtrs\n ctrtrs\n ztrtrs\n\n strttf\n dtrttf\n ctrttf\n ztrttf\n\n strttp\n dtrttp\n ctrttp\n ztrttp\n\n stzrzf\n dtzrzf\n ctzrzf\n ztzrzf\n\n stzrzf_lwork\n dtzrzf_lwork\n ctzrzf_lwork\n ztzrzf_lwork\n\n cunghr\n zunghr\n\n cunghr_lwork\n zunghr_lwork\n\n cungqr\n zungqr\n\n cungrq\n zungrq\n\n cunmqr\n zunmqr\n\n sgeqrt\n dgeqrt\n cgeqrt\n zgeqrt\n\n sgemqrt\n dgemqrt\n cgemqrt\n zgemqrt\n\n sgttrf\n dgttrf\n cgttrf\n zgttrf\n\n sgttrs\n dgttrs\n cgttrs\n zgttrs\n\n stpqrt\n dtpqrt\n ctpqrt\n ztpqrt\n\n stpmqrt\n dtpmqrt\n ctpmqrt\n ztpmqrt\n\n cunmrz\n zunmrz\n\n cunmrz_lwork\n zunmrz_lwork\n\n ilaver\n\n\"\"\"\n#\n# Author: Pearu Peterson, March 2002\n#\n\nimport numpy as _np\nfrom .blas import _get_funcs, _memoize_get_funcs\nfrom scipy.linalg import _flapack\nfrom re import compile as regex_compile\ntry:\n from scipy.linalg import _clapack\nexcept ImportError:\n _clapack = None\n\n# Backward compatibility\nfrom scipy._lib._util import DeprecatedImport as _DeprecatedImport\nclapack = _DeprecatedImport(\"scipy.linalg.blas.clapack\", \"scipy.linalg.lapack\")\nflapack = _DeprecatedImport(\"scipy.linalg.blas.flapack\", \"scipy.linalg.lapack\")\n\n# Expose all functions (only flapack --- clapack is an implementation detail)\nempty_module = None\nfrom scipy.linalg._flapack import *\ndel empty_module\n\n__all__ = ['get_lapack_funcs']\n\n_dep_message = \"\"\"The `*gegv` family of routines has been deprecated in\nLAPACK 3.6.0 in favor of the `*ggev` family of routines.\nThe corresponding wrappers will be removed from SciPy in\na future release.\"\"\"\n\ncgegv = _np.deprecate(cgegv, old_name='cgegv', message=_dep_message)\ndgegv = _np.deprecate(dgegv, old_name='dgegv', 
message=_dep_message)\nsgegv = _np.deprecate(sgegv, old_name='sgegv', message=_dep_message)\nzgegv = _np.deprecate(zgegv, old_name='zgegv', message=_dep_message)\n\n# Modify _flapack in this scope so the deprecation warnings apply to\n# functions returned by get_lapack_funcs.\n_flapack.cgegv = cgegv\n_flapack.dgegv = dgegv\n_flapack.sgegv = sgegv\n_flapack.zgegv = zgegv\n\n# some convenience alias for complex functions\n_lapack_alias = {\n 'corghr': 'cunghr', 'zorghr': 'zunghr',\n 'corghr_lwork': 'cunghr_lwork', 'zorghr_lwork': 'zunghr_lwork',\n 'corgqr': 'cungqr', 'zorgqr': 'zungqr',\n 'cormqr': 'cunmqr', 'zormqr': 'zunmqr',\n 'corgrq': 'cungrq', 'zorgrq': 'zungrq',\n}\n\n\n# Place guards against docstring rendering issues with special characters\np1 = regex_compile(r'with bounds (?P<b>.*?)( and (?P<s>.*?) storage){0,1}\\n')\np2 = regex_compile(r'Default: (?P<d>.*?)\\n')\n\n\ndef backtickrepl(m):\n if m.group('s'):\n return ('with bounds ``{}`` with ``{}`` storage\\n'\n ''.format(m.group('b'), m.group('s')))\n else:\n return 'with bounds ``{}``\\n'.format(m.group('b'))\n\n\nfor routine in [ssyevr, dsyevr, cheevr, zheevr,\n ssyevx, dsyevx, cheevx, zheevx,\n ssygvd, dsygvd, chegvd, zhegvd]:\n if routine.__doc__:\n routine.__doc__ = p1.sub(backtickrepl, routine.__doc__)\n routine.__doc__ = p2.sub('Default ``\\\\1``\\n', routine.__doc__)\n else:\n continue\n\ndel regex_compile, p1, p2, backtickrepl\n\n\n@_memoize_get_funcs\ndef get_lapack_funcs(names, arrays=(), dtype=None):\n \"\"\"Return available LAPACK function objects from names.\n\n Arrays are used to determine the optimal prefix of LAPACK routines.\n\n Parameters\n ----------\n names : str or sequence of str\n Name(s) of LAPACK functions without type prefix.\n\n arrays : sequence of ndarrays, optional\n Arrays can be given to determine optimal prefix of LAPACK\n routines. If not given, double-precision routines will be\n used, otherwise the most generic type in arrays will be used.\n\n dtype : str or dtype, optional\n Data-type specifier. Not used if `arrays` is non-empty.\n\n Returns\n -------\n funcs : list\n List containing the found function(s).\n\n Notes\n -----\n This routine automatically chooses between Fortran/C\n interfaces. Fortran code is used whenever possible for arrays with\n column major order. In all other cases, C code is preferred.\n\n In LAPACK, the naming convention is that all functions start with a\n type prefix, which depends on the type of the principal\n matrix. These can be one of {'s', 'd', 'c', 'z'} for the NumPy\n types {float32, float64, complex64, complex128} respectively, and\n are stored in attribute ``typecode`` of the returned functions.\n\n Examples\n --------\n Suppose we would like to use '?lange' routine which computes the selected\n norm of an array. We pass our array in order to get the correct 'lange'\n flavor.\n\n >>> import scipy.linalg as LA\n >>> a = np.random.rand(3,2)\n >>> x_lange = LA.get_lapack_funcs('lange', (a,))\n >>> x_lange.typecode\n 'd'\n >>> x_lange = LA.get_lapack_funcs('lange',(a*1j,))\n >>> x_lange.typecode\n 'z'\n\n Several LAPACK routines work best when its internal WORK array has\n the optimal size (big enough for fast computation and small enough to\n avoid waste of memory). This size is determined also by a dedicated query\n to the function which is often wrapped as a standalone function and\n commonly denoted as ``###_lwork``. 
Below is an example for ``?sysv``\n\n >>> import scipy.linalg as LA\n >>> a = np.random.rand(1000,1000)\n >>> b = np.random.rand(1000,1)*1j\n >>> # We pick up zsysv and zsysv_lwork due to b array\n ... xsysv, xlwork = LA.get_lapack_funcs(('sysv', 'sysv_lwork'), (a, b))\n >>> opt_lwork, _ = xlwork(a.shape[0]) # returns a complex for 'z' prefix\n >>> udut, ipiv, x, info = xsysv(a, b, lwork=int(opt_lwork.real))\n\n \"\"\"\n return _get_funcs(names, arrays, dtype,\n \"LAPACK\", _flapack, _clapack,\n \"flapack\", \"clapack\", _lapack_alias)\n\n\n_int32_max = _np.iinfo(_np.int32).max\n\n\ndef _compute_lwork(routine, *args, **kwargs):\n \"\"\"\n Round floating-point lwork returned by lapack to integer.\n\n Several LAPACK routines compute optimal values for LWORK, which\n they return in a floating-point variable. However, for large\n values of LWORK, single-precision floating point is not sufficient\n to hold the exact value --- some LAPACK versions (<= 3.5.0 at\n least) truncate the returned integer to single precision and in\n some cases this can be smaller than the required value.\n\n Examples\n --------\n >>> from scipy.linalg import lapack\n >>> n = 5000\n >>> s_r, s_lw = lapack.get_lapack_funcs(('sysvx', 'sysvx_lwork'))\n >>> lwork = lapack._compute_lwork(s_lw, n)\n >>> lwork\n 32000\n\n \"\"\"\n dtype = getattr(routine, 'dtype', None)\n ret = routine(*args, **kwargs)\n if ret[-1] != 0:\n raise ValueError(\"Internal work array size computation failed: \"\n \"%d\" % (ret[-1],))\n\n if len(ret) == 2:\n return _check_work_float(ret[0].real, dtype)\n else:\n return tuple(_check_work_float(x.real, dtype) for x in ret[:-1])\n\n\ndef _check_work_float(value, dtype):\n \"\"\"\n Convert LAPACK-returned work array size float to integer,\n carefully for single-precision types.\n \"\"\"\n\n if dtype == _np.float32 or dtype == _np.complex64:\n # Single-precision routine -- take next fp value to work\n # around possible truncation in LAPACK code\n value = _np.nextafter(value, _np.inf, dtype=_np.float32)\n\n value = int(value)\n if value < 0 or value > _int32_max:\n raise ValueError(\"Too large work array required -- computation cannot \"\n \"be performed with standard 32-bit LAPACK.\")\n return value\n", "path": "scipy/linalg/lapack.py"}], "after_files": [{"content": "\"\"\"\nLow-level LAPACK functions (:mod:`scipy.linalg.lapack`)\n=======================================================\n\nThis module contains low-level functions from the LAPACK library.\n\nThe `*gegv` family of routines have been removed from LAPACK 3.6.0\nand have been deprecated in SciPy 0.17.0. They will be removed in\na future release.\n\n.. versionadded:: 0.12.0\n\n.. note::\n\n The common ``overwrite_<>`` option in many routines, allows the\n input arrays to be overwritten to avoid extra memory allocation.\n However this requires the array to satisfy two conditions\n which are memory order and the data type to match exactly the\n order and the type expected by the routine.\n\n As an example, if you pass a double precision float array to any\n ``S....`` routine which expects single precision arguments, f2py\n will create an intermediate array to match the argument types and\n overwriting will be performed on that intermediate array.\n\n Similarly, if a C-contiguous array is passed, f2py will pass a\n FORTRAN-contiguous array internally. Please make sure that these\n details are satisfied. More information can be found in the f2py\n documentation.\n\n.. 
warning::\n\n These functions do little to no error checking.\n It is possible to cause crashes by mis-using them,\n so prefer using the higher-level routines in `scipy.linalg`.\n\nFinding functions\n-----------------\n\n.. autosummary::\n :toctree: generated/\n\n get_lapack_funcs\n\nAll functions\n-------------\n\n.. autosummary::\n :toctree: generated/\n\n\n sgbsv\n dgbsv\n cgbsv\n zgbsv\n\n sgbtrf\n dgbtrf\n cgbtrf\n zgbtrf\n\n sgbtrs\n dgbtrs\n cgbtrs\n zgbtrs\n\n sgebal\n dgebal\n cgebal\n zgebal\n\n sgecon\n dgecon\n cgecon\n zgecon\n\n sgeequ\n dgeequ\n cgeequ\n zgeequ\n\n sgeequb\n dgeequb\n cgeequb\n zgeequb\n\n sgees\n dgees\n cgees\n zgees\n\n sgeev\n dgeev\n cgeev\n zgeev\n\n sgeev_lwork\n dgeev_lwork\n cgeev_lwork\n zgeev_lwork\n\n sgegv\n dgegv\n cgegv\n zgegv\n\n sgehrd\n dgehrd\n cgehrd\n zgehrd\n\n sgehrd_lwork\n dgehrd_lwork\n cgehrd_lwork\n zgehrd_lwork\n\n sgels\n dgels\n cgels\n zgels\n\n sgels_lwork\n dgels_lwork\n cgels_lwork\n zgels_lwork\n\n sgelsd\n dgelsd\n cgelsd\n zgelsd\n\n sgelsd_lwork\n dgelsd_lwork\n cgelsd_lwork\n zgelsd_lwork\n\n sgelss\n dgelss\n cgelss\n zgelss\n\n sgelss_lwork\n dgelss_lwork\n cgelss_lwork\n zgelss_lwork\n\n sgelsy\n dgelsy\n cgelsy\n zgelsy\n\n sgelsy_lwork\n dgelsy_lwork\n cgelsy_lwork\n zgelsy_lwork\n\n sgeqp3\n dgeqp3\n cgeqp3\n zgeqp3\n\n sgeqrf\n dgeqrf\n cgeqrf\n zgeqrf\n\n sgeqrf_lwork\n dgeqrf_lwork\n cgeqrf_lwork\n zgeqrf_lwork\n\n sgeqrfp\n dgeqrfp\n cgeqrfp\n zgeqrfp\n\n sgeqrfp_lwork\n dgeqrfp_lwork\n cgeqrfp_lwork\n zgeqrfp_lwork\n\n sgerqf\n dgerqf\n cgerqf\n zgerqf\n\n sgesdd\n dgesdd\n cgesdd\n zgesdd\n\n sgesdd_lwork\n dgesdd_lwork\n cgesdd_lwork\n zgesdd_lwork\n\n sgesv\n dgesv\n cgesv\n zgesv\n\n sgesvd\n dgesvd\n cgesvd\n zgesvd\n\n sgesvd_lwork\n dgesvd_lwork\n cgesvd_lwork\n zgesvd_lwork\n\n sgesvx\n dgesvx\n cgesvx\n zgesvx\n\n sgetrf\n dgetrf\n cgetrf\n zgetrf\n\n sgetc2\n dgetc2\n cgetc2\n zgetc2\n\n sgetri\n dgetri\n cgetri\n zgetri\n\n sgetri_lwork\n dgetri_lwork\n cgetri_lwork\n zgetri_lwork\n\n sgetrs\n dgetrs\n cgetrs\n zgetrs\n\n sgesc2\n dgesc2\n cgesc2\n zgesc2\n\n sgges\n dgges\n cgges\n zgges\n\n sggev\n dggev\n cggev\n zggev\n\n sgglse\n dgglse\n cgglse\n zgglse\n\n sgglse_lwork\n dgglse_lwork\n cgglse_lwork\n zgglse_lwork\n\n sgtsv\n dgtsv\n cgtsv\n zgtsv\n\n chbevd\n zhbevd\n\n chbevx\n zhbevx\n\n checon\n zhecon\n\n cheequb\n zheequb\n\n cheev\n zheev\n\n cheev_lwork\n zheev_lwork\n\n cheevd\n zheevd\n\n cheevd_lwork\n zheevd_lwork\n\n cheevr\n zheevr\n\n cheevr_lwork\n zheevr_lwork\n\n cheevx\n zheevx\n\n cheevx_lwork\n zheevx_lwork\n\n chegst\n zhegst\n\n chegv\n zhegv\n\n chegv_lwork\n zhegv_lwork\n\n chegvd\n zhegvd\n\n chegvx\n zhegvx\n\n chegvx_lwork\n zhegvx_lwork\n\n chesv\n zhesv\n\n chesv_lwork\n zhesv_lwork\n\n chesvx\n zhesvx\n\n chesvx_lwork\n zhesvx_lwork\n\n chetrd\n zhetrd\n\n chetrd_lwork\n zhetrd_lwork\n\n chetrf\n zhetrf\n\n chetrf_lwork\n zhetrf_lwork\n\n chfrk\n zhfrk\n\n slamch\n dlamch\n\n slange\n dlange\n clange\n zlange\n\n slarf\n dlarf\n clarf\n zlarf\n\n slarfg\n dlarfg\n clarfg\n zlarfg\n\n slartg\n dlartg\n clartg\n zlartg\n\n slasd4\n dlasd4\n\n slaswp\n dlaswp\n claswp\n zlaswp\n\n slauum\n dlauum\n clauum\n zlauum\n\n sorghr\n dorghr\n sorghr_lwork\n dorghr_lwork\n\n sorgqr\n dorgqr\n\n sorgrq\n dorgrq\n\n sormqr\n dormqr\n\n sormrz\n dormrz\n\n sormrz_lwork\n dormrz_lwork\n\n spbsv\n dpbsv\n cpbsv\n zpbsv\n\n spbtrf\n dpbtrf\n cpbtrf\n zpbtrf\n\n spbtrs\n dpbtrs\n cpbtrs\n zpbtrs\n\n spftrf\n dpftrf\n cpftrf\n zpftrf\n\n spftri\n dpftri\n cpftri\n zpftri\n\n 
spftrs\n dpftrs\n cpftrs\n zpftrs\n\n spocon\n dpocon\n cpocon\n zpocon\n\n spstrf\n dpstrf\n cpstrf\n zpstrf\n\n spstf2\n dpstf2\n cpstf2\n zpstf2\n\n sposv\n dposv\n cposv\n zposv\n\n sposvx\n dposvx\n cposvx\n zposvx\n\n spotrf\n dpotrf\n cpotrf\n zpotrf\n\n spotri\n dpotri\n cpotri\n zpotri\n\n spotrs\n dpotrs\n cpotrs\n zpotrs\n\n sptsv\n dptsv\n cptsv\n zptsv\n\n spttrf\n dpttrf\n cpttrf\n zpttrf\n\n spttrs\n dpttrs\n cpttrs\n zpttrs\n\n crot\n zrot\n\n ssbev\n dsbev\n\n ssbevd\n dsbevd\n\n ssbevx\n dsbevx\n\n ssfrk\n dsfrk\n\n sstebz\n dstebz\n\n sstein\n dstein\n\n sstemr\n dstemr\n\n sstemr_lwork\n dstemr_lwork\n\n ssterf\n dsterf\n\n sstev\n dstev\n\n ssycon\n dsycon\n csycon\n zsycon\n\n ssyconv\n dsyconv\n csyconv\n zsyconv\n\n ssyequb\n dsyequb\n csyequb\n zsyequb\n\n ssyev\n dsyev\n\n ssyev_lwork\n dsyev_lwork\n\n ssyevd\n dsyevd\n\n ssyevd_lwork\n dsyevd_lwork\n\n ssyevr\n dsyevr\n\n ssyevr_lwork\n dsyevr_lwork\n\n ssyevx\n dsyevx\n\n ssyevx_lwork\n dsyevx_lwork\n\n ssygst\n dsygst\n\n ssygv\n dsygv\n\n ssygv_lwork\n dsygv_lwork\n\n ssygvd\n dsygvd\n\n ssygvx\n dsygvx\n\n ssygvx_lwork\n dsygvx_lwork\n\n ssysv\n dsysv\n csysv\n zsysv\n\n ssysv_lwork\n dsysv_lwork\n csysv_lwork\n zsysv_lwork\n\n ssysvx\n dsysvx\n csysvx\n zsysvx\n\n ssysvx_lwork\n dsysvx_lwork\n csysvx_lwork\n zsysvx_lwork\n\n ssytf2\n dsytf2\n csytf2\n zsytf2\n\n ssytrd\n dsytrd\n\n ssytrd_lwork\n dsytrd_lwork\n\n ssytrf\n dsytrf\n csytrf\n zsytrf\n\n ssytrf_lwork\n dsytrf_lwork\n csytrf_lwork\n zsytrf_lwork\n\n stbtrs\n dtbtrs\n ctbtrs\n ztbtrs\n\n stfsm\n dtfsm\n ctfsm\n ztfsm\n\n stfttp\n dtfttp\n ctfttp\n ztfttp\n\n stfttr\n dtfttr\n ctfttr\n ztfttr\n\n stgsen\n dtgsen\n ctgsen\n ztgsen\n\n stpttf\n dtpttf\n ctpttf\n ztpttf\n\n stpttr\n dtpttr\n ctpttr\n ztpttr\n\n strsyl\n dtrsyl\n ctrsyl\n ztrsyl\n\n strtri\n dtrtri\n ctrtri\n ztrtri\n\n strtrs\n dtrtrs\n ctrtrs\n ztrtrs\n\n strttf\n dtrttf\n ctrttf\n ztrttf\n\n strttp\n dtrttp\n ctrttp\n ztrttp\n\n stzrzf\n dtzrzf\n ctzrzf\n ztzrzf\n\n stzrzf_lwork\n dtzrzf_lwork\n ctzrzf_lwork\n ztzrzf_lwork\n\n cunghr\n zunghr\n\n cunghr_lwork\n zunghr_lwork\n\n cungqr\n zungqr\n\n cungrq\n zungrq\n\n cunmqr\n zunmqr\n\n sgeqrt\n dgeqrt\n cgeqrt\n zgeqrt\n\n sgemqrt\n dgemqrt\n cgemqrt\n zgemqrt\n\n sgttrf\n dgttrf\n cgttrf\n zgttrf\n\n sgttrs\n dgttrs\n cgttrs\n zgttrs\n\n stpqrt\n dtpqrt\n ctpqrt\n ztpqrt\n\n stpmqrt\n dtpmqrt\n ctpmqrt\n ztpmqrt\n\n cunmrz\n zunmrz\n\n cunmrz_lwork\n zunmrz_lwork\n\n ilaver\n\n\"\"\"\n#\n# Author: Pearu Peterson, March 2002\n#\n\nimport numpy as _np\nfrom .blas import _get_funcs, _memoize_get_funcs\nfrom scipy.linalg import _flapack\nfrom re import compile as regex_compile\ntry:\n from scipy.linalg import _clapack\nexcept ImportError:\n _clapack = None\n\n# Backward compatibility\nfrom scipy._lib._util import DeprecatedImport as _DeprecatedImport\nclapack = _DeprecatedImport(\"scipy.linalg.blas.clapack\", \"scipy.linalg.lapack\")\nflapack = _DeprecatedImport(\"scipy.linalg.blas.flapack\", \"scipy.linalg.lapack\")\n\n# Expose all functions (only flapack --- clapack is an implementation detail)\nempty_module = None\nfrom scipy.linalg._flapack import *\ndel empty_module\n\n__all__ = ['get_lapack_funcs']\n\n_dep_message = \"\"\"The `*gegv` family of routines has been deprecated in\nLAPACK 3.6.0 in favor of the `*ggev` family of routines.\nThe corresponding wrappers will be removed from SciPy in\na future release.\"\"\"\n\ncgegv = _np.deprecate(cgegv, old_name='cgegv', message=_dep_message)\ndgegv = _np.deprecate(dgegv, 
old_name='dgegv', message=_dep_message)\nsgegv = _np.deprecate(sgegv, old_name='sgegv', message=_dep_message)\nzgegv = _np.deprecate(zgegv, old_name='zgegv', message=_dep_message)\n\n# Modify _flapack in this scope so the deprecation warnings apply to\n# functions returned by get_lapack_funcs.\n_flapack.cgegv = cgegv\n_flapack.dgegv = dgegv\n_flapack.sgegv = sgegv\n_flapack.zgegv = zgegv\n\n# some convenience alias for complex functions\n_lapack_alias = {\n 'corghr': 'cunghr', 'zorghr': 'zunghr',\n 'corghr_lwork': 'cunghr_lwork', 'zorghr_lwork': 'zunghr_lwork',\n 'corgqr': 'cungqr', 'zorgqr': 'zungqr',\n 'cormqr': 'cunmqr', 'zormqr': 'zunmqr',\n 'corgrq': 'cungrq', 'zorgrq': 'zungrq',\n}\n\n\n# Place guards against docstring rendering issues with special characters\np1 = regex_compile(r'with bounds (?P<b>.*?)( and (?P<s>.*?) storage){0,1}\\n')\np2 = regex_compile(r'Default: (?P<d>.*?)\\n')\n\n\ndef backtickrepl(m):\n if m.group('s'):\n return ('with bounds ``{}`` with ``{}`` storage\\n'\n ''.format(m.group('b'), m.group('s')))\n else:\n return 'with bounds ``{}``\\n'.format(m.group('b'))\n\n\nfor routine in [ssyevr, dsyevr, cheevr, zheevr,\n ssyevx, dsyevx, cheevx, zheevx,\n ssygvd, dsygvd, chegvd, zhegvd]:\n if routine.__doc__:\n routine.__doc__ = p1.sub(backtickrepl, routine.__doc__)\n routine.__doc__ = p2.sub('Default ``\\\\1``\\n', routine.__doc__)\n else:\n continue\n\ndel regex_compile, p1, p2, backtickrepl\n\n\n@_memoize_get_funcs\ndef get_lapack_funcs(names, arrays=(), dtype=None):\n \"\"\"Return available LAPACK function objects from names.\n\n Arrays are used to determine the optimal prefix of LAPACK routines.\n\n Parameters\n ----------\n names : str or sequence of str\n Name(s) of LAPACK functions without type prefix.\n\n arrays : sequence of ndarrays, optional\n Arrays can be given to determine optimal prefix of LAPACK\n routines. If not given, double-precision routines will be\n used, otherwise the most generic type in arrays will be used.\n\n dtype : str or dtype, optional\n Data-type specifier. Not used if `arrays` is non-empty.\n\n Returns\n -------\n funcs : list\n List containing the found function(s).\n\n Notes\n -----\n This routine automatically chooses between Fortran/C\n interfaces. Fortran code is used whenever possible for arrays with\n column major order. In all other cases, C code is preferred.\n\n In LAPACK, the naming convention is that all functions start with a\n type prefix, which depends on the type of the principal\n matrix. These can be one of {'s', 'd', 'c', 'z'} for the NumPy\n types {float32, float64, complex64, complex128} respectively, and\n are stored in attribute ``typecode`` of the returned functions.\n\n Examples\n --------\n Suppose we would like to use '?lange' routine which computes the selected\n norm of an array. We pass our array in order to get the correct 'lange'\n flavor.\n\n >>> import scipy.linalg as LA\n >>> a = np.random.rand(3,2)\n >>> x_lange = LA.get_lapack_funcs('lange', (a,))\n >>> x_lange.typecode\n 'd'\n >>> x_lange = LA.get_lapack_funcs('lange',(a*1j,))\n >>> x_lange.typecode\n 'z'\n\n Several LAPACK routines work best when its internal WORK array has\n the optimal size (big enough for fast computation and small enough to\n avoid waste of memory). This size is determined also by a dedicated query\n to the function which is often wrapped as a standalone function and\n commonly denoted as ``###_lwork``. 
Below is an example for ``?sysv``\n\n >>> import scipy.linalg as LA\n >>> a = np.random.rand(1000,1000)\n >>> b = np.random.rand(1000,1)*1j\n >>> # We pick up zsysv and zsysv_lwork due to b array\n ... xsysv, xlwork = LA.get_lapack_funcs(('sysv', 'sysv_lwork'), (a, b))\n >>> opt_lwork, _ = xlwork(a.shape[0]) # returns a complex for 'z' prefix\n >>> udut, ipiv, x, info = xsysv(a, b, lwork=int(opt_lwork.real))\n\n \"\"\"\n return _get_funcs(names, arrays, dtype,\n \"LAPACK\", _flapack, _clapack,\n \"flapack\", \"clapack\", _lapack_alias)\n\n\n_int32_max = _np.iinfo(_np.int32).max\n\n\ndef _compute_lwork(routine, *args, **kwargs):\n \"\"\"\n Round floating-point lwork returned by lapack to integer.\n\n Several LAPACK routines compute optimal values for LWORK, which\n they return in a floating-point variable. However, for large\n values of LWORK, single-precision floating point is not sufficient\n to hold the exact value --- some LAPACK versions (<= 3.5.0 at\n least) truncate the returned integer to single precision and in\n some cases this can be smaller than the required value.\n\n Examples\n --------\n >>> from scipy.linalg import lapack\n >>> n = 5000\n >>> s_r, s_lw = lapack.get_lapack_funcs(('sysvx', 'sysvx_lwork'))\n >>> lwork = lapack._compute_lwork(s_lw, n)\n >>> lwork\n 32000\n\n \"\"\"\n dtype = getattr(routine, 'dtype', None)\n ret = routine(*args, **kwargs)\n if ret[-1] != 0:\n raise ValueError(\"Internal work array size computation failed: \"\n \"%d\" % (ret[-1],))\n\n if len(ret) == 2:\n return _check_work_float(ret[0].real, dtype)\n else:\n return tuple(_check_work_float(x.real, dtype) for x in ret[:-1])\n\n\ndef _check_work_float(value, dtype):\n \"\"\"\n Convert LAPACK-returned work array size float to integer,\n carefully for single-precision types.\n \"\"\"\n\n if dtype == _np.float32 or dtype == _np.complex64:\n # Single-precision routine -- take next fp value to work\n # around possible truncation in LAPACK code\n value = _np.nextafter(value, _np.inf, dtype=_np.float32)\n\n value = int(value)\n if value < 0 or value > _int32_max:\n raise ValueError(\"Too large work array required -- computation cannot \"\n \"be performed with standard 32-bit LAPACK.\")\n return value\n", "path": "scipy/linalg/lapack.py"}]} |
gh_patches_debug_24 | rasdani/github-patches | git_diff | vega__altair-3202 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add example showing how to render numpy image arrays as tooltip images
I think it could be helpful to show how images that are represented as numpy arrays can be rendered in tooltips in altair. I can add a doc example of this. Maybe in [the tutorials/case studies](https://altair-viz.github.io/case_studies/exploring-weather.html) section? We only have one example there currently. Another option would be to combine it with https://altair-viz.github.io/gallery/image_tooltip.html and create a new page in the user guide on images, but that's more involved.
Here is the code and a video of the output. Note that this would add scipy as a documentation dependency (I could probably get around that, but I have another example I want to add that requires scipy, so I thought I might as well ask now if it is ok to add it). The images are not too large; the size of the chart saved as an HTML file is around 200 kb.
1. Create some example image arrays with blobs in them and measure the area of the blobs.
```python
import numpy as np
import pandas as pd
from scipy import ndimage as ndi
rng = np.random.default_rng([ord(c) for c in 'altair'])
n_rows = 200
def create_blobs(img_width=96, n_dim=2, thresh=0.0001, sigmas=[0.1, 0.2, 0.3]):
"""Helper function to create blobs in the images"""
shape = tuple([img_width] * n_dim)
mask = np.zeros(shape)
points = (img_width * rng.random(n_dim)).astype(int)
mask[tuple(indices for indices in points)] = 1
return ndi.gaussian_filter(mask, sigma=rng.choice(sigmas) * img_width) > thresh
df = pd.DataFrame({
'img1': [create_blobs() for _ in range(n_rows)],
'img2': [create_blobs(sigmas=[0.15, 0.25, 0.35]) for _ in range(n_rows)],
'group': rng.choice(['a', 'b', 'c'], size=n_rows)
})
df[['img1_area', 'img2_area']] = df[['img1', 'img2']].applymap(np.mean)
df
```
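If pulling in scipy only for this example turns out to be a problem, a rough pure-NumPy alternative for step 1 could draw simple filled disks instead of Gaussian-smoothed points. This is only an illustrative sketch that reuses the `rng` defined above; the `create_blobs_numpy` name is made up and it is not part of the proposal itself:
```python
def create_blobs_numpy(img_width=96, radius_fracs=(0.1, 0.2, 0.3)):
    """Draw a single filled disk at a random position -- no scipy needed."""
    cy, cx = img_width * rng.random(2)
    radius = rng.choice(radius_fracs) * img_width
    yy, xx = np.mgrid[:img_width, :img_width]
    return (yy - cy) ** 2 + (xx - cx) ** 2 < radius ** 2
```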
2. Convert the numpy arrays to base64 encoded strings that will show in the tooltip
```python
from io import BytesIO
from PIL import Image, ImageDraw
import base64
def create_tooltip_image(df_row):
# Concatenate images to show together in the tooltip
img_gap = np.ones([df_row['img1'].shape[0], 10]) # 10 px white gap between imgs
img = Image.fromarray(
np.concatenate(
[
df_row['img1'] * 128, # grey
img_gap * 255, # white
df_row['img2'] * 128
],
axis=1
).astype('uint8')
)
# Optional: Burn in labels as pixels in the images
ImageDraw.Draw(img).text((3, 0), 'img1', fill=255)
ImageDraw.Draw(img).text((3 + df_row['img1'].shape[1] + img_gap.shape[1], 0), 'img2', fill=255)
# Convert to base64 encoded image string that can be displayed in the tooltip
buffered = BytesIO()
img.save(buffered, format="PNG")
img_str = base64.b64encode(buffered.getvalue()).decode()
return f"data:image/png;base64,{img_str}"
# The column with the image must be called "image" in order for it to trigger the image rendering in the tooltip
df['image'] = df[['img1', 'img2']].apply(create_tooltip_image, axis=1)
# Dropping the images since they are large and no longer needed
df = df.drop(columns=['img1', 'img2'])
df
```
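As an optional sanity check, the encoded strings can be inspected to confirm they are valid data URLs and to get a feel for how much image data ends up embedded in the chart (the base64 encoding of a PNG always starts with `iVBORw0K`):
```python
print(df['image'].iloc[0][:40])           # data:image/png;base64,iVBORw0K...
print(df['image'].str.len().sum() / 1e3)  # rough size in kB of all embedded image strings
```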
3. Create a chart to show the images
```python
import altair as alt
alt.Chart(df, title='Area of grey blobs').mark_circle().encode(
x='group',
y=alt.Y(alt.repeat(), type='quantitative'),
tooltip=['image'],
color='group'
).repeat(
['img1_area', 'img2_area']
)
```
https://github.com/altair-viz/altair/assets/4560057/45ccc43f-c8a4-4b3b-bb42-ed0b18cd9703
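To reproduce this locally, the repeated chart can be assigned to a variable and saved as a standalone HTML file; that file is what the roughly 200 kb size mentioned above refers to (the filename here is made up):
```python
chart = alt.Chart(df, title='Area of grey blobs').mark_circle().encode(
    x='group',
    y=alt.Y(alt.repeat(), type='quantitative'),
    tooltip=['image'],
    color='group'
).repeat(
    ['img1_area', 'img2_area']
)
chart.save('blob_area_tooltips.html')
```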
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sphinxext/altairgallery.py`
Content:
```
1 import hashlib
2 import os
3 import json
4 import random
5 import collections
6 from operator import itemgetter
7 import warnings
8 import shutil
9
10 import jinja2
11
12 from docutils import nodes
13 from docutils.statemachine import ViewList
14 from docutils.parsers.rst import Directive
15 from docutils.parsers.rst.directives import flag
16
17 from sphinx.util.nodes import nested_parse_with_titles
18
19 from .utils import (
20 get_docstring_and_rest,
21 prev_this_next,
22 create_thumbnail,
23 create_generic_image,
24 )
25 from altair.utils.execeval import eval_block
26 from tests.examples_arguments_syntax import iter_examples_arguments_syntax
27 from tests.examples_methods_syntax import iter_examples_methods_syntax
28
29
30 EXAMPLE_MODULE = "altair.examples"
31
32
33 GALLERY_TEMPLATE = jinja2.Template(
34 """
35 .. This document is auto-generated by the altair-gallery extension. Do not modify directly.
36
37 .. _{{ gallery_ref }}:
38
39 {{ title }}
40 {% for char in title %}-{% endfor %}
41
42 This gallery contains a selection of examples of the plots Altair can create. Some may seem fairly complicated at first glance, but they are built by combining a simple set of declarative building blocks.
43
44 Many draw upon sample datasets compiled by the `Vega <https://vega.github.io/vega/>`_ project. To access them yourself, install `vega_datasets <https://github.com/altair-viz/vega_datasets>`_.
45
46 .. code-block:: none
47
48 python -m pip install vega_datasets
49
50 If you can't find the plots you are looking for here, make sure to check out the :ref:`altair-ecosystem` section, which has links to packages for making e.g. network diagrams and animations.
51
52 {% for grouper, group in examples %}
53
54 .. _gallery-category-{{ grouper }}:
55
56 {{ grouper }}
57 {% for char in grouper %}~{% endfor %}
58
59 .. raw:: html
60
61 <span class="gallery">
62 {% for example in group %}
63 <a class="imagegroup" href="{{ example.name }}.html">
64 <span
65 class="image" alt="{{ example.title }}"
66 {% if example['use_svg'] %}
67 style="background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.svg);"
68 {% else %}
69 style="background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.png);"
70 {% endif %}
71 ></span>
72
73 <span class="image-title">{{ example.title }}</span>
74 </a>
75 {% endfor %}
76 </span>
77
78 <div style='clear:both;'></div>
79
80 {% endfor %}
81
82
83 .. toctree::
84 :maxdepth: 2
85 :caption: Examples
86 :hidden:
87
88 Gallery <self>
89 Tutorials <../case_studies/exploring-weather>
90 """
91 )
92
93 MINIGALLERY_TEMPLATE = jinja2.Template(
94 """
95 .. raw:: html
96
97 <div id="showcase">
98 <div class="examples">
99 {% for example in examples %}
100 <a
101 class="preview" href="{{ gallery_dir }}/{{ example.name }}.html"
102 {% if example['use_svg'] %}
103 style="background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.svg)"
104 {% else %}
105 style="background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.png)"
106 {% endif %}
107 ></a>
108 {% endfor %}
109 </div>
110 </div>
111 """
112 )
113
114
115 EXAMPLE_TEMPLATE = jinja2.Template(
116 """
117 :orphan:
118 :html_theme.sidebar_secondary.remove:
119
120 .. This document is auto-generated by the altair-gallery extension. Do not modify directly.
121
122 .. _gallery_{{ name }}:
123
124 {{ docstring }}
125
126 .. altair-plot::
127 {% if code_below %}:remove-code:{% endif %}
128 {% if strict %}:strict:{% endif %}
129
130 {{ code | indent(4) }}
131
132 .. tab-set::
133
134 .. tab-item:: Method syntax
135 :sync: method
136
137 .. code:: python
138
139 {{ method_code | indent(12) }}
140
141 .. tab-item:: Attribute syntax
142 :sync: attribute
143
144 .. code:: python
145
146 {{ code | indent(12) }}
147 """
148 )
149
150
151 def save_example_pngs(examples, image_dir, make_thumbnails=True):
152 """Save example pngs and (optionally) thumbnails"""
153 if not os.path.exists(image_dir):
154 os.makedirs(image_dir)
155
156 # store hashes so that we know whether images need to be generated
157 hash_file = os.path.join(image_dir, "_image_hashes.json")
158
159 if os.path.exists(hash_file):
160 with open(hash_file) as f:
161 hashes = json.load(f)
162 else:
163 hashes = {}
164
165 for example in examples:
166 filename = example["name"] + (".svg" if example["use_svg"] else ".png")
167 image_file = os.path.join(image_dir, filename)
168
169 example_hash = hashlib.md5(example["code"].encode()).hexdigest()
170 hashes_match = hashes.get(filename, "") == example_hash
171
172 if hashes_match and os.path.exists(image_file):
173 print("-> using cached {}".format(image_file))
174 else:
175 # the file changed or the image file does not exist. Generate it.
176 print("-> saving {}".format(image_file))
177 chart = eval_block(example["code"])
178 try:
179 chart.save(image_file)
180 hashes[filename] = example_hash
181 except ImportError:
182 warnings.warn("Unable to save image: using generic image", stacklevel=1)
183 create_generic_image(image_file)
184
185 with open(hash_file, "w") as f:
186 json.dump(hashes, f)
187
188 if make_thumbnails:
189 params = example.get("galleryParameters", {})
190 if example["use_svg"]:
191 # Thumbnail for SVG is identical to original image
192 thumb_file = os.path.join(image_dir, example["name"] + "-thumb.svg")
193 shutil.copyfile(image_file, thumb_file)
194 else:
195 thumb_file = os.path.join(image_dir, example["name"] + "-thumb.png")
196 create_thumbnail(image_file, thumb_file, **params)
197
198 # Save hashes so we know whether we need to re-generate plots
199 with open(hash_file, "w") as f:
200 json.dump(hashes, f)
201
202
203 def populate_examples(**kwds):
204 """Iterate through Altair examples and extract code"""
205
206 examples = sorted(iter_examples_arguments_syntax(), key=itemgetter("name"))
207 method_examples = {x["name"]: x for x in iter_examples_methods_syntax()}
208
209 for example in examples:
210 docstring, category, code, lineno = get_docstring_and_rest(example["filename"])
211 if example["name"] in method_examples.keys():
212 _, _, method_code, _ = get_docstring_and_rest(
213 method_examples[example["name"]]["filename"]
214 )
215 else:
216 method_code = code
217 code += (
218 "# No channel encoding options are specified in this chart\n"
219 "# so the code is the same as for the method-based syntax.\n"
220 )
221 example.update(kwds)
222 if category is None:
223 raise Exception(
224 f"The example {example['name']} is not assigned to a category"
225 )
226 example.update(
227 {
228 "docstring": docstring,
229 "title": docstring.strip().split("\n")[0],
230 "code": code,
231 "method_code": method_code,
232 "category": category.title(),
233 "lineno": lineno,
234 }
235 )
236
237 return examples
238
239
240 class AltairMiniGalleryDirective(Directive):
241 has_content = False
242
243 option_spec = {
244 "size": int,
245 "names": str,
246 "indices": lambda x: list(map(int, x.split())),
247 "shuffle": flag,
248 "seed": int,
249 "titles": bool,
250 "width": str,
251 }
252
253 def run(self):
254 size = self.options.get("size", 15)
255 names = [name.strip() for name in self.options.get("names", "").split(",")]
256 indices = self.options.get("indices", [])
257 shuffle = "shuffle" in self.options
258 seed = self.options.get("seed", 42)
259 titles = self.options.get("titles", False)
260 width = self.options.get("width", None)
261
262 env = self.state.document.settings.env
263 app = env.app
264
265 gallery_dir = app.builder.config.altair_gallery_dir
266
267 examples = populate_examples()
268
269 if names:
270 if len(names) < size:
271 raise ValueError(
272 "altair-minigallery: if names are specified, "
273 "the list must be at least as long as size."
274 )
275 mapping = {example["name"]: example for example in examples}
276 examples = [mapping[name] for name in names]
277 else:
278 if indices:
279 examples = [examples[i] for i in indices]
280 if shuffle:
281 random.seed(seed)
282 random.shuffle(examples)
283 if size:
284 examples = examples[:size]
285
286 include = MINIGALLERY_TEMPLATE.render(
287 image_dir="/_static",
288 gallery_dir=gallery_dir,
289 examples=examples,
290 titles=titles,
291 width=width,
292 )
293
294 # parse and return documentation
295 result = ViewList()
296 for line in include.split("\n"):
297 result.append(line, "<altair-minigallery>")
298 node = nodes.paragraph()
299 node.document = self.state.document
300 nested_parse_with_titles(self.state, result, node)
301
302 return node.children
303
304
305 def main(app):
306 gallery_dir = app.builder.config.altair_gallery_dir
307 target_dir = os.path.join(app.builder.srcdir, gallery_dir)
308 image_dir = os.path.join(app.builder.srcdir, "_images")
309
310 gallery_ref = app.builder.config.altair_gallery_ref
311 gallery_title = app.builder.config.altair_gallery_title
312 examples = populate_examples(gallery_ref=gallery_ref, code_below=True, strict=False)
313
314 if not os.path.exists(target_dir):
315 os.makedirs(target_dir)
316
317 examples = sorted(examples, key=lambda x: x["title"])
318 examples_toc = collections.OrderedDict(
319 {
320 "Simple Charts": [],
321 "Bar Charts": [],
322 "Line Charts": [],
323 "Area Charts": [],
324 "Circular Plots": [],
325 "Scatter Plots": [],
326 "Uncertainties And Trends": [],
327 "Distributions": [],
328 "Tables": [],
329 "Maps": [],
330 "Interactive Charts": [],
331 "Advanced Calculations": [],
332 "Case Studies": [],
333 }
334 )
335 for d in examples:
336 examples_toc[d["category"]].append(d)
337
338 # Write the gallery index file
339 with open(os.path.join(target_dir, "index.rst"), "w") as f:
340 f.write(
341 GALLERY_TEMPLATE.render(
342 title=gallery_title,
343 examples=examples_toc.items(),
344 image_dir="/_static",
345 gallery_ref=gallery_ref,
346 )
347 )
348
349 # save the images to file
350 save_example_pngs(examples, image_dir)
351
352 # Write the individual example files
353 for prev_ex, example, next_ex in prev_this_next(examples):
354 if prev_ex:
355 example["prev_ref"] = "gallery_{name}".format(**prev_ex)
356 if next_ex:
357 example["next_ref"] = "gallery_{name}".format(**next_ex)
358 target_filename = os.path.join(target_dir, example["name"] + ".rst")
359 with open(os.path.join(target_filename), "w", encoding="utf-8") as f:
360 f.write(EXAMPLE_TEMPLATE.render(example))
361
362
363 def setup(app):
364 app.connect("builder-inited", main)
365 app.add_css_file("altair-gallery.css")
366 app.add_config_value("altair_gallery_dir", "gallery", "env")
367 app.add_config_value("altair_gallery_ref", "example-gallery", "env")
368 app.add_config_value("altair_gallery_title", "Example Gallery", "env")
369 app.add_directive_to_domain("py", "altair-minigallery", AltairMiniGalleryDirective)
370
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sphinxext/altairgallery.py b/sphinxext/altairgallery.py
--- a/sphinxext/altairgallery.py
+++ b/sphinxext/altairgallery.py
@@ -86,7 +86,7 @@
:hidden:
Gallery <self>
- Tutorials <../case_studies/exploring-weather>
+ Tutorials <../case_studies/index>
"""
)
| {"golden_diff": "diff --git a/sphinxext/altairgallery.py b/sphinxext/altairgallery.py\n--- a/sphinxext/altairgallery.py\n+++ b/sphinxext/altairgallery.py\n@@ -86,7 +86,7 @@\n :hidden:\n \n Gallery <self>\n- Tutorials <../case_studies/exploring-weather>\n+ Tutorials <../case_studies/index>\n \"\"\"\n )\n", "issue": "Add example showing how to render numpy image arrays as tooltip images\nI think it could be helpful to show how images that are represented as numpy arrays can be rendered in tooltips in altair. I can add a doc example of this. Maybe in [the tutorials/case studies](https://altair-viz.github.io/case_studies/exploring-weather.html) section? We only have one example there currently. Another option would be to combine it with https://altair-viz.github.io/gallery/image_tooltip.html and create a new page in the user guide on images, but that 's more involved.\r\n\r\nHere is the code and a video of the output. Note that this would add scipy as a documentation dependency (I could probably get around that, but I have another example I want to add that requires scipy so thought I might as well ask now if it is ok to add it). The images are not too large, the size of the chart saved as an html file is around 200kb.\r\n\r\n1. Create some example image arrays with blobs in them and measure the area of the blobs.\r\n ```python\r\n import numpy as np\r\n import pandas as pd\r\n from scipy import ndimage as ndi\r\n \r\n rng = np.random.default_rng([ord(c) for c in 'altair'])\r\n n_rows = 200\r\n \r\n def create_blobs(img_width=96, n_dim=2, thresh=0.0001, sigmas=[0.1, 0.2, 0.3]):\r\n \"\"\"Helper function to create blobs in the images\"\"\"\r\n shape = tuple([img_width] * n_dim)\r\n mask = np.zeros(shape)\r\n points = (img_width * rng.random(n_dim)).astype(int)\r\n mask[tuple(indices for indices in points)] = 1\r\n return ndi.gaussian_filter(mask, sigma=rng.choice(sigmas) * img_width) > thresh\r\n \r\n df = pd.DataFrame({\r\n 'img1': [create_blobs() for _ in range(n_rows)],\r\n 'img2': [create_blobs(sigmas=[0.15, 0.25, 0.35]) for _ in range(n_rows)],\r\n 'group': rng.choice(['a', 'b', 'c'], size=n_rows)\r\n })\r\n df[['img1_area', 'img2_area']] = df[['img1', 'img2']].applymap(np.mean)\r\n df\r\n ```\r\n\r\n2. 
Convert the numpy arrays to base64 encoded strings that will show in the tooltip\r\n\r\n ```python\r\n from io import BytesIO\r\n from PIL import Image, ImageDraw\r\n import base64\r\n \r\n \r\n def create_tooltip_image(df_row):\r\n # Concatenate images to show together in the tooltip\r\n img_gap = np.ones([df_row['img1'].shape[0], 10]) # 10 px white gap between imgs\r\n img = Image.fromarray(\r\n np.concatenate(\r\n [\r\n df_row['img1'] * 128, # grey\r\n img_gap * 255, # white\r\n df_row['img2'] * 128\r\n ],\r\n axis=1\r\n ).astype('uint8')\r\n )\r\n \r\n # Optional: Burn in labels as pixels in the images\r\n ImageDraw.Draw(img).text((3, 0), 'img1', fill=255)\r\n ImageDraw.Draw(img).text((3 + df_row['img1'].shape[1] + img_gap.shape[1], 0), 'img2', fill=255)\r\n \r\n # Convert to base64 encoded image string that can be displayed in the tooltip\r\n buffered = BytesIO()\r\n img.save(buffered, format=\"PNG\")\r\n img_str = base64.b64encode(buffered.getvalue()).decode()\r\n return f\"data:image/png;base64,{img_str}\"\r\n \r\n # The column with the image must be called \"image\" in order for it to trigger the image rendering in the tooltip\r\n df['image'] = df[['img1', 'img2']].apply(create_tooltip_image, axis=1)\r\n \r\n # Dropping the images since they are large an no longer needed\r\n df = df.drop(columns=['img1', 'img2'])\r\n df\r\n ```\r\n3. Create a chart to show the images\r\n\r\n\r\n ```python\r\n import altair as alt\r\n \r\n alt.Chart(df, title='Area of grey blobs').mark_circle().encode(\r\n x='group',\r\n y=alt.Y(alt.repeat(), type='quantitative'),\r\n tooltip=['image'],\r\n color='group'\r\n ).repeat(\r\n ['img1_area', 'img2_area']\r\n )\r\n ```\r\n\r\n\r\nhttps://github.com/altair-viz/altair/assets/4560057/45ccc43f-c8a4-4b3b-bb42-ed0b18cd9703\r\n\r\n\n", "before_files": [{"content": "import hashlib\nimport os\nimport json\nimport random\nimport collections\nfrom operator import itemgetter\nimport warnings\nimport shutil\n\nimport jinja2\n\nfrom docutils import nodes\nfrom docutils.statemachine import ViewList\nfrom docutils.parsers.rst import Directive\nfrom docutils.parsers.rst.directives import flag\n\nfrom sphinx.util.nodes import nested_parse_with_titles\n\nfrom .utils import (\n get_docstring_and_rest,\n prev_this_next,\n create_thumbnail,\n create_generic_image,\n)\nfrom altair.utils.execeval import eval_block\nfrom tests.examples_arguments_syntax import iter_examples_arguments_syntax\nfrom tests.examples_methods_syntax import iter_examples_methods_syntax\n\n\nEXAMPLE_MODULE = \"altair.examples\"\n\n\nGALLERY_TEMPLATE = jinja2.Template(\n \"\"\"\n.. This document is auto-generated by the altair-gallery extension. Do not modify directly.\n\n.. _{{ gallery_ref }}:\n\n{{ title }}\n{% for char in title %}-{% endfor %}\n\nThis gallery contains a selection of examples of the plots Altair can create. Some may seem fairly complicated at first glance, but they are built by combining a simple set of declarative building blocks.\n\nMany draw upon sample datasets compiled by the `Vega <https://vega.github.io/vega/>`_ project. To access them yourself, install `vega_datasets <https://github.com/altair-viz/vega_datasets>`_.\n\n.. code-block:: none\n\n python -m pip install vega_datasets\n\nIf you can't find the plots you are looking for here, make sure to check out the :ref:`altair-ecosystem` section, which has links to packages for making e.g. network diagrams and animations.\n\n{% for grouper, group in examples %}\n\n.. 
_gallery-category-{{ grouper }}:\n\n{{ grouper }}\n{% for char in grouper %}~{% endfor %}\n\n.. raw:: html\n\n <span class=\"gallery\">\n {% for example in group %}\n <a class=\"imagegroup\" href=\"{{ example.name }}.html\">\n <span\n class=\"image\" alt=\"{{ example.title }}\"\n{% if example['use_svg'] %}\n style=\"background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.svg);\"\n{% else %}\n style=\"background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.png);\"\n{% endif %}\n ></span>\n\n <span class=\"image-title\">{{ example.title }}</span>\n </a>\n {% endfor %}\n </span>\n\n <div style='clear:both;'></div>\n\n{% endfor %}\n\n\n.. toctree::\n :maxdepth: 2\n :caption: Examples\n :hidden:\n\n Gallery <self>\n Tutorials <../case_studies/exploring-weather>\n\"\"\"\n)\n\nMINIGALLERY_TEMPLATE = jinja2.Template(\n \"\"\"\n.. raw:: html\n\n <div id=\"showcase\">\n <div class=\"examples\">\n {% for example in examples %}\n <a\n class=\"preview\" href=\"{{ gallery_dir }}/{{ example.name }}.html\"\n{% if example['use_svg'] %}\n style=\"background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.svg)\"\n{% else %}\n style=\"background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.png)\"\n{% endif %}\n ></a>\n {% endfor %}\n </div>\n </div>\n\"\"\"\n)\n\n\nEXAMPLE_TEMPLATE = jinja2.Template(\n \"\"\"\n:orphan:\n:html_theme.sidebar_secondary.remove:\n\n.. This document is auto-generated by the altair-gallery extension. Do not modify directly.\n\n.. _gallery_{{ name }}:\n\n{{ docstring }}\n\n.. altair-plot::\n {% if code_below %}:remove-code:{% endif %}\n {% if strict %}:strict:{% endif %}\n\n{{ code | indent(4) }}\n\n.. tab-set::\n\n .. tab-item:: Method syntax\n :sync: method\n\n .. code:: python\n\n{{ method_code | indent(12) }}\n\n .. tab-item:: Attribute syntax\n :sync: attribute\n\n .. code:: python\n\n{{ code | indent(12) }}\n\"\"\"\n)\n\n\ndef save_example_pngs(examples, image_dir, make_thumbnails=True):\n \"\"\"Save example pngs and (optionally) thumbnails\"\"\"\n if not os.path.exists(image_dir):\n os.makedirs(image_dir)\n\n # store hashes so that we know whether images need to be generated\n hash_file = os.path.join(image_dir, \"_image_hashes.json\")\n\n if os.path.exists(hash_file):\n with open(hash_file) as f:\n hashes = json.load(f)\n else:\n hashes = {}\n\n for example in examples:\n filename = example[\"name\"] + (\".svg\" if example[\"use_svg\"] else \".png\")\n image_file = os.path.join(image_dir, filename)\n\n example_hash = hashlib.md5(example[\"code\"].encode()).hexdigest()\n hashes_match = hashes.get(filename, \"\") == example_hash\n\n if hashes_match and os.path.exists(image_file):\n print(\"-> using cached {}\".format(image_file))\n else:\n # the file changed or the image file does not exist. 
Generate it.\n print(\"-> saving {}\".format(image_file))\n chart = eval_block(example[\"code\"])\n try:\n chart.save(image_file)\n hashes[filename] = example_hash\n except ImportError:\n warnings.warn(\"Unable to save image: using generic image\", stacklevel=1)\n create_generic_image(image_file)\n\n with open(hash_file, \"w\") as f:\n json.dump(hashes, f)\n\n if make_thumbnails:\n params = example.get(\"galleryParameters\", {})\n if example[\"use_svg\"]:\n # Thumbnail for SVG is identical to original image\n thumb_file = os.path.join(image_dir, example[\"name\"] + \"-thumb.svg\")\n shutil.copyfile(image_file, thumb_file)\n else:\n thumb_file = os.path.join(image_dir, example[\"name\"] + \"-thumb.png\")\n create_thumbnail(image_file, thumb_file, **params)\n\n # Save hashes so we know whether we need to re-generate plots\n with open(hash_file, \"w\") as f:\n json.dump(hashes, f)\n\n\ndef populate_examples(**kwds):\n \"\"\"Iterate through Altair examples and extract code\"\"\"\n\n examples = sorted(iter_examples_arguments_syntax(), key=itemgetter(\"name\"))\n method_examples = {x[\"name\"]: x for x in iter_examples_methods_syntax()}\n\n for example in examples:\n docstring, category, code, lineno = get_docstring_and_rest(example[\"filename\"])\n if example[\"name\"] in method_examples.keys():\n _, _, method_code, _ = get_docstring_and_rest(\n method_examples[example[\"name\"]][\"filename\"]\n )\n else:\n method_code = code\n code += (\n \"# No channel encoding options are specified in this chart\\n\"\n \"# so the code is the same as for the method-based syntax.\\n\"\n )\n example.update(kwds)\n if category is None:\n raise Exception(\n f\"The example {example['name']} is not assigned to a category\"\n )\n example.update(\n {\n \"docstring\": docstring,\n \"title\": docstring.strip().split(\"\\n\")[0],\n \"code\": code,\n \"method_code\": method_code,\n \"category\": category.title(),\n \"lineno\": lineno,\n }\n )\n\n return examples\n\n\nclass AltairMiniGalleryDirective(Directive):\n has_content = False\n\n option_spec = {\n \"size\": int,\n \"names\": str,\n \"indices\": lambda x: list(map(int, x.split())),\n \"shuffle\": flag,\n \"seed\": int,\n \"titles\": bool,\n \"width\": str,\n }\n\n def run(self):\n size = self.options.get(\"size\", 15)\n names = [name.strip() for name in self.options.get(\"names\", \"\").split(\",\")]\n indices = self.options.get(\"indices\", [])\n shuffle = \"shuffle\" in self.options\n seed = self.options.get(\"seed\", 42)\n titles = self.options.get(\"titles\", False)\n width = self.options.get(\"width\", None)\n\n env = self.state.document.settings.env\n app = env.app\n\n gallery_dir = app.builder.config.altair_gallery_dir\n\n examples = populate_examples()\n\n if names:\n if len(names) < size:\n raise ValueError(\n \"altair-minigallery: if names are specified, \"\n \"the list must be at least as long as size.\"\n )\n mapping = {example[\"name\"]: example for example in examples}\n examples = [mapping[name] for name in names]\n else:\n if indices:\n examples = [examples[i] for i in indices]\n if shuffle:\n random.seed(seed)\n random.shuffle(examples)\n if size:\n examples = examples[:size]\n\n include = MINIGALLERY_TEMPLATE.render(\n image_dir=\"/_static\",\n gallery_dir=gallery_dir,\n examples=examples,\n titles=titles,\n width=width,\n )\n\n # parse and return documentation\n result = ViewList()\n for line in include.split(\"\\n\"):\n result.append(line, \"<altair-minigallery>\")\n node = nodes.paragraph()\n node.document = self.state.document\n 
nested_parse_with_titles(self.state, result, node)\n\n return node.children\n\n\ndef main(app):\n gallery_dir = app.builder.config.altair_gallery_dir\n target_dir = os.path.join(app.builder.srcdir, gallery_dir)\n image_dir = os.path.join(app.builder.srcdir, \"_images\")\n\n gallery_ref = app.builder.config.altair_gallery_ref\n gallery_title = app.builder.config.altair_gallery_title\n examples = populate_examples(gallery_ref=gallery_ref, code_below=True, strict=False)\n\n if not os.path.exists(target_dir):\n os.makedirs(target_dir)\n\n examples = sorted(examples, key=lambda x: x[\"title\"])\n examples_toc = collections.OrderedDict(\n {\n \"Simple Charts\": [],\n \"Bar Charts\": [],\n \"Line Charts\": [],\n \"Area Charts\": [],\n \"Circular Plots\": [],\n \"Scatter Plots\": [],\n \"Uncertainties And Trends\": [],\n \"Distributions\": [],\n \"Tables\": [],\n \"Maps\": [],\n \"Interactive Charts\": [],\n \"Advanced Calculations\": [],\n \"Case Studies\": [],\n }\n )\n for d in examples:\n examples_toc[d[\"category\"]].append(d)\n\n # Write the gallery index file\n with open(os.path.join(target_dir, \"index.rst\"), \"w\") as f:\n f.write(\n GALLERY_TEMPLATE.render(\n title=gallery_title,\n examples=examples_toc.items(),\n image_dir=\"/_static\",\n gallery_ref=gallery_ref,\n )\n )\n\n # save the images to file\n save_example_pngs(examples, image_dir)\n\n # Write the individual example files\n for prev_ex, example, next_ex in prev_this_next(examples):\n if prev_ex:\n example[\"prev_ref\"] = \"gallery_{name}\".format(**prev_ex)\n if next_ex:\n example[\"next_ref\"] = \"gallery_{name}\".format(**next_ex)\n target_filename = os.path.join(target_dir, example[\"name\"] + \".rst\")\n with open(os.path.join(target_filename), \"w\", encoding=\"utf-8\") as f:\n f.write(EXAMPLE_TEMPLATE.render(example))\n\n\ndef setup(app):\n app.connect(\"builder-inited\", main)\n app.add_css_file(\"altair-gallery.css\")\n app.add_config_value(\"altair_gallery_dir\", \"gallery\", \"env\")\n app.add_config_value(\"altair_gallery_ref\", \"example-gallery\", \"env\")\n app.add_config_value(\"altair_gallery_title\", \"Example Gallery\", \"env\")\n app.add_directive_to_domain(\"py\", \"altair-minigallery\", AltairMiniGalleryDirective)\n", "path": "sphinxext/altairgallery.py"}], "after_files": [{"content": "import hashlib\nimport os\nimport json\nimport random\nimport collections\nfrom operator import itemgetter\nimport warnings\nimport shutil\n\nimport jinja2\n\nfrom docutils import nodes\nfrom docutils.statemachine import ViewList\nfrom docutils.parsers.rst import Directive\nfrom docutils.parsers.rst.directives import flag\n\nfrom sphinx.util.nodes import nested_parse_with_titles\n\nfrom .utils import (\n get_docstring_and_rest,\n prev_this_next,\n create_thumbnail,\n create_generic_image,\n)\nfrom altair.utils.execeval import eval_block\nfrom tests.examples_arguments_syntax import iter_examples_arguments_syntax\nfrom tests.examples_methods_syntax import iter_examples_methods_syntax\n\n\nEXAMPLE_MODULE = \"altair.examples\"\n\n\nGALLERY_TEMPLATE = jinja2.Template(\n \"\"\"\n.. This document is auto-generated by the altair-gallery extension. Do not modify directly.\n\n.. _{{ gallery_ref }}:\n\n{{ title }}\n{% for char in title %}-{% endfor %}\n\nThis gallery contains a selection of examples of the plots Altair can create. 
Some may seem fairly complicated at first glance, but they are built by combining a simple set of declarative building blocks.\n\nMany draw upon sample datasets compiled by the `Vega <https://vega.github.io/vega/>`_ project. To access them yourself, install `vega_datasets <https://github.com/altair-viz/vega_datasets>`_.\n\n.. code-block:: none\n\n python -m pip install vega_datasets\n\nIf you can't find the plots you are looking for here, make sure to check out the :ref:`altair-ecosystem` section, which has links to packages for making e.g. network diagrams and animations.\n\n{% for grouper, group in examples %}\n\n.. _gallery-category-{{ grouper }}:\n\n{{ grouper }}\n{% for char in grouper %}~{% endfor %}\n\n.. raw:: html\n\n <span class=\"gallery\">\n {% for example in group %}\n <a class=\"imagegroup\" href=\"{{ example.name }}.html\">\n <span\n class=\"image\" alt=\"{{ example.title }}\"\n{% if example['use_svg'] %}\n style=\"background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.svg);\"\n{% else %}\n style=\"background-image: url(..{{ image_dir }}/{{ example.name }}-thumb.png);\"\n{% endif %}\n ></span>\n\n <span class=\"image-title\">{{ example.title }}</span>\n </a>\n {% endfor %}\n </span>\n\n <div style='clear:both;'></div>\n\n{% endfor %}\n\n\n.. toctree::\n :maxdepth: 2\n :caption: Examples\n :hidden:\n\n Gallery <self>\n Tutorials <../case_studies/index>\n\"\"\"\n)\n\nMINIGALLERY_TEMPLATE = jinja2.Template(\n \"\"\"\n.. raw:: html\n\n <div id=\"showcase\">\n <div class=\"examples\">\n {% for example in examples %}\n <a\n class=\"preview\" href=\"{{ gallery_dir }}/{{ example.name }}.html\"\n{% if example['use_svg'] %}\n style=\"background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.svg)\"\n{% else %}\n style=\"background-image: url(.{{ image_dir }}/{{ example.name }}-thumb.png)\"\n{% endif %}\n ></a>\n {% endfor %}\n </div>\n </div>\n\"\"\"\n)\n\n\nEXAMPLE_TEMPLATE = jinja2.Template(\n \"\"\"\n:orphan:\n:html_theme.sidebar_secondary.remove:\n\n.. This document is auto-generated by the altair-gallery extension. Do not modify directly.\n\n.. _gallery_{{ name }}:\n\n{{ docstring }}\n\n.. altair-plot::\n {% if code_below %}:remove-code:{% endif %}\n {% if strict %}:strict:{% endif %}\n\n{{ code | indent(4) }}\n\n.. tab-set::\n\n .. tab-item:: Method syntax\n :sync: method\n\n .. code:: python\n\n{{ method_code | indent(12) }}\n\n .. tab-item:: Attribute syntax\n :sync: attribute\n\n .. code:: python\n\n{{ code | indent(12) }}\n\"\"\"\n)\n\n\ndef save_example_pngs(examples, image_dir, make_thumbnails=True):\n \"\"\"Save example pngs and (optionally) thumbnails\"\"\"\n if not os.path.exists(image_dir):\n os.makedirs(image_dir)\n\n # store hashes so that we know whether images need to be generated\n hash_file = os.path.join(image_dir, \"_image_hashes.json\")\n\n if os.path.exists(hash_file):\n with open(hash_file) as f:\n hashes = json.load(f)\n else:\n hashes = {}\n\n for example in examples:\n filename = example[\"name\"] + (\".svg\" if example[\"use_svg\"] else \".png\")\n image_file = os.path.join(image_dir, filename)\n\n example_hash = hashlib.md5(example[\"code\"].encode()).hexdigest()\n hashes_match = hashes.get(filename, \"\") == example_hash\n\n if hashes_match and os.path.exists(image_file):\n print(\"-> using cached {}\".format(image_file))\n else:\n # the file changed or the image file does not exist. 
Generate it.\n print(\"-> saving {}\".format(image_file))\n chart = eval_block(example[\"code\"])\n try:\n chart.save(image_file)\n hashes[filename] = example_hash\n except ImportError:\n warnings.warn(\"Unable to save image: using generic image\", stacklevel=1)\n create_generic_image(image_file)\n\n with open(hash_file, \"w\") as f:\n json.dump(hashes, f)\n\n if make_thumbnails:\n params = example.get(\"galleryParameters\", {})\n if example[\"use_svg\"]:\n # Thumbnail for SVG is identical to original image\n thumb_file = os.path.join(image_dir, example[\"name\"] + \"-thumb.svg\")\n shutil.copyfile(image_file, thumb_file)\n else:\n thumb_file = os.path.join(image_dir, example[\"name\"] + \"-thumb.png\")\n create_thumbnail(image_file, thumb_file, **params)\n\n # Save hashes so we know whether we need to re-generate plots\n with open(hash_file, \"w\") as f:\n json.dump(hashes, f)\n\n\ndef populate_examples(**kwds):\n \"\"\"Iterate through Altair examples and extract code\"\"\"\n\n examples = sorted(iter_examples_arguments_syntax(), key=itemgetter(\"name\"))\n method_examples = {x[\"name\"]: x for x in iter_examples_methods_syntax()}\n\n for example in examples:\n docstring, category, code, lineno = get_docstring_and_rest(example[\"filename\"])\n if example[\"name\"] in method_examples.keys():\n _, _, method_code, _ = get_docstring_and_rest(\n method_examples[example[\"name\"]][\"filename\"]\n )\n else:\n method_code = code\n code += (\n \"# No channel encoding options are specified in this chart\\n\"\n \"# so the code is the same as for the method-based syntax.\\n\"\n )\n example.update(kwds)\n if category is None:\n raise Exception(\n f\"The example {example['name']} is not assigned to a category\"\n )\n example.update(\n {\n \"docstring\": docstring,\n \"title\": docstring.strip().split(\"\\n\")[0],\n \"code\": code,\n \"method_code\": method_code,\n \"category\": category.title(),\n \"lineno\": lineno,\n }\n )\n\n return examples\n\n\nclass AltairMiniGalleryDirective(Directive):\n has_content = False\n\n option_spec = {\n \"size\": int,\n \"names\": str,\n \"indices\": lambda x: list(map(int, x.split())),\n \"shuffle\": flag,\n \"seed\": int,\n \"titles\": bool,\n \"width\": str,\n }\n\n def run(self):\n size = self.options.get(\"size\", 15)\n names = [name.strip() for name in self.options.get(\"names\", \"\").split(\",\")]\n indices = self.options.get(\"indices\", [])\n shuffle = \"shuffle\" in self.options\n seed = self.options.get(\"seed\", 42)\n titles = self.options.get(\"titles\", False)\n width = self.options.get(\"width\", None)\n\n env = self.state.document.settings.env\n app = env.app\n\n gallery_dir = app.builder.config.altair_gallery_dir\n\n examples = populate_examples()\n\n if names:\n if len(names) < size:\n raise ValueError(\n \"altair-minigallery: if names are specified, \"\n \"the list must be at least as long as size.\"\n )\n mapping = {example[\"name\"]: example for example in examples}\n examples = [mapping[name] for name in names]\n else:\n if indices:\n examples = [examples[i] for i in indices]\n if shuffle:\n random.seed(seed)\n random.shuffle(examples)\n if size:\n examples = examples[:size]\n\n include = MINIGALLERY_TEMPLATE.render(\n image_dir=\"/_static\",\n gallery_dir=gallery_dir,\n examples=examples,\n titles=titles,\n width=width,\n )\n\n # parse and return documentation\n result = ViewList()\n for line in include.split(\"\\n\"):\n result.append(line, \"<altair-minigallery>\")\n node = nodes.paragraph()\n node.document = self.state.document\n 
nested_parse_with_titles(self.state, result, node)\n\n return node.children\n\n\ndef main(app):\n gallery_dir = app.builder.config.altair_gallery_dir\n target_dir = os.path.join(app.builder.srcdir, gallery_dir)\n image_dir = os.path.join(app.builder.srcdir, \"_images\")\n\n gallery_ref = app.builder.config.altair_gallery_ref\n gallery_title = app.builder.config.altair_gallery_title\n examples = populate_examples(gallery_ref=gallery_ref, code_below=True, strict=False)\n\n if not os.path.exists(target_dir):\n os.makedirs(target_dir)\n\n examples = sorted(examples, key=lambda x: x[\"title\"])\n examples_toc = collections.OrderedDict(\n {\n \"Simple Charts\": [],\n \"Bar Charts\": [],\n \"Line Charts\": [],\n \"Area Charts\": [],\n \"Circular Plots\": [],\n \"Scatter Plots\": [],\n \"Uncertainties And Trends\": [],\n \"Distributions\": [],\n \"Tables\": [],\n \"Maps\": [],\n \"Interactive Charts\": [],\n \"Advanced Calculations\": [],\n \"Case Studies\": [],\n }\n )\n for d in examples:\n examples_toc[d[\"category\"]].append(d)\n\n # Write the gallery index file\n with open(os.path.join(target_dir, \"index.rst\"), \"w\") as f:\n f.write(\n GALLERY_TEMPLATE.render(\n title=gallery_title,\n examples=examples_toc.items(),\n image_dir=\"/_static\",\n gallery_ref=gallery_ref,\n )\n )\n\n # save the images to file\n save_example_pngs(examples, image_dir)\n\n # Write the individual example files\n for prev_ex, example, next_ex in prev_this_next(examples):\n if prev_ex:\n example[\"prev_ref\"] = \"gallery_{name}\".format(**prev_ex)\n if next_ex:\n example[\"next_ref\"] = \"gallery_{name}\".format(**next_ex)\n target_filename = os.path.join(target_dir, example[\"name\"] + \".rst\")\n with open(os.path.join(target_filename), \"w\", encoding=\"utf-8\") as f:\n f.write(EXAMPLE_TEMPLATE.render(example))\n\n\ndef setup(app):\n app.connect(\"builder-inited\", main)\n app.add_css_file(\"altair-gallery.css\")\n app.add_config_value(\"altair_gallery_dir\", \"gallery\", \"env\")\n app.add_config_value(\"altair_gallery_ref\", \"example-gallery\", \"env\")\n app.add_config_value(\"altair_gallery_title\", \"Example Gallery\", \"env\")\n app.add_directive_to_domain(\"py\", \"altair-minigallery\", AltairMiniGalleryDirective)\n", "path": "sphinxext/altairgallery.py"}]} |
gh_patches_debug_25 | rasdani/github-patches | git_diff | graspologic-org__graspologic-366 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
returning test statistic in LDT
some practitioners (read: Vince, cep) only care about the test statistic and not the p-value. obviously one can still extract it if they perform the full test. however, that wastes time and resources. one can set the number of iterations to 1 to minimize that, but we can still do less. i propose to allow the number of permutations to be set to 0 (hyppo allows that, so really it is just a change in argument check). i am happy to do this, but:
this brings up the following questions: what should be happening to the fit_predict in that case? should it return the test statistic instead? or the p-value of 1? or NaN? should we be raising warnings?
and on a larger scale: should we really have this API? should fit_predict return the p-value, or a tuple of a p-value and a test statistic, like many other tests in python? furthermore, should it really be a class? once again, most tests in python that i have seen (scipy, statsmodels) are functions, not classes.
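
A self-contained toy sketch of the return convention under discussion — always compute the statistic, and return `None` for the p-value when zero permutations are requested. The `permutation_test` name and the difference-of-means statistic are illustrative stand-ins, not graspy's actual LDT implementation:

```python
import numpy as np


def permutation_test(x, y, n_permutations=1000, seed=None):
    """Toy two-sample permutation test illustrating an ``n_permutations=0`` mode."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    stat = abs(x.mean() - y.mean())  # stand-in for the real test statistic
    if n_permutations == 0:
        return stat, None  # statistic only; no p-value is estimated
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_permutations):
        perm = rng.permutation(pooled)
        count += abs(perm[: x.size].mean() - perm[x.size:].mean()) >= stat
    p_value = (count + 1) / (n_permutations + 1)
    return stat, p_value
```

Returning a `(statistic, p_value)` tuple keeps the permutation-free mode unambiguous, which is the scipy/statsmodels-style behaviour mentioned above.
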
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import sys
3 from setuptools import setup, find_packages
4 from sys import platform
5
6 PACKAGE_NAME = "graspy"
7 DESCRIPTION = "A set of python modules for graph statistics"
8 with open("README.md", "r") as f:
9 LONG_DESCRIPTION = f.read()
10 AUTHOR = ("Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand",)
11 AUTHOR_EMAIL = "[email protected]"
12 URL = "https://github.com/neurodata/graspy"
13 MINIMUM_PYTHON_VERSION = 3, 6 # Minimum of Python 3.5
14 REQUIRED_PACKAGES = [
15 "networkx>=2.1",
16 "numpy>=1.8.1",
17 "scikit-learn>=0.19.1",
18 "scipy>=1.1.0",
19 "seaborn>=0.9.0",
20 "matplotlib>=3.0.0",
21 "hyppo>=0.1.2",
22 ]
23
24
25 # Find GraSPy version.
26 PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))
27 for line in open(os.path.join(PROJECT_PATH, "graspy", "__init__.py")):
28 if line.startswith("__version__ = "):
29 VERSION = line.strip().split()[2][1:-1]
30
31
32 def check_python_version():
33 """Exit when the Python version is too low."""
34 if sys.version_info < MINIMUM_PYTHON_VERSION:
35 sys.exit("Python {}.{}+ is required.".format(*MINIMUM_PYTHON_VERSION))
36
37
38 check_python_version()
39
40 setup(
41 name=PACKAGE_NAME,
42 version=VERSION,
43 description=DESCRIPTION,
44 long_description=LONG_DESCRIPTION,
45 long_description_content_type="text/markdown",
46 author=AUTHOR,
47 author_email=AUTHOR_EMAIL,
48 install_requires=REQUIRED_PACKAGES,
49 url=URL,
50 license="Apache License 2.0",
51 classifiers=[
52 "Development Status :: 3 - Alpha",
53 "Intended Audience :: Science/Research",
54 "Topic :: Scientific/Engineering :: Mathematics",
55 "License :: OSI Approved :: Apache Software License",
56 "Programming Language :: Python :: 3",
57 "Programming Language :: Python :: 3.6",
58 "Programming Language :: Python :: 3.7",
59 ],
60 packages=find_packages(),
61 include_package_data=True,
62 )
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,7 @@
"scipy>=1.1.0",
"seaborn>=0.9.0",
"matplotlib>=3.0.0",
- "hyppo>=0.1.2",
+ "hyppo>=0.1.3",
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,7 +18,7 @@\n \"scipy>=1.1.0\",\n \"seaborn>=0.9.0\",\n \"matplotlib>=3.0.0\",\n- \"hyppo>=0.1.2\",\n+ \"hyppo>=0.1.3\",\n ]\n", "issue": "returning test statistic in LDT\nsome practitioners (read: Vince, cep) only care about the test statistic and not the p-value. obviously one can still extract it if they perform the full test. however, that wastes time and resources. one can set the number of iterations to 1 to minimize that, but we can still do less. i propose to allow the number of permutations to be set to 0 (hyppo allows that, so really it is just a change in argument check). i am happy to do this, but:\r\n\r\nthis brings up the following questions: what should be happening to the fit_predict in that case? should it return the test statistic instead? or the p-value of 1? or NaN? should we be raising warnings?\r\n\r\nand on a larger scale: should we really have this API? should fit predict return p-value, or a tuple of a p-value and a test statistic, like many other tests in python? furthremore, should it really be a class? once again, most tests in python that i have seen (scipy, statsmodels) are functions, not classes.\n", "before_files": [{"content": "import os\nimport sys\nfrom setuptools import setup, find_packages\nfrom sys import platform\n\nPACKAGE_NAME = \"graspy\"\nDESCRIPTION = \"A set of python modules for graph statistics\"\nwith open(\"README.md\", \"r\") as f:\n LONG_DESCRIPTION = f.read()\nAUTHOR = (\"Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand\",)\nAUTHOR_EMAIL = \"[email protected]\"\nURL = \"https://github.com/neurodata/graspy\"\nMINIMUM_PYTHON_VERSION = 3, 6 # Minimum of Python 3.5\nREQUIRED_PACKAGES = [\n \"networkx>=2.1\",\n \"numpy>=1.8.1\",\n \"scikit-learn>=0.19.1\",\n \"scipy>=1.1.0\",\n \"seaborn>=0.9.0\",\n \"matplotlib>=3.0.0\",\n \"hyppo>=0.1.2\",\n]\n\n\n# Find GraSPy version.\nPROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\nfor line in open(os.path.join(PROJECT_PATH, \"graspy\", \"__init__.py\")):\n if line.startswith(\"__version__ = \"):\n VERSION = line.strip().split()[2][1:-1]\n\n\ndef check_python_version():\n \"\"\"Exit when the Python version is too low.\"\"\"\n if sys.version_info < MINIMUM_PYTHON_VERSION:\n sys.exit(\"Python {}.{}+ is required.\".format(*MINIMUM_PYTHON_VERSION))\n\n\ncheck_python_version()\n\nsetup(\n name=PACKAGE_NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n install_requires=REQUIRED_PACKAGES,\n url=URL,\n license=\"Apache License 2.0\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(),\n include_package_data=True,\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport sys\nfrom setuptools import setup, find_packages\nfrom sys import platform\n\nPACKAGE_NAME = \"graspy\"\nDESCRIPTION = \"A set of python modules for graph statistics\"\nwith open(\"README.md\", \"r\") as f:\n LONG_DESCRIPTION = f.read()\nAUTHOR = (\"Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand\",)\nAUTHOR_EMAIL = \"[email protected]\"\nURL = 
\"https://github.com/neurodata/graspy\"\nMINIMUM_PYTHON_VERSION = 3, 6 # Minimum of Python 3.5\nREQUIRED_PACKAGES = [\n \"networkx>=2.1\",\n \"numpy>=1.8.1\",\n \"scikit-learn>=0.19.1\",\n \"scipy>=1.1.0\",\n \"seaborn>=0.9.0\",\n \"matplotlib>=3.0.0\",\n \"hyppo>=0.1.3\",\n]\n\n\n# Find GraSPy version.\nPROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\nfor line in open(os.path.join(PROJECT_PATH, \"graspy\", \"__init__.py\")):\n if line.startswith(\"__version__ = \"):\n VERSION = line.strip().split()[2][1:-1]\n\n\ndef check_python_version():\n \"\"\"Exit when the Python version is too low.\"\"\"\n if sys.version_info < MINIMUM_PYTHON_VERSION:\n sys.exit(\"Python {}.{}+ is required.\".format(*MINIMUM_PYTHON_VERSION))\n\n\ncheck_python_version()\n\nsetup(\n name=PACKAGE_NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n install_requires=REQUIRED_PACKAGES,\n url=URL,\n license=\"Apache License 2.0\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(),\n include_package_data=True,\n)\n", "path": "setup.py"}]} |
gh_patches_debug_26 | rasdani/github-patches | git_diff | docker__docker-py-2917 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missed rollback_config in service's create/update methods.
Hi, the [documentation](https://docker-py.readthedocs.io/en/stable/services.html) for services says that the `rollback_config` parameter is supported, but the `CREATE_SERVICE_KWARGS` list in `models/services.py` doesn't contain it.
So, I got this error:
`TypeError: create() got an unexpected keyword argument 'rollback_config'`
Can someone tell me, is this done intentionally, or is it a bug?
**Version:** `4.4.4, 5.0.0 and older`
**My diff:**
```
diff --git a/docker/models/services.py b/docker/models/services.py
index a29ff13..0f26626 100644
--- a/docker/models/services.py
+++ b/docker/models/services.py
@@ -314,6 +314,7 @@ CREATE_SERVICE_KWARGS = [
'labels',
'mode',
'update_config',
+ 'rollback_config',
'endpoint_spec',
]
```
PS. Full stacktrace:
```
In [54]: service_our = client.services.create(
...: name=service_name,
...: image=image_full_name,
...: restart_policy=restart_policy,
...: update_config=update_config,
...: rollback_config=rollback_config
...: )
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-54-8cc6a8a6519b> in <module>
----> 1 service_our = client.services.create(
2 name=service_name,
3 image=image_full_name,
4 restart_policy=restart_policy,
5 update_config=update_config,
/usr/local/lib/python3.9/site-packages/docker/models/services.py in create(self, image, command, **kwargs)
224 kwargs['image'] = image
225 kwargs['command'] = command
--> 226 create_kwargs = _get_create_service_kwargs('create', kwargs)
227 service_id = self.client.api.create_service(**create_kwargs)
228 return self.get(service_id)
/usr/local/lib/python3.9/site-packages/docker/models/services.py in _get_create_service_kwargs(func_name, kwargs)
369 # All kwargs should have been consumed by this point, so raise
370 # error if any are left
--> 371 if kwargs:
372 raise create_unexpected_kwargs_error(func_name, kwargs)
373
TypeError: create() got an unexpected keyword argument 'rollback_config'
```
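
For contrast with the traceback above, this is roughly the documented call that the one-line whitelist change is meant to allow — a hedged sketch using docker-py's documented `docker.types` helpers, assuming a swarm-enabled daemon; the image name, service name, and config values are illustrative only:

```python
import docker
from docker.types import RestartPolicy, RollbackConfig, UpdateConfig

client = docker.from_env()

service = client.services.create(
    image="nginx:alpine",  # illustrative image
    name="web",
    restart_policy=RestartPolicy(condition="on-failure"),
    update_config=UpdateConfig(parallelism=1, order="start-first"),
    rollback_config=RollbackConfig(parallelism=1, order="stop-first"),
)
```
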
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/models/services.py`
Content:
```
1 import copy
2 from docker.errors import create_unexpected_kwargs_error, InvalidArgument
3 from docker.types import TaskTemplate, ContainerSpec, Placement, ServiceMode
4 from .resource import Model, Collection
5
6
7 class Service(Model):
8 """A service."""
9 id_attribute = 'ID'
10
11 @property
12 def name(self):
13 """The service's name."""
14 return self.attrs['Spec']['Name']
15
16 @property
17 def version(self):
18 """
19 The version number of the service. If this is not the same as the
20 server, the :py:meth:`update` function will not work and you will
21 need to call :py:meth:`reload` before calling it again.
22 """
23 return self.attrs.get('Version').get('Index')
24
25 def remove(self):
26 """
27 Stop and remove the service.
28
29 Raises:
30 :py:class:`docker.errors.APIError`
31 If the server returns an error.
32 """
33 return self.client.api.remove_service(self.id)
34
35 def tasks(self, filters=None):
36 """
37 List the tasks in this service.
38
39 Args:
40 filters (dict): A map of filters to process on the tasks list.
41 Valid filters: ``id``, ``name``, ``node``,
42 ``label``, and ``desired-state``.
43
44 Returns:
45 :py:class:`list`: List of task dictionaries.
46
47 Raises:
48 :py:class:`docker.errors.APIError`
49 If the server returns an error.
50 """
51 if filters is None:
52 filters = {}
53 filters['service'] = self.id
54 return self.client.api.tasks(filters=filters)
55
56 def update(self, **kwargs):
57 """
58 Update a service's configuration. Similar to the ``docker service
59 update`` command.
60
61 Takes the same parameters as :py:meth:`~ServiceCollection.create`.
62
63 Raises:
64 :py:class:`docker.errors.APIError`
65 If the server returns an error.
66 """
67 # Image is required, so if it hasn't been set, use current image
68 if 'image' not in kwargs:
69 spec = self.attrs['Spec']['TaskTemplate']['ContainerSpec']
70 kwargs['image'] = spec['Image']
71
72 if kwargs.get('force_update') is True:
73 task_template = self.attrs['Spec']['TaskTemplate']
74 current_value = int(task_template.get('ForceUpdate', 0))
75 kwargs['force_update'] = current_value + 1
76
77 create_kwargs = _get_create_service_kwargs('update', kwargs)
78
79 return self.client.api.update_service(
80 self.id,
81 self.version,
82 **create_kwargs
83 )
84
85 def logs(self, **kwargs):
86 """
87 Get log stream for the service.
88 Note: This method works only for services with the ``json-file``
89 or ``journald`` logging drivers.
90
91 Args:
92 details (bool): Show extra details provided to logs.
93 Default: ``False``
94 follow (bool): Keep connection open to read logs as they are
95 sent by the Engine. Default: ``False``
96 stdout (bool): Return logs from ``stdout``. Default: ``False``
97 stderr (bool): Return logs from ``stderr``. Default: ``False``
98 since (int): UNIX timestamp for the logs staring point.
99 Default: 0
100 timestamps (bool): Add timestamps to every log line.
101 tail (string or int): Number of log lines to be returned,
102 counting from the current end of the logs. Specify an
103 integer or ``'all'`` to output all log lines.
104 Default: ``all``
105
106 Returns:
107 generator: Logs for the service.
108 """
109 is_tty = self.attrs['Spec']['TaskTemplate']['ContainerSpec'].get(
110 'TTY', False
111 )
112 return self.client.api.service_logs(self.id, is_tty=is_tty, **kwargs)
113
114 def scale(self, replicas):
115 """
116 Scale service container.
117
118 Args:
119 replicas (int): The number of containers that should be running.
120
121 Returns:
122 bool: ``True`` if successful.
123 """
124
125 if 'Global' in self.attrs['Spec']['Mode'].keys():
126 raise InvalidArgument('Cannot scale a global container')
127
128 service_mode = ServiceMode('replicated', replicas)
129 return self.client.api.update_service(self.id, self.version,
130 mode=service_mode,
131 fetch_current_spec=True)
132
133 def force_update(self):
134 """
135 Force update the service even if no changes require it.
136
137 Returns:
138 bool: ``True`` if successful.
139 """
140
141 return self.update(force_update=True, fetch_current_spec=True)
142
143
144 class ServiceCollection(Collection):
145 """Services on the Docker server."""
146 model = Service
147
148 def create(self, image, command=None, **kwargs):
149 """
150 Create a service. Similar to the ``docker service create`` command.
151
152 Args:
153 image (str): The image name to use for the containers.
154 command (list of str or str): Command to run.
155 args (list of str): Arguments to the command.
156 constraints (list of str): :py:class:`~docker.types.Placement`
157 constraints.
158 preferences (list of tuple): :py:class:`~docker.types.Placement`
159 preferences.
160 maxreplicas (int): :py:class:`~docker.types.Placement` maxreplicas
161 or (int) representing maximum number of replicas per node.
162 platforms (list of tuple): A list of platform constraints
163 expressed as ``(arch, os)`` tuples.
164 container_labels (dict): Labels to apply to the container.
165 endpoint_spec (EndpointSpec): Properties that can be configured to
166 access and load balance a service. Default: ``None``.
167 env (list of str): Environment variables, in the form
168 ``KEY=val``.
169 hostname (string): Hostname to set on the container.
170 init (boolean): Run an init inside the container that forwards
171 signals and reaps processes
172 isolation (string): Isolation technology used by the service's
173 containers. Only used for Windows containers.
174 labels (dict): Labels to apply to the service.
175 log_driver (str): Log driver to use for containers.
176 log_driver_options (dict): Log driver options.
177 mode (ServiceMode): Scheduling mode for the service.
178 Default:``None``
179 mounts (list of str): Mounts for the containers, in the form
180 ``source:target:options``, where options is either
181 ``ro`` or ``rw``.
182 name (str): Name to give to the service.
183 networks (:py:class:`list`): List of network names or IDs or
184 :py:class:`~docker.types.NetworkAttachmentConfig` to attach the
185 service to. Default: ``None``.
186 resources (Resources): Resource limits and reservations.
187 restart_policy (RestartPolicy): Restart policy for containers.
188 secrets (list of :py:class:`~docker.types.SecretReference`): List
189 of secrets accessible to containers for this service.
190 stop_grace_period (int): Amount of time to wait for
191 containers to terminate before forcefully killing them.
192 update_config (UpdateConfig): Specification for the update strategy
193 of the service. Default: ``None``
194 rollback_config (RollbackConfig): Specification for the rollback
195 strategy of the service. Default: ``None``
196 user (str): User to run commands as.
197 workdir (str): Working directory for commands to run.
198 tty (boolean): Whether a pseudo-TTY should be allocated.
199 groups (:py:class:`list`): A list of additional groups that the
200 container process will run as.
201 open_stdin (boolean): Open ``stdin``
202 read_only (boolean): Mount the container's root filesystem as read
203 only.
204 stop_signal (string): Set signal to stop the service's containers
205 healthcheck (Healthcheck): Healthcheck
206 configuration for this service.
207 hosts (:py:class:`dict`): A set of host to IP mappings to add to
208 the container's `hosts` file.
209 dns_config (DNSConfig): Specification for DNS
210 related configurations in resolver configuration file.
211 configs (:py:class:`list`): List of
212 :py:class:`~docker.types.ConfigReference` that will be exposed
213 to the service.
214 privileges (Privileges): Security options for the service's
215 containers.
216 cap_add (:py:class:`list`): A list of kernel capabilities to add to
217 the default set for the container.
218 cap_drop (:py:class:`list`): A list of kernel capabilities to drop
219 from the default set for the container.
220
221 Returns:
222 :py:class:`Service`: The created service.
223
224 Raises:
225 :py:class:`docker.errors.APIError`
226 If the server returns an error.
227 """
228 kwargs['image'] = image
229 kwargs['command'] = command
230 create_kwargs = _get_create_service_kwargs('create', kwargs)
231 service_id = self.client.api.create_service(**create_kwargs)
232 return self.get(service_id)
233
234 def get(self, service_id, insert_defaults=None):
235 """
236 Get a service.
237
238 Args:
239 service_id (str): The ID of the service.
240 insert_defaults (boolean): If true, default values will be merged
241 into the output.
242
243 Returns:
244 :py:class:`Service`: The service.
245
246 Raises:
247 :py:class:`docker.errors.NotFound`
248 If the service does not exist.
249 :py:class:`docker.errors.APIError`
250 If the server returns an error.
251 :py:class:`docker.errors.InvalidVersion`
252 If one of the arguments is not supported with the current
253 API version.
254 """
255 return self.prepare_model(
256 self.client.api.inspect_service(service_id, insert_defaults)
257 )
258
259 def list(self, **kwargs):
260 """
261 List services.
262
263 Args:
264 filters (dict): Filters to process on the nodes list. Valid
265 filters: ``id``, ``name`` , ``label`` and ``mode``.
266 Default: ``None``.
267
268 Returns:
269 list of :py:class:`Service`: The services.
270
271 Raises:
272 :py:class:`docker.errors.APIError`
273 If the server returns an error.
274 """
275 return [
276 self.prepare_model(s)
277 for s in self.client.api.services(**kwargs)
278 ]
279
280
281 # kwargs to copy straight over to ContainerSpec
282 CONTAINER_SPEC_KWARGS = [
283 'args',
284 'cap_add',
285 'cap_drop',
286 'command',
287 'configs',
288 'dns_config',
289 'env',
290 'groups',
291 'healthcheck',
292 'hostname',
293 'hosts',
294 'image',
295 'init',
296 'isolation',
297 'labels',
298 'mounts',
299 'open_stdin',
300 'privileges',
301 'read_only',
302 'secrets',
303 'stop_grace_period',
304 'stop_signal',
305 'tty',
306 'user',
307 'workdir',
308 ]
309
310 # kwargs to copy straight over to TaskTemplate
311 TASK_TEMPLATE_KWARGS = [
312 'networks',
313 'resources',
314 'restart_policy',
315 ]
316
317 # kwargs to copy straight over to create_service
318 CREATE_SERVICE_KWARGS = [
319 'name',
320 'labels',
321 'mode',
322 'update_config',
323 'endpoint_spec',
324 ]
325
326 PLACEMENT_KWARGS = [
327 'constraints',
328 'preferences',
329 'platforms',
330 'maxreplicas',
331 ]
332
333
334 def _get_create_service_kwargs(func_name, kwargs):
335 # Copy over things which can be copied directly
336 create_kwargs = {}
337 for key in copy.copy(kwargs):
338 if key in CREATE_SERVICE_KWARGS:
339 create_kwargs[key] = kwargs.pop(key)
340 container_spec_kwargs = {}
341 for key in copy.copy(kwargs):
342 if key in CONTAINER_SPEC_KWARGS:
343 container_spec_kwargs[key] = kwargs.pop(key)
344 task_template_kwargs = {}
345 for key in copy.copy(kwargs):
346 if key in TASK_TEMPLATE_KWARGS:
347 task_template_kwargs[key] = kwargs.pop(key)
348
349 if 'container_labels' in kwargs:
350 container_spec_kwargs['labels'] = kwargs.pop('container_labels')
351
352 placement = {}
353 for key in copy.copy(kwargs):
354 if key in PLACEMENT_KWARGS:
355 placement[key] = kwargs.pop(key)
356 placement = Placement(**placement)
357 task_template_kwargs['placement'] = placement
358
359 if 'log_driver' in kwargs:
360 task_template_kwargs['log_driver'] = {
361 'Name': kwargs.pop('log_driver'),
362 'Options': kwargs.pop('log_driver_options', {})
363 }
364
365 if func_name == 'update':
366 if 'force_update' in kwargs:
367 task_template_kwargs['force_update'] = kwargs.pop('force_update')
368
369 # fetch the current spec by default if updating the service
370 # through the model
371 fetch_current_spec = kwargs.pop('fetch_current_spec', True)
372 create_kwargs['fetch_current_spec'] = fetch_current_spec
373
374 # All kwargs should have been consumed by this point, so raise
375 # error if any are left
376 if kwargs:
377 raise create_unexpected_kwargs_error(func_name, kwargs)
378
379 container_spec = ContainerSpec(**container_spec_kwargs)
380 task_template_kwargs['container_spec'] = container_spec
381 create_kwargs['task_template'] = TaskTemplate(**task_template_kwargs)
382 return create_kwargs
383
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/models/services.py b/docker/models/services.py
--- a/docker/models/services.py
+++ b/docker/models/services.py
@@ -320,6 +320,7 @@
'labels',
'mode',
'update_config',
+ 'rollback_config',
'endpoint_spec',
]
| {"golden_diff": "diff --git a/docker/models/services.py b/docker/models/services.py\n--- a/docker/models/services.py\n+++ b/docker/models/services.py\n@@ -320,6 +320,7 @@\n 'labels',\n 'mode',\n 'update_config',\n+ 'rollback_config',\n 'endpoint_spec',\n ]\n", "issue": "Missed rollback_config in service's create/update methods.\nHi, in [documentation](https://docker-py.readthedocs.io/en/stable/services.html) for service written that it support `rollback_config` parameter, but in `models/services.py`'s `CREATE_SERVICE_KWARGS` list doesn't contain it.\r\nSo, I got this error:\r\n`TypeError: create() got an unexpected keyword argument 'rollback_config'`\r\nCan someone tell me, is this done intentionally, or is it a bug?\r\n\r\n**Version:** `4.4.4, 5.0.0 and older`\r\n\r\n**My diff:**\r\n```\r\ndiff --git a/docker/models/services.py b/docker/models/services.py\r\nindex a29ff13..0f26626 100644\r\n--- a/docker/models/services.py\r\n+++ b/docker/models/services.py\r\n@@ -314,6 +314,7 @@ CREATE_SERVICE_KWARGS = [\r\n 'labels',\r\n 'mode',\r\n 'update_config',\r\n+ 'rollback_config',\r\n 'endpoint_spec',\r\n ]\r\n```\r\n\r\nPS. Full stacktrace:\r\n```\r\nIn [54]: service_our = client.services.create(\r\n ...: name=service_name,\r\n ...: image=image_full_name,\r\n ...: restart_policy=restart_policy,\r\n ...: update_config=update_config,\r\n ...: rollback_config=rollback_config\r\n ...: )\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-54-8cc6a8a6519b> in <module>\r\n----> 1 service_our = client.services.create(\r\n 2 name=service_name,\r\n 3 image=image_full_name,\r\n 4 restart_policy=restart_policy,\r\n 5 update_config=update_config,\r\n\r\n/usr/local/lib/python3.9/site-packages/docker/models/services.py in create(self, image, command, **kwargs)\r\n 224 kwargs['image'] = image\r\n 225 kwargs['command'] = command\r\n--> 226 create_kwargs = _get_create_service_kwargs('create', kwargs)\r\n 227 service_id = self.client.api.create_service(**create_kwargs)\r\n 228 return self.get(service_id)\r\n\r\n/usr/local/lib/python3.9/site-packages/docker/models/services.py in _get_create_service_kwargs(func_name, kwargs)\r\n 369 # All kwargs should have been consumed by this point, so raise\r\n 370 # error if any are left\r\n--> 371 if kwargs:\r\n 372 raise create_unexpected_kwargs_error(func_name, kwargs)\r\n 373\r\n\r\nTypeError: create() got an unexpected keyword argument 'rollback_config'\r\n```\n", "before_files": [{"content": "import copy\nfrom docker.errors import create_unexpected_kwargs_error, InvalidArgument\nfrom docker.types import TaskTemplate, ContainerSpec, Placement, ServiceMode\nfrom .resource import Model, Collection\n\n\nclass Service(Model):\n \"\"\"A service.\"\"\"\n id_attribute = 'ID'\n\n @property\n def name(self):\n \"\"\"The service's name.\"\"\"\n return self.attrs['Spec']['Name']\n\n @property\n def version(self):\n \"\"\"\n The version number of the service. 
If this is not the same as the\n server, the :py:meth:`update` function will not work and you will\n need to call :py:meth:`reload` before calling it again.\n \"\"\"\n return self.attrs.get('Version').get('Index')\n\n def remove(self):\n \"\"\"\n Stop and remove the service.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.remove_service(self.id)\n\n def tasks(self, filters=None):\n \"\"\"\n List the tasks in this service.\n\n Args:\n filters (dict): A map of filters to process on the tasks list.\n Valid filters: ``id``, ``name``, ``node``,\n ``label``, and ``desired-state``.\n\n Returns:\n :py:class:`list`: List of task dictionaries.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n if filters is None:\n filters = {}\n filters['service'] = self.id\n return self.client.api.tasks(filters=filters)\n\n def update(self, **kwargs):\n \"\"\"\n Update a service's configuration. Similar to the ``docker service\n update`` command.\n\n Takes the same parameters as :py:meth:`~ServiceCollection.create`.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n # Image is required, so if it hasn't been set, use current image\n if 'image' not in kwargs:\n spec = self.attrs['Spec']['TaskTemplate']['ContainerSpec']\n kwargs['image'] = spec['Image']\n\n if kwargs.get('force_update') is True:\n task_template = self.attrs['Spec']['TaskTemplate']\n current_value = int(task_template.get('ForceUpdate', 0))\n kwargs['force_update'] = current_value + 1\n\n create_kwargs = _get_create_service_kwargs('update', kwargs)\n\n return self.client.api.update_service(\n self.id,\n self.version,\n **create_kwargs\n )\n\n def logs(self, **kwargs):\n \"\"\"\n Get log stream for the service.\n Note: This method works only for services with the ``json-file``\n or ``journald`` logging drivers.\n\n Args:\n details (bool): Show extra details provided to logs.\n Default: ``False``\n follow (bool): Keep connection open to read logs as they are\n sent by the Engine. Default: ``False``\n stdout (bool): Return logs from ``stdout``. Default: ``False``\n stderr (bool): Return logs from ``stderr``. Default: ``False``\n since (int): UNIX timestamp for the logs staring point.\n Default: 0\n timestamps (bool): Add timestamps to every log line.\n tail (string or int): Number of log lines to be returned,\n counting from the current end of the logs. 
Specify an\n integer or ``'all'`` to output all log lines.\n Default: ``all``\n\n Returns:\n generator: Logs for the service.\n \"\"\"\n is_tty = self.attrs['Spec']['TaskTemplate']['ContainerSpec'].get(\n 'TTY', False\n )\n return self.client.api.service_logs(self.id, is_tty=is_tty, **kwargs)\n\n def scale(self, replicas):\n \"\"\"\n Scale service container.\n\n Args:\n replicas (int): The number of containers that should be running.\n\n Returns:\n bool: ``True`` if successful.\n \"\"\"\n\n if 'Global' in self.attrs['Spec']['Mode'].keys():\n raise InvalidArgument('Cannot scale a global container')\n\n service_mode = ServiceMode('replicated', replicas)\n return self.client.api.update_service(self.id, self.version,\n mode=service_mode,\n fetch_current_spec=True)\n\n def force_update(self):\n \"\"\"\n Force update the service even if no changes require it.\n\n Returns:\n bool: ``True`` if successful.\n \"\"\"\n\n return self.update(force_update=True, fetch_current_spec=True)\n\n\nclass ServiceCollection(Collection):\n \"\"\"Services on the Docker server.\"\"\"\n model = Service\n\n def create(self, image, command=None, **kwargs):\n \"\"\"\n Create a service. Similar to the ``docker service create`` command.\n\n Args:\n image (str): The image name to use for the containers.\n command (list of str or str): Command to run.\n args (list of str): Arguments to the command.\n constraints (list of str): :py:class:`~docker.types.Placement`\n constraints.\n preferences (list of tuple): :py:class:`~docker.types.Placement`\n preferences.\n maxreplicas (int): :py:class:`~docker.types.Placement` maxreplicas\n or (int) representing maximum number of replicas per node.\n platforms (list of tuple): A list of platform constraints\n expressed as ``(arch, os)`` tuples.\n container_labels (dict): Labels to apply to the container.\n endpoint_spec (EndpointSpec): Properties that can be configured to\n access and load balance a service. Default: ``None``.\n env (list of str): Environment variables, in the form\n ``KEY=val``.\n hostname (string): Hostname to set on the container.\n init (boolean): Run an init inside the container that forwards\n signals and reaps processes\n isolation (string): Isolation technology used by the service's\n containers. Only used for Windows containers.\n labels (dict): Labels to apply to the service.\n log_driver (str): Log driver to use for containers.\n log_driver_options (dict): Log driver options.\n mode (ServiceMode): Scheduling mode for the service.\n Default:``None``\n mounts (list of str): Mounts for the containers, in the form\n ``source:target:options``, where options is either\n ``ro`` or ``rw``.\n name (str): Name to give to the service.\n networks (:py:class:`list`): List of network names or IDs or\n :py:class:`~docker.types.NetworkAttachmentConfig` to attach the\n service to. Default: ``None``.\n resources (Resources): Resource limits and reservations.\n restart_policy (RestartPolicy): Restart policy for containers.\n secrets (list of :py:class:`~docker.types.SecretReference`): List\n of secrets accessible to containers for this service.\n stop_grace_period (int): Amount of time to wait for\n containers to terminate before forcefully killing them.\n update_config (UpdateConfig): Specification for the update strategy\n of the service. Default: ``None``\n rollback_config (RollbackConfig): Specification for the rollback\n strategy of the service. 
Default: ``None``\n user (str): User to run commands as.\n workdir (str): Working directory for commands to run.\n tty (boolean): Whether a pseudo-TTY should be allocated.\n groups (:py:class:`list`): A list of additional groups that the\n container process will run as.\n open_stdin (boolean): Open ``stdin``\n read_only (boolean): Mount the container's root filesystem as read\n only.\n stop_signal (string): Set signal to stop the service's containers\n healthcheck (Healthcheck): Healthcheck\n configuration for this service.\n hosts (:py:class:`dict`): A set of host to IP mappings to add to\n the container's `hosts` file.\n dns_config (DNSConfig): Specification for DNS\n related configurations in resolver configuration file.\n configs (:py:class:`list`): List of\n :py:class:`~docker.types.ConfigReference` that will be exposed\n to the service.\n privileges (Privileges): Security options for the service's\n containers.\n cap_add (:py:class:`list`): A list of kernel capabilities to add to\n the default set for the container.\n cap_drop (:py:class:`list`): A list of kernel capabilities to drop\n from the default set for the container.\n\n Returns:\n :py:class:`Service`: The created service.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n kwargs['image'] = image\n kwargs['command'] = command\n create_kwargs = _get_create_service_kwargs('create', kwargs)\n service_id = self.client.api.create_service(**create_kwargs)\n return self.get(service_id)\n\n def get(self, service_id, insert_defaults=None):\n \"\"\"\n Get a service.\n\n Args:\n service_id (str): The ID of the service.\n insert_defaults (boolean): If true, default values will be merged\n into the output.\n\n Returns:\n :py:class:`Service`: The service.\n\n Raises:\n :py:class:`docker.errors.NotFound`\n If the service does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n :py:class:`docker.errors.InvalidVersion`\n If one of the arguments is not supported with the current\n API version.\n \"\"\"\n return self.prepare_model(\n self.client.api.inspect_service(service_id, insert_defaults)\n )\n\n def list(self, **kwargs):\n \"\"\"\n List services.\n\n Args:\n filters (dict): Filters to process on the nodes list. 
Valid\n filters: ``id``, ``name`` , ``label`` and ``mode``.\n Default: ``None``.\n\n Returns:\n list of :py:class:`Service`: The services.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return [\n self.prepare_model(s)\n for s in self.client.api.services(**kwargs)\n ]\n\n\n# kwargs to copy straight over to ContainerSpec\nCONTAINER_SPEC_KWARGS = [\n 'args',\n 'cap_add',\n 'cap_drop',\n 'command',\n 'configs',\n 'dns_config',\n 'env',\n 'groups',\n 'healthcheck',\n 'hostname',\n 'hosts',\n 'image',\n 'init',\n 'isolation',\n 'labels',\n 'mounts',\n 'open_stdin',\n 'privileges',\n 'read_only',\n 'secrets',\n 'stop_grace_period',\n 'stop_signal',\n 'tty',\n 'user',\n 'workdir',\n]\n\n# kwargs to copy straight over to TaskTemplate\nTASK_TEMPLATE_KWARGS = [\n 'networks',\n 'resources',\n 'restart_policy',\n]\n\n# kwargs to copy straight over to create_service\nCREATE_SERVICE_KWARGS = [\n 'name',\n 'labels',\n 'mode',\n 'update_config',\n 'endpoint_spec',\n]\n\nPLACEMENT_KWARGS = [\n 'constraints',\n 'preferences',\n 'platforms',\n 'maxreplicas',\n]\n\n\ndef _get_create_service_kwargs(func_name, kwargs):\n # Copy over things which can be copied directly\n create_kwargs = {}\n for key in copy.copy(kwargs):\n if key in CREATE_SERVICE_KWARGS:\n create_kwargs[key] = kwargs.pop(key)\n container_spec_kwargs = {}\n for key in copy.copy(kwargs):\n if key in CONTAINER_SPEC_KWARGS:\n container_spec_kwargs[key] = kwargs.pop(key)\n task_template_kwargs = {}\n for key in copy.copy(kwargs):\n if key in TASK_TEMPLATE_KWARGS:\n task_template_kwargs[key] = kwargs.pop(key)\n\n if 'container_labels' in kwargs:\n container_spec_kwargs['labels'] = kwargs.pop('container_labels')\n\n placement = {}\n for key in copy.copy(kwargs):\n if key in PLACEMENT_KWARGS:\n placement[key] = kwargs.pop(key)\n placement = Placement(**placement)\n task_template_kwargs['placement'] = placement\n\n if 'log_driver' in kwargs:\n task_template_kwargs['log_driver'] = {\n 'Name': kwargs.pop('log_driver'),\n 'Options': kwargs.pop('log_driver_options', {})\n }\n\n if func_name == 'update':\n if 'force_update' in kwargs:\n task_template_kwargs['force_update'] = kwargs.pop('force_update')\n\n # fetch the current spec by default if updating the service\n # through the model\n fetch_current_spec = kwargs.pop('fetch_current_spec', True)\n create_kwargs['fetch_current_spec'] = fetch_current_spec\n\n # All kwargs should have been consumed by this point, so raise\n # error if any are left\n if kwargs:\n raise create_unexpected_kwargs_error(func_name, kwargs)\n\n container_spec = ContainerSpec(**container_spec_kwargs)\n task_template_kwargs['container_spec'] = container_spec\n create_kwargs['task_template'] = TaskTemplate(**task_template_kwargs)\n return create_kwargs\n", "path": "docker/models/services.py"}], "after_files": [{"content": "import copy\nfrom docker.errors import create_unexpected_kwargs_error, InvalidArgument\nfrom docker.types import TaskTemplate, ContainerSpec, Placement, ServiceMode\nfrom .resource import Model, Collection\n\n\nclass Service(Model):\n \"\"\"A service.\"\"\"\n id_attribute = 'ID'\n\n @property\n def name(self):\n \"\"\"The service's name.\"\"\"\n return self.attrs['Spec']['Name']\n\n @property\n def version(self):\n \"\"\"\n The version number of the service. 
If this is not the same as the\n server, the :py:meth:`update` function will not work and you will\n need to call :py:meth:`reload` before calling it again.\n \"\"\"\n return self.attrs.get('Version').get('Index')\n\n def remove(self):\n \"\"\"\n Stop and remove the service.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.remove_service(self.id)\n\n def tasks(self, filters=None):\n \"\"\"\n List the tasks in this service.\n\n Args:\n filters (dict): A map of filters to process on the tasks list.\n Valid filters: ``id``, ``name``, ``node``,\n ``label``, and ``desired-state``.\n\n Returns:\n :py:class:`list`: List of task dictionaries.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n if filters is None:\n filters = {}\n filters['service'] = self.id\n return self.client.api.tasks(filters=filters)\n\n def update(self, **kwargs):\n \"\"\"\n Update a service's configuration. Similar to the ``docker service\n update`` command.\n\n Takes the same parameters as :py:meth:`~ServiceCollection.create`.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n # Image is required, so if it hasn't been set, use current image\n if 'image' not in kwargs:\n spec = self.attrs['Spec']['TaskTemplate']['ContainerSpec']\n kwargs['image'] = spec['Image']\n\n if kwargs.get('force_update') is True:\n task_template = self.attrs['Spec']['TaskTemplate']\n current_value = int(task_template.get('ForceUpdate', 0))\n kwargs['force_update'] = current_value + 1\n\n create_kwargs = _get_create_service_kwargs('update', kwargs)\n\n return self.client.api.update_service(\n self.id,\n self.version,\n **create_kwargs\n )\n\n def logs(self, **kwargs):\n \"\"\"\n Get log stream for the service.\n Note: This method works only for services with the ``json-file``\n or ``journald`` logging drivers.\n\n Args:\n details (bool): Show extra details provided to logs.\n Default: ``False``\n follow (bool): Keep connection open to read logs as they are\n sent by the Engine. Default: ``False``\n stdout (bool): Return logs from ``stdout``. Default: ``False``\n stderr (bool): Return logs from ``stderr``. Default: ``False``\n since (int): UNIX timestamp for the logs staring point.\n Default: 0\n timestamps (bool): Add timestamps to every log line.\n tail (string or int): Number of log lines to be returned,\n counting from the current end of the logs. 
Specify an\n integer or ``'all'`` to output all log lines.\n Default: ``all``\n\n Returns:\n generator: Logs for the service.\n \"\"\"\n is_tty = self.attrs['Spec']['TaskTemplate']['ContainerSpec'].get(\n 'TTY', False\n )\n return self.client.api.service_logs(self.id, is_tty=is_tty, **kwargs)\n\n def scale(self, replicas):\n \"\"\"\n Scale service container.\n\n Args:\n replicas (int): The number of containers that should be running.\n\n Returns:\n bool: ``True`` if successful.\n \"\"\"\n\n if 'Global' in self.attrs['Spec']['Mode'].keys():\n raise InvalidArgument('Cannot scale a global container')\n\n service_mode = ServiceMode('replicated', replicas)\n return self.client.api.update_service(self.id, self.version,\n mode=service_mode,\n fetch_current_spec=True)\n\n def force_update(self):\n \"\"\"\n Force update the service even if no changes require it.\n\n Returns:\n bool: ``True`` if successful.\n \"\"\"\n\n return self.update(force_update=True, fetch_current_spec=True)\n\n\nclass ServiceCollection(Collection):\n \"\"\"Services on the Docker server.\"\"\"\n model = Service\n\n def create(self, image, command=None, **kwargs):\n \"\"\"\n Create a service. Similar to the ``docker service create`` command.\n\n Args:\n image (str): The image name to use for the containers.\n command (list of str or str): Command to run.\n args (list of str): Arguments to the command.\n constraints (list of str): :py:class:`~docker.types.Placement`\n constraints.\n preferences (list of tuple): :py:class:`~docker.types.Placement`\n preferences.\n maxreplicas (int): :py:class:`~docker.types.Placement` maxreplicas\n or (int) representing maximum number of replicas per node.\n platforms (list of tuple): A list of platform constraints\n expressed as ``(arch, os)`` tuples.\n container_labels (dict): Labels to apply to the container.\n endpoint_spec (EndpointSpec): Properties that can be configured to\n access and load balance a service. Default: ``None``.\n env (list of str): Environment variables, in the form\n ``KEY=val``.\n hostname (string): Hostname to set on the container.\n init (boolean): Run an init inside the container that forwards\n signals and reaps processes\n isolation (string): Isolation technology used by the service's\n containers. Only used for Windows containers.\n labels (dict): Labels to apply to the service.\n log_driver (str): Log driver to use for containers.\n log_driver_options (dict): Log driver options.\n mode (ServiceMode): Scheduling mode for the service.\n Default:``None``\n mounts (list of str): Mounts for the containers, in the form\n ``source:target:options``, where options is either\n ``ro`` or ``rw``.\n name (str): Name to give to the service.\n networks (:py:class:`list`): List of network names or IDs or\n :py:class:`~docker.types.NetworkAttachmentConfig` to attach the\n service to. Default: ``None``.\n resources (Resources): Resource limits and reservations.\n restart_policy (RestartPolicy): Restart policy for containers.\n secrets (list of :py:class:`~docker.types.SecretReference`): List\n of secrets accessible to containers for this service.\n stop_grace_period (int): Amount of time to wait for\n containers to terminate before forcefully killing them.\n update_config (UpdateConfig): Specification for the update strategy\n of the service. Default: ``None``\n rollback_config (RollbackConfig): Specification for the rollback\n strategy of the service. 
Default: ``None``\n user (str): User to run commands as.\n workdir (str): Working directory for commands to run.\n tty (boolean): Whether a pseudo-TTY should be allocated.\n groups (:py:class:`list`): A list of additional groups that the\n container process will run as.\n open_stdin (boolean): Open ``stdin``\n read_only (boolean): Mount the container's root filesystem as read\n only.\n stop_signal (string): Set signal to stop the service's containers\n healthcheck (Healthcheck): Healthcheck\n configuration for this service.\n hosts (:py:class:`dict`): A set of host to IP mappings to add to\n the container's `hosts` file.\n dns_config (DNSConfig): Specification for DNS\n related configurations in resolver configuration file.\n configs (:py:class:`list`): List of\n :py:class:`~docker.types.ConfigReference` that will be exposed\n to the service.\n privileges (Privileges): Security options for the service's\n containers.\n cap_add (:py:class:`list`): A list of kernel capabilities to add to\n the default set for the container.\n cap_drop (:py:class:`list`): A list of kernel capabilities to drop\n from the default set for the container.\n\n Returns:\n :py:class:`Service`: The created service.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n kwargs['image'] = image\n kwargs['command'] = command\n create_kwargs = _get_create_service_kwargs('create', kwargs)\n service_id = self.client.api.create_service(**create_kwargs)\n return self.get(service_id)\n\n def get(self, service_id, insert_defaults=None):\n \"\"\"\n Get a service.\n\n Args:\n service_id (str): The ID of the service.\n insert_defaults (boolean): If true, default values will be merged\n into the output.\n\n Returns:\n :py:class:`Service`: The service.\n\n Raises:\n :py:class:`docker.errors.NotFound`\n If the service does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n :py:class:`docker.errors.InvalidVersion`\n If one of the arguments is not supported with the current\n API version.\n \"\"\"\n return self.prepare_model(\n self.client.api.inspect_service(service_id, insert_defaults)\n )\n\n def list(self, **kwargs):\n \"\"\"\n List services.\n\n Args:\n filters (dict): Filters to process on the nodes list. 
Valid\n filters: ``id``, ``name`` , ``label`` and ``mode``.\n Default: ``None``.\n\n Returns:\n list of :py:class:`Service`: The services.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return [\n self.prepare_model(s)\n for s in self.client.api.services(**kwargs)\n ]\n\n\n# kwargs to copy straight over to ContainerSpec\nCONTAINER_SPEC_KWARGS = [\n 'args',\n 'cap_add',\n 'cap_drop',\n 'command',\n 'configs',\n 'dns_config',\n 'env',\n 'groups',\n 'healthcheck',\n 'hostname',\n 'hosts',\n 'image',\n 'init',\n 'isolation',\n 'labels',\n 'mounts',\n 'open_stdin',\n 'privileges',\n 'read_only',\n 'secrets',\n 'stop_grace_period',\n 'stop_signal',\n 'tty',\n 'user',\n 'workdir',\n]\n\n# kwargs to copy straight over to TaskTemplate\nTASK_TEMPLATE_KWARGS = [\n 'networks',\n 'resources',\n 'restart_policy',\n]\n\n# kwargs to copy straight over to create_service\nCREATE_SERVICE_KWARGS = [\n 'name',\n 'labels',\n 'mode',\n 'update_config',\n 'rollback_config',\n 'endpoint_spec',\n]\n\nPLACEMENT_KWARGS = [\n 'constraints',\n 'preferences',\n 'platforms',\n 'maxreplicas',\n]\n\n\ndef _get_create_service_kwargs(func_name, kwargs):\n # Copy over things which can be copied directly\n create_kwargs = {}\n for key in copy.copy(kwargs):\n if key in CREATE_SERVICE_KWARGS:\n create_kwargs[key] = kwargs.pop(key)\n container_spec_kwargs = {}\n for key in copy.copy(kwargs):\n if key in CONTAINER_SPEC_KWARGS:\n container_spec_kwargs[key] = kwargs.pop(key)\n task_template_kwargs = {}\n for key in copy.copy(kwargs):\n if key in TASK_TEMPLATE_KWARGS:\n task_template_kwargs[key] = kwargs.pop(key)\n\n if 'container_labels' in kwargs:\n container_spec_kwargs['labels'] = kwargs.pop('container_labels')\n\n placement = {}\n for key in copy.copy(kwargs):\n if key in PLACEMENT_KWARGS:\n placement[key] = kwargs.pop(key)\n placement = Placement(**placement)\n task_template_kwargs['placement'] = placement\n\n if 'log_driver' in kwargs:\n task_template_kwargs['log_driver'] = {\n 'Name': kwargs.pop('log_driver'),\n 'Options': kwargs.pop('log_driver_options', {})\n }\n\n if func_name == 'update':\n if 'force_update' in kwargs:\n task_template_kwargs['force_update'] = kwargs.pop('force_update')\n\n # fetch the current spec by default if updating the service\n # through the model\n fetch_current_spec = kwargs.pop('fetch_current_spec', True)\n create_kwargs['fetch_current_spec'] = fetch_current_spec\n\n # All kwargs should have been consumed by this point, so raise\n # error if any are left\n if kwargs:\n raise create_unexpected_kwargs_error(func_name, kwargs)\n\n container_spec = ContainerSpec(**container_spec_kwargs)\n task_template_kwargs['container_spec'] = container_spec\n create_kwargs['task_template'] = TaskTemplate(**task_template_kwargs)\n return create_kwargs\n", "path": "docker/models/services.py"}]} |
gh_patches_debug_27 | rasdani/github-patches | git_diff | pytorch__ignite-1016 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PyTorch dependency is lacking version constraint
## 🐛 Bug description
<!-- A clear and concise description of what the bug is. -->
PyTorch is a dependency of Ignite and, thus, is specified in `setup.py`
https://github.com/pytorch/ignite/blob/4b311cc82fe45b3082661125cd7ee54007283fb0/setup.py#L24-L26
and `conda.recipe/meta.yaml`:
https://github.com/pytorch/ignite/blob/4b311cc82fe45b3082661125cd7ee54007283fb0/conda.recipe/meta.yaml#L15-L23
The PyTorch dependency lacks a version constraint. That may work fine right now, but there is no guarantee that Ignite will be compatible with any future major PyTorch release (e.g. PyTorch v2.x).
I suggest constraining the PyTorch version that Ignite is compatible with, e.g. `>=1.0,<2`, or `<2` if any `0.x` and `1.x` version works. If PyTorch has a new major release, even previous Ignite versions can become compatible with the new major PyTorch release (especially if no changes to the code are necessary) by making new bug-fix releases with relaxed version constraints that include the new PyTorch version.
In my opinion, it is highly preferable to be conservative about dependency version constraints through a [compatible release constraint](https://www.python.org/dev/peps/pep-0440/#compatible-release) in case the dependency conforms with semantic versioning. It is impossible to guarantee compatibility with a future major release of a dependency as its API can change arbitrarily.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import io
3 import re
4 from setuptools import setup, find_packages
5
6
7 def read(*names, **kwargs):
8 with io.open(os.path.join(os.path.dirname(__file__), *names), encoding=kwargs.get("encoding", "utf8")) as fp:
9 return fp.read()
10
11
12 def find_version(*file_paths):
13 version_file = read(*file_paths)
14 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
15 if version_match:
16 return version_match.group(1)
17 raise RuntimeError("Unable to find version string.")
18
19
20 readme = read("README.md")
21
22 VERSION = find_version("ignite", "__init__.py")
23
24 requirements = [
25 "torch",
26 ]
27
28 setup(
29 # Metadata
30 name="pytorch-ignite",
31 version=VERSION,
32 author="PyTorch Core Team",
33 author_email="[email protected]",
34 url="https://github.com/pytorch/ignite",
35 description="A lightweight library to help with training neural networks in PyTorch.",
36 long_description_content_type="text/markdown",
37 long_description=readme,
38 license="BSD",
39 # Package info
40 packages=find_packages(exclude=("tests", "tests.*",)),
41 zip_safe=True,
42 install_requires=requirements,
43 )
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -22,7 +22,7 @@
VERSION = find_version("ignite", "__init__.py")
requirements = [
- "torch",
+ "torch>=1.0,<2",
]
setup(
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -22,7 +22,7 @@\n VERSION = find_version(\"ignite\", \"__init__.py\")\n \n requirements = [\n- \"torch\",\n+ \"torch>=1.0,<2\",\n ]\n \n setup(\n", "issue": "PyTorch dependency is lacking version constraint\n## \ud83d\udc1b Bug description\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\nPyTorch is a dependency of Ignite and, thus, is specified in `setup.py`\r\n\r\nhttps://github.com/pytorch/ignite/blob/4b311cc82fe45b3082661125cd7ee54007283fb0/setup.py#L24-L26\r\n\r\nand `conda.recipe/meta.yaml`:\r\n\r\nhttps://github.com/pytorch/ignite/blob/4b311cc82fe45b3082661125cd7ee54007283fb0/conda.recipe/meta.yaml#L15-L23\r\n\r\nThe PyTorch dependency is lacking a version constraint which may work fine right now, but there is no guarantee that Ignite will be compatible with any future major PyTorch release (e.g. PyTorch v2.x).\r\n\r\nI suggest to constrain the PyTorch version that Ignite is compatible with, e.g. `>=1.0,<2` or `<2` if any `0.x` and `1.x` version works. If PyTorch has a new major release, even previous Ignite versions can become compatible with the new major PyTorch release (especially if no changes to the code are necessary) by making new bug fix releases with relaxed version constraints to include the new PyTorch version.\r\n\r\nIn my opinion, it is highly preferable to be conservative about dependency version constraints through a [compatible release constraint](https://www.python.org/dev/peps/pep-0440/#compatible-release) in case the dependency conforms with semantic versioning. It is impossible to guarantee compatibility with a future major release of a dependency as its API can change arbitrarily.\n", "before_files": [{"content": "import os\nimport io\nimport re\nfrom setuptools import setup, find_packages\n\n\ndef read(*names, **kwargs):\n with io.open(os.path.join(os.path.dirname(__file__), *names), encoding=kwargs.get(\"encoding\", \"utf8\")) as fp:\n return fp.read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nreadme = read(\"README.md\")\n\nVERSION = find_version(\"ignite\", \"__init__.py\")\n\nrequirements = [\n \"torch\",\n]\n\nsetup(\n # Metadata\n name=\"pytorch-ignite\",\n version=VERSION,\n author=\"PyTorch Core Team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/pytorch/ignite\",\n description=\"A lightweight library to help with training neural networks in PyTorch.\",\n long_description_content_type=\"text/markdown\",\n long_description=readme,\n license=\"BSD\",\n # Package info\n packages=find_packages(exclude=(\"tests\", \"tests.*\",)),\n zip_safe=True,\n install_requires=requirements,\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport io\nimport re\nfrom setuptools import setup, find_packages\n\n\ndef read(*names, **kwargs):\n with io.open(os.path.join(os.path.dirname(__file__), *names), encoding=kwargs.get(\"encoding\", \"utf8\")) as fp:\n return fp.read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nreadme = read(\"README.md\")\n\nVERSION = 
find_version(\"ignite\", \"__init__.py\")\n\nrequirements = [\n \"torch>=1.0,<2\",\n]\n\nsetup(\n # Metadata\n name=\"pytorch-ignite\",\n version=VERSION,\n author=\"PyTorch Core Team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/pytorch/ignite\",\n description=\"A lightweight library to help with training neural networks in PyTorch.\",\n long_description_content_type=\"text/markdown\",\n long_description=readme,\n license=\"BSD\",\n # Package info\n packages=find_packages(exclude=(\"tests\", \"tests.*\",)),\n zip_safe=True,\n install_requires=requirements,\n)\n", "path": "setup.py"}]} |
gh_patches_debug_28 | rasdani/github-patches | git_diff | codespell-project__codespell-86 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
codespell.py does nothing if [fileN] is not specified
Previously, running `codespell` without a file parameter started the check. Now `codespell.py` does nothing. The behavior should stay the same as before: if no file/dir argument is specified, the current directory should be used as the default.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bin/codespell.py`
Content:
```
1 #!/usr/bin/env python
2
3 import sys
4
5 if __name__ == '__main__':
6 import codespell_lib
7 sys.exit(codespell_lib.main(*sys.argv))
8
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bin/codespell.py b/bin/codespell.py
--- a/bin/codespell.py
+++ b/bin/codespell.py
@@ -4,4 +4,4 @@
if __name__ == '__main__':
import codespell_lib
- sys.exit(codespell_lib.main(*sys.argv))
+ sys.exit(codespell_lib.main(*sys.argv[1:]))
| {"golden_diff": "diff --git a/bin/codespell.py b/bin/codespell.py\n--- a/bin/codespell.py\n+++ b/bin/codespell.py\n@@ -4,4 +4,4 @@\n \n if __name__ == '__main__':\n import codespell_lib\n- sys.exit(codespell_lib.main(*sys.argv))\n+ sys.exit(codespell_lib.main(*sys.argv[1:]))\n", "issue": "codespell.py does nothng if [fileN] is not specified\nPreviously running `codespell` without file parameter starts the check. Now `codespell.py` does nothing. The behavior should stay the same as before - if file/dir argument is not specefied then current directory should be used as a default parameter.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport sys\n\nif __name__ == '__main__':\n import codespell_lib\n sys.exit(codespell_lib.main(*sys.argv))\n", "path": "bin/codespell.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport sys\n\nif __name__ == '__main__':\n import codespell_lib\n sys.exit(codespell_lib.main(*sys.argv[1:]))\n", "path": "bin/codespell.py"}]} |
gh_patches_debug_29 | rasdani/github-patches | git_diff | django-wiki__django-wiki-1228 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Invalid version of popper.js
For bootstrap 4.4.1 it should be popper.js 1.16.0 ([proof](https://getbootstrap.com/docs/4.4/getting-started/introduction/)), not 2.0.5, which [is used now](https://github.com/django-wiki/django-wiki/blob/main/src/wiki/static/wiki/js/popper.js).
With the wrong version I get this error:
```
bootstrap.min.js:formatted:991 Uncaught TypeError: u is not a constructor
at c.t.show (bootstrap.min.js:formatted:991)
at c.t.toggle (bootstrap.min.js:formatted:970)
at HTMLButtonElement.<anonymous> (bootstrap.min.js:formatted:1102)
at Function.each (jquery-3.4.1.min.js:2)
at k.fn.init.each (jquery-3.4.1.min.js:2)
at k.fn.init.c._jQueryInterface [as dropdown] (bootstrap.min.js:formatted:1095)
at HTMLButtonElement.<anonymous> (bootstrap.min.js:formatted:1186)
at HTMLDocument.dispatch (jquery-3.4.1.min.js:2)
at HTMLDocument.v.handle (jquery-3.4.1.min.js:2)
```
and dropdowns on wiki pages don't work.
With the correct version everything works as expected.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `testproject/testproject/settings/base.py`
Content:
```
1 """
2 Generated by 'django-admin startproject' using Django 1.9.5.
3
4 For more information on this file, see
5 https://docs.djangoproject.com/en/1.9/topics/settings/
6
7 For the full list of settings and their values, see
8 https://docs.djangoproject.com/en/1.9/ref/settings/
9 """
10 import os
11
12 from django.urls import reverse_lazy
13
14 PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
15 BASE_DIR = os.path.dirname(PROJECT_DIR)
16
17 # Quick-start development settings - unsuitable for production
18 # See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/
19
20 # SECURITY WARNING: keep the secret key used in production secret!
21 SECRET_KEY = "b^fv_)t39h%9p40)fnkfblo##jkr!$0)lkp6bpy!fi*f$4*92!"
22
23 # SECURITY WARNING: don't run with debug turned on in production!
24 DEBUG = False
25
26 ALLOWED_HOSTS = []
27
28
29 INSTALLED_APPS = [
30 "django.contrib.humanize.apps.HumanizeConfig",
31 "django.contrib.auth.apps.AuthConfig",
32 "django.contrib.contenttypes.apps.ContentTypesConfig",
33 "django.contrib.sessions.apps.SessionsConfig",
34 "django.contrib.sites.apps.SitesConfig",
35 "django.contrib.messages.apps.MessagesConfig",
36 "django.contrib.staticfiles.apps.StaticFilesConfig",
37 "django.contrib.admin.apps.AdminConfig",
38 "django.contrib.admindocs.apps.AdminDocsConfig",
39 "sekizai",
40 "sorl.thumbnail",
41 "django_nyt.apps.DjangoNytConfig",
42 "wiki.apps.WikiConfig",
43 "wiki.plugins.macros.apps.MacrosConfig",
44 "wiki.plugins.help.apps.HelpConfig",
45 "wiki.plugins.links.apps.LinksConfig",
46 "wiki.plugins.images.apps.ImagesConfig",
47 "wiki.plugins.attachments.apps.AttachmentsConfig",
48 "wiki.plugins.notifications.apps.NotificationsConfig",
49 "wiki.plugins.editsection.apps.EditSectionConfig",
50 "wiki.plugins.globalhistory.apps.GlobalHistoryConfig",
51 "mptt",
52 ]
53
54 TEST_RUNNER = "django.test.runner.DiscoverRunner"
55
56
57 MIDDLEWARE = [
58 "django.contrib.sessions.middleware.SessionMiddleware",
59 "django.middleware.common.CommonMiddleware",
60 "django.middleware.csrf.CsrfViewMiddleware",
61 "django.contrib.auth.middleware.AuthenticationMiddleware",
62 "django.contrib.messages.middleware.MessageMiddleware",
63 "django.middleware.clickjacking.XFrameOptionsMiddleware",
64 "django.middleware.security.SecurityMiddleware",
65 ]
66
67 ROOT_URLCONF = "testproject.urls"
68
69 TEMPLATES = [
70 {
71 "BACKEND": "django.template.backends.django.DjangoTemplates",
72 "DIRS": [
73 os.path.join(PROJECT_DIR, "templates"),
74 ],
75 "APP_DIRS": True,
76 "OPTIONS": {
77 "context_processors": [
78 "django.contrib.auth.context_processors.auth",
79 "django.template.context_processors.debug",
80 "django.template.context_processors.i18n",
81 "django.template.context_processors.request",
82 "django.template.context_processors.tz",
83 "django.contrib.messages.context_processors.messages",
84 "sekizai.context_processors.sekizai",
85 ],
86 "debug": DEBUG,
87 },
88 },
89 ]
90
91 WSGI_APPLICATION = "testproject.wsgi.application"
92
93
94 LOGIN_REDIRECT_URL = reverse_lazy("wiki:get", kwargs={"path": ""})
95
96
97 # Database
98 # https://docs.djangoproject.com/en/1.9/ref/settings/#databases
99 DATABASES = {
100 "default": {
101 "ENGINE": "django.db.backends.sqlite3",
102 "NAME": os.path.join(PROJECT_DIR, "db", "prepopulated.db"),
103 }
104 }
105
106 # Password validation
107 # https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators
108
109 AUTH_PASSWORD_VALIDATORS = [
110 {
111 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
112 },
113 {
114 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
115 },
116 {
117 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
118 },
119 {
120 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
121 },
122 ]
123
124 # Internationalization
125 # https://docs.djangoproject.com/en/1.9/topics/i18n/
126
127 TIME_ZONE = "Europe/Berlin"
128
129 # Language code for this installation. All choices can be found here:
130 # http://www.i18nguy.com/unicode/language-identifiers.html
131 LANGUAGE_CODE = "en-US"
132
133 SITE_ID = 1
134
135 USE_I18N = True
136
137 USE_L10N = True
138
139 USE_TZ = True
140
141
142 # Static files (CSS, JavaScript, Images)
143 # https://docs.djangoproject.com/en/1.9/howto/static-files/
144
145 STATIC_URL = "/static/"
146 STATIC_ROOT = os.path.join(PROJECT_DIR, "static")
147 MEDIA_ROOT = os.path.join(PROJECT_DIR, "media")
148 MEDIA_URL = "/media/"
149
150
151 WIKI_ANONYMOUS_WRITE = True
152 WIKI_ANONYMOUS_CREATE = False
153
154 SESSION_COOKIE_SECURE = True
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/testproject/testproject/settings/base.py b/testproject/testproject/settings/base.py
--- a/testproject/testproject/settings/base.py
+++ b/testproject/testproject/settings/base.py
@@ -152,3 +152,5 @@
WIKI_ANONYMOUS_CREATE = False
SESSION_COOKIE_SECURE = True
+
+DEFAULT_AUTO_FIELD = "django.db.models.AutoField"
| {"golden_diff": "diff --git a/testproject/testproject/settings/base.py b/testproject/testproject/settings/base.py\n--- a/testproject/testproject/settings/base.py\n+++ b/testproject/testproject/settings/base.py\n@@ -152,3 +152,5 @@\n WIKI_ANONYMOUS_CREATE = False\n \n SESSION_COOKIE_SECURE = True\n+\n+DEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n", "issue": "Invalid version of popper.js\nFor bootstrap 4.4.1 it should be popper.js 1.16.0 ([proof](https://getbootstrap.com/docs/4.4/getting-started/introduction/)), not 2.0.5, which [is used now](https://github.com/django-wiki/django-wiki/blob/main/src/wiki/static/wiki/js/popper.js).\r\n\r\nWith wrong version I am getting error\r\n\r\n```\r\nbootstrap.min.js:formatted:991 Uncaught TypeError: u is not a constructor\r\n at c.t.show (bootstrap.min.js:formatted:991)\r\n at c.t.toggle (bootstrap.min.js:formatted:970)\r\n at HTMLButtonElement.<anonymous> (bootstrap.min.js:formatted:1102)\r\n at Function.each (jquery-3.4.1.min.js:2)\r\n at k.fn.init.each (jquery-3.4.1.min.js:2)\r\n at k.fn.init.c._jQueryInterface [as dropdown] (bootstrap.min.js:formatted:1095)\r\n at HTMLButtonElement.<anonymous> (bootstrap.min.js:formatted:1186)\r\n at HTMLDocument.dispatch (jquery-3.4.1.min.js:2)\r\n at HTMLDocument.v.handle (jquery-3.4.1.min.js:2)\r\n```\r\n\r\nand dropdowns on wiki pages don't work.\r\n\r\nWith correct version all is OK.\n", "before_files": [{"content": "\"\"\"\nGenerated by 'django-admin startproject' using Django 1.9.5.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.9/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.9/ref/settings/\n\"\"\"\nimport os\n\nfrom django.urls import reverse_lazy\n\nPROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nBASE_DIR = os.path.dirname(PROJECT_DIR)\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = \"b^fv_)t39h%9p40)fnkfblo##jkr!$0)lkp6bpy!fi*f$4*92!\"\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = False\n\nALLOWED_HOSTS = []\n\n\nINSTALLED_APPS = [\n \"django.contrib.humanize.apps.HumanizeConfig\",\n \"django.contrib.auth.apps.AuthConfig\",\n \"django.contrib.contenttypes.apps.ContentTypesConfig\",\n \"django.contrib.sessions.apps.SessionsConfig\",\n \"django.contrib.sites.apps.SitesConfig\",\n \"django.contrib.messages.apps.MessagesConfig\",\n \"django.contrib.staticfiles.apps.StaticFilesConfig\",\n \"django.contrib.admin.apps.AdminConfig\",\n \"django.contrib.admindocs.apps.AdminDocsConfig\",\n \"sekizai\",\n \"sorl.thumbnail\",\n \"django_nyt.apps.DjangoNytConfig\",\n \"wiki.apps.WikiConfig\",\n \"wiki.plugins.macros.apps.MacrosConfig\",\n \"wiki.plugins.help.apps.HelpConfig\",\n \"wiki.plugins.links.apps.LinksConfig\",\n \"wiki.plugins.images.apps.ImagesConfig\",\n \"wiki.plugins.attachments.apps.AttachmentsConfig\",\n \"wiki.plugins.notifications.apps.NotificationsConfig\",\n \"wiki.plugins.editsection.apps.EditSectionConfig\",\n \"wiki.plugins.globalhistory.apps.GlobalHistoryConfig\",\n \"mptt\",\n]\n\nTEST_RUNNER = \"django.test.runner.DiscoverRunner\"\n\n\nMIDDLEWARE = [\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n 
\"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n]\n\nROOT_URLCONF = \"testproject.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\n os.path.join(PROJECT_DIR, \"templates\"),\n ],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.contrib.auth.context_processors.auth\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.i18n\",\n \"django.template.context_processors.request\",\n \"django.template.context_processors.tz\",\n \"django.contrib.messages.context_processors.messages\",\n \"sekizai.context_processors.sekizai\",\n ],\n \"debug\": DEBUG,\n },\n },\n]\n\nWSGI_APPLICATION = \"testproject.wsgi.application\"\n\n\nLOGIN_REDIRECT_URL = reverse_lazy(\"wiki:get\", kwargs={\"path\": \"\"})\n\n\n# Database\n# https://docs.djangoproject.com/en/1.9/ref/settings/#databases\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": os.path.join(PROJECT_DIR, \"db\", \"prepopulated.db\"),\n }\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.9/topics/i18n/\n\nTIME_ZONE = \"Europe/Berlin\"\n\n# Language code for this installation. 
All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\nLANGUAGE_CODE = \"en-US\"\n\nSITE_ID = 1\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.9/howto/static-files/\n\nSTATIC_URL = \"/static/\"\nSTATIC_ROOT = os.path.join(PROJECT_DIR, \"static\")\nMEDIA_ROOT = os.path.join(PROJECT_DIR, \"media\")\nMEDIA_URL = \"/media/\"\n\n\nWIKI_ANONYMOUS_WRITE = True\nWIKI_ANONYMOUS_CREATE = False\n\nSESSION_COOKIE_SECURE = True\n", "path": "testproject/testproject/settings/base.py"}], "after_files": [{"content": "\"\"\"\nGenerated by 'django-admin startproject' using Django 1.9.5.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.9/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.9/ref/settings/\n\"\"\"\nimport os\n\nfrom django.urls import reverse_lazy\n\nPROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nBASE_DIR = os.path.dirname(PROJECT_DIR)\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = \"b^fv_)t39h%9p40)fnkfblo##jkr!$0)lkp6bpy!fi*f$4*92!\"\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = False\n\nALLOWED_HOSTS = []\n\n\nINSTALLED_APPS = [\n \"django.contrib.humanize.apps.HumanizeConfig\",\n \"django.contrib.auth.apps.AuthConfig\",\n \"django.contrib.contenttypes.apps.ContentTypesConfig\",\n \"django.contrib.sessions.apps.SessionsConfig\",\n \"django.contrib.sites.apps.SitesConfig\",\n \"django.contrib.messages.apps.MessagesConfig\",\n \"django.contrib.staticfiles.apps.StaticFilesConfig\",\n \"django.contrib.admin.apps.AdminConfig\",\n \"django.contrib.admindocs.apps.AdminDocsConfig\",\n \"sekizai\",\n \"sorl.thumbnail\",\n \"django_nyt.apps.DjangoNytConfig\",\n \"wiki.apps.WikiConfig\",\n \"wiki.plugins.macros.apps.MacrosConfig\",\n \"wiki.plugins.help.apps.HelpConfig\",\n \"wiki.plugins.links.apps.LinksConfig\",\n \"wiki.plugins.images.apps.ImagesConfig\",\n \"wiki.plugins.attachments.apps.AttachmentsConfig\",\n \"wiki.plugins.notifications.apps.NotificationsConfig\",\n \"wiki.plugins.editsection.apps.EditSectionConfig\",\n \"wiki.plugins.globalhistory.apps.GlobalHistoryConfig\",\n \"mptt\",\n]\n\nTEST_RUNNER = \"django.test.runner.DiscoverRunner\"\n\n\nMIDDLEWARE = [\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n]\n\nROOT_URLCONF = \"testproject.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\n os.path.join(PROJECT_DIR, \"templates\"),\n ],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.contrib.auth.context_processors.auth\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.i18n\",\n \"django.template.context_processors.request\",\n \"django.template.context_processors.tz\",\n \"django.contrib.messages.context_processors.messages\",\n \"sekizai.context_processors.sekizai\",\n ],\n 
\"debug\": DEBUG,\n },\n },\n]\n\nWSGI_APPLICATION = \"testproject.wsgi.application\"\n\n\nLOGIN_REDIRECT_URL = reverse_lazy(\"wiki:get\", kwargs={\"path\": \"\"})\n\n\n# Database\n# https://docs.djangoproject.com/en/1.9/ref/settings/#databases\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": os.path.join(PROJECT_DIR, \"db\", \"prepopulated.db\"),\n }\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.9/topics/i18n/\n\nTIME_ZONE = \"Europe/Berlin\"\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\nLANGUAGE_CODE = \"en-US\"\n\nSITE_ID = 1\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.9/howto/static-files/\n\nSTATIC_URL = \"/static/\"\nSTATIC_ROOT = os.path.join(PROJECT_DIR, \"static\")\nMEDIA_ROOT = os.path.join(PROJECT_DIR, \"media\")\nMEDIA_URL = \"/media/\"\n\n\nWIKI_ANONYMOUS_WRITE = True\nWIKI_ANONYMOUS_CREATE = False\n\nSESSION_COOKIE_SECURE = True\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n", "path": "testproject/testproject/settings/base.py"}]} |
gh_patches_debug_30 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-129 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
discovery_cache module not packaged during installation.
I've installed `google-api-python-client` from source, but at some point my application started failing with this message:
```
...
...
File "build/bdist.linux-x86_64/egg/oauth2client/util.py", line 142, in positional_wrapper
return wrapped(*args, **kwargs)
File "build/bdist.linux-x86_64/egg/googleapiclient/discovery.py", line 193, in build
content = _retrieve_discovery_doc(requested_url, http, cache_discovery, cache)
File "build/bdist.linux-x86_64/egg/googleapiclient/discovery.py", line 215, in _retrieve_discovery_doc
from . import discovery_cache
ImportError: cannot import name discovery_cache
```
I've checked whether the `discovery_cache` module was actually part of the `egg`, and unfortunately it was not:
```
[root@e42fb97ce657 unit]# python
Python 2.7.5 (default, Jun 24 2015, 00:41:19)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import googleapiclient.discovery_cache
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named discovery_cache
>>>
```
Here are all the files in the `egg`:
```
[root@e42fb97ce657 ~]# unzip -l /usr/lib/python2.7/site-packages/google_api_python_client-1.4.1-py2.7.egg
Archive: /usr/lib/python2.7/site-packages/google_api_python_client-1.4.1-py2.7.egg
Length Date Time Name
--------- ---------- ----- ----
1169 09-03-2015 16:09 apiclient/__init__.py
1301 09-03-2015 16:09 apiclient/__init__.pyc
1 09-03-2015 16:09 EGG-INFO/dependency_links.txt
62 09-03-2015 16:09 EGG-INFO/requires.txt
26 09-03-2015 16:09 EGG-INFO/top_level.txt
969 09-03-2015 16:09 EGG-INFO/PKG-INFO
1 09-03-2015 16:09 EGG-INFO/zip-safe
545 09-03-2015 16:09 EGG-INFO/SOURCES.txt
53575 09-03-2015 16:09 googleapiclient/http.py
9910 09-03-2015 16:09 googleapiclient/channel.py
40890 09-03-2015 16:09 googleapiclient/discovery.py
9907 09-03-2015 16:09 googleapiclient/schema.pyc
620 09-03-2015 16:09 googleapiclient/__init__.py
9317 09-03-2015 16:09 googleapiclient/schema.py
11830 09-03-2015 16:09 googleapiclient/model.py
4047 09-03-2015 16:09 googleapiclient/sample_tools.py
6552 09-03-2015 16:09 googleapiclient/mimeparse.py
53976 09-03-2015 16:09 googleapiclient/http.pyc
7043 09-03-2015 16:09 googleapiclient/mimeparse.pyc
6333 09-03-2015 16:09 googleapiclient/errors.pyc
3131 09-03-2015 16:09 googleapiclient/sample_tools.pyc
3622 09-03-2015 16:09 googleapiclient/errors.py
35534 09-03-2015 16:09 googleapiclient/discovery.pyc
14028 09-03-2015 16:09 googleapiclient/model.pyc
175 09-03-2015 16:09 googleapiclient/__init__.pyc
10690 09-03-2015 16:09 googleapiclient/channel.pyc
--------- -------
285254 26 files
[root@e42fb97ce657 ~]#
```
As a workaround I had to add `googleapiclient/discovery_cache` to the `packages` list in `setup.py` so it looked like this:
```
[root@e42fb97ce657 google-api-python-client]# more setup.py | grep packages -A 4 -m1
packages = [
'apiclient',
'googleapiclient',
'googleapiclient/discovery_cache'
]
```
Then I installed it again and everything magically started working.
```
[root@e42fb97ce657 google-api-python-client]# python
Python 2.7.5 (default, Jun 24 2015, 00:41:19)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import googleapiclient.discovery_cache
>>>
```
Here is a quick sample that looks similar to my environment using `Docker`:
```
FROM centos:centos7
RUN yum install -y git python-devel python-setuptools unzip
RUN easy_install pip
RUN cd /tmp ;\
git clone https://github.com/google/google-api-python-client && \
cd google-api-python-client && \
python setup.py install
```
I've also tried to follow the preferred suggestion from the `README.md` and install it with `pip`, but it ended up in the same situation.
Please advise on how to proceed without making "manual" modifications to the official package.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Setup script for Google API Python client.
16
17 Also installs included versions of third party libraries, if those libraries
18 are not already installed.
19 """
20 from __future__ import print_function
21
22 import sys
23
24 if sys.version_info < (2, 6):
25 print('google-api-python-client requires python version >= 2.6.',
26 file=sys.stderr)
27 sys.exit(1)
28 if (3, 1) <= sys.version_info < (3, 3):
29 print('google-api-python-client requires python3 version >= 3.3.',
30 file=sys.stderr)
31 sys.exit(1)
32
33 from setuptools import setup
34 import pkg_resources
35
36 def _DetectBadness():
37 import os
38 if 'SKIP_GOOGLEAPICLIENT_COMPAT_CHECK' in os.environ:
39 return
40 o2c_pkg = None
41 try:
42 o2c_pkg = pkg_resources.get_distribution('oauth2client')
43 except pkg_resources.DistributionNotFound:
44 pass
45 oauth2client = None
46 try:
47 import oauth2client
48 except ImportError:
49 pass
50 if o2c_pkg is None and oauth2client is not None:
51 raise RuntimeError(
52 'Previous version of google-api-python-client detected; due to a '
53 'packaging issue, we cannot perform an in-place upgrade. Please remove '
54 'the old version and re-install this package.'
55 )
56
57 _DetectBadness()
58
59 packages = [
60 'apiclient',
61 'googleapiclient',
62 ]
63
64 install_requires = [
65 'httplib2>=0.8',
66 'oauth2client>=1.4.6',
67 'six>=1.6.1',
68 'uritemplate>=0.6',
69 ]
70
71 if sys.version_info < (2, 7):
72 install_requires.append('argparse')
73
74 long_desc = """The Google API Client for Python is a client library for
75 accessing the Plus, Moderator, and many other Google APIs."""
76
77 import googleapiclient
78 version = googleapiclient.__version__
79
80 setup(
81 name="google-api-python-client",
82 version=version,
83 description="Google API Client Library for Python",
84 long_description=long_desc,
85 author="Google Inc.",
86 url="http://github.com/google/google-api-python-client/",
87 install_requires=install_requires,
88 packages=packages,
89 package_data={},
90 license="Apache 2.0",
91 keywords="google api client",
92 classifiers=[
93 'Programming Language :: Python :: 2',
94 'Programming Language :: Python :: 2.6',
95 'Programming Language :: Python :: 2.7',
96 'Programming Language :: Python :: 3',
97 'Programming Language :: Python :: 3.3',
98 'Programming Language :: Python :: 3.4',
99 'Development Status :: 5 - Production/Stable',
100 'Intended Audience :: Developers',
101 'License :: OSI Approved :: Apache Software License',
102 'Operating System :: OS Independent',
103 'Topic :: Internet :: WWW/HTTP',
104 ],
105 )
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -59,6 +59,7 @@
packages = [
'apiclient',
'googleapiclient',
+ 'googleapiclient/discovery_cache',
]
install_requires = [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -59,6 +59,7 @@\n packages = [\n 'apiclient',\n 'googleapiclient',\n+ 'googleapiclient/discovery_cache',\n ]\n \n install_requires = [\n", "issue": "discovery_cache module not packaged during installation.\nI've installed `google-api-python-client` from source, but when at some point my application was failing with this message:\n\n```\n ...\n ...\n File \"build/bdist.linux-x86_64/egg/oauth2client/util.py\", line 142, in positional_wrapper\n return wrapped(*args, **kwargs)\n File \"build/bdist.linux-x86_64/egg/googleapiclient/discovery.py\", line 193, in build\n content = _retrieve_discovery_doc(requested_url, http, cache_discovery, cache)\n File \"build/bdist.linux-x86_64/egg/googleapiclient/discovery.py\", line 215, in _retrieve_discovery_doc\n from . import discovery_cache\nImportError: cannot import name discovery_cache\n```\n\nI've checked if `discovery_cache` module was actually part of the `egg`, and unfortunately it was not:\n\n```\n[root@e42fb97ce657 unit]# python\nPython 2.7.5 (default, Jun 24 2015, 00:41:19) \n[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import googleapiclient.discovery_cache\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nImportError: No module named discovery_cache\n>>> \n```\n\nHere are all the files in `egg`\n\n```\n[root@e42fb97ce657 ~]# unzip -l /usr/lib/python2.7/site-packages/google_api_python_client-1.4.1-py2.7.egg \nArchive: /usr/lib/python2.7/site-packages/google_api_python_client-1.4.1-py2.7.egg\n Length Date Time Name\n--------- ---------- ----- ----\n 1169 09-03-2015 16:09 apiclient/__init__.py\n 1301 09-03-2015 16:09 apiclient/__init__.pyc\n 1 09-03-2015 16:09 EGG-INFO/dependency_links.txt\n 62 09-03-2015 16:09 EGG-INFO/requires.txt\n 26 09-03-2015 16:09 EGG-INFO/top_level.txt\n 969 09-03-2015 16:09 EGG-INFO/PKG-INFO\n 1 09-03-2015 16:09 EGG-INFO/zip-safe\n 545 09-03-2015 16:09 EGG-INFO/SOURCES.txt\n 53575 09-03-2015 16:09 googleapiclient/http.py\n 9910 09-03-2015 16:09 googleapiclient/channel.py\n 40890 09-03-2015 16:09 googleapiclient/discovery.py\n 9907 09-03-2015 16:09 googleapiclient/schema.pyc\n 620 09-03-2015 16:09 googleapiclient/__init__.py\n 9317 09-03-2015 16:09 googleapiclient/schema.py\n 11830 09-03-2015 16:09 googleapiclient/model.py\n 4047 09-03-2015 16:09 googleapiclient/sample_tools.py\n 6552 09-03-2015 16:09 googleapiclient/mimeparse.py\n 53976 09-03-2015 16:09 googleapiclient/http.pyc\n 7043 09-03-2015 16:09 googleapiclient/mimeparse.pyc\n 6333 09-03-2015 16:09 googleapiclient/errors.pyc\n 3131 09-03-2015 16:09 googleapiclient/sample_tools.pyc\n 3622 09-03-2015 16:09 googleapiclient/errors.py\n 35534 09-03-2015 16:09 googleapiclient/discovery.pyc\n 14028 09-03-2015 16:09 googleapiclient/model.pyc\n 175 09-03-2015 16:09 googleapiclient/__init__.pyc\n 10690 09-03-2015 16:09 googleapiclient/channel.pyc\n--------- -------\n 285254 26 files\n[root@e42fb97ce657 ~]# \n```\n\nAs a workaround I had to add `googleapiclient/discovery_cache` to the `packages` in `setup.py` so it looked like that:\n\n```\n[root@e42fb97ce657 google-api-python-client]# more setup.py | grep packages -A 4 -m1\npackages = [\n 'apiclient',\n 'googleapiclient',\n 'googleapiclient/discovery_cache'\n]\n```\n\nThen installed and everything magically started working.\n\n```\n[root@e42fb97ce657 google-api-python-client]# python\nPython 2.7.5 (default, Jun 24 2015, 
00:41:19) \n[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import googleapiclient.discovery_cache\n>>> \n```\n\nHere is a quick sample that looks similar to my environment using `Docker`:\n\n```\nFROM centos:centos7\n\nRUN yum install -y git python-devel python-setuptools unzip\nRUN easy_install pip\nRUN cd /tmp ;\\\n git clone https://github.com/google/google-api-python-client && \\\n cd google-api-python-client && \\\n python setup.py install \n```\n\nI've also tried to follow preferred suggestion from the `README.md` and install it from `pip` but it ended up in the same situation.\n\nPlease advice on how to proceed without making \"manual\" modifications to the official package?\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (2, 6):\n print('google-api-python-client requires python version >= 2.6.',\n file=sys.stderr)\n sys.exit(1)\nif (3, 1) <= sys.version_info < (3, 3):\n print('google-api-python-client requires python3 version >= 3.3.',\n file=sys.stderr)\n sys.exit(1)\n\nfrom setuptools import setup\nimport pkg_resources\n\ndef _DetectBadness():\n import os\n if 'SKIP_GOOGLEAPICLIENT_COMPAT_CHECK' in os.environ:\n return\n o2c_pkg = None\n try:\n o2c_pkg = pkg_resources.get_distribution('oauth2client')\n except pkg_resources.DistributionNotFound:\n pass\n oauth2client = None\n try:\n import oauth2client\n except ImportError:\n pass\n if o2c_pkg is None and oauth2client is not None:\n raise RuntimeError(\n 'Previous version of google-api-python-client detected; due to a '\n 'packaging issue, we cannot perform an in-place upgrade. 
Please remove '\n 'the old version and re-install this package.'\n )\n\n_DetectBadness()\n\npackages = [\n 'apiclient',\n 'googleapiclient',\n]\n\ninstall_requires = [\n 'httplib2>=0.8',\n 'oauth2client>=1.4.6',\n 'six>=1.6.1',\n 'uritemplate>=0.6',\n]\n\nif sys.version_info < (2, 7):\n install_requires.append('argparse')\n\nlong_desc = \"\"\"The Google API Client for Python is a client library for\naccessing the Plus, Moderator, and many other Google APIs.\"\"\"\n\nimport googleapiclient\nversion = googleapiclient.__version__\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=long_desc,\n author=\"Google Inc.\",\n url=\"http://github.com/google/google-api-python-client/\",\n install_requires=install_requires,\n packages=packages,\n package_data={},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (2, 6):\n print('google-api-python-client requires python version >= 2.6.',\n file=sys.stderr)\n sys.exit(1)\nif (3, 1) <= sys.version_info < (3, 3):\n print('google-api-python-client requires python3 version >= 3.3.',\n file=sys.stderr)\n sys.exit(1)\n\nfrom setuptools import setup\nimport pkg_resources\n\ndef _DetectBadness():\n import os\n if 'SKIP_GOOGLEAPICLIENT_COMPAT_CHECK' in os.environ:\n return\n o2c_pkg = None\n try:\n o2c_pkg = pkg_resources.get_distribution('oauth2client')\n except pkg_resources.DistributionNotFound:\n pass\n oauth2client = None\n try:\n import oauth2client\n except ImportError:\n pass\n if o2c_pkg is None and oauth2client is not None:\n raise RuntimeError(\n 'Previous version of google-api-python-client detected; due to a '\n 'packaging issue, we cannot perform an in-place upgrade. 
Please remove '\n 'the old version and re-install this package.'\n )\n\n_DetectBadness()\n\npackages = [\n 'apiclient',\n 'googleapiclient',\n 'googleapiclient/discovery_cache',\n]\n\ninstall_requires = [\n 'httplib2>=0.8',\n 'oauth2client>=1.4.6',\n 'six>=1.6.1',\n 'uritemplate>=0.6',\n]\n\nif sys.version_info < (2, 7):\n install_requires.append('argparse')\n\nlong_desc = \"\"\"The Google API Client for Python is a client library for\naccessing the Plus, Moderator, and many other Google APIs.\"\"\"\n\nimport googleapiclient\nversion = googleapiclient.__version__\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=long_desc,\n author=\"Google Inc.\",\n url=\"http://github.com/google/google-api-python-client/\",\n install_requires=install_requires,\n packages=packages,\n package_data={},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n ],\n)\n", "path": "setup.py"}]} |
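The recorded fix above simply hard-codes the new subpackage in the `packages` list. A common way to avoid this whole class of packaging bug is to let setuptools discover subpackages automatically; the snippet below is only an illustrative sketch of that alternative, not part of the golden diff, and it assumes the standard layout in which `apiclient`, `googleapiclient`, and `googleapiclient/discovery_cache` each contain an `__init__.py`.

```python
# Illustrative alternative to hand-maintaining the packages list
# (not part of the recorded patch above).
from setuptools import find_packages, setup

setup(
    name="google-api-python-client",
    # find_packages() walks the source tree and returns every directory that
    # contains an __init__.py, so a new subpackage such as
    # googleapiclient/discovery_cache is picked up without editing setup.py.
    packages=find_packages(exclude=["tests", "tests.*"]),
)
```

With `find_packages()`, forgetting to register a freshly added subpackage can no longer silently produce an incomplete egg or wheel.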
gh_patches_debug_31 | rasdani/github-patches | git_diff | redis__redis-py-1678 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CI run to install the built package
In light of bug #1645 we should amend our CI run to install the built package in a new virtual env and run something simple like `redis.Redis().ping()`. Eventually we could build up to running the full integration test suite against the package.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tasks.py`
Content:
```
1 import os
2 import shutil
3 from invoke import task, run
4
5 with open('tox.ini') as fp:
6 lines = fp.read().split("\n")
7 dockers = [line.split("=")[1].strip() for line in lines
8 if line.find("name") != -1]
9
10
11 @task
12 def devenv(c):
13 """Builds a development environment: downloads, and starts all dockers
14 specified in the tox.ini file.
15 """
16 clean(c)
17 cmd = 'tox -e devenv'
18 for d in dockers:
19 cmd += " --docker-dont-stop={}".format(d)
20 run(cmd)
21
22
23 @task
24 def linters(c):
25 """Run code linters"""
26 run("tox -e linters")
27
28
29 @task
30 def all_tests(c):
31 """Run all linters, and tests in redis-py. This assumes you have all
32 the python versions specified in the tox.ini file.
33 """
34 linters(c)
35 tests(c)
36
37
38 @task
39 def tests(c):
40 """Run the redis-py test suite against the current python,
41 with and without hiredis.
42 """
43 run("tox -e plain -e hiredis")
44
45
46 @task
47 def clean(c):
48 """Stop all dockers, and clean up the built binaries, if generated."""
49 if os.path.isdir("build"):
50 shutil.rmtree("build")
51 if os.path.isdir("dist"):
52 shutil.rmtree("dist")
53 run("docker rm -f {}".format(' '.join(dockers)))
54
55
56 @task
57 def package(c):
58 """Create the python packages"""
59 run("python setup.py build install")
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tasks.py b/tasks.py
--- a/tasks.py
+++ b/tasks.py
@@ -56,4 +56,4 @@
@task
def package(c):
"""Create the python packages"""
- run("python setup.py build install")
+ run("python setup.py sdist bdist_wheel")
| {"golden_diff": "diff --git a/tasks.py b/tasks.py\n--- a/tasks.py\n+++ b/tasks.py\n@@ -56,4 +56,4 @@\n @task\n def package(c):\n \"\"\"Create the python packages\"\"\"\n- run(\"python setup.py build install\")\n+ run(\"python setup.py sdist bdist_wheel\")\n", "issue": "CI run to install the built package\nIn light of bug #1645 we should amend our CI run to install the built package, in a new virtual env and run something simple like a redis.Redis().ping(). Eventually we could build up to running the full integration test against the package.\nCI run to install the built package\nIn light of bug #1645 we should amend our CI run to install the built package, in a new virtual env and run something simple like a redis.Redis().ping(). Eventually we could build up to running the full integration test against the package.\n", "before_files": [{"content": "import os\nimport shutil\nfrom invoke import task, run\n\nwith open('tox.ini') as fp:\n lines = fp.read().split(\"\\n\")\n dockers = [line.split(\"=\")[1].strip() for line in lines\n if line.find(\"name\") != -1]\n\n\n@task\ndef devenv(c):\n \"\"\"Builds a development environment: downloads, and starts all dockers\n specified in the tox.ini file.\n \"\"\"\n clean(c)\n cmd = 'tox -e devenv'\n for d in dockers:\n cmd += \" --docker-dont-stop={}\".format(d)\n run(cmd)\n\n\n@task\ndef linters(c):\n \"\"\"Run code linters\"\"\"\n run(\"tox -e linters\")\n\n\n@task\ndef all_tests(c):\n \"\"\"Run all linters, and tests in redis-py. This assumes you have all\n the python versions specified in the tox.ini file.\n \"\"\"\n linters(c)\n tests(c)\n\n\n@task\ndef tests(c):\n \"\"\"Run the redis-py test suite against the current python,\n with and without hiredis.\n \"\"\"\n run(\"tox -e plain -e hiredis\")\n\n\n@task\ndef clean(c):\n \"\"\"Stop all dockers, and clean up the built binaries, if generated.\"\"\"\n if os.path.isdir(\"build\"):\n shutil.rmtree(\"build\")\n if os.path.isdir(\"dist\"):\n shutil.rmtree(\"dist\")\n run(\"docker rm -f {}\".format(' '.join(dockers)))\n\n\n@task\ndef package(c):\n \"\"\"Create the python packages\"\"\"\n run(\"python setup.py build install\")\n", "path": "tasks.py"}], "after_files": [{"content": "import os\nimport shutil\nfrom invoke import task, run\n\nwith open('tox.ini') as fp:\n lines = fp.read().split(\"\\n\")\n dockers = [line.split(\"=\")[1].strip() for line in lines\n if line.find(\"name\") != -1]\n\n\n@task\ndef devenv(c):\n \"\"\"Builds a development environment: downloads, and starts all dockers\n specified in the tox.ini file.\n \"\"\"\n clean(c)\n cmd = 'tox -e devenv'\n for d in dockers:\n cmd += \" --docker-dont-stop={}\".format(d)\n run(cmd)\n\n\n@task\ndef linters(c):\n \"\"\"Run code linters\"\"\"\n run(\"tox -e linters\")\n\n\n@task\ndef all_tests(c):\n \"\"\"Run all linters, and tests in redis-py. This assumes you have all\n the python versions specified in the tox.ini file.\n \"\"\"\n linters(c)\n tests(c)\n\n\n@task\ndef tests(c):\n \"\"\"Run the redis-py test suite against the current python,\n with and without hiredis.\n \"\"\"\n run(\"tox -e plain -e hiredis\")\n\n\n@task\ndef clean(c):\n \"\"\"Stop all dockers, and clean up the built binaries, if generated.\"\"\"\n if os.path.isdir(\"build\"):\n shutil.rmtree(\"build\")\n if os.path.isdir(\"dist\"):\n shutil.rmtree(\"dist\")\n run(\"docker rm -f {}\".format(' '.join(dockers)))\n\n\n@task\ndef package(c):\n \"\"\"Create the python packages\"\"\"\n run(\"python setup.py sdist bdist_wheel\")\n", "path": "tasks.py"}]} |
gh_patches_debug_32 | rasdani/github-patches | git_diff | streamlit__streamlit-1931 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add screenshot test for syntax highlighting
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `e2e/scripts/st_code.py`
Content:
```
1 # Copyright 2018-2020 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 st.code("# This code is awesome!")
18
19 st.code("")
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/e2e/scripts/st_code.py b/e2e/scripts/st_code.py
--- a/e2e/scripts/st_code.py
+++ b/e2e/scripts/st_code.py
@@ -17,3 +17,9 @@
st.code("# This code is awesome!")
st.code("")
+
+code = """
+def hello():
+ print("Hello, Streamlit!")
+"""
+st.code(code, language="python")
| {"golden_diff": "diff --git a/e2e/scripts/st_code.py b/e2e/scripts/st_code.py\n--- a/e2e/scripts/st_code.py\n+++ b/e2e/scripts/st_code.py\n@@ -17,3 +17,9 @@\n st.code(\"# This code is awesome!\")\n \n st.code(\"\")\n+\n+code = \"\"\"\n+def hello():\n+ print(\"Hello, Streamlit!\")\n+\"\"\"\n+st.code(code, language=\"python\")\n", "issue": "Add screenshot test for syntax highlighting\n\n", "before_files": [{"content": "# Copyright 2018-2020 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nst.code(\"# This code is awesome!\")\n\nst.code(\"\")\n", "path": "e2e/scripts/st_code.py"}], "after_files": [{"content": "# Copyright 2018-2020 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nst.code(\"# This code is awesome!\")\n\nst.code(\"\")\n\ncode = \"\"\"\ndef hello():\n print(\"Hello, Streamlit!\")\n\"\"\"\nst.code(code, language=\"python\")\n", "path": "e2e/scripts/st_code.py"}]} |
gh_patches_debug_33 | rasdani/github-patches | git_diff | qtile__qtile-1432 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
docs: Missing deps when building widget docs gives "alias to ImportErrorWidget"
See http://docs.qtile.org/en/latest/manual/ref/widgets.html#memory for example.
I guess the widget dependencies are not installed while building the docs, resulting in Sphinx reporting that the widget is an alias to `libqtile.widget.import_error.make_error.<locals>.ImportErrorWidget`.
EDIT: okay I see where the deps are listed: in `docs/conf.py`. Indeed `mpd` is present but `psutil` is not, so the `Memory` widget's docs do not build.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Qtile documentation build configuration file, created by
4 # sphinx-quickstart on Sat Feb 11 15:20:21 2012.
5 #
6 # This file is execfile()d with the current directory set to its containing dir.
7 #
8 # Note that not all possible configuration values are present in this
9 # autogenerated file.
10 #
11 # All configuration values have a default; values that are commented out
12 # serve to show the default.
13
14 import os
15 import sys
16 from unittest.mock import MagicMock
17
18
19 class Mock(MagicMock):
20 # xcbq does a dir() on objects and pull stuff out of them and tries to sort
21 # the result. MagicMock has a bunch of stuff that can't be sorted, so let's
22 # like about dir().
23 def __dir__(self):
24 return []
25
26 MOCK_MODULES = [
27 'libqtile._ffi_pango',
28 'libqtile.core._ffi_xcursors',
29 'cairocffi',
30 'cairocffi.pixbuf',
31 'cffi',
32 'dateutil',
33 'dateutil.parser',
34 'dbus',
35 'dbus.mainloop.glib',
36 'iwlib',
37 'keyring',
38 'mpd',
39 'trollius',
40 'xcffib',
41 'xcffib.randr',
42 'xcffib.xfixes',
43 'xcffib.xinerama',
44 'xcffib.xproto',
45 'xdg.IconTheme',
46 ]
47 sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
48
49 # If extensions (or modules to document with autodoc) are in another directory,
50 # add these directories to sys.path here. If the directory is relative to the
51 # documentation root, use os.path.abspath to make it absolute, like shown here.
52 sys.path.insert(0, os.path.abspath('.'))
53 sys.path.insert(0, os.path.abspath('../'))
54
55 # -- General configuration -----------------------------------------------------
56
57 # If your documentation needs a minimal Sphinx version, state it here.
58 #needs_sphinx = '1.0'
59
60 # Add any Sphinx extension module names here, as strings. They can be extensions
61 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
62 extensions = [
63 'sphinx.ext.autodoc',
64 'sphinx.ext.autosummary',
65 'sphinx.ext.coverage',
66 'sphinx.ext.graphviz',
67 'sphinx.ext.todo',
68 'sphinx.ext.viewcode',
69 'sphinxcontrib.seqdiag',
70 'sphinx_qtile',
71 'numpydoc',
72 ]
73
74 numpydoc_show_class_members = False
75
76 # Add any paths that contain templates here, relative to this directory.
77 templates_path = []
78
79 # The suffix of source filenames.
80 source_suffix = '.rst'
81
82 # The encoding of source files.
83 #source_encoding = 'utf-8-sig'
84
85 # The master toctree document.
86 master_doc = 'index'
87
88 # General information about the project.
89 project = u'Qtile'
90 copyright = u'2008-2019, Aldo Cortesi and contributers'
91
92 # The version info for the project you're documenting, acts as replacement for
93 # |version| and |release|, also used in various other places throughout the
94 # built documents.
95 #
96 # The short X.Y version.
97 version = '0.14.2'
98 # The full version, including alpha/beta/rc tags.
99 release = version
100
101 # The language for content autogenerated by Sphinx. Refer to documentation
102 # for a list of supported languages.
103 #language = None
104
105 # There are two options for replacing |today|: either, you set today to some
106 # non-false value, then it is used:
107 #today = ''
108 # Else, today_fmt is used as the format for a strftime call.
109 #today_fmt = '%B %d, %Y'
110
111 # List of patterns, relative to source directory, that match files and
112 # directories to ignore when looking for source files.
113 exclude_patterns = ['_build', 'man']
114
115 # The reST default role (used for this markup: `text`) to use for all documents.
116 #default_role = None
117
118 # If true, '()' will be appended to :func: etc. cross-reference text.
119 #add_function_parentheses = True
120
121 # If true, the current module name will be prepended to all description
122 # unit titles (such as .. function::).
123 #add_module_names = True
124
125 # If true, sectionauthor and moduleauthor directives will be shown in the
126 # output. They are ignored by default.
127 #show_authors = False
128
129 # The name of the Pygments (syntax highlighting) style to use.
130 pygments_style = 'sphinx'
131
132 # A list of ignored prefixes for module index sorting.
133 #modindex_common_prefix = []
134
135 # If true, `todo` and `todoList` produce output, else they produce nothing.
136 todo_include_todos = True
137
138
139 # -- Options for HTML output --------fautod-------------------------------------------
140
141 # The theme to use for HTML and HTML Help pages. See the documentation for
142 # a list of builtin themes.
143 #html_theme = 'default'
144
145 # Theme options are theme-specific and customize the look and feel of a theme
146 # further. For a list of options available for each theme, see the
147 # documentation.
148 #html_theme_options = {}
149
150 # Add any paths that contain custom themes here, relative to this directory.
151 #html_theme_path = []
152
153 # The name for this set of Sphinx documents. If None, it defaults to
154 # "<project> v<release> documentation".
155 #html_title = None
156
157 # A shorter title for the navigation bar. Default is the same as html_title.
158 #html_short_title = None
159
160 # The name of an image file (relative to this directory) to place at the top
161 # of the sidebar.
162 #html_logo = None
163
164 # The name of an image file (within the static path) to use as favicon of the
165 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
166 # pixels large.
167 html_favicon = '_static/favicon.ico'
168
169 # Add any paths that contain custom static files (such as style sheets) here,
170 # relative to this directory. They are copied after the builtin static files,
171 # so a file named "default.css" will overwrite the builtin "default.css".
172 html_static_path = ['_static']
173
174 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
175 # using the given strftime format.
176 #html_last_updated_fmt = '%b %d, %Y'
177
178 # If true, SmartyPants will be used to convert quotes and dashes to
179 # typographically correct entities.
180 #html_use_smartypants = True
181
182 # Custom sidebar templates, maps document names to template names.
183 #html_sidebars = {}
184
185 # Additional templates that should be rendered to pages, maps page names to
186 # template names.
187 #html_additional_pages = {'index': 'index.html'}
188
189 # If false, no module index is generated.
190 #html_domain_indices = True
191
192 # If false, no index is generated.
193 html_use_index = True
194
195 # If true, the index is split into individual pages for each letter.
196 #html_split_index = False
197
198 # If true, links to the reST sources are added to the pages.
199 #html_show_sourcelink = True
200
201 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
202 #html_show_sphinx = True
203
204 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
205 #html_show_copyright = True
206
207 # If true, an OpenSearch description file will be output, and all pages will
208 # contain a <link> tag referring to it. The value of this option must be the
209 # base URL from which the finished HTML is served.
210 #html_use_opensearch = ''
211
212 # This is the file name suffix for HTML files (e.g. ".xhtml").
213 #html_file_suffix = None
214
215 # Output file base name for HTML help builder.
216 htmlhelp_basename = 'Qtiledoc'
217
218
219 # -- Options for LaTeX output --------------------------------------------------
220
221 latex_elements = {
222 # The paper size ('letterpaper' or 'a4paper').
223 #'papersize': 'letterpaper',
224
225 # The font size ('10pt', '11pt' or '12pt').
226 #'pointsize': '10pt',
227
228 # Additional stuff for the LaTeX preamble.
229 #'preamble': '',
230 }
231
232 # Grouping the document tree into LaTeX files. List of tuples
233 # (source start file, target name, title, author, documentclass [howto/manual]).
234 latex_documents = [
235 ('index', 'Qtile.tex', u'Qtile Documentation',
236 u'Aldo Cortesi', 'manual'),
237 ]
238
239 # The name of an image file (relative to this directory) to place at the top of
240 # the title page.
241 #latex_logo = None
242
243 # For "manual" documents, if this is true, then toplevel headings are parts,
244 # not chapters.
245 #latex_use_parts = False
246
247 # If true, show page references after internal links.
248 #latex_show_pagerefs = False
249
250 # If true, show URL addresses after external links.
251 #latex_show_urls = False
252
253 # Documents to append as an appendix to all manuals.
254 #latex_appendices = []
255
256 # If false, no module index is generated.
257 #latex_domain_indices = True
258
259
260 # -- Options for manual page output --------------------------------------------
261
262 # One entry per manual page. List of tuples
263 # (source start file, name, description, authors, manual section).
264 man_pages = [
265 ('man/qtile', 'qtile', u'Qtile Documentation',
266 [u'Tycho Andersen'], 1),
267 ('man/qshell', 'qshell', u'Qtile Documentation',
268 [u'Tycho Andersen'], 1),
269 ]
270
271 # If true, show URL addresses after external links.
272 #man_show_urls = False
273
274
275 # -- Options for Texinfo output ------------------------------------------------
276
277 # Grouping the document tree into Texinfo files. List of tuples
278 # (source start file, target name, title, author,
279 # dir menu entry, description, category)
280 texinfo_documents = [
281 ('index', 'Qtile', u'Qtile Documentation',
282 u'Aldo Cortesi', 'Qtile', 'A hackable tiling window manager.',
283 'Miscellaneous'),
284 ]
285
286 # Documents to append as an appendix to all manuals.
287 #texinfo_appendices = []
288
289 # If false, no module index is generated.
290 #texinfo_domain_indices = True
291
292 # How to display URL addresses: 'footnote', 'no', or 'inline'.
293 #texinfo_show_urls = 'footnote'
294
295 # only import and set the theme if we're building docs locally
296 if not os.environ.get('READTHEDOCS'):
297 import sphinx_rtd_theme
298 html_theme = 'sphinx_rtd_theme'
299 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
300
301
302 graphviz_dot_args = ['-Lg']
303
304 # A workaround for the responsive tables always having annoying scrollbars.
305 def setup(app):
306 app.add_stylesheet("no_scrollbars.css")
307
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -36,6 +36,7 @@
'iwlib',
'keyring',
'mpd',
+ 'psutil',
'trollius',
'xcffib',
'xcffib.randr',
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -36,6 +36,7 @@\n 'iwlib',\n 'keyring',\n 'mpd',\n+ 'psutil',\n 'trollius',\n 'xcffib',\n 'xcffib.randr',\n", "issue": "docs: Missing deps when building widget docs gives \"alias to ImportErrorWidget\"\nSee http://docs.qtile.org/en/latest/manual/ref/widgets.html#memory for example.\r\n\r\nI guess the widget dependencies are not installed while building the docs, resulting in Sphinx telling the widget is an alias to `libqtile.widget.import_error.make_error.<locals>.ImportErrorWidget`.\r\n\r\nEDIT: okay I see where the deps are listed: in `docs/conf.py`. Indeed `mpd` is present but `psutil` is not, so the `Memory` widget's docs do not build.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Qtile documentation build configuration file, created by\n# sphinx-quickstart on Sat Feb 11 15:20:21 2012.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\nimport sys\nfrom unittest.mock import MagicMock\n\n\nclass Mock(MagicMock):\n # xcbq does a dir() on objects and pull stuff out of them and tries to sort\n # the result. MagicMock has a bunch of stuff that can't be sorted, so let's\n # like about dir().\n def __dir__(self):\n return []\n\nMOCK_MODULES = [\n 'libqtile._ffi_pango',\n 'libqtile.core._ffi_xcursors',\n 'cairocffi',\n 'cairocffi.pixbuf',\n 'cffi',\n 'dateutil',\n 'dateutil.parser',\n 'dbus',\n 'dbus.mainloop.glib',\n 'iwlib',\n 'keyring',\n 'mpd',\n 'trollius',\n 'xcffib',\n 'xcffib.randr',\n 'xcffib.xfixes',\n 'xcffib.xinerama',\n 'xcffib.xproto',\n 'xdg.IconTheme',\n]\nsys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('.'))\nsys.path.insert(0, os.path.abspath('../'))\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.coverage',\n 'sphinx.ext.graphviz',\n 'sphinx.ext.todo',\n 'sphinx.ext.viewcode',\n 'sphinxcontrib.seqdiag',\n 'sphinx_qtile',\n 'numpydoc',\n]\n\nnumpydoc_show_class_members = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = []\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Qtile'\ncopyright = u'2008-2019, Aldo Cortesi and contributers'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.14.2'\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build', 'man']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output --------fautod-------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#html_theme = 'default'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. 
This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = '_static/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {'index': 'index.html'}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\nhtml_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Qtiledoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'Qtile.tex', u'Qtile Documentation',\n u'Aldo Cortesi', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('man/qtile', 'qtile', u'Qtile Documentation',\n [u'Tycho Andersen'], 1),\n ('man/qshell', 'qshell', u'Qtile Documentation',\n [u'Tycho Andersen'], 1),\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Qtile', u'Qtile Documentation',\n u'Aldo Cortesi', 'Qtile', 'A hackable tiling window manager.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# only import and set the theme if we're building docs locally\nif not os.environ.get('READTHEDOCS'):\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n\ngraphviz_dot_args = ['-Lg']\n\n# A workaround for the responsive tables always having annoying scrollbars.\ndef setup(app):\n app.add_stylesheet(\"no_scrollbars.css\")\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Qtile documentation build configuration file, created by\n# sphinx-quickstart on Sat Feb 11 15:20:21 2012.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\nimport sys\nfrom unittest.mock import MagicMock\n\n\nclass Mock(MagicMock):\n # xcbq does a dir() on objects and pull stuff out of them and tries to sort\n # the result. MagicMock has a bunch of stuff that can't be sorted, so let's\n # like about dir().\n def __dir__(self):\n return []\n\nMOCK_MODULES = [\n 'libqtile._ffi_pango',\n 'libqtile.core._ffi_xcursors',\n 'cairocffi',\n 'cairocffi.pixbuf',\n 'cffi',\n 'dateutil',\n 'dateutil.parser',\n 'dbus',\n 'dbus.mainloop.glib',\n 'iwlib',\n 'keyring',\n 'mpd',\n 'psutil',\n 'trollius',\n 'xcffib',\n 'xcffib.randr',\n 'xcffib.xfixes',\n 'xcffib.xinerama',\n 'xcffib.xproto',\n 'xdg.IconTheme',\n]\nsys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('.'))\nsys.path.insert(0, os.path.abspath('../'))\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.coverage',\n 'sphinx.ext.graphviz',\n 'sphinx.ext.todo',\n 'sphinx.ext.viewcode',\n 'sphinxcontrib.seqdiag',\n 'sphinx_qtile',\n 'numpydoc',\n]\n\nnumpydoc_show_class_members = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = []\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Qtile'\ncopyright = u'2008-2019, Aldo Cortesi and contributers'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.14.2'\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build', 'man']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output --------fautod-------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#html_theme = 'default'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. 
This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = '_static/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {'index': 'index.html'}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\nhtml_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Qtiledoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'Qtile.tex', u'Qtile Documentation',\n u'Aldo Cortesi', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('man/qtile', 'qtile', u'Qtile Documentation',\n [u'Tycho Andersen'], 1),\n ('man/qshell', 'qshell', u'Qtile Documentation',\n [u'Tycho Andersen'], 1),\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Qtile', u'Qtile Documentation',\n u'Aldo Cortesi', 'Qtile', 'A hackable tiling window manager.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# only import and set the theme if we're building docs locally\nif not os.environ.get('READTHEDOCS'):\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n\ngraphviz_dot_args = ['-Lg']\n\n# A workaround for the responsive tables always having annoying scrollbars.\ndef setup(app):\n app.add_stylesheet(\"no_scrollbars.css\")\n", "path": "docs/conf.py"}]} |
gh_patches_debug_34 | rasdani/github-patches | git_diff | sanic-org__sanic-1527 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Publish 19.3 release to PyPI
Thank you for the release 3 days ago!
https://github.com/huge-success/sanic/releases/tag/19.3
It's missing from PyPI at the moment:
https://pypi.org/project/sanic/#history
Please publish it at your convenience 🙇
Keep up the awesome work ❤️
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sanic/__init__.py`
Content:
```
1 from sanic.app import Sanic
2 from sanic.blueprints import Blueprint
3
4
5 __version__ = "18.12.0"
6
7 __all__ = ["Sanic", "Blueprint"]
8
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sanic/__init__.py b/sanic/__init__.py
--- a/sanic/__init__.py
+++ b/sanic/__init__.py
@@ -2,6 +2,6 @@
from sanic.blueprints import Blueprint
-__version__ = "18.12.0"
+__version__ = "19.03.0"
__all__ = ["Sanic", "Blueprint"]
| {"golden_diff": "diff --git a/sanic/__init__.py b/sanic/__init__.py\n--- a/sanic/__init__.py\n+++ b/sanic/__init__.py\n@@ -2,6 +2,6 @@\n from sanic.blueprints import Blueprint\n \n \n-__version__ = \"18.12.0\"\n+__version__ = \"19.03.0\"\n \n __all__ = [\"Sanic\", \"Blueprint\"]\n", "issue": "Publish 19.3 release to PyPI\nThank you for the release 3 days ago!\r\n\r\nhttps://github.com/huge-success/sanic/releases/tag/19.3\r\n\r\nIt's missing from PyPI at the moment:\r\n\r\nhttps://pypi.org/project/sanic/#history\r\n\r\nPlease publish it at your convenience \ud83d\ude47 \r\n\r\nKeep up the awesome work \u2764\ufe0f \n", "before_files": [{"content": "from sanic.app import Sanic\nfrom sanic.blueprints import Blueprint\n\n\n__version__ = \"18.12.0\"\n\n__all__ = [\"Sanic\", \"Blueprint\"]\n", "path": "sanic/__init__.py"}], "after_files": [{"content": "from sanic.app import Sanic\nfrom sanic.blueprints import Blueprint\n\n\n__version__ = \"19.03.0\"\n\n__all__ = [\"Sanic\", \"Blueprint\"]\n", "path": "sanic/__init__.py"}]} |
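Once the 19.03.0 wheel is actually on PyPI, the bump in the patch above is easy to sanity-check from a fresh environment. The sketch below is illustrative only; the install step is an assumption, but `__version__` is exactly the attribute `sanic/__init__.py` defines in this record.

```python
# Assumes a clean virtualenv and `pip install sanic==19.03.0` beforehand (not shown here).
import sanic

# The attribute comes straight from sanic/__init__.py as quoted in the record above.
assert sanic.__version__ == "19.03.0", f"unexpected version: {sanic.__version__}"
print("sanic", sanic.__version__)
```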
gh_patches_debug_35 | rasdani/github-patches | git_diff | ludwig-ai__ludwig-897 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TF2 is slower than TF1, improve speed
https://github.com/tensorflow/tensorflow/issues/33487
Getting the same result: epochs became longer because of switching to TF2.
I noticed also that it's using less memory than TF1, but slower epochs are killing this advantage.
TF 2.3 – less epoch time, but still slow.
Looks like there are some issues with `experimental_run_functions_eagerly`.
Very disappointed. Going to switch back to ludwig 0.2.2.8
--- END ISSUE ---
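The slowdown described in the issue can be reproduced with a small timing harness. The sketch below is illustrative only: the toy model, tensor sizes, and step counts are assumptions and have nothing to do with the Ludwig code itself. It shows the eager-versus-`tf.function` gap that a global `tf.config.experimental_run_functions_eagerly(True)` call erases, because that switch forces every traced function back to eager mode.

```python
import time
import tensorflow as tf

# Toy regression model and data; sizes are arbitrary illustrative choices.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()
x = tf.random.normal((1024, 64))
y = tf.random.normal((1024, 1))

def train_step(features, labels):
    with tf.GradientTape() as tape:
        loss = loss_fn(labels, model(features, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

def time_steps(step_fn, steps=100):
    step_fn(x, y)  # warm-up; for tf.function this also pays the one-off tracing cost
    start = time.perf_counter()
    for _ in range(steps):
        step_fn(x, y)
    return time.perf_counter() - start

eager_seconds = time_steps(train_step)               # plain eager execution
graph_seconds = time_steps(tf.function(train_step))  # compiled graph execution
print(f"eager: {eager_seconds:.3f}s  graph: {graph_seconds:.3f}s")
```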
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ludwig/models/trainer.py`
Content:
```
1 #! /usr/bin/env python
2 # coding=utf-8
3 # Copyright (c) 2019 Uber Technologies, Inc.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 # ==============================================================================
17 """
18 This module contains the class and auxiliary methods of a model.
19 """
20 from __future__ import absolute_import
21 from __future__ import division
22 from __future__ import print_function
23
24 import logging
25 import os
26 import os.path
27 import signal
28 import sys
29 import threading
30 import time
31 from collections import OrderedDict
32
33 import tensorflow as tf
34 from tabulate import tabulate
35 from tqdm import tqdm
36
37 from ludwig.constants import LOSS, COMBINED, TRAINING, VALIDATION, TEST, TYPE
38 from ludwig.contrib import contrib_command
39 from ludwig.globals import MODEL_HYPERPARAMETERS_FILE_NAME
40 from ludwig.globals import MODEL_WEIGHTS_FILE_NAME
41 from ludwig.globals import TRAINING_CHECKPOINTS_DIR_PATH
42 from ludwig.globals import TRAINING_PROGRESS_TRACKER_FILE_NAME
43 from ludwig.utils.horovod_utils import is_on_master
44 from ludwig.globals import is_progressbar_disabled
45 from ludwig.models.predictor import Predictor
46 from ludwig.modules.metric_modules import get_improved_fun
47 from ludwig.modules.metric_modules import get_initial_validation_value
48 from ludwig.modules.optimization_modules import ClippedOptimizer
49 from ludwig.utils import time_utils
50 from ludwig.utils.batcher import initialize_batcher
51 from ludwig.utils.data_utils import load_json, save_json
52 from ludwig.utils.defaults import default_random_seed
53 from ludwig.utils.math_utils import learning_rate_warmup, \
54 learning_rate_warmup_distributed
55 from ludwig.utils.misc_utils import set_random_seed
56
57 logger = logging.getLogger(__name__)
58
59 tf.config.experimental_run_functions_eagerly(True)
60
61
62 class Trainer:
63 """
64 Trainer is a class that train a model
65 """
66
67 def __init__(
68 self,
69 optimizer=None,
70 epochs=100,
71 regularization_lambda=0.0,
72 learning_rate=0.001,
73 batch_size=128,
74 eval_batch_size=0,
75 bucketing_field=None,
76 validation_field='combined',
77 validation_metric='loss',
78 early_stop=20,
79 reduce_learning_rate_on_plateau=0,
80 reduce_learning_rate_on_plateau_patience=5,
81 reduce_learning_rate_on_plateau_rate=0.5,
82 reduce_learning_rate_eval_metric=LOSS,
83 reduce_learning_rate_eval_split=TRAINING,
84 increase_batch_size_on_plateau=0,
85 increase_batch_size_on_plateau_patience=5,
86 increase_batch_size_on_plateau_rate=2,
87 increase_batch_size_on_plateau_max=512,
88 increase_batch_size_eval_metric=LOSS,
89 increase_batch_size_eval_split=TRAINING,
90 learning_rate_warmup_epochs=1,
91 resume=False,
92 skip_save_model=False,
93 skip_save_progress=False,
94 skip_save_log=False,
95 random_seed=default_random_seed,
96 horovod=None,
97 debug=False,
98 **kwargs
99 ):
100 """Trains a model with a set of hyperparameters listed below. Customizable
101 :param training_set: The training set
102 :param validation_set: The validation dataset
103 :param test_set: The test dataset
104 :param validation_field: The first output feature, by default it is set
105 as the same field of the first output feature.
106 :param validation_metric: metric used on the validation field, it is
107 accuracy by default
108 :type validation_metric:
109 :param save_path: The path to save the file
110 :type save_path: filepath (str)
111 :param regularization_lambda: Strength of the $L2$ regularization
112 :type regularization_lambda: Integer
113 :param epochs: Number of epochs the algorithm is intended to be run over
114 :type epochs: Integer
115 :param learning_rate: Learning rate for the algorithm, represents how
116 much to scale the gradients by
117 :type learning_rate: Integer
118 :param batch_size: Size of batch to pass to the model for training.
119 :type batch_size: Integer
120 :param batch_size: Size of batch to pass to the model for evaluation.
121 :type batch_size: Integer
122 :param bucketing_field: when batching, buckets datapoints based the
123 length of a field together. Bucketing on text length speeds up
124 training of RNNs consistently, 30% in some cases
125 :type bucketing_field:
126 :param validation_field: The first output feature, by default it is set
127 as the same field of the first output feature.
128 :param validation_metric: metric used on the validation field, it is
129 accuracy by default
130 :type validation_metric:
131 :param dropout: dropout probability (probability of dropping
132 a neuron in a given layer)
133 :type dropout: Float
134 :param early_stop: How many epochs without any improvement in the
135 validation_metric triggers the algorithm to stop
136 :type early_stop: Integer
137 :param reduce_learning_rate_on_plateau: Reduces the learning rate when
138 the algorithm hits a plateau (i.e. the performance on the
139 validation does not improve)
140 :type reduce_learning_rate_on_plateau: Float
141 :param reduce_learning_rate_on_plateau_patience: How many epochs have
142 to pass before the learning rate reduces
143 :type reduce_learning_rate_on_plateau_patience: Float
144 :param reduce_learning_rate_on_plateau_rate: Rate at which we reduce
145 the learning rate
146 :type reduce_learning_rate_on_plateau_rate: Float
147 :param increase_batch_size_on_plateau: Increase the batch size on a
148 plateau
149 :type increase_batch_size_on_plateau: Integer
150 :param increase_batch_size_on_plateau_patience: How many epochs to wait
151 for before increasing the batch size
152 :type increase_batch_size_on_plateau_patience: Integer
153 :param increase_batch_size_on_plateau_rate: The rate at which the batch
154 size increases.
155 :type increase_batch_size_on_plateau_rate: Float
156 :param increase_batch_size_on_plateau_max: The maximum size of the batch
157 :type increase_batch_size_on_plateau_max: Integer
158 :param learning_rate_warmup_epochs: The number of epochs to warmup the
159 learning rate for.
160 :type learning_rate_warmup_epochs: Integer
161 :param resume: Resume training a model that was being trained.
162 :type resume: Boolean
163 :param skip_save_model: disables
164 saving model weights and hyperparameters each time the model
165 improves. By default Ludwig saves model weights after each epoch
166 the validation metric imrpvoes, but if the model is really big
167 that can be time consuming if you do not want to keep
168 the weights and just find out what performance can a model get
169 with a set of hyperparameters, use this parameter to skip it,
170 but the model will not be loadable later on.
171 :type skip_save_model: Boolean
172 :param skip_save_progress: disables saving progress each epoch.
173 By default Ludwig saves weights and stats after each epoch
174 for enabling resuming of training, but if the model is
175 really big that can be time consuming and will uses twice
176 as much space, use this parameter to skip it, but training
177 cannot be resumed later on
178 :type skip_save_progress: Boolean
179 :param skip_save_log: Disables saving TensorBoard
180 logs. By default Ludwig saves logs for the TensorBoard, but if it
181 is not needed turning it off can slightly increase the
182 overall speed..
183 :type skip_save_log: Boolean
184 :param random_seed: Default initialization for the random seeds
185 :type: Float
186 """
187 self._epochs = epochs
188 self._regularization_lambda = regularization_lambda
189 self._learning_rate = learning_rate
190 self._batch_size = batch_size
191 self._eval_batch_size = batch_size if eval_batch_size < 1 else eval_batch_size
192 self._bucketing_field = bucketing_field
193 self._validation_field = validation_field
194 self._validation_metric = validation_metric
195 self._early_stop = early_stop
196 self._reduce_learning_rate_on_plateau = reduce_learning_rate_on_plateau
197 self._reduce_learning_rate_on_plateau_patience = reduce_learning_rate_on_plateau_patience
198 self._reduce_learning_rate_on_plateau_rate = reduce_learning_rate_on_plateau_rate
199 self._reduce_learning_rate_eval_metric = reduce_learning_rate_eval_metric
200 self._reduce_learning_rate_eval_split = reduce_learning_rate_eval_split
201 self._increase_batch_size_on_plateau = increase_batch_size_on_plateau
202 self._increase_batch_size_on_plateau_patience = increase_batch_size_on_plateau_patience
203 self._increase_batch_size_on_plateau_rate = increase_batch_size_on_plateau_rate
204 self._increase_batch_size_on_plateau_max = increase_batch_size_on_plateau_max
205 self._increase_batch_size_eval_metric = increase_batch_size_eval_metric
206 self._increase_batch_size_eval_split = increase_batch_size_eval_split
207 self._learning_rate_warmup_epochs = learning_rate_warmup_epochs
208 self._resume = resume
209 self._skip_save_model = skip_save_model
210 self._skip_save_progress = skip_save_progress
211 self._skip_save_log = skip_save_log
212 self._random_seed = random_seed
213 self._horovod = horovod
214 self._debug = debug
215 self._received_sigint = False
216
217 if self._horovod:
218 self._learning_rate *= self._horovod.size()
219
220 # ================ Optimizer ================
221 if optimizer is None:
222 optimizer = {TYPE: 'Adam'}
223 self._optimizer = ClippedOptimizer(
224 horovod=horovod,
225 **optimizer
226 )
227
228 @classmethod
229 def write_epoch_summary(
230 cls,
231 summary_writer,
232 metrics,
233 step,
234 learning_rate=None
235 ):
236 if not summary_writer:
237 return
238
239 with summary_writer.as_default():
240 for feature_name, output_feature in metrics.items():
241 for metric in output_feature:
242 metric_tag = "{}/epoch_{}".format(
243 feature_name, metric
244 )
245 metric_val = output_feature[metric][-1]
246 tf.summary.scalar(metric_tag, metric_val, step=step)
247 if learning_rate:
248 tf.summary.scalar("combined/epoch_learning_rate",
249 learning_rate, step=step)
250 summary_writer.flush()
251
252 @classmethod
253 def write_step_summary(
254 cls,
255 train_summary_writer,
256 combined_loss,
257 all_losses,
258 step
259 ):
260 if not train_summary_writer:
261 return
262
263 with train_summary_writer.as_default():
264 # combined loss
265 loss_tag = "{}/step_training_loss".format("combined")
266 tf.summary.scalar(loss_tag, combined_loss, step=step)
267
268 # all other losses
269 for feature_name, loss in all_losses.items():
270 loss_tag = "{}/step_training_loss".format(feature_name)
271 tf.summary.scalar(loss_tag, loss, step=step)
272
273 train_summary_writer.flush()
274
275 def train(
276 self,
277 model,
278 training_set,
279 validation_set=None,
280 test_set=None,
281 save_path='model',
282 **kwargs
283 ):
284 """Trains a model with a set of hyperparameters listed below. Customizable
285 :param training_set: The training set
286 :param validation_set: The validation dataset
287 :param test_set: The test dataset
288 """
289 # ====== General setup =======
290 tf.random.set_seed(self._random_seed)
291
292 output_features = model.output_features
293 digits_per_epochs = len(str(self._epochs))
294 # Only use signals when on the main thread to avoid issues with CherryPy: https://github.com/uber/ludwig/issues/286
295 if threading.current_thread() == threading.main_thread():
296 signal.signal(signal.SIGINT, self.set_epochs_to_1_or_quit)
297 should_validate = validation_set is not None and validation_set.size > 0
298
299 metrics_names = self.get_metrics_names(output_features)
300
301 # check if validation_field is valid
302 valid_validation_field = False
303 validation_output_feature_name = None
304 if self._validation_field == 'combined':
305 valid_validation_field = True
306 validation_output_feature_name = 'combined'
307 if self._validation_metric is not LOSS and len(
308 output_features) == 1:
309 only_of = next(iter(output_features))
310 if self._validation_metric in metrics_names[only_of]:
311 validation_output_feature_name = only_of
312 logger.warning(
313 "Replacing 'combined' validation field "
314 "with '{}' as the specified validation "
315 "metric {} is invalid for 'combined' "
316 "but is valid for '{}'.".format(
317 only_of, self._validation_metric, only_of
318 ))
319 else:
320 for output_feature in output_features:
321 if self._validation_field == output_feature:
322 valid_validation_field = True
323 validation_output_feature_name = self._validation_field
324 if not valid_validation_field:
325 raise ValueError(
326 'The specificed validation_field {} is not valid.'
327 'Available ones are: {}'.format(
328 self._validation_field,
329 [of['name'] for of in output_features] + ['combined']
330 )
331 )
332
333 # check if validation_metric is valid
334 valid_validation_metric = self._validation_metric in metrics_names[
335 validation_output_feature_name
336 ]
337 if not valid_validation_metric:
338 raise ValueError(
339 'The specificed metric {} is not valid. '
340 'Available metrics for {} output feature are: {}'.format(
341 self._validation_metric,
342 validation_output_feature_name,
343 metrics_names[validation_output_feature_name]
344 )
345 )
346
347 # ====== Setup file names =======
348 model_weights_path = model_hyperparameters_path = None
349 training_checkpoints_path = training_checkpoints_prefix_path = training_progress_tracker_path = None
350 tensorboard_log_dir = None
351 if is_on_master():
352 os.makedirs(save_path, exist_ok=True)
353 model_weights_path = os.path.join(save_path,
354 MODEL_WEIGHTS_FILE_NAME)
355 model_hyperparameters_path = os.path.join(
356 save_path, MODEL_HYPERPARAMETERS_FILE_NAME
357 )
358 training_checkpoints_path = os.path.join(
359 save_path, TRAINING_CHECKPOINTS_DIR_PATH
360 )
361 # training_checkpoints_prefix_path = os.path.join(
362 # training_checkpoints_path, "ckpt"
363 # )
364 training_progress_tracker_path = os.path.join(
365 save_path, TRAINING_PROGRESS_TRACKER_FILE_NAME
366 )
367 tensorboard_log_dir = os.path.join(
368 save_path, 'logs'
369 )
370
371 # ====== Setup session =======
372 checkpoint = checkpoint_manager = None
373 if is_on_master():
374 checkpoint = tf.train.Checkpoint(
375 optimizer=self._optimizer,
376 model=model
377 )
378 checkpoint_manager = tf.train.CheckpointManager(
379 checkpoint, training_checkpoints_path, max_to_keep=1
380 )
381
382 train_summary_writer = None
383 validation_summary_writer = None
384 test_summary_writer = None
385 if is_on_master() and not self._skip_save_log and tensorboard_log_dir:
386 train_summary_writer = tf.summary.create_file_writer(
387 os.path.join(
388 tensorboard_log_dir, TRAINING
389 )
390 )
391 if validation_set is not None and validation_set.size > 0:
392 validation_summary_writer = tf.summary.create_file_writer(
393 os.path.join(
394 tensorboard_log_dir, VALIDATION
395 )
396 )
397 if test_set is not None and test_set.size > 0:
398 test_summary_writer = tf.summary.create_file_writer(
399 os.path.join(
400 tensorboard_log_dir, TEST
401 )
402 )
403
404 if self._debug and is_on_master():
405 # See https://www.tensorflow.org/tensorboard/debugger_v2 for usage.
406 debug_path = os.path.join(
407 save_path, 'debug'
408 )
409 tf.debugging.experimental.enable_dump_debug_info(
410 debug_path,
411 tensor_debug_mode='FULL_HEALTH',
412 circular_buffer_size=-1,
413 )
414 tf.config.experimental_run_functions_eagerly(True)
415
416 # ================ Resume logic ================
417 if self._resume:
418 progress_tracker = self.resume_training_progress_tracker(
419 training_progress_tracker_path
420 )
421 if is_on_master():
422 self.resume_weights_and_optimzier(
423 training_checkpoints_path, checkpoint
424 )
425 else:
426 (
427 train_metrics,
428 vali_metrics,
429 test_metrics
430 ) = self.initialize_training_metrics(output_features)
431
432 progress_tracker = ProgressTracker(
433 batch_size=self._batch_size,
434 epoch=0,
435 steps=0,
436 last_improvement_epoch=0,
437 last_learning_rate_reduction_epoch=0,
438 last_increase_batch_size_epoch=0,
439 learning_rate=self._learning_rate,
440 best_eval_metric=get_initial_validation_value(
441 self._validation_metric
442 ),
443 best_reduce_learning_rate_eval_metric=get_initial_validation_value(
444 self._reduce_learning_rate_eval_metric
445 ),
446 last_reduce_learning_rate_eval_metric_improvement=0,
447 best_increase_batch_size_eval_metric=get_initial_validation_value(
448 self._increase_batch_size_eval_metric
449 ),
450 last_increase_batch_size_eval_metric_improvement=0,
451 num_reductions_learning_rate=0,
452 num_increases_batch_size=0,
453 train_metrics=train_metrics,
454 vali_metrics=vali_metrics,
455 test_metrics=test_metrics,
456 last_improvement=0,
457 last_learning_rate_reduction=0,
458 last_increase_batch_size=0,
459 )
460
461 set_random_seed(self._random_seed)
462 batcher = initialize_batcher(
463 training_set, self._batch_size, self._bucketing_field,
464 horovod=self._horovod
465 )
466
467 # ================ Training Loop ================
468 first_batch = True
469 while progress_tracker.epoch < self._epochs:
470 # epoch init
471 start_time = time.time()
472 if is_on_master():
473 logger.info(
474 '\nEpoch {epoch:{digits}d}'.format(
475 epoch=progress_tracker.epoch + 1,
476 digits=digits_per_epochs
477 )
478 )
479 current_learning_rate = progress_tracker.learning_rate
480 # needed because batch size may change
481 batcher.batch_size = progress_tracker.batch_size
482
483 # Reset the metrics at the start of the next epoch
484 model.reset_metrics()
485
486 # ================ Train ================
487 progress_bar = None
488 if is_on_master():
489 progress_bar = tqdm(
490 desc='Training',
491 total=batcher.steps_per_epoch,
492 file=sys.stdout,
493 disable=is_progressbar_disabled()
494 )
495
496 # training step loop
497 while not batcher.last_batch():
498 batch = batcher.next_batch()
499 inputs = {
500 i_feat.feature_name: batch[i_feat.feature_name]
501 for i_feat in model.input_features.values()
502 }
503 targets = {
504 o_feat.feature_name: batch[o_feat.feature_name]
505 for o_feat in model.output_features.values()
506 }
507
508 # Reintroduce for tensorboard graph
509 # if first_batch and is_on_master() and not skip_save_log:
510 # tf.summary.trace_on(graph=True, profiler=True)
511
512 loss, all_losses = model.train_step(
513 self._optimizer,
514 inputs,
515 targets,
516 self._regularization_lambda
517 )
518
519 # Reintroduce for tensorboard graph
520 # if first_batch and is_on_master() and not skip_save_log:
521 # with train_summary_writer.as_default():
522 # tf.summary.trace_export(
523 # name="Model",
524 # step=0,
525 # profiler_outdir=tensorboard_log_dir
526 # )
527
528 if is_on_master() and not self._skip_save_log:
529 self.write_step_summary(
530 train_summary_writer=train_summary_writer,
531 combined_loss=loss,
532 all_losses=all_losses,
533 step=progress_tracker.steps,
534 )
535
536 if self._horovod and first_batch:
537 # Horovod: broadcast initial variable states from rank 0 to all other processes.
538 # This is necessary to ensure consistent initialization of all workers when
539 # training is started with random weights or restored from a checkpoint.
540 #
541 # Note: broadcast should be done after the first gradient step to ensure
542 # optimizer initialization.
543 self._horovod.broadcast_variables(model.variables,
544 root_rank=0)
545 self._horovod.broadcast_variables(
546 self._optimizer.variables(), root_rank=0)
547
548 if self._horovod:
549 current_learning_rate = learning_rate_warmup_distributed(
550 current_learning_rate,
551 progress_tracker.epoch,
552 self._learning_rate_warmup_epochs,
553 self._horovod.size(),
554 batcher.step,
555 batcher.steps_per_epoch
556 ) * self._horovod.size()
557 else:
558 current_learning_rate = learning_rate_warmup(
559 current_learning_rate,
560 progress_tracker.epoch,
561 self._learning_rate_warmup_epochs,
562 batcher.step,
563 batcher.steps_per_epoch
564 )
565 self._optimizer.set_learning_rate(current_learning_rate)
566
567 progress_tracker.steps += 1
568 if is_on_master():
569 progress_bar.update(1)
570 first_batch = False
571
572 # ================ Post Training Epoch ================
573 if is_on_master():
574 progress_bar.close()
575
576 progress_tracker.epoch += 1
577 batcher.reset() # todo this may be useless, doublecheck
578
579 # ================ Eval ================
580 # init tables
581 tables = OrderedDict()
582 for output_feature_name, output_feature in output_features.items():
583 tables[output_feature_name] = [
584 [output_feature_name] + metrics_names[output_feature_name]
585 ]
586 tables[COMBINED] = [[COMBINED, LOSS]]
587
588 # eval metrics on train
589 self.evaluation(
590 model,
591 training_set,
592 'train',
593 progress_tracker.train_metrics,
594 tables,
595 self._eval_batch_size,
596 )
597
598 self.write_epoch_summary(
599 summary_writer=train_summary_writer,
600 metrics=progress_tracker.train_metrics,
601 step=progress_tracker.epoch,
602 learning_rate=current_learning_rate,
603 )
604
605 if validation_set is not None and validation_set.size > 0:
606 # eval metrics on validation set
607 self.evaluation(
608 model,
609 validation_set,
610 'vali',
611 progress_tracker.vali_metrics,
612 tables,
613 self._eval_batch_size,
614 )
615
616 self.write_epoch_summary(
617 summary_writer=validation_summary_writer,
618 metrics=progress_tracker.vali_metrics,
619 step=progress_tracker.epoch,
620 )
621
622 if test_set is not None and test_set.size > 0:
623 # eval metrics on test set
624 self.evaluation(
625 model,
626 test_set,
627 TEST,
628 progress_tracker.test_metrics,
629 tables,
630 self._eval_batch_size,
631 )
632
633 self.write_epoch_summary(
634 summary_writer=test_summary_writer,
635 metrics=progress_tracker.test_metrics,
636 step=progress_tracker.epoch,
637 )
638
639 elapsed_time = (time.time() - start_time) * 1000.0
640
641 if is_on_master():
642 logger.info('Took {time}'.format(
643 time=time_utils.strdelta(elapsed_time)))
644
645 # metric prints
646 if is_on_master():
647 for output_feature, table in tables.items():
648 logger.info(
649 tabulate(
650 table,
651 headers='firstrow',
652 tablefmt='fancy_grid',
653 floatfmt='.4f'
654 )
655 )
656
657 # ================ Validation Logic ================
658 if should_validate:
659 should_break = self.check_progress_on_validation(
660 model,
661 progress_tracker,
662 validation_output_feature_name,
663 self._validation_metric,
664 model_weights_path,
665 model_hyperparameters_path,
666 self._reduce_learning_rate_on_plateau,
667 self._reduce_learning_rate_on_plateau_patience,
668 self._reduce_learning_rate_on_plateau_rate,
669 self._reduce_learning_rate_eval_metric,
670 self._reduce_learning_rate_eval_split,
671 self._increase_batch_size_on_plateau,
672 self._increase_batch_size_on_plateau_patience,
673 self._increase_batch_size_on_plateau_rate,
674 self._increase_batch_size_on_plateau_max,
675 self._increase_batch_size_eval_metric,
676 self._increase_batch_size_eval_split,
677 self._early_stop,
678 self._skip_save_model,
679 )
680 if should_break:
681 break
682 else:
683 # there's no validation, so we save the model at each iteration
684 if is_on_master():
685 if not self._skip_save_model:
686 model.save_weights(model_weights_path)
687
688 # ========== Save training progress ==========
689 if is_on_master():
690 if not self._skip_save_progress:
691 checkpoint_manager.save()
692 progress_tracker.save(
693 os.path.join(
694 save_path,
695 TRAINING_PROGRESS_TRACKER_FILE_NAME
696 )
697 )
698
699 if is_on_master():
700 contrib_command("train_epoch_end", progress_tracker)
701 logger.info('')
702
703 if train_summary_writer is not None:
704 train_summary_writer.close()
705 if validation_summary_writer is not None:
706 validation_summary_writer.close()
707 if test_summary_writer is not None:
708 test_summary_writer.close()
709
710 return (
711 progress_tracker.train_metrics,
712 progress_tracker.vali_metrics,
713 progress_tracker.test_metrics
714 )
715
716 def train_online(
717 self,
718 model,
719 dataset,
720 ):
721 batcher = initialize_batcher(
722 dataset,
723 self._batch_size,
724 horovod=self._horovod
725 )
726
727 # training step loop
728 progress_bar = tqdm(
729 desc='Trainining online',
730 total=batcher.steps_per_epoch,
731 file=sys.stdout,
732 disable=is_progressbar_disabled()
733 )
734
735 while not batcher.last_batch():
736 batch = batcher.next_batch()
737 inputs = {
738 i_feat.feature_name: batch[i_feat.feature_name]
739 for i_feat in model.input_features.values()
740 }
741 targets = {
742 o_feat.feature_name: batch[o_feat.feature_name]
743 for o_feat in model.output_features.values()
744 }
745
746 model.train_step(
747 self._optimizer,
748 inputs,
749 targets,
750 self._regularization_lambda
751 )
752
753 progress_bar.update(1)
754
755 progress_bar.close()
756
757 def append_metrics(self, model, dataset_name, results, metrics_log,
758 tables):
759 for output_feature in model.output_features:
760 scores = [dataset_name]
761
762 # collect metric names based on output features metrics to
763 # ensure consistent order of reporting metrics
764 metric_names = model.output_features[output_feature] \
765 .metric_functions.keys()
766
767 for metric in metric_names:
768 score = results[output_feature][metric]
769 metrics_log[output_feature][metric].append(score)
770 scores.append(score)
771
772 tables[output_feature].append(scores)
773
774 metrics_log[COMBINED][LOSS].append(results[COMBINED][LOSS])
775 tables[COMBINED].append([dataset_name, results[COMBINED][LOSS]])
776
777 return metrics_log, tables
778
779 def evaluation(
780 self,
781 model,
782 dataset,
783 dataset_name,
784 metrics_log,
785 tables,
786 batch_size=128,
787 debug=False,
788 ):
789 predictor = Predictor(
790 batch_size=batch_size, horovod=self._horovod, debug=self._debug
791 )
792 metrics, predictions = predictor.batch_evaluation(
793 model,
794 dataset,
795 collect_predictions=False,
796 dataset_name=dataset_name
797 )
798
799 self.append_metrics(model, dataset_name, metrics, metrics_log, tables)
800
801 return metrics_log, tables
802
803 def check_progress_on_validation(
804 self,
805 model,
806 progress_tracker,
807 validation_output_feature_name,
808 validation_metric,
809 model_weights_path,
810 model_hyperparameters_path,
811 reduce_learning_rate_on_plateau,
812 reduce_learning_rate_on_plateau_patience,
813 reduce_learning_rate_on_plateau_rate,
814 reduce_learning_rate_eval_metric,
815 reduce_learning_rate_eval_split,
816 increase_batch_size_on_plateau,
817 increase_batch_size_on_plateau_patience,
818 increase_batch_size_on_plateau_rate,
819 increase_batch_size_on_plateau_max,
820 increase_batch_size_eval_metric,
821 increase_batch_size_eval_split,
822 early_stop,
823 skip_save_model
824 ):
825 should_break = False
826 # record how long its been since an improvement
827 improved = get_improved_fun(validation_metric)
828 if improved(
829 progress_tracker.vali_metrics[validation_output_feature_name][
830 validation_metric][-1],
831 progress_tracker.best_eval_metric
832 ):
833 progress_tracker.last_improvement_epoch = progress_tracker.epoch
834 progress_tracker.best_eval_metric = progress_tracker.vali_metrics[
835 validation_output_feature_name][validation_metric][-1]
836 if is_on_master():
837 if not skip_save_model:
838 model.save_weights(model_weights_path)
839 logger.info(
840 'Validation {} on {} improved, model saved'.format(
841 validation_metric,
842 validation_output_feature_name
843 )
844 )
845
846 progress_tracker.last_improvement = (
847 progress_tracker.epoch - progress_tracker.last_improvement_epoch
848 )
849 if progress_tracker.last_improvement != 0:
850 if is_on_master():
851 logger.info(
852 'Last improvement of {} validation {} '
853 'happened {} epoch{} ago'.format(
854 validation_output_feature_name,
855 validation_metric,
856 progress_tracker.last_improvement,
857 '' if progress_tracker.last_improvement == 1 else 's'
858 )
859 )
860
861 # ========== Reduce Learning Rate Plateau logic ========
862 if reduce_learning_rate_on_plateau > 0:
863 self.reduce_learning_rate(
864 progress_tracker,
865 validation_output_feature_name,
866 reduce_learning_rate_on_plateau,
867 reduce_learning_rate_on_plateau_patience,
868 reduce_learning_rate_on_plateau_rate,
869 reduce_learning_rate_eval_metric,
870 reduce_learning_rate_eval_split
871 )
872 progress_tracker.last_learning_rate_reduction = (
873 progress_tracker.epoch -
874 progress_tracker.last_learning_rate_reduction_epoch
875 )
876 if (
877 progress_tracker.last_learning_rate_reduction > 0
878 and
879 progress_tracker.last_reduce_learning_rate_eval_metric_improvement > 0
880 and
881 not progress_tracker.num_reductions_learning_rate >= reduce_learning_rate_on_plateau
882 ):
883 logger.info(
884 'Last learning rate reduction '
885 'happened {} epoch{} ago, '
886 'improvement of {} {} {} '
887 'happened {} epoch{} ago'
888 ''.format(
889 progress_tracker.last_learning_rate_reduction,
890 '' if progress_tracker.last_learning_rate_reduction == 1 else 's',
891 validation_output_feature_name,
892 reduce_learning_rate_eval_split,
893 reduce_learning_rate_eval_metric,
894 progress_tracker.last_reduce_learning_rate_eval_metric_improvement,
895 '' if progress_tracker.last_reduce_learning_rate_eval_metric_improvement == 1 else 's',
896 )
897 )
898
899 # ========== Increase Batch Size Plateau logic =========
900 if increase_batch_size_on_plateau > 0:
901 self.increase_batch_size(
902 progress_tracker,
903 validation_output_feature_name,
904 increase_batch_size_on_plateau,
905 increase_batch_size_on_plateau_patience,
906 increase_batch_size_on_plateau_rate,
907 increase_batch_size_on_plateau_max,
908 increase_batch_size_eval_metric,
909 increase_batch_size_eval_split
910 )
911 progress_tracker.last_increase_batch_size = (
912 progress_tracker.epoch -
913 progress_tracker.last_increase_batch_size_epoch
914 )
915 if (
916 progress_tracker.last_increase_batch_size > 0
917 and
918 progress_tracker.last_increase_batch_size_eval_metric_improvement > 0
919 and
920 not progress_tracker.num_increases_batch_size >= increase_batch_size_on_plateau
921 and
922 not progress_tracker.batch_size >= increase_batch_size_on_plateau_max
923 ):
924 logger.info(
925 'Last batch size increase '
926 'happened {} epoch{} ago, '
927 'improvement of {} {} {} '
928 'happened {} epoch{} ago'.format(
929 progress_tracker.last_increase_batch_size,
930 '' if progress_tracker.last_increase_batch_size == 1 else 's',
931 validation_output_feature_name,
932 increase_batch_size_eval_split,
933 increase_batch_size_eval_metric,
934 progress_tracker.last_increase_batch_size_eval_metric_improvement,
935 '' if progress_tracker.last_increase_batch_size_eval_metric_improvement == 1 else 's',
936 )
937 )
938
939 # ========== Early Stop logic ==========
940 if early_stop > 0:
941 if progress_tracker.last_improvement >= early_stop:
942 if is_on_master():
943 logger.info(
944 "\nEARLY STOPPING due to lack of "
945 "validation improvement, "
946 "it has been {0} epochs since last "
947 "validation improvement\n".format(
948 progress_tracker.epoch -
949 progress_tracker.last_improvement_epoch
950 )
951 )
952 should_break = True
953 return should_break
954
955 def set_epochs_to_1_or_quit(self, signum, frame):
956 if not self._received_sigint:
957 self._epochs = 1
958 self._received_sigint = True
959 logger.critical(
960 '\nReceived SIGINT, will finish this epoch and then conclude '
961 'the training'
962 )
963 logger.critical(
964 'Send another SIGINT to immediately interrupt the process'
965 )
966 else:
967 logger.critical('\nReceived a second SIGINT, will now quit')
968 sys.exit(1)
969
970 def quit_training(self, signum, frame):
971 logger.critical('Received SIGQUIT, will kill training')
972 sys.exit(1)
973
974 def resume_training_progress_tracker(self, training_progress_tracker_path):
975 if is_on_master():
976 logger.info('Resuming training of model: {0}'.format(
977 training_progress_tracker_path
978 ))
979 progress_tracker = ProgressTracker.load(training_progress_tracker_path)
980 return progress_tracker
981
982 def initialize_training_metrics(self, output_features):
983 train_metrics = OrderedDict()
984 vali_metrics = OrderedDict()
985 test_metrics = OrderedDict()
986
987 for output_feature_name, output_feature in output_features.items():
988 train_metrics[output_feature_name] = OrderedDict()
989 vali_metrics[output_feature_name] = OrderedDict()
990 test_metrics[output_feature_name] = OrderedDict()
991 for metric in output_feature.metric_functions:
992 train_metrics[output_feature_name][metric] = []
993 vali_metrics[output_feature_name][metric] = []
994 test_metrics[output_feature_name][metric] = []
995
996 for metrics in [train_metrics, vali_metrics, test_metrics]:
997 metrics[COMBINED] = {LOSS: []}
998
999 return train_metrics, vali_metrics, test_metrics
1000
1001 def get_metrics_names(self, output_features):
1002 metrics_names = {}
1003 for output_feature_name, output_feature in output_features.items():
1004 for metric in output_feature.metric_functions:
1005 metrics = metrics_names.get(output_feature_name, [])
1006 metrics.append(metric)
1007 metrics_names[output_feature_name] = metrics
1008 metrics_names[COMBINED] = [LOSS]
1009 return metrics_names
1010
1011 def resume_weights_and_optimzier(
1012 self,
1013 model_weights_progress_path,
1014 checkpoint
1015 ):
1016 checkpoint.restore(
1017 tf.train.latest_checkpoint(model_weights_progress_path)
1018 )
1019
1020 def reduce_learning_rate(
1021 self,
1022 progress_tracker,
1023 validation_output_feature_name,
1024 reduce_learning_rate_on_plateau,
1025 reduce_learning_rate_on_plateau_patience,
1026 reduce_learning_rate_on_plateau_rate,
1027 reduce_learning_rate_eval_metric=LOSS,
1028 reduce_learning_rate_eval_split=TRAINING
1029 ):
1030 if not (progress_tracker.num_reductions_learning_rate >=
1031 reduce_learning_rate_on_plateau):
1032
1033 if reduce_learning_rate_eval_split == TRAINING:
1034 split_metrics = progress_tracker.train_metrics
1035 elif reduce_learning_rate_eval_split == VALIDATION:
1036 split_metrics = progress_tracker.vali_metrics
1037 else: # if reduce_learning_rate_eval_split == TEST:
1038 split_metrics = progress_tracker.test_metrics
1039
1040 validation_metric = reduce_learning_rate_eval_metric
1041 last_metric_value = split_metrics[validation_output_feature_name][
1042 validation_metric][-1]
1043
1044 improved = get_improved_fun(validation_metric)
1045 is_improved = improved(
1046 last_metric_value,
1047 progress_tracker.best_reduce_learning_rate_eval_metric
1048 )
1049 if is_improved:
1050 # we update the best metric value and set it to the current one
1051 # and reset last improvement epoch count
1052 progress_tracker.best_reduce_learning_rate_eval_metric = last_metric_value
1053 progress_tracker.last_reduce_learning_rate_eval_metric_improvement = 0
1054 else:
1055 progress_tracker.last_reduce_learning_rate_eval_metric_improvement += 1
1056 if not is_improved and (
1057 # learning rate reduction happened more than N epochs ago
1058 progress_tracker.last_learning_rate_reduction >=
1059 reduce_learning_rate_on_plateau_patience
1060 and
1061 # we had no improvement of the evaluation metric since more than N epochs ago
1062 progress_tracker.last_reduce_learning_rate_eval_metric_improvement >=
1063 reduce_learning_rate_on_plateau_patience
1064 ):
1065 progress_tracker.learning_rate *= (
1066 reduce_learning_rate_on_plateau_rate
1067 )
1068
1069 if is_on_master():
1070 logger.info(
1071 'PLATEAU REACHED, reducing learning rate to {} '
1072 'due to lack of improvement of {} {} {}'.format(
1073 progress_tracker.batch_size,
1074 validation_output_feature_name,
1075 reduce_learning_rate_eval_split,
1076 validation_metric,
1077 )
1078 )
1079
1080 progress_tracker.last_learning_rate_reduction_epoch = progress_tracker.epoch
1081 progress_tracker.last_learning_rate_reduction = 0
1082 progress_tracker.num_reductions_learning_rate += 1
1083
1084 if (progress_tracker.num_reductions_learning_rate >=
1085 reduce_learning_rate_on_plateau):
1086 if is_on_master():
1087 logger.info(
1088 'Learning rate was already reduced '
1089 '{} times, not reducing it anymore'.format(
1090 progress_tracker.num_reductions_learning_rate
1091 )
1092 )
1093
1094 def increase_batch_size(
1095 self,
1096 progress_tracker,
1097 validation_output_feature_name,
1098 increase_batch_size_on_plateau,
1099 increase_batch_size_on_plateau_patience,
1100 increase_batch_size_on_plateau_rate,
1101 increase_batch_size_on_plateau_max,
1102 increase_batch_size_eval_metric=LOSS,
1103 increase_batch_size_eval_split=TRAINING
1104 ):
1105 if (not progress_tracker.num_increases_batch_size >=
1106 increase_batch_size_on_plateau
1107 and not progress_tracker.batch_size ==
1108 increase_batch_size_on_plateau_max):
1109
1110 if increase_batch_size_eval_split == TRAINING:
1111 split_metrics = progress_tracker.train_metrics
1112 elif increase_batch_size_eval_split == VALIDATION:
1113 split_metrics = progress_tracker.vali_metrics
1114 else: # if increase_batch_size_eval_split == TEST:
1115 split_metrics = progress_tracker.test_metrics
1116
1117 validation_metric = increase_batch_size_eval_metric
1118 last_metric_value = split_metrics[validation_output_feature_name][
1119 validation_metric][-1]
1120
1121 improved = get_improved_fun(validation_metric)
1122 is_improved = improved(
1123 last_metric_value,
1124 progress_tracker.best_increase_batch_size_eval_metric
1125 )
1126 if is_improved:
1127 # We update the best metric value and set it to the current one, and reset last improvement epoch count
1128 progress_tracker.best_increase_batch_size_eval_metric = last_metric_value
1129 progress_tracker.last_increase_batch_size_eval_metric_improvement = 0
1130 else:
1131 progress_tracker.last_increase_batch_size_eval_metric_improvement += 1
1132 if not is_improved and (
1133 # Batch size increase happened more than N epochs ago
1134 progress_tracker.last_increase_batch_size >=
1135 increase_batch_size_on_plateau_patience
1136 and
1137 # We had no improvement of the evaluation metric since more than N epochs ago
1138 progress_tracker.last_increase_batch_size_eval_metric_improvement >=
1139 increase_batch_size_on_plateau_patience
1140 ):
1141 progress_tracker.batch_size = min(
1142 (increase_batch_size_on_plateau_rate *
1143 progress_tracker.batch_size),
1144 increase_batch_size_on_plateau_max
1145 )
1146
1147 if is_on_master():
1148 logger.info(
1149 'PLATEAU REACHED, increasing batch size to {} '
1150 'due to lack of improvement of {} {} {}'.format(
1151 progress_tracker.batch_size,
1152 validation_output_feature_name,
1153 increase_batch_size_eval_split,
1154 validation_metric,
1155 )
1156 )
1157
1158 progress_tracker.last_increase_batch_size_epoch = progress_tracker.epoch
1159 progress_tracker.last_increase_batch_size = 0
1160 progress_tracker.num_increases_batch_size += 1
1161
1162 if (progress_tracker.num_increases_batch_size >=
1163 increase_batch_size_on_plateau):
1164 if is_on_master():
1165 logger.info(
1166 'Batch size was already increased '
1167 '{} times, not increasing it anymore'.format(
1168 progress_tracker.num_increases_batch_size
1169 )
1170 )
1171 elif (progress_tracker.batch_size >=
1172 increase_batch_size_on_plateau_max):
1173 if is_on_master():
1174 logger.info(
1175 'Batch size was already increased '
1176 '{} times, currently it is {}, '
1177 'the maximum allowed'.format(
1178 progress_tracker.num_increases_batch_size,
1179 progress_tracker.batch_size
1180 )
1181 )
1182
1183
1184 class ProgressTracker:
1185
1186 def __init__(
1187 self,
1188 epoch,
1189 batch_size,
1190 steps,
1191 last_improvement_epoch,
1192 last_learning_rate_reduction_epoch,
1193 last_increase_batch_size_epoch,
1194 best_eval_metric,
1195 best_reduce_learning_rate_eval_metric,
1196 last_reduce_learning_rate_eval_metric_improvement,
1197 best_increase_batch_size_eval_metric,
1198 last_increase_batch_size_eval_metric_improvement,
1199 learning_rate,
1200 num_reductions_learning_rate,
1201 num_increases_batch_size,
1202 train_metrics,
1203 vali_metrics,
1204 test_metrics,
1205 last_improvement,
1206 last_learning_rate_reduction,
1207 last_increase_batch_size
1208 ):
1209 self.batch_size = batch_size
1210 self.epoch = epoch
1211 self.steps = steps
1212 self.last_improvement_epoch = last_improvement_epoch
1213 self.last_improvement = last_improvement
1214 self.last_learning_rate_reduction_epoch = last_learning_rate_reduction_epoch
1215 self.last_learning_rate_reduction = last_learning_rate_reduction
1216 self.last_increase_batch_size_epoch = last_increase_batch_size_epoch
1217 self.last_increase_batch_size = last_increase_batch_size
1218 self.learning_rate = learning_rate
1219 self.best_eval_metric = best_eval_metric
1220 self.best_reduce_learning_rate_eval_metric = best_reduce_learning_rate_eval_metric
1221 self.last_reduce_learning_rate_eval_metric_improvement = last_reduce_learning_rate_eval_metric_improvement
1222 self.best_increase_batch_size_eval_metric = best_increase_batch_size_eval_metric
1223 self.last_increase_batch_size_eval_metric_improvement = last_increase_batch_size_eval_metric_improvement
1224 self.num_reductions_learning_rate = num_reductions_learning_rate
1225 self.num_increases_batch_size = num_increases_batch_size
1226 self.train_metrics = train_metrics
1227 self.vali_metrics = vali_metrics
1228 self.test_metrics = test_metrics
1229
1230 def save(self, filepath):
1231 save_json(filepath, self.__dict__)
1232
1233 @staticmethod
1234 def load(filepath):
1235 loaded = load_json(filepath)
1236 return ProgressTracker(**loaded)
1237
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ludwig/models/trainer.py b/ludwig/models/trainer.py
--- a/ludwig/models/trainer.py
+++ b/ludwig/models/trainer.py
@@ -56,8 +56,6 @@
logger = logging.getLogger(__name__)
-tf.config.experimental_run_functions_eagerly(True)
-
class Trainer:
"""
| {"golden_diff": "diff --git a/ludwig/models/trainer.py b/ludwig/models/trainer.py\n--- a/ludwig/models/trainer.py\n+++ b/ludwig/models/trainer.py\n@@ -56,8 +56,6 @@\n \n logger = logging.getLogger(__name__)\n \n-tf.config.experimental_run_functions_eagerly(True)\n-\n \n class Trainer:\n \"\"\"\n", "issue": "TF2 is slower than TF1, improve speed\nhttps://github.com/tensorflow/tensorflow/issues/33487\r\n\r\nGetting the same result: epochs became longer because of switching to TF2.\r\nI noticed also that it's using less memory than TF1, but slower epochs are killing this advantage.\r\n\r\nTF 2.3 \u2013 less epoch time, but still slow.\r\n\r\nLooks like there are some issues with `experimental_run_functions_eagerly`.\r\nVery disappointed. Going to switch back to ludwig 0.2.2.8\n", "before_files": [{"content": "#! /usr/bin/env python\n# coding=utf-8\n# Copyright (c) 2019 Uber Technologies, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"\nThis module contains the class and auxiliary methods of a model.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport logging\nimport os\nimport os.path\nimport signal\nimport sys\nimport threading\nimport time\nfrom collections import OrderedDict\n\nimport tensorflow as tf\nfrom tabulate import tabulate\nfrom tqdm import tqdm\n\nfrom ludwig.constants import LOSS, COMBINED, TRAINING, VALIDATION, TEST, TYPE\nfrom ludwig.contrib import contrib_command\nfrom ludwig.globals import MODEL_HYPERPARAMETERS_FILE_NAME\nfrom ludwig.globals import MODEL_WEIGHTS_FILE_NAME\nfrom ludwig.globals import TRAINING_CHECKPOINTS_DIR_PATH\nfrom ludwig.globals import TRAINING_PROGRESS_TRACKER_FILE_NAME\nfrom ludwig.utils.horovod_utils import is_on_master\nfrom ludwig.globals import is_progressbar_disabled\nfrom ludwig.models.predictor import Predictor\nfrom ludwig.modules.metric_modules import get_improved_fun\nfrom ludwig.modules.metric_modules import get_initial_validation_value\nfrom ludwig.modules.optimization_modules import ClippedOptimizer\nfrom ludwig.utils import time_utils\nfrom ludwig.utils.batcher import initialize_batcher\nfrom ludwig.utils.data_utils import load_json, save_json\nfrom ludwig.utils.defaults import default_random_seed\nfrom ludwig.utils.math_utils import learning_rate_warmup, \\\n learning_rate_warmup_distributed\nfrom ludwig.utils.misc_utils import set_random_seed\n\nlogger = logging.getLogger(__name__)\n\ntf.config.experimental_run_functions_eagerly(True)\n\n\nclass Trainer:\n \"\"\"\n Trainer is a class that train a model\n \"\"\"\n\n def __init__(\n self,\n optimizer=None,\n epochs=100,\n regularization_lambda=0.0,\n learning_rate=0.001,\n batch_size=128,\n eval_batch_size=0,\n bucketing_field=None,\n validation_field='combined',\n validation_metric='loss',\n early_stop=20,\n reduce_learning_rate_on_plateau=0,\n reduce_learning_rate_on_plateau_patience=5,\n 
reduce_learning_rate_on_plateau_rate=0.5,\n reduce_learning_rate_eval_metric=LOSS,\n reduce_learning_rate_eval_split=TRAINING,\n increase_batch_size_on_plateau=0,\n increase_batch_size_on_plateau_patience=5,\n increase_batch_size_on_plateau_rate=2,\n increase_batch_size_on_plateau_max=512,\n increase_batch_size_eval_metric=LOSS,\n increase_batch_size_eval_split=TRAINING,\n learning_rate_warmup_epochs=1,\n resume=False,\n skip_save_model=False,\n skip_save_progress=False,\n skip_save_log=False,\n random_seed=default_random_seed,\n horovod=None,\n debug=False,\n **kwargs\n ):\n \"\"\"Trains a model with a set of hyperparameters listed below. Customizable\n :param training_set: The training set\n :param validation_set: The validation dataset\n :param test_set: The test dataset\n :param validation_field: The first output feature, by default it is set\n as the same field of the first output feature.\n :param validation_metric: metric used on the validation field, it is\n accuracy by default\n :type validation_metric:\n :param save_path: The path to save the file\n :type save_path: filepath (str)\n :param regularization_lambda: Strength of the $L2$ regularization\n :type regularization_lambda: Integer\n :param epochs: Number of epochs the algorithm is intended to be run over\n :type epochs: Integer\n :param learning_rate: Learning rate for the algorithm, represents how\n much to scale the gradients by\n :type learning_rate: Integer\n :param batch_size: Size of batch to pass to the model for training.\n :type batch_size: Integer\n :param batch_size: Size of batch to pass to the model for evaluation.\n :type batch_size: Integer\n :param bucketing_field: when batching, buckets datapoints based the\n length of a field together. Bucketing on text length speeds up\n training of RNNs consistently, 30% in some cases\n :type bucketing_field:\n :param validation_field: The first output feature, by default it is set\n as the same field of the first output feature.\n :param validation_metric: metric used on the validation field, it is\n accuracy by default\n :type validation_metric:\n :param dropout: dropout probability (probability of dropping\n a neuron in a given layer)\n :type dropout: Float\n :param early_stop: How many epochs without any improvement in the\n validation_metric triggers the algorithm to stop\n :type early_stop: Integer\n :param reduce_learning_rate_on_plateau: Reduces the learning rate when\n the algorithm hits a plateau (i.e. 
the performance on the\n validation does not improve)\n :type reduce_learning_rate_on_plateau: Float\n :param reduce_learning_rate_on_plateau_patience: How many epochs have\n to pass before the learning rate reduces\n :type reduce_learning_rate_on_plateau_patience: Float\n :param reduce_learning_rate_on_plateau_rate: Rate at which we reduce\n the learning rate\n :type reduce_learning_rate_on_plateau_rate: Float\n :param increase_batch_size_on_plateau: Increase the batch size on a\n plateau\n :type increase_batch_size_on_plateau: Integer\n :param increase_batch_size_on_plateau_patience: How many epochs to wait\n for before increasing the batch size\n :type increase_batch_size_on_plateau_patience: Integer\n :param increase_batch_size_on_plateau_rate: The rate at which the batch\n size increases.\n :type increase_batch_size_on_plateau_rate: Float\n :param increase_batch_size_on_plateau_max: The maximum size of the batch\n :type increase_batch_size_on_plateau_max: Integer\n :param learning_rate_warmup_epochs: The number of epochs to warmup the\n learning rate for.\n :type learning_rate_warmup_epochs: Integer\n :param resume: Resume training a model that was being trained.\n :type resume: Boolean\n :param skip_save_model: disables\n saving model weights and hyperparameters each time the model\n improves. By default Ludwig saves model weights after each epoch\n the validation metric imrpvoes, but if the model is really big\n that can be time consuming if you do not want to keep\n the weights and just find out what performance can a model get\n with a set of hyperparameters, use this parameter to skip it,\n but the model will not be loadable later on.\n :type skip_save_model: Boolean\n :param skip_save_progress: disables saving progress each epoch.\n By default Ludwig saves weights and stats after each epoch\n for enabling resuming of training, but if the model is\n really big that can be time consuming and will uses twice\n as much space, use this parameter to skip it, but training\n cannot be resumed later on\n :type skip_save_progress: Boolean\n :param skip_save_log: Disables saving TensorBoard\n logs. 
By default Ludwig saves logs for the TensorBoard, but if it\n is not needed turning it off can slightly increase the\n overall speed..\n :type skip_save_log: Boolean\n :param random_seed: Default initialization for the random seeds\n :type: Float\n \"\"\"\n self._epochs = epochs\n self._regularization_lambda = regularization_lambda\n self._learning_rate = learning_rate\n self._batch_size = batch_size\n self._eval_batch_size = batch_size if eval_batch_size < 1 else eval_batch_size\n self._bucketing_field = bucketing_field\n self._validation_field = validation_field\n self._validation_metric = validation_metric\n self._early_stop = early_stop\n self._reduce_learning_rate_on_plateau = reduce_learning_rate_on_plateau\n self._reduce_learning_rate_on_plateau_patience = reduce_learning_rate_on_plateau_patience\n self._reduce_learning_rate_on_plateau_rate = reduce_learning_rate_on_plateau_rate\n self._reduce_learning_rate_eval_metric = reduce_learning_rate_eval_metric\n self._reduce_learning_rate_eval_split = reduce_learning_rate_eval_split\n self._increase_batch_size_on_plateau = increase_batch_size_on_plateau\n self._increase_batch_size_on_plateau_patience = increase_batch_size_on_plateau_patience\n self._increase_batch_size_on_plateau_rate = increase_batch_size_on_plateau_rate\n self._increase_batch_size_on_plateau_max = increase_batch_size_on_plateau_max\n self._increase_batch_size_eval_metric = increase_batch_size_eval_metric\n self._increase_batch_size_eval_split = increase_batch_size_eval_split\n self._learning_rate_warmup_epochs = learning_rate_warmup_epochs\n self._resume = resume\n self._skip_save_model = skip_save_model\n self._skip_save_progress = skip_save_progress\n self._skip_save_log = skip_save_log\n self._random_seed = random_seed\n self._horovod = horovod\n self._debug = debug\n self._received_sigint = False\n\n if self._horovod:\n self._learning_rate *= self._horovod.size()\n\n # ================ Optimizer ================\n if optimizer is None:\n optimizer = {TYPE: 'Adam'}\n self._optimizer = ClippedOptimizer(\n horovod=horovod,\n **optimizer\n )\n\n @classmethod\n def write_epoch_summary(\n cls,\n summary_writer,\n metrics,\n step,\n learning_rate=None\n ):\n if not summary_writer:\n return\n\n with summary_writer.as_default():\n for feature_name, output_feature in metrics.items():\n for metric in output_feature:\n metric_tag = \"{}/epoch_{}\".format(\n feature_name, metric\n )\n metric_val = output_feature[metric][-1]\n tf.summary.scalar(metric_tag, metric_val, step=step)\n if learning_rate:\n tf.summary.scalar(\"combined/epoch_learning_rate\",\n learning_rate, step=step)\n summary_writer.flush()\n\n @classmethod\n def write_step_summary(\n cls,\n train_summary_writer,\n combined_loss,\n all_losses,\n step\n ):\n if not train_summary_writer:\n return\n\n with train_summary_writer.as_default():\n # combined loss\n loss_tag = \"{}/step_training_loss\".format(\"combined\")\n tf.summary.scalar(loss_tag, combined_loss, step=step)\n\n # all other losses\n for feature_name, loss in all_losses.items():\n loss_tag = \"{}/step_training_loss\".format(feature_name)\n tf.summary.scalar(loss_tag, loss, step=step)\n\n train_summary_writer.flush()\n\n def train(\n self,\n model,\n training_set,\n validation_set=None,\n test_set=None,\n save_path='model',\n **kwargs\n ):\n \"\"\"Trains a model with a set of hyperparameters listed below. 
Customizable\n :param training_set: The training set\n :param validation_set: The validation dataset\n :param test_set: The test dataset\n \"\"\"\n # ====== General setup =======\n tf.random.set_seed(self._random_seed)\n\n output_features = model.output_features\n digits_per_epochs = len(str(self._epochs))\n # Only use signals when on the main thread to avoid issues with CherryPy: https://github.com/uber/ludwig/issues/286\n if threading.current_thread() == threading.main_thread():\n signal.signal(signal.SIGINT, self.set_epochs_to_1_or_quit)\n should_validate = validation_set is not None and validation_set.size > 0\n\n metrics_names = self.get_metrics_names(output_features)\n\n # check if validation_field is valid\n valid_validation_field = False\n validation_output_feature_name = None\n if self._validation_field == 'combined':\n valid_validation_field = True\n validation_output_feature_name = 'combined'\n if self._validation_metric is not LOSS and len(\n output_features) == 1:\n only_of = next(iter(output_features))\n if self._validation_metric in metrics_names[only_of]:\n validation_output_feature_name = only_of\n logger.warning(\n \"Replacing 'combined' validation field \"\n \"with '{}' as the specified validation \"\n \"metric {} is invalid for 'combined' \"\n \"but is valid for '{}'.\".format(\n only_of, self._validation_metric, only_of\n ))\n else:\n for output_feature in output_features:\n if self._validation_field == output_feature:\n valid_validation_field = True\n validation_output_feature_name = self._validation_field\n if not valid_validation_field:\n raise ValueError(\n 'The specificed validation_field {} is not valid.'\n 'Available ones are: {}'.format(\n self._validation_field,\n [of['name'] for of in output_features] + ['combined']\n )\n )\n\n # check if validation_metric is valid\n valid_validation_metric = self._validation_metric in metrics_names[\n validation_output_feature_name\n ]\n if not valid_validation_metric:\n raise ValueError(\n 'The specificed metric {} is not valid. 
'\n 'Available metrics for {} output feature are: {}'.format(\n self._validation_metric,\n validation_output_feature_name,\n metrics_names[validation_output_feature_name]\n )\n )\n\n # ====== Setup file names =======\n model_weights_path = model_hyperparameters_path = None\n training_checkpoints_path = training_checkpoints_prefix_path = training_progress_tracker_path = None\n tensorboard_log_dir = None\n if is_on_master():\n os.makedirs(save_path, exist_ok=True)\n model_weights_path = os.path.join(save_path,\n MODEL_WEIGHTS_FILE_NAME)\n model_hyperparameters_path = os.path.join(\n save_path, MODEL_HYPERPARAMETERS_FILE_NAME\n )\n training_checkpoints_path = os.path.join(\n save_path, TRAINING_CHECKPOINTS_DIR_PATH\n )\n # training_checkpoints_prefix_path = os.path.join(\n # training_checkpoints_path, \"ckpt\"\n # )\n training_progress_tracker_path = os.path.join(\n save_path, TRAINING_PROGRESS_TRACKER_FILE_NAME\n )\n tensorboard_log_dir = os.path.join(\n save_path, 'logs'\n )\n\n # ====== Setup session =======\n checkpoint = checkpoint_manager = None\n if is_on_master():\n checkpoint = tf.train.Checkpoint(\n optimizer=self._optimizer,\n model=model\n )\n checkpoint_manager = tf.train.CheckpointManager(\n checkpoint, training_checkpoints_path, max_to_keep=1\n )\n\n train_summary_writer = None\n validation_summary_writer = None\n test_summary_writer = None\n if is_on_master() and not self._skip_save_log and tensorboard_log_dir:\n train_summary_writer = tf.summary.create_file_writer(\n os.path.join(\n tensorboard_log_dir, TRAINING\n )\n )\n if validation_set is not None and validation_set.size > 0:\n validation_summary_writer = tf.summary.create_file_writer(\n os.path.join(\n tensorboard_log_dir, VALIDATION\n )\n )\n if test_set is not None and test_set.size > 0:\n test_summary_writer = tf.summary.create_file_writer(\n os.path.join(\n tensorboard_log_dir, TEST\n )\n )\n\n if self._debug and is_on_master():\n # See https://www.tensorflow.org/tensorboard/debugger_v2 for usage.\n debug_path = os.path.join(\n save_path, 'debug'\n )\n tf.debugging.experimental.enable_dump_debug_info(\n debug_path,\n tensor_debug_mode='FULL_HEALTH',\n circular_buffer_size=-1,\n )\n tf.config.experimental_run_functions_eagerly(True)\n\n # ================ Resume logic ================\n if self._resume:\n progress_tracker = self.resume_training_progress_tracker(\n training_progress_tracker_path\n )\n if is_on_master():\n self.resume_weights_and_optimzier(\n training_checkpoints_path, checkpoint\n )\n else:\n (\n train_metrics,\n vali_metrics,\n test_metrics\n ) = self.initialize_training_metrics(output_features)\n\n progress_tracker = ProgressTracker(\n batch_size=self._batch_size,\n epoch=0,\n steps=0,\n last_improvement_epoch=0,\n last_learning_rate_reduction_epoch=0,\n last_increase_batch_size_epoch=0,\n learning_rate=self._learning_rate,\n best_eval_metric=get_initial_validation_value(\n self._validation_metric\n ),\n best_reduce_learning_rate_eval_metric=get_initial_validation_value(\n self._reduce_learning_rate_eval_metric\n ),\n last_reduce_learning_rate_eval_metric_improvement=0,\n best_increase_batch_size_eval_metric=get_initial_validation_value(\n self._increase_batch_size_eval_metric\n ),\n last_increase_batch_size_eval_metric_improvement=0,\n num_reductions_learning_rate=0,\n num_increases_batch_size=0,\n train_metrics=train_metrics,\n vali_metrics=vali_metrics,\n test_metrics=test_metrics,\n last_improvement=0,\n last_learning_rate_reduction=0,\n last_increase_batch_size=0,\n )\n\n 
set_random_seed(self._random_seed)\n batcher = initialize_batcher(\n training_set, self._batch_size, self._bucketing_field,\n horovod=self._horovod\n )\n\n # ================ Training Loop ================\n first_batch = True\n while progress_tracker.epoch < self._epochs:\n # epoch init\n start_time = time.time()\n if is_on_master():\n logger.info(\n '\\nEpoch {epoch:{digits}d}'.format(\n epoch=progress_tracker.epoch + 1,\n digits=digits_per_epochs\n )\n )\n current_learning_rate = progress_tracker.learning_rate\n # needed because batch size may change\n batcher.batch_size = progress_tracker.batch_size\n\n # Reset the metrics at the start of the next epoch\n model.reset_metrics()\n\n # ================ Train ================\n progress_bar = None\n if is_on_master():\n progress_bar = tqdm(\n desc='Training',\n total=batcher.steps_per_epoch,\n file=sys.stdout,\n disable=is_progressbar_disabled()\n )\n\n # training step loop\n while not batcher.last_batch():\n batch = batcher.next_batch()\n inputs = {\n i_feat.feature_name: batch[i_feat.feature_name]\n for i_feat in model.input_features.values()\n }\n targets = {\n o_feat.feature_name: batch[o_feat.feature_name]\n for o_feat in model.output_features.values()\n }\n\n # Reintroduce for tensorboard graph\n # if first_batch and is_on_master() and not skip_save_log:\n # tf.summary.trace_on(graph=True, profiler=True)\n\n loss, all_losses = model.train_step(\n self._optimizer,\n inputs,\n targets,\n self._regularization_lambda\n )\n\n # Reintroduce for tensorboard graph\n # if first_batch and is_on_master() and not skip_save_log:\n # with train_summary_writer.as_default():\n # tf.summary.trace_export(\n # name=\"Model\",\n # step=0,\n # profiler_outdir=tensorboard_log_dir\n # )\n\n if is_on_master() and not self._skip_save_log:\n self.write_step_summary(\n train_summary_writer=train_summary_writer,\n combined_loss=loss,\n all_losses=all_losses,\n step=progress_tracker.steps,\n )\n\n if self._horovod and first_batch:\n # Horovod: broadcast initial variable states from rank 0 to all other processes.\n # This is necessary to ensure consistent initialization of all workers when\n # training is started with random weights or restored from a checkpoint.\n #\n # Note: broadcast should be done after the first gradient step to ensure\n # optimizer initialization.\n self._horovod.broadcast_variables(model.variables,\n root_rank=0)\n self._horovod.broadcast_variables(\n self._optimizer.variables(), root_rank=0)\n\n if self._horovod:\n current_learning_rate = learning_rate_warmup_distributed(\n current_learning_rate,\n progress_tracker.epoch,\n self._learning_rate_warmup_epochs,\n self._horovod.size(),\n batcher.step,\n batcher.steps_per_epoch\n ) * self._horovod.size()\n else:\n current_learning_rate = learning_rate_warmup(\n current_learning_rate,\n progress_tracker.epoch,\n self._learning_rate_warmup_epochs,\n batcher.step,\n batcher.steps_per_epoch\n )\n self._optimizer.set_learning_rate(current_learning_rate)\n\n progress_tracker.steps += 1\n if is_on_master():\n progress_bar.update(1)\n first_batch = False\n\n # ================ Post Training Epoch ================\n if is_on_master():\n progress_bar.close()\n\n progress_tracker.epoch += 1\n batcher.reset() # todo this may be useless, doublecheck\n\n # ================ Eval ================\n # init tables\n tables = OrderedDict()\n for output_feature_name, output_feature in output_features.items():\n tables[output_feature_name] = [\n [output_feature_name] + metrics_names[output_feature_name]\n ]\n 
tables[COMBINED] = [[COMBINED, LOSS]]\n\n # eval metrics on train\n self.evaluation(\n model,\n training_set,\n 'train',\n progress_tracker.train_metrics,\n tables,\n self._eval_batch_size,\n )\n\n self.write_epoch_summary(\n summary_writer=train_summary_writer,\n metrics=progress_tracker.train_metrics,\n step=progress_tracker.epoch,\n learning_rate=current_learning_rate,\n )\n\n if validation_set is not None and validation_set.size > 0:\n # eval metrics on validation set\n self.evaluation(\n model,\n validation_set,\n 'vali',\n progress_tracker.vali_metrics,\n tables,\n self._eval_batch_size,\n )\n\n self.write_epoch_summary(\n summary_writer=validation_summary_writer,\n metrics=progress_tracker.vali_metrics,\n step=progress_tracker.epoch,\n )\n\n if test_set is not None and test_set.size > 0:\n # eval metrics on test set\n self.evaluation(\n model,\n test_set,\n TEST,\n progress_tracker.test_metrics,\n tables,\n self._eval_batch_size,\n )\n\n self.write_epoch_summary(\n summary_writer=test_summary_writer,\n metrics=progress_tracker.test_metrics,\n step=progress_tracker.epoch,\n )\n\n elapsed_time = (time.time() - start_time) * 1000.0\n\n if is_on_master():\n logger.info('Took {time}'.format(\n time=time_utils.strdelta(elapsed_time)))\n\n # metric prints\n if is_on_master():\n for output_feature, table in tables.items():\n logger.info(\n tabulate(\n table,\n headers='firstrow',\n tablefmt='fancy_grid',\n floatfmt='.4f'\n )\n )\n\n # ================ Validation Logic ================\n if should_validate:\n should_break = self.check_progress_on_validation(\n model,\n progress_tracker,\n validation_output_feature_name,\n self._validation_metric,\n model_weights_path,\n model_hyperparameters_path,\n self._reduce_learning_rate_on_plateau,\n self._reduce_learning_rate_on_plateau_patience,\n self._reduce_learning_rate_on_plateau_rate,\n self._reduce_learning_rate_eval_metric,\n self._reduce_learning_rate_eval_split,\n self._increase_batch_size_on_plateau,\n self._increase_batch_size_on_plateau_patience,\n self._increase_batch_size_on_plateau_rate,\n self._increase_batch_size_on_plateau_max,\n self._increase_batch_size_eval_metric,\n self._increase_batch_size_eval_split,\n self._early_stop,\n self._skip_save_model,\n )\n if should_break:\n break\n else:\n # there's no validation, so we save the model at each iteration\n if is_on_master():\n if not self._skip_save_model:\n model.save_weights(model_weights_path)\n\n # ========== Save training progress ==========\n if is_on_master():\n if not self._skip_save_progress:\n checkpoint_manager.save()\n progress_tracker.save(\n os.path.join(\n save_path,\n TRAINING_PROGRESS_TRACKER_FILE_NAME\n )\n )\n\n if is_on_master():\n contrib_command(\"train_epoch_end\", progress_tracker)\n logger.info('')\n\n if train_summary_writer is not None:\n train_summary_writer.close()\n if validation_summary_writer is not None:\n validation_summary_writer.close()\n if test_summary_writer is not None:\n test_summary_writer.close()\n\n return (\n progress_tracker.train_metrics,\n progress_tracker.vali_metrics,\n progress_tracker.test_metrics\n )\n\n def train_online(\n self,\n model,\n dataset,\n ):\n batcher = initialize_batcher(\n dataset,\n self._batch_size,\n horovod=self._horovod\n )\n\n # training step loop\n progress_bar = tqdm(\n desc='Trainining online',\n total=batcher.steps_per_epoch,\n file=sys.stdout,\n disable=is_progressbar_disabled()\n )\n\n while not batcher.last_batch():\n batch = batcher.next_batch()\n inputs = {\n i_feat.feature_name: 
batch[i_feat.feature_name]\n for i_feat in model.input_features.values()\n }\n targets = {\n o_feat.feature_name: batch[o_feat.feature_name]\n for o_feat in model.output_features.values()\n }\n\n model.train_step(\n self._optimizer,\n inputs,\n targets,\n self._regularization_lambda\n )\n\n progress_bar.update(1)\n\n progress_bar.close()\n\n def append_metrics(self, model, dataset_name, results, metrics_log,\n tables):\n for output_feature in model.output_features:\n scores = [dataset_name]\n\n # collect metric names based on output features metrics to\n # ensure consistent order of reporting metrics\n metric_names = model.output_features[output_feature] \\\n .metric_functions.keys()\n\n for metric in metric_names:\n score = results[output_feature][metric]\n metrics_log[output_feature][metric].append(score)\n scores.append(score)\n\n tables[output_feature].append(scores)\n\n metrics_log[COMBINED][LOSS].append(results[COMBINED][LOSS])\n tables[COMBINED].append([dataset_name, results[COMBINED][LOSS]])\n\n return metrics_log, tables\n\n def evaluation(\n self,\n model,\n dataset,\n dataset_name,\n metrics_log,\n tables,\n batch_size=128,\n debug=False,\n ):\n predictor = Predictor(\n batch_size=batch_size, horovod=self._horovod, debug=self._debug\n )\n metrics, predictions = predictor.batch_evaluation(\n model,\n dataset,\n collect_predictions=False,\n dataset_name=dataset_name\n )\n\n self.append_metrics(model, dataset_name, metrics, metrics_log, tables)\n\n return metrics_log, tables\n\n def check_progress_on_validation(\n self,\n model,\n progress_tracker,\n validation_output_feature_name,\n validation_metric,\n model_weights_path,\n model_hyperparameters_path,\n reduce_learning_rate_on_plateau,\n reduce_learning_rate_on_plateau_patience,\n reduce_learning_rate_on_plateau_rate,\n reduce_learning_rate_eval_metric,\n reduce_learning_rate_eval_split,\n increase_batch_size_on_plateau,\n increase_batch_size_on_plateau_patience,\n increase_batch_size_on_plateau_rate,\n increase_batch_size_on_plateau_max,\n increase_batch_size_eval_metric,\n increase_batch_size_eval_split,\n early_stop,\n skip_save_model\n ):\n should_break = False\n # record how long its been since an improvement\n improved = get_improved_fun(validation_metric)\n if improved(\n progress_tracker.vali_metrics[validation_output_feature_name][\n validation_metric][-1],\n progress_tracker.best_eval_metric\n ):\n progress_tracker.last_improvement_epoch = progress_tracker.epoch\n progress_tracker.best_eval_metric = progress_tracker.vali_metrics[\n validation_output_feature_name][validation_metric][-1]\n if is_on_master():\n if not skip_save_model:\n model.save_weights(model_weights_path)\n logger.info(\n 'Validation {} on {} improved, model saved'.format(\n validation_metric,\n validation_output_feature_name\n )\n )\n\n progress_tracker.last_improvement = (\n progress_tracker.epoch - progress_tracker.last_improvement_epoch\n )\n if progress_tracker.last_improvement != 0:\n if is_on_master():\n logger.info(\n 'Last improvement of {} validation {} '\n 'happened {} epoch{} ago'.format(\n validation_output_feature_name,\n validation_metric,\n progress_tracker.last_improvement,\n '' if progress_tracker.last_improvement == 1 else 's'\n )\n )\n\n # ========== Reduce Learning Rate Plateau logic ========\n if reduce_learning_rate_on_plateau > 0:\n self.reduce_learning_rate(\n progress_tracker,\n validation_output_feature_name,\n reduce_learning_rate_on_plateau,\n reduce_learning_rate_on_plateau_patience,\n 
reduce_learning_rate_on_plateau_rate,\n reduce_learning_rate_eval_metric,\n reduce_learning_rate_eval_split\n )\n progress_tracker.last_learning_rate_reduction = (\n progress_tracker.epoch -\n progress_tracker.last_learning_rate_reduction_epoch\n )\n if (\n progress_tracker.last_learning_rate_reduction > 0\n and\n progress_tracker.last_reduce_learning_rate_eval_metric_improvement > 0\n and\n not progress_tracker.num_reductions_learning_rate >= reduce_learning_rate_on_plateau\n ):\n logger.info(\n 'Last learning rate reduction '\n 'happened {} epoch{} ago, '\n 'improvement of {} {} {} '\n 'happened {} epoch{} ago'\n ''.format(\n progress_tracker.last_learning_rate_reduction,\n '' if progress_tracker.last_learning_rate_reduction == 1 else 's',\n validation_output_feature_name,\n reduce_learning_rate_eval_split,\n reduce_learning_rate_eval_metric,\n progress_tracker.last_reduce_learning_rate_eval_metric_improvement,\n '' if progress_tracker.last_reduce_learning_rate_eval_metric_improvement == 1 else 's',\n )\n )\n\n # ========== Increase Batch Size Plateau logic =========\n if increase_batch_size_on_plateau > 0:\n self.increase_batch_size(\n progress_tracker,\n validation_output_feature_name,\n increase_batch_size_on_plateau,\n increase_batch_size_on_plateau_patience,\n increase_batch_size_on_plateau_rate,\n increase_batch_size_on_plateau_max,\n increase_batch_size_eval_metric,\n increase_batch_size_eval_split\n )\n progress_tracker.last_increase_batch_size = (\n progress_tracker.epoch -\n progress_tracker.last_increase_batch_size_epoch\n )\n if (\n progress_tracker.last_increase_batch_size > 0\n and\n progress_tracker.last_increase_batch_size_eval_metric_improvement > 0\n and\n not progress_tracker.num_increases_batch_size >= increase_batch_size_on_plateau\n and\n not progress_tracker.batch_size >= increase_batch_size_on_plateau_max\n ):\n logger.info(\n 'Last batch size increase '\n 'happened {} epoch{} ago, '\n 'improvement of {} {} {} '\n 'happened {} epoch{} ago'.format(\n progress_tracker.last_increase_batch_size,\n '' if progress_tracker.last_increase_batch_size == 1 else 's',\n validation_output_feature_name,\n increase_batch_size_eval_split,\n increase_batch_size_eval_metric,\n progress_tracker.last_increase_batch_size_eval_metric_improvement,\n '' if progress_tracker.last_increase_batch_size_eval_metric_improvement == 1 else 's',\n )\n )\n\n # ========== Early Stop logic ==========\n if early_stop > 0:\n if progress_tracker.last_improvement >= early_stop:\n if is_on_master():\n logger.info(\n \"\\nEARLY STOPPING due to lack of \"\n \"validation improvement, \"\n \"it has been {0} epochs since last \"\n \"validation improvement\\n\".format(\n progress_tracker.epoch -\n progress_tracker.last_improvement_epoch\n )\n )\n should_break = True\n return should_break\n\n def set_epochs_to_1_or_quit(self, signum, frame):\n if not self._received_sigint:\n self._epochs = 1\n self._received_sigint = True\n logger.critical(\n '\\nReceived SIGINT, will finish this epoch and then conclude '\n 'the training'\n )\n logger.critical(\n 'Send another SIGINT to immediately interrupt the process'\n )\n else:\n logger.critical('\\nReceived a second SIGINT, will now quit')\n sys.exit(1)\n\n def quit_training(self, signum, frame):\n logger.critical('Received SIGQUIT, will kill training')\n sys.exit(1)\n\n def resume_training_progress_tracker(self, training_progress_tracker_path):\n if is_on_master():\n logger.info('Resuming training of model: {0}'.format(\n training_progress_tracker_path\n ))\n 
progress_tracker = ProgressTracker.load(training_progress_tracker_path)\n return progress_tracker\n\n def initialize_training_metrics(self, output_features):\n train_metrics = OrderedDict()\n vali_metrics = OrderedDict()\n test_metrics = OrderedDict()\n\n for output_feature_name, output_feature in output_features.items():\n train_metrics[output_feature_name] = OrderedDict()\n vali_metrics[output_feature_name] = OrderedDict()\n test_metrics[output_feature_name] = OrderedDict()\n for metric in output_feature.metric_functions:\n train_metrics[output_feature_name][metric] = []\n vali_metrics[output_feature_name][metric] = []\n test_metrics[output_feature_name][metric] = []\n\n for metrics in [train_metrics, vali_metrics, test_metrics]:\n metrics[COMBINED] = {LOSS: []}\n\n return train_metrics, vali_metrics, test_metrics\n\n def get_metrics_names(self, output_features):\n metrics_names = {}\n for output_feature_name, output_feature in output_features.items():\n for metric in output_feature.metric_functions:\n metrics = metrics_names.get(output_feature_name, [])\n metrics.append(metric)\n metrics_names[output_feature_name] = metrics\n metrics_names[COMBINED] = [LOSS]\n return metrics_names\n\n def resume_weights_and_optimzier(\n self,\n model_weights_progress_path,\n checkpoint\n ):\n checkpoint.restore(\n tf.train.latest_checkpoint(model_weights_progress_path)\n )\n\n def reduce_learning_rate(\n self,\n progress_tracker,\n validation_output_feature_name,\n reduce_learning_rate_on_plateau,\n reduce_learning_rate_on_plateau_patience,\n reduce_learning_rate_on_plateau_rate,\n reduce_learning_rate_eval_metric=LOSS,\n reduce_learning_rate_eval_split=TRAINING\n ):\n if not (progress_tracker.num_reductions_learning_rate >=\n reduce_learning_rate_on_plateau):\n\n if reduce_learning_rate_eval_split == TRAINING:\n split_metrics = progress_tracker.train_metrics\n elif reduce_learning_rate_eval_split == VALIDATION:\n split_metrics = progress_tracker.vali_metrics\n else: # if reduce_learning_rate_eval_split == TEST:\n split_metrics = progress_tracker.test_metrics\n\n validation_metric = reduce_learning_rate_eval_metric\n last_metric_value = split_metrics[validation_output_feature_name][\n validation_metric][-1]\n\n improved = get_improved_fun(validation_metric)\n is_improved = improved(\n last_metric_value,\n progress_tracker.best_reduce_learning_rate_eval_metric\n )\n if is_improved:\n # we update the best metric value and set it to the current one\n # and reset last improvement epoch count\n progress_tracker.best_reduce_learning_rate_eval_metric = last_metric_value\n progress_tracker.last_reduce_learning_rate_eval_metric_improvement = 0\n else:\n progress_tracker.last_reduce_learning_rate_eval_metric_improvement += 1\n if not is_improved and (\n # learning rate reduction happened more than N epochs ago\n progress_tracker.last_learning_rate_reduction >=\n reduce_learning_rate_on_plateau_patience\n and\n # we had no improvement of the evaluation metric since more than N epochs ago\n progress_tracker.last_reduce_learning_rate_eval_metric_improvement >=\n reduce_learning_rate_on_plateau_patience\n ):\n progress_tracker.learning_rate *= (\n reduce_learning_rate_on_plateau_rate\n )\n\n if is_on_master():\n logger.info(\n 'PLATEAU REACHED, reducing learning rate to {} '\n 'due to lack of improvement of {} {} {}'.format(\n progress_tracker.batch_size,\n validation_output_feature_name,\n reduce_learning_rate_eval_split,\n validation_metric,\n )\n )\n\n progress_tracker.last_learning_rate_reduction_epoch = 
progress_tracker.epoch\n progress_tracker.last_learning_rate_reduction = 0\n progress_tracker.num_reductions_learning_rate += 1\n\n if (progress_tracker.num_reductions_learning_rate >=\n reduce_learning_rate_on_plateau):\n if is_on_master():\n logger.info(\n 'Learning rate was already reduced '\n '{} times, not reducing it anymore'.format(\n progress_tracker.num_reductions_learning_rate\n )\n )\n\n def increase_batch_size(\n self,\n progress_tracker,\n validation_output_feature_name,\n increase_batch_size_on_plateau,\n increase_batch_size_on_plateau_patience,\n increase_batch_size_on_plateau_rate,\n increase_batch_size_on_plateau_max,\n increase_batch_size_eval_metric=LOSS,\n increase_batch_size_eval_split=TRAINING\n ):\n if (not progress_tracker.num_increases_batch_size >=\n increase_batch_size_on_plateau\n and not progress_tracker.batch_size ==\n increase_batch_size_on_plateau_max):\n\n if increase_batch_size_eval_split == TRAINING:\n split_metrics = progress_tracker.train_metrics\n elif increase_batch_size_eval_split == VALIDATION:\n split_metrics = progress_tracker.vali_metrics\n else: # if increase_batch_size_eval_split == TEST:\n split_metrics = progress_tracker.test_metrics\n\n validation_metric = increase_batch_size_eval_metric\n last_metric_value = split_metrics[validation_output_feature_name][\n validation_metric][-1]\n\n improved = get_improved_fun(validation_metric)\n is_improved = improved(\n last_metric_value,\n progress_tracker.best_increase_batch_size_eval_metric\n )\n if is_improved:\n # We update the best metric value and set it to the current one, and reset last improvement epoch count\n progress_tracker.best_increase_batch_size_eval_metric = last_metric_value\n progress_tracker.last_increase_batch_size_eval_metric_improvement = 0\n else:\n progress_tracker.last_increase_batch_size_eval_metric_improvement += 1\n if not is_improved and (\n # Batch size increase happened more than N epochs ago\n progress_tracker.last_increase_batch_size >=\n increase_batch_size_on_plateau_patience\n and\n # We had no improvement of the evaluation metric since more than N epochs ago\n progress_tracker.last_increase_batch_size_eval_metric_improvement >=\n increase_batch_size_on_plateau_patience\n ):\n progress_tracker.batch_size = min(\n (increase_batch_size_on_plateau_rate *\n progress_tracker.batch_size),\n increase_batch_size_on_plateau_max\n )\n\n if is_on_master():\n logger.info(\n 'PLATEAU REACHED, increasing batch size to {} '\n 'due to lack of improvement of {} {} {}'.format(\n progress_tracker.batch_size,\n validation_output_feature_name,\n increase_batch_size_eval_split,\n validation_metric,\n )\n )\n\n progress_tracker.last_increase_batch_size_epoch = progress_tracker.epoch\n progress_tracker.last_increase_batch_size = 0\n progress_tracker.num_increases_batch_size += 1\n\n if (progress_tracker.num_increases_batch_size >=\n increase_batch_size_on_plateau):\n if is_on_master():\n logger.info(\n 'Batch size was already increased '\n '{} times, not increasing it anymore'.format(\n progress_tracker.num_increases_batch_size\n )\n )\n elif (progress_tracker.batch_size >=\n increase_batch_size_on_plateau_max):\n if is_on_master():\n logger.info(\n 'Batch size was already increased '\n '{} times, currently it is {}, '\n 'the maximum allowed'.format(\n progress_tracker.num_increases_batch_size,\n progress_tracker.batch_size\n )\n )\n\n\nclass ProgressTracker:\n\n def __init__(\n self,\n epoch,\n batch_size,\n steps,\n last_improvement_epoch,\n last_learning_rate_reduction_epoch,\n 
last_increase_batch_size_epoch,\n best_eval_metric,\n best_reduce_learning_rate_eval_metric,\n last_reduce_learning_rate_eval_metric_improvement,\n best_increase_batch_size_eval_metric,\n last_increase_batch_size_eval_metric_improvement,\n learning_rate,\n num_reductions_learning_rate,\n num_increases_batch_size,\n train_metrics,\n vali_metrics,\n test_metrics,\n last_improvement,\n last_learning_rate_reduction,\n last_increase_batch_size\n ):\n self.batch_size = batch_size\n self.epoch = epoch\n self.steps = steps\n self.last_improvement_epoch = last_improvement_epoch\n self.last_improvement = last_improvement\n self.last_learning_rate_reduction_epoch = last_learning_rate_reduction_epoch\n self.last_learning_rate_reduction = last_learning_rate_reduction\n self.last_increase_batch_size_epoch = last_increase_batch_size_epoch\n self.last_increase_batch_size = last_increase_batch_size\n self.learning_rate = learning_rate\n self.best_eval_metric = best_eval_metric\n self.best_reduce_learning_rate_eval_metric = best_reduce_learning_rate_eval_metric\n self.last_reduce_learning_rate_eval_metric_improvement = last_reduce_learning_rate_eval_metric_improvement\n self.best_increase_batch_size_eval_metric = best_increase_batch_size_eval_metric\n self.last_increase_batch_size_eval_metric_improvement = last_increase_batch_size_eval_metric_improvement\n self.num_reductions_learning_rate = num_reductions_learning_rate\n self.num_increases_batch_size = num_increases_batch_size\n self.train_metrics = train_metrics\n self.vali_metrics = vali_metrics\n self.test_metrics = test_metrics\n\n def save(self, filepath):\n save_json(filepath, self.__dict__)\n\n @staticmethod\n def load(filepath):\n loaded = load_json(filepath)\n return ProgressTracker(**loaded)\n", "path": "ludwig/models/trainer.py"}], "after_files": [{"content": "#! 
/usr/bin/env python\n# coding=utf-8\n# Copyright (c) 2019 Uber Technologies, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"\nThis module contains the class and auxiliary methods of a model.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport logging\nimport os\nimport os.path\nimport signal\nimport sys\nimport threading\nimport time\nfrom collections import OrderedDict\n\nimport tensorflow as tf\nfrom tabulate import tabulate\nfrom tqdm import tqdm\n\nfrom ludwig.constants import LOSS, COMBINED, TRAINING, VALIDATION, TEST, TYPE\nfrom ludwig.contrib import contrib_command\nfrom ludwig.globals import MODEL_HYPERPARAMETERS_FILE_NAME\nfrom ludwig.globals import MODEL_WEIGHTS_FILE_NAME\nfrom ludwig.globals import TRAINING_CHECKPOINTS_DIR_PATH\nfrom ludwig.globals import TRAINING_PROGRESS_TRACKER_FILE_NAME\nfrom ludwig.utils.horovod_utils import is_on_master\nfrom ludwig.globals import is_progressbar_disabled\nfrom ludwig.models.predictor import Predictor\nfrom ludwig.modules.metric_modules import get_improved_fun\nfrom ludwig.modules.metric_modules import get_initial_validation_value\nfrom ludwig.modules.optimization_modules import ClippedOptimizer\nfrom ludwig.utils import time_utils\nfrom ludwig.utils.batcher import initialize_batcher\nfrom ludwig.utils.data_utils import load_json, save_json\nfrom ludwig.utils.defaults import default_random_seed\nfrom ludwig.utils.math_utils import learning_rate_warmup, \\\n learning_rate_warmup_distributed\nfrom ludwig.utils.misc_utils import set_random_seed\n\nlogger = logging.getLogger(__name__)\n\n\nclass Trainer:\n \"\"\"\n Trainer is a class that train a model\n \"\"\"\n\n def __init__(\n self,\n optimizer=None,\n epochs=100,\n regularization_lambda=0.0,\n learning_rate=0.001,\n batch_size=128,\n eval_batch_size=0,\n bucketing_field=None,\n validation_field='combined',\n validation_metric='loss',\n early_stop=20,\n reduce_learning_rate_on_plateau=0,\n reduce_learning_rate_on_plateau_patience=5,\n reduce_learning_rate_on_plateau_rate=0.5,\n reduce_learning_rate_eval_metric=LOSS,\n reduce_learning_rate_eval_split=TRAINING,\n increase_batch_size_on_plateau=0,\n increase_batch_size_on_plateau_patience=5,\n increase_batch_size_on_plateau_rate=2,\n increase_batch_size_on_plateau_max=512,\n increase_batch_size_eval_metric=LOSS,\n increase_batch_size_eval_split=TRAINING,\n learning_rate_warmup_epochs=1,\n resume=False,\n skip_save_model=False,\n skip_save_progress=False,\n skip_save_log=False,\n random_seed=default_random_seed,\n horovod=None,\n debug=False,\n **kwargs\n ):\n \"\"\"Trains a model with a set of hyperparameters listed below. 
Customizable\n :param training_set: The training set\n :param validation_set: The validation dataset\n :param test_set: The test dataset\n :param validation_field: The first output feature, by default it is set\n as the same field of the first output feature.\n :param validation_metric: metric used on the validation field, it is\n accuracy by default\n :type validation_metric:\n :param save_path: The path to save the file\n :type save_path: filepath (str)\n :param regularization_lambda: Strength of the $L2$ regularization\n :type regularization_lambda: Integer\n :param epochs: Number of epochs the algorithm is intended to be run over\n :type epochs: Integer\n :param learning_rate: Learning rate for the algorithm, represents how\n much to scale the gradients by\n :type learning_rate: Integer\n :param batch_size: Size of batch to pass to the model for training.\n :type batch_size: Integer\n :param batch_size: Size of batch to pass to the model for evaluation.\n :type batch_size: Integer\n :param bucketing_field: when batching, buckets datapoints based the\n length of a field together. Bucketing on text length speeds up\n training of RNNs consistently, 30% in some cases\n :type bucketing_field:\n :param validation_field: The first output feature, by default it is set\n as the same field of the first output feature.\n :param validation_metric: metric used on the validation field, it is\n accuracy by default\n :type validation_metric:\n :param dropout: dropout probability (probability of dropping\n a neuron in a given layer)\n :type dropout: Float\n :param early_stop: How many epochs without any improvement in the\n validation_metric triggers the algorithm to stop\n :type early_stop: Integer\n :param reduce_learning_rate_on_plateau: Reduces the learning rate when\n the algorithm hits a plateau (i.e. the performance on the\n validation does not improve)\n :type reduce_learning_rate_on_plateau: Float\n :param reduce_learning_rate_on_plateau_patience: How many epochs have\n to pass before the learning rate reduces\n :type reduce_learning_rate_on_plateau_patience: Float\n :param reduce_learning_rate_on_plateau_rate: Rate at which we reduce\n the learning rate\n :type reduce_learning_rate_on_plateau_rate: Float\n :param increase_batch_size_on_plateau: Increase the batch size on a\n plateau\n :type increase_batch_size_on_plateau: Integer\n :param increase_batch_size_on_plateau_patience: How many epochs to wait\n for before increasing the batch size\n :type increase_batch_size_on_plateau_patience: Integer\n :param increase_batch_size_on_plateau_rate: The rate at which the batch\n size increases.\n :type increase_batch_size_on_plateau_rate: Float\n :param increase_batch_size_on_plateau_max: The maximum size of the batch\n :type increase_batch_size_on_plateau_max: Integer\n :param learning_rate_warmup_epochs: The number of epochs to warmup the\n learning rate for.\n :type learning_rate_warmup_epochs: Integer\n :param resume: Resume training a model that was being trained.\n :type resume: Boolean\n :param skip_save_model: disables\n saving model weights and hyperparameters each time the model\n improves. 
By default Ludwig saves model weights after each epoch\n the validation metric imrpvoes, but if the model is really big\n that can be time consuming if you do not want to keep\n the weights and just find out what performance can a model get\n with a set of hyperparameters, use this parameter to skip it,\n but the model will not be loadable later on.\n :type skip_save_model: Boolean\n :param skip_save_progress: disables saving progress each epoch.\n By default Ludwig saves weights and stats after each epoch\n for enabling resuming of training, but if the model is\n really big that can be time consuming and will uses twice\n as much space, use this parameter to skip it, but training\n cannot be resumed later on\n :type skip_save_progress: Boolean\n :param skip_save_log: Disables saving TensorBoard\n logs. By default Ludwig saves logs for the TensorBoard, but if it\n is not needed turning it off can slightly increase the\n overall speed..\n :type skip_save_log: Boolean\n :param random_seed: Default initialization for the random seeds\n :type: Float\n \"\"\"\n self._epochs = epochs\n self._regularization_lambda = regularization_lambda\n self._learning_rate = learning_rate\n self._batch_size = batch_size\n self._eval_batch_size = batch_size if eval_batch_size < 1 else eval_batch_size\n self._bucketing_field = bucketing_field\n self._validation_field = validation_field\n self._validation_metric = validation_metric\n self._early_stop = early_stop\n self._reduce_learning_rate_on_plateau = reduce_learning_rate_on_plateau\n self._reduce_learning_rate_on_plateau_patience = reduce_learning_rate_on_plateau_patience\n self._reduce_learning_rate_on_plateau_rate = reduce_learning_rate_on_plateau_rate\n self._reduce_learning_rate_eval_metric = reduce_learning_rate_eval_metric\n self._reduce_learning_rate_eval_split = reduce_learning_rate_eval_split\n self._increase_batch_size_on_plateau = increase_batch_size_on_plateau\n self._increase_batch_size_on_plateau_patience = increase_batch_size_on_plateau_patience\n self._increase_batch_size_on_plateau_rate = increase_batch_size_on_plateau_rate\n self._increase_batch_size_on_plateau_max = increase_batch_size_on_plateau_max\n self._increase_batch_size_eval_metric = increase_batch_size_eval_metric\n self._increase_batch_size_eval_split = increase_batch_size_eval_split\n self._learning_rate_warmup_epochs = learning_rate_warmup_epochs\n self._resume = resume\n self._skip_save_model = skip_save_model\n self._skip_save_progress = skip_save_progress\n self._skip_save_log = skip_save_log\n self._random_seed = random_seed\n self._horovod = horovod\n self._debug = debug\n self._received_sigint = False\n\n if self._horovod:\n self._learning_rate *= self._horovod.size()\n\n # ================ Optimizer ================\n if optimizer is None:\n optimizer = {TYPE: 'Adam'}\n self._optimizer = ClippedOptimizer(\n horovod=horovod,\n **optimizer\n )\n\n @classmethod\n def write_epoch_summary(\n cls,\n summary_writer,\n metrics,\n step,\n learning_rate=None\n ):\n if not summary_writer:\n return\n\n with summary_writer.as_default():\n for feature_name, output_feature in metrics.items():\n for metric in output_feature:\n metric_tag = \"{}/epoch_{}\".format(\n feature_name, metric\n )\n metric_val = output_feature[metric][-1]\n tf.summary.scalar(metric_tag, metric_val, step=step)\n if learning_rate:\n tf.summary.scalar(\"combined/epoch_learning_rate\",\n learning_rate, step=step)\n summary_writer.flush()\n\n @classmethod\n def write_step_summary(\n cls,\n train_summary_writer,\n 
combined_loss,\n all_losses,\n step\n ):\n if not train_summary_writer:\n return\n\n with train_summary_writer.as_default():\n # combined loss\n loss_tag = \"{}/step_training_loss\".format(\"combined\")\n tf.summary.scalar(loss_tag, combined_loss, step=step)\n\n # all other losses\n for feature_name, loss in all_losses.items():\n loss_tag = \"{}/step_training_loss\".format(feature_name)\n tf.summary.scalar(loss_tag, loss, step=step)\n\n train_summary_writer.flush()\n\n def train(\n self,\n model,\n training_set,\n validation_set=None,\n test_set=None,\n save_path='model',\n **kwargs\n ):\n \"\"\"Trains a model with a set of hyperparameters listed below. Customizable\n :param training_set: The training set\n :param validation_set: The validation dataset\n :param test_set: The test dataset\n \"\"\"\n # ====== General setup =======\n tf.random.set_seed(self._random_seed)\n\n output_features = model.output_features\n digits_per_epochs = len(str(self._epochs))\n # Only use signals when on the main thread to avoid issues with CherryPy: https://github.com/uber/ludwig/issues/286\n if threading.current_thread() == threading.main_thread():\n signal.signal(signal.SIGINT, self.set_epochs_to_1_or_quit)\n should_validate = validation_set is not None and validation_set.size > 0\n\n metrics_names = self.get_metrics_names(output_features)\n\n # check if validation_field is valid\n valid_validation_field = False\n validation_output_feature_name = None\n if self._validation_field == 'combined':\n valid_validation_field = True\n validation_output_feature_name = 'combined'\n if self._validation_metric is not LOSS and len(\n output_features) == 1:\n only_of = next(iter(output_features))\n if self._validation_metric in metrics_names[only_of]:\n validation_output_feature_name = only_of\n logger.warning(\n \"Replacing 'combined' validation field \"\n \"with '{}' as the specified validation \"\n \"metric {} is invalid for 'combined' \"\n \"but is valid for '{}'.\".format(\n only_of, self._validation_metric, only_of\n ))\n else:\n for output_feature in output_features:\n if self._validation_field == output_feature:\n valid_validation_field = True\n validation_output_feature_name = self._validation_field\n if not valid_validation_field:\n raise ValueError(\n 'The specificed validation_field {} is not valid.'\n 'Available ones are: {}'.format(\n self._validation_field,\n [of['name'] for of in output_features] + ['combined']\n )\n )\n\n # check if validation_metric is valid\n valid_validation_metric = self._validation_metric in metrics_names[\n validation_output_feature_name\n ]\n if not valid_validation_metric:\n raise ValueError(\n 'The specificed metric {} is not valid. 
'\n 'Available metrics for {} output feature are: {}'.format(\n self._validation_metric,\n validation_output_feature_name,\n metrics_names[validation_output_feature_name]\n )\n )\n\n # ====== Setup file names =======\n model_weights_path = model_hyperparameters_path = None\n training_checkpoints_path = training_checkpoints_prefix_path = training_progress_tracker_path = None\n tensorboard_log_dir = None\n if is_on_master():\n os.makedirs(save_path, exist_ok=True)\n model_weights_path = os.path.join(save_path,\n MODEL_WEIGHTS_FILE_NAME)\n model_hyperparameters_path = os.path.join(\n save_path, MODEL_HYPERPARAMETERS_FILE_NAME\n )\n training_checkpoints_path = os.path.join(\n save_path, TRAINING_CHECKPOINTS_DIR_PATH\n )\n # training_checkpoints_prefix_path = os.path.join(\n # training_checkpoints_path, \"ckpt\"\n # )\n training_progress_tracker_path = os.path.join(\n save_path, TRAINING_PROGRESS_TRACKER_FILE_NAME\n )\n tensorboard_log_dir = os.path.join(\n save_path, 'logs'\n )\n\n # ====== Setup session =======\n checkpoint = checkpoint_manager = None\n if is_on_master():\n checkpoint = tf.train.Checkpoint(\n optimizer=self._optimizer,\n model=model\n )\n checkpoint_manager = tf.train.CheckpointManager(\n checkpoint, training_checkpoints_path, max_to_keep=1\n )\n\n train_summary_writer = None\n validation_summary_writer = None\n test_summary_writer = None\n if is_on_master() and not self._skip_save_log and tensorboard_log_dir:\n train_summary_writer = tf.summary.create_file_writer(\n os.path.join(\n tensorboard_log_dir, TRAINING\n )\n )\n if validation_set is not None and validation_set.size > 0:\n validation_summary_writer = tf.summary.create_file_writer(\n os.path.join(\n tensorboard_log_dir, VALIDATION\n )\n )\n if test_set is not None and test_set.size > 0:\n test_summary_writer = tf.summary.create_file_writer(\n os.path.join(\n tensorboard_log_dir, TEST\n )\n )\n\n if self._debug and is_on_master():\n # See https://www.tensorflow.org/tensorboard/debugger_v2 for usage.\n debug_path = os.path.join(\n save_path, 'debug'\n )\n tf.debugging.experimental.enable_dump_debug_info(\n debug_path,\n tensor_debug_mode='FULL_HEALTH',\n circular_buffer_size=-1,\n )\n tf.config.experimental_run_functions_eagerly(True)\n\n # ================ Resume logic ================\n if self._resume:\n progress_tracker = self.resume_training_progress_tracker(\n training_progress_tracker_path\n )\n if is_on_master():\n self.resume_weights_and_optimzier(\n training_checkpoints_path, checkpoint\n )\n else:\n (\n train_metrics,\n vali_metrics,\n test_metrics\n ) = self.initialize_training_metrics(output_features)\n\n progress_tracker = ProgressTracker(\n batch_size=self._batch_size,\n epoch=0,\n steps=0,\n last_improvement_epoch=0,\n last_learning_rate_reduction_epoch=0,\n last_increase_batch_size_epoch=0,\n learning_rate=self._learning_rate,\n best_eval_metric=get_initial_validation_value(\n self._validation_metric\n ),\n best_reduce_learning_rate_eval_metric=get_initial_validation_value(\n self._reduce_learning_rate_eval_metric\n ),\n last_reduce_learning_rate_eval_metric_improvement=0,\n best_increase_batch_size_eval_metric=get_initial_validation_value(\n self._increase_batch_size_eval_metric\n ),\n last_increase_batch_size_eval_metric_improvement=0,\n num_reductions_learning_rate=0,\n num_increases_batch_size=0,\n train_metrics=train_metrics,\n vali_metrics=vali_metrics,\n test_metrics=test_metrics,\n last_improvement=0,\n last_learning_rate_reduction=0,\n last_increase_batch_size=0,\n )\n\n 
set_random_seed(self._random_seed)\n batcher = initialize_batcher(\n training_set, self._batch_size, self._bucketing_field,\n horovod=self._horovod\n )\n\n # ================ Training Loop ================\n first_batch = True\n while progress_tracker.epoch < self._epochs:\n # epoch init\n start_time = time.time()\n if is_on_master():\n logger.info(\n '\\nEpoch {epoch:{digits}d}'.format(\n epoch=progress_tracker.epoch + 1,\n digits=digits_per_epochs\n )\n )\n current_learning_rate = progress_tracker.learning_rate\n # needed because batch size may change\n batcher.batch_size = progress_tracker.batch_size\n\n # Reset the metrics at the start of the next epoch\n model.reset_metrics()\n\n # ================ Train ================\n progress_bar = None\n if is_on_master():\n progress_bar = tqdm(\n desc='Training',\n total=batcher.steps_per_epoch,\n file=sys.stdout,\n disable=is_progressbar_disabled()\n )\n\n # training step loop\n while not batcher.last_batch():\n batch = batcher.next_batch()\n inputs = {\n i_feat.feature_name: batch[i_feat.feature_name]\n for i_feat in model.input_features.values()\n }\n targets = {\n o_feat.feature_name: batch[o_feat.feature_name]\n for o_feat in model.output_features.values()\n }\n\n # Reintroduce for tensorboard graph\n # if first_batch and is_on_master() and not skip_save_log:\n # tf.summary.trace_on(graph=True, profiler=True)\n\n loss, all_losses = model.train_step(\n self._optimizer,\n inputs,\n targets,\n self._regularization_lambda\n )\n\n # Reintroduce for tensorboard graph\n # if first_batch and is_on_master() and not skip_save_log:\n # with train_summary_writer.as_default():\n # tf.summary.trace_export(\n # name=\"Model\",\n # step=0,\n # profiler_outdir=tensorboard_log_dir\n # )\n\n if is_on_master() and not self._skip_save_log:\n self.write_step_summary(\n train_summary_writer=train_summary_writer,\n combined_loss=loss,\n all_losses=all_losses,\n step=progress_tracker.steps,\n )\n\n if self._horovod and first_batch:\n # Horovod: broadcast initial variable states from rank 0 to all other processes.\n # This is necessary to ensure consistent initialization of all workers when\n # training is started with random weights or restored from a checkpoint.\n #\n # Note: broadcast should be done after the first gradient step to ensure\n # optimizer initialization.\n self._horovod.broadcast_variables(model.variables,\n root_rank=0)\n self._horovod.broadcast_variables(\n self._optimizer.variables(), root_rank=0)\n\n if self._horovod:\n current_learning_rate = learning_rate_warmup_distributed(\n current_learning_rate,\n progress_tracker.epoch,\n self._learning_rate_warmup_epochs,\n self._horovod.size(),\n batcher.step,\n batcher.steps_per_epoch\n ) * self._horovod.size()\n else:\n current_learning_rate = learning_rate_warmup(\n current_learning_rate,\n progress_tracker.epoch,\n self._learning_rate_warmup_epochs,\n batcher.step,\n batcher.steps_per_epoch\n )\n self._optimizer.set_learning_rate(current_learning_rate)\n\n progress_tracker.steps += 1\n if is_on_master():\n progress_bar.update(1)\n first_batch = False\n\n # ================ Post Training Epoch ================\n if is_on_master():\n progress_bar.close()\n\n progress_tracker.epoch += 1\n batcher.reset() # todo this may be useless, doublecheck\n\n # ================ Eval ================\n # init tables\n tables = OrderedDict()\n for output_feature_name, output_feature in output_features.items():\n tables[output_feature_name] = [\n [output_feature_name] + metrics_names[output_feature_name]\n ]\n 
tables[COMBINED] = [[COMBINED, LOSS]]\n\n # eval metrics on train\n self.evaluation(\n model,\n training_set,\n 'train',\n progress_tracker.train_metrics,\n tables,\n self._eval_batch_size,\n )\n\n self.write_epoch_summary(\n summary_writer=train_summary_writer,\n metrics=progress_tracker.train_metrics,\n step=progress_tracker.epoch,\n learning_rate=current_learning_rate,\n )\n\n if validation_set is not None and validation_set.size > 0:\n # eval metrics on validation set\n self.evaluation(\n model,\n validation_set,\n 'vali',\n progress_tracker.vali_metrics,\n tables,\n self._eval_batch_size,\n )\n\n self.write_epoch_summary(\n summary_writer=validation_summary_writer,\n metrics=progress_tracker.vali_metrics,\n step=progress_tracker.epoch,\n )\n\n if test_set is not None and test_set.size > 0:\n # eval metrics on test set\n self.evaluation(\n model,\n test_set,\n TEST,\n progress_tracker.test_metrics,\n tables,\n self._eval_batch_size,\n )\n\n self.write_epoch_summary(\n summary_writer=test_summary_writer,\n metrics=progress_tracker.test_metrics,\n step=progress_tracker.epoch,\n )\n\n elapsed_time = (time.time() - start_time) * 1000.0\n\n if is_on_master():\n logger.info('Took {time}'.format(\n time=time_utils.strdelta(elapsed_time)))\n\n # metric prints\n if is_on_master():\n for output_feature, table in tables.items():\n logger.info(\n tabulate(\n table,\n headers='firstrow',\n tablefmt='fancy_grid',\n floatfmt='.4f'\n )\n )\n\n # ================ Validation Logic ================\n if should_validate:\n should_break = self.check_progress_on_validation(\n model,\n progress_tracker,\n validation_output_feature_name,\n self._validation_metric,\n model_weights_path,\n model_hyperparameters_path,\n self._reduce_learning_rate_on_plateau,\n self._reduce_learning_rate_on_plateau_patience,\n self._reduce_learning_rate_on_plateau_rate,\n self._reduce_learning_rate_eval_metric,\n self._reduce_learning_rate_eval_split,\n self._increase_batch_size_on_plateau,\n self._increase_batch_size_on_plateau_patience,\n self._increase_batch_size_on_plateau_rate,\n self._increase_batch_size_on_plateau_max,\n self._increase_batch_size_eval_metric,\n self._increase_batch_size_eval_split,\n self._early_stop,\n self._skip_save_model,\n )\n if should_break:\n break\n else:\n # there's no validation, so we save the model at each iteration\n if is_on_master():\n if not self._skip_save_model:\n model.save_weights(model_weights_path)\n\n # ========== Save training progress ==========\n if is_on_master():\n if not self._skip_save_progress:\n checkpoint_manager.save()\n progress_tracker.save(\n os.path.join(\n save_path,\n TRAINING_PROGRESS_TRACKER_FILE_NAME\n )\n )\n\n if is_on_master():\n contrib_command(\"train_epoch_end\", progress_tracker)\n logger.info('')\n\n if train_summary_writer is not None:\n train_summary_writer.close()\n if validation_summary_writer is not None:\n validation_summary_writer.close()\n if test_summary_writer is not None:\n test_summary_writer.close()\n\n return (\n progress_tracker.train_metrics,\n progress_tracker.vali_metrics,\n progress_tracker.test_metrics\n )\n\n def train_online(\n self,\n model,\n dataset,\n ):\n batcher = initialize_batcher(\n dataset,\n self._batch_size,\n horovod=self._horovod\n )\n\n # training step loop\n progress_bar = tqdm(\n desc='Trainining online',\n total=batcher.steps_per_epoch,\n file=sys.stdout,\n disable=is_progressbar_disabled()\n )\n\n while not batcher.last_batch():\n batch = batcher.next_batch()\n inputs = {\n i_feat.feature_name: 
batch[i_feat.feature_name]\n for i_feat in model.input_features.values()\n }\n targets = {\n o_feat.feature_name: batch[o_feat.feature_name]\n for o_feat in model.output_features.values()\n }\n\n model.train_step(\n self._optimizer,\n inputs,\n targets,\n self._regularization_lambda\n )\n\n progress_bar.update(1)\n\n progress_bar.close()\n\n def append_metrics(self, model, dataset_name, results, metrics_log,\n tables):\n for output_feature in model.output_features:\n scores = [dataset_name]\n\n # collect metric names based on output features metrics to\n # ensure consistent order of reporting metrics\n metric_names = model.output_features[output_feature] \\\n .metric_functions.keys()\n\n for metric in metric_names:\n score = results[output_feature][metric]\n metrics_log[output_feature][metric].append(score)\n scores.append(score)\n\n tables[output_feature].append(scores)\n\n metrics_log[COMBINED][LOSS].append(results[COMBINED][LOSS])\n tables[COMBINED].append([dataset_name, results[COMBINED][LOSS]])\n\n return metrics_log, tables\n\n def evaluation(\n self,\n model,\n dataset,\n dataset_name,\n metrics_log,\n tables,\n batch_size=128,\n debug=False,\n ):\n predictor = Predictor(\n batch_size=batch_size, horovod=self._horovod, debug=self._debug\n )\n metrics, predictions = predictor.batch_evaluation(\n model,\n dataset,\n collect_predictions=False,\n dataset_name=dataset_name\n )\n\n self.append_metrics(model, dataset_name, metrics, metrics_log, tables)\n\n return metrics_log, tables\n\n def check_progress_on_validation(\n self,\n model,\n progress_tracker,\n validation_output_feature_name,\n validation_metric,\n model_weights_path,\n model_hyperparameters_path,\n reduce_learning_rate_on_plateau,\n reduce_learning_rate_on_plateau_patience,\n reduce_learning_rate_on_plateau_rate,\n reduce_learning_rate_eval_metric,\n reduce_learning_rate_eval_split,\n increase_batch_size_on_plateau,\n increase_batch_size_on_plateau_patience,\n increase_batch_size_on_plateau_rate,\n increase_batch_size_on_plateau_max,\n increase_batch_size_eval_metric,\n increase_batch_size_eval_split,\n early_stop,\n skip_save_model\n ):\n should_break = False\n # record how long its been since an improvement\n improved = get_improved_fun(validation_metric)\n if improved(\n progress_tracker.vali_metrics[validation_output_feature_name][\n validation_metric][-1],\n progress_tracker.best_eval_metric\n ):\n progress_tracker.last_improvement_epoch = progress_tracker.epoch\n progress_tracker.best_eval_metric = progress_tracker.vali_metrics[\n validation_output_feature_name][validation_metric][-1]\n if is_on_master():\n if not skip_save_model:\n model.save_weights(model_weights_path)\n logger.info(\n 'Validation {} on {} improved, model saved'.format(\n validation_metric,\n validation_output_feature_name\n )\n )\n\n progress_tracker.last_improvement = (\n progress_tracker.epoch - progress_tracker.last_improvement_epoch\n )\n if progress_tracker.last_improvement != 0:\n if is_on_master():\n logger.info(\n 'Last improvement of {} validation {} '\n 'happened {} epoch{} ago'.format(\n validation_output_feature_name,\n validation_metric,\n progress_tracker.last_improvement,\n '' if progress_tracker.last_improvement == 1 else 's'\n )\n )\n\n # ========== Reduce Learning Rate Plateau logic ========\n if reduce_learning_rate_on_plateau > 0:\n self.reduce_learning_rate(\n progress_tracker,\n validation_output_feature_name,\n reduce_learning_rate_on_plateau,\n reduce_learning_rate_on_plateau_patience,\n 
reduce_learning_rate_on_plateau_rate,\n reduce_learning_rate_eval_metric,\n reduce_learning_rate_eval_split\n )\n progress_tracker.last_learning_rate_reduction = (\n progress_tracker.epoch -\n progress_tracker.last_learning_rate_reduction_epoch\n )\n if (\n progress_tracker.last_learning_rate_reduction > 0\n and\n progress_tracker.last_reduce_learning_rate_eval_metric_improvement > 0\n and\n not progress_tracker.num_reductions_learning_rate >= reduce_learning_rate_on_plateau\n ):\n logger.info(\n 'Last learning rate reduction '\n 'happened {} epoch{} ago, '\n 'improvement of {} {} {} '\n 'happened {} epoch{} ago'\n ''.format(\n progress_tracker.last_learning_rate_reduction,\n '' if progress_tracker.last_learning_rate_reduction == 1 else 's',\n validation_output_feature_name,\n reduce_learning_rate_eval_split,\n reduce_learning_rate_eval_metric,\n progress_tracker.last_reduce_learning_rate_eval_metric_improvement,\n '' if progress_tracker.last_reduce_learning_rate_eval_metric_improvement == 1 else 's',\n )\n )\n\n # ========== Increase Batch Size Plateau logic =========\n if increase_batch_size_on_plateau > 0:\n self.increase_batch_size(\n progress_tracker,\n validation_output_feature_name,\n increase_batch_size_on_plateau,\n increase_batch_size_on_plateau_patience,\n increase_batch_size_on_plateau_rate,\n increase_batch_size_on_plateau_max,\n increase_batch_size_eval_metric,\n increase_batch_size_eval_split\n )\n progress_tracker.last_increase_batch_size = (\n progress_tracker.epoch -\n progress_tracker.last_increase_batch_size_epoch\n )\n if (\n progress_tracker.last_increase_batch_size > 0\n and\n progress_tracker.last_increase_batch_size_eval_metric_improvement > 0\n and\n not progress_tracker.num_increases_batch_size >= increase_batch_size_on_plateau\n and\n not progress_tracker.batch_size >= increase_batch_size_on_plateau_max\n ):\n logger.info(\n 'Last batch size increase '\n 'happened {} epoch{} ago, '\n 'improvement of {} {} {} '\n 'happened {} epoch{} ago'.format(\n progress_tracker.last_increase_batch_size,\n '' if progress_tracker.last_increase_batch_size == 1 else 's',\n validation_output_feature_name,\n increase_batch_size_eval_split,\n increase_batch_size_eval_metric,\n progress_tracker.last_increase_batch_size_eval_metric_improvement,\n '' if progress_tracker.last_increase_batch_size_eval_metric_improvement == 1 else 's',\n )\n )\n\n # ========== Early Stop logic ==========\n if early_stop > 0:\n if progress_tracker.last_improvement >= early_stop:\n if is_on_master():\n logger.info(\n \"\\nEARLY STOPPING due to lack of \"\n \"validation improvement, \"\n \"it has been {0} epochs since last \"\n \"validation improvement\\n\".format(\n progress_tracker.epoch -\n progress_tracker.last_improvement_epoch\n )\n )\n should_break = True\n return should_break\n\n def set_epochs_to_1_or_quit(self, signum, frame):\n if not self._received_sigint:\n self._epochs = 1\n self._received_sigint = True\n logger.critical(\n '\\nReceived SIGINT, will finish this epoch and then conclude '\n 'the training'\n )\n logger.critical(\n 'Send another SIGINT to immediately interrupt the process'\n )\n else:\n logger.critical('\\nReceived a second SIGINT, will now quit')\n sys.exit(1)\n\n def quit_training(self, signum, frame):\n logger.critical('Received SIGQUIT, will kill training')\n sys.exit(1)\n\n def resume_training_progress_tracker(self, training_progress_tracker_path):\n if is_on_master():\n logger.info('Resuming training of model: {0}'.format(\n training_progress_tracker_path\n ))\n 
progress_tracker = ProgressTracker.load(training_progress_tracker_path)\n return progress_tracker\n\n def initialize_training_metrics(self, output_features):\n train_metrics = OrderedDict()\n vali_metrics = OrderedDict()\n test_metrics = OrderedDict()\n\n for output_feature_name, output_feature in output_features.items():\n train_metrics[output_feature_name] = OrderedDict()\n vali_metrics[output_feature_name] = OrderedDict()\n test_metrics[output_feature_name] = OrderedDict()\n for metric in output_feature.metric_functions:\n train_metrics[output_feature_name][metric] = []\n vali_metrics[output_feature_name][metric] = []\n test_metrics[output_feature_name][metric] = []\n\n for metrics in [train_metrics, vali_metrics, test_metrics]:\n metrics[COMBINED] = {LOSS: []}\n\n return train_metrics, vali_metrics, test_metrics\n\n def get_metrics_names(self, output_features):\n metrics_names = {}\n for output_feature_name, output_feature in output_features.items():\n for metric in output_feature.metric_functions:\n metrics = metrics_names.get(output_feature_name, [])\n metrics.append(metric)\n metrics_names[output_feature_name] = metrics\n metrics_names[COMBINED] = [LOSS]\n return metrics_names\n\n def resume_weights_and_optimzier(\n self,\n model_weights_progress_path,\n checkpoint\n ):\n checkpoint.restore(\n tf.train.latest_checkpoint(model_weights_progress_path)\n )\n\n def reduce_learning_rate(\n self,\n progress_tracker,\n validation_output_feature_name,\n reduce_learning_rate_on_plateau,\n reduce_learning_rate_on_plateau_patience,\n reduce_learning_rate_on_plateau_rate,\n reduce_learning_rate_eval_metric=LOSS,\n reduce_learning_rate_eval_split=TRAINING\n ):\n if not (progress_tracker.num_reductions_learning_rate >=\n reduce_learning_rate_on_plateau):\n\n if reduce_learning_rate_eval_split == TRAINING:\n split_metrics = progress_tracker.train_metrics\n elif reduce_learning_rate_eval_split == VALIDATION:\n split_metrics = progress_tracker.vali_metrics\n else: # if reduce_learning_rate_eval_split == TEST:\n split_metrics = progress_tracker.test_metrics\n\n validation_metric = reduce_learning_rate_eval_metric\n last_metric_value = split_metrics[validation_output_feature_name][\n validation_metric][-1]\n\n improved = get_improved_fun(validation_metric)\n is_improved = improved(\n last_metric_value,\n progress_tracker.best_reduce_learning_rate_eval_metric\n )\n if is_improved:\n # we update the best metric value and set it to the current one\n # and reset last improvement epoch count\n progress_tracker.best_reduce_learning_rate_eval_metric = last_metric_value\n progress_tracker.last_reduce_learning_rate_eval_metric_improvement = 0\n else:\n progress_tracker.last_reduce_learning_rate_eval_metric_improvement += 1\n if not is_improved and (\n # learning rate reduction happened more than N epochs ago\n progress_tracker.last_learning_rate_reduction >=\n reduce_learning_rate_on_plateau_patience\n and\n # we had no improvement of the evaluation metric since more than N epochs ago\n progress_tracker.last_reduce_learning_rate_eval_metric_improvement >=\n reduce_learning_rate_on_plateau_patience\n ):\n progress_tracker.learning_rate *= (\n reduce_learning_rate_on_plateau_rate\n )\n\n if is_on_master():\n logger.info(\n 'PLATEAU REACHED, reducing learning rate to {} '\n 'due to lack of improvement of {} {} {}'.format(\n progress_tracker.batch_size,\n validation_output_feature_name,\n reduce_learning_rate_eval_split,\n validation_metric,\n )\n )\n\n progress_tracker.last_learning_rate_reduction_epoch = 
progress_tracker.epoch\n progress_tracker.last_learning_rate_reduction = 0\n progress_tracker.num_reductions_learning_rate += 1\n\n if (progress_tracker.num_reductions_learning_rate >=\n reduce_learning_rate_on_plateau):\n if is_on_master():\n logger.info(\n 'Learning rate was already reduced '\n '{} times, not reducing it anymore'.format(\n progress_tracker.num_reductions_learning_rate\n )\n )\n\n def increase_batch_size(\n self,\n progress_tracker,\n validation_output_feature_name,\n increase_batch_size_on_plateau,\n increase_batch_size_on_plateau_patience,\n increase_batch_size_on_plateau_rate,\n increase_batch_size_on_plateau_max,\n increase_batch_size_eval_metric=LOSS,\n increase_batch_size_eval_split=TRAINING\n ):\n if (not progress_tracker.num_increases_batch_size >=\n increase_batch_size_on_plateau\n and not progress_tracker.batch_size ==\n increase_batch_size_on_plateau_max):\n\n if increase_batch_size_eval_split == TRAINING:\n split_metrics = progress_tracker.train_metrics\n elif increase_batch_size_eval_split == VALIDATION:\n split_metrics = progress_tracker.vali_metrics\n else: # if increase_batch_size_eval_split == TEST:\n split_metrics = progress_tracker.test_metrics\n\n validation_metric = increase_batch_size_eval_metric\n last_metric_value = split_metrics[validation_output_feature_name][\n validation_metric][-1]\n\n improved = get_improved_fun(validation_metric)\n is_improved = improved(\n last_metric_value,\n progress_tracker.best_increase_batch_size_eval_metric\n )\n if is_improved:\n # We update the best metric value and set it to the current one, and reset last improvement epoch count\n progress_tracker.best_increase_batch_size_eval_metric = last_metric_value\n progress_tracker.last_increase_batch_size_eval_metric_improvement = 0\n else:\n progress_tracker.last_increase_batch_size_eval_metric_improvement += 1\n if not is_improved and (\n # Batch size increase happened more than N epochs ago\n progress_tracker.last_increase_batch_size >=\n increase_batch_size_on_plateau_patience\n and\n # We had no improvement of the evaluation metric since more than N epochs ago\n progress_tracker.last_increase_batch_size_eval_metric_improvement >=\n increase_batch_size_on_plateau_patience\n ):\n progress_tracker.batch_size = min(\n (increase_batch_size_on_plateau_rate *\n progress_tracker.batch_size),\n increase_batch_size_on_plateau_max\n )\n\n if is_on_master():\n logger.info(\n 'PLATEAU REACHED, increasing batch size to {} '\n 'due to lack of improvement of {} {} {}'.format(\n progress_tracker.batch_size,\n validation_output_feature_name,\n increase_batch_size_eval_split,\n validation_metric,\n )\n )\n\n progress_tracker.last_increase_batch_size_epoch = progress_tracker.epoch\n progress_tracker.last_increase_batch_size = 0\n progress_tracker.num_increases_batch_size += 1\n\n if (progress_tracker.num_increases_batch_size >=\n increase_batch_size_on_plateau):\n if is_on_master():\n logger.info(\n 'Batch size was already increased '\n '{} times, not increasing it anymore'.format(\n progress_tracker.num_increases_batch_size\n )\n )\n elif (progress_tracker.batch_size >=\n increase_batch_size_on_plateau_max):\n if is_on_master():\n logger.info(\n 'Batch size was already increased '\n '{} times, currently it is {}, '\n 'the maximum allowed'.format(\n progress_tracker.num_increases_batch_size,\n progress_tracker.batch_size\n )\n )\n\n\nclass ProgressTracker:\n\n def __init__(\n self,\n epoch,\n batch_size,\n steps,\n last_improvement_epoch,\n last_learning_rate_reduction_epoch,\n 
last_increase_batch_size_epoch,\n best_eval_metric,\n best_reduce_learning_rate_eval_metric,\n last_reduce_learning_rate_eval_metric_improvement,\n best_increase_batch_size_eval_metric,\n last_increase_batch_size_eval_metric_improvement,\n learning_rate,\n num_reductions_learning_rate,\n num_increases_batch_size,\n train_metrics,\n vali_metrics,\n test_metrics,\n last_improvement,\n last_learning_rate_reduction,\n last_increase_batch_size\n ):\n self.batch_size = batch_size\n self.epoch = epoch\n self.steps = steps\n self.last_improvement_epoch = last_improvement_epoch\n self.last_improvement = last_improvement\n self.last_learning_rate_reduction_epoch = last_learning_rate_reduction_epoch\n self.last_learning_rate_reduction = last_learning_rate_reduction\n self.last_increase_batch_size_epoch = last_increase_batch_size_epoch\n self.last_increase_batch_size = last_increase_batch_size\n self.learning_rate = learning_rate\n self.best_eval_metric = best_eval_metric\n self.best_reduce_learning_rate_eval_metric = best_reduce_learning_rate_eval_metric\n self.last_reduce_learning_rate_eval_metric_improvement = last_reduce_learning_rate_eval_metric_improvement\n self.best_increase_batch_size_eval_metric = best_increase_batch_size_eval_metric\n self.last_increase_batch_size_eval_metric_improvement = last_increase_batch_size_eval_metric_improvement\n self.num_reductions_learning_rate = num_reductions_learning_rate\n self.num_increases_batch_size = num_increases_batch_size\n self.train_metrics = train_metrics\n self.vali_metrics = vali_metrics\n self.test_metrics = test_metrics\n\n def save(self, filepath):\n save_json(filepath, self.__dict__)\n\n @staticmethod\n def load(filepath):\n loaded = load_json(filepath)\n return ProgressTracker(**loaded)\n", "path": "ludwig/models/trainer.py"}]} |
gh_patches_debug_36 | rasdani/github-patches | git_diff | zulip__zulip-29386 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add clarification tooltip when settings can't be saved due to invalid Jitsi URL
In SETTINGS / ORGANIZATION SETTINGS > Other settings, we disable the "Save changes" button when the custom Jitsi URL is invalid. We should add a tooltip to the disabled button to explain why it is disabled: "Cannot save invalid Jitsi server URL."
<img width="809" alt="Screenshot 2023-11-02 at 10 31 14 PM" src="https://github.com/zulip/zulip/assets/2090066/b6bbb302-8b01-41ae-be98-1181497ecbf5">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/lib/capitalization.py`
Content:
```
1 import re
2 from typing import List, Match, Tuple
3
4 from bs4 import BeautifulSoup
5
6 # The phrases in this list will be ignored. The longest phrase is
7 # tried first; this removes the chance of smaller phrases changing
8 # the text before longer phrases are tried.
9 # The errors shown by `tools/check-capitalization` can be added to
10 # this list without any modification.
11 IGNORED_PHRASES = [
12 # Proper nouns and acronyms
13 r"API",
14 r"APNS",
15 r"Botserver",
16 r"Cookie Bot",
17 r"DevAuthBackend",
18 r"DSN",
19 r"Esc",
20 r"GCM",
21 r"GitHub",
22 r"Gravatar",
23 r"Help Center",
24 r"HTTP",
25 r"ID",
26 r"IDs",
27 r"Inbox",
28 r"IP",
29 r"JSON",
30 r"Kerberos",
31 r"LinkedIn",
32 r"LDAP",
33 r"Markdown",
34 r"OTP",
35 r"Pivotal",
36 r"Recent conversations",
37 r"DM",
38 r"DMs",
39 r"Slack",
40 r"Google",
41 r"Terms of Service",
42 r"Tuesday",
43 r"URL",
44 r"UUID",
45 r"Webathena",
46 r"WordPress",
47 r"Zephyr",
48 r"Zoom",
49 r"Zulip",
50 r"Zulip Server",
51 r"Zulip Account Security",
52 r"Zulip Security",
53 r"Zulip Cloud",
54 r"Zulip Cloud Standard",
55 r"Zulip Cloud Plus",
56 r"BigBlueButton",
57 # Code things
58 r"\.zuliprc",
59 # BeautifulSoup will remove <z-user> which is horribly confusing,
60 # so we need more of the sentence.
61 r"<z-user></z-user> will have the same role",
62 r"<z-user></z-user> will have the same properties",
63 # Things using "I"
64 r"I understand",
65 r"I'm",
66 r"I've",
67 r"Topics I participate in",
68 r"Topics I send a message to",
69 r"Topics I start",
70 # Specific short words
71 r"beta",
72 r"and",
73 r"bot",
74 r"e\.g\.",
75 r"enabled",
76 r"signups",
77 # Placeholders
78 r"keyword",
79 r"streamname",
80 r"user@example\.com",
81 r"example\.com",
82 r"acme",
83 # Fragments of larger strings
84 r"is …",
85 r"your subscriptions on your Streams page",
86 r"Add global time<br />Everyone sees global times in their own time zone\.",
87 r"user",
88 r"an unknown operating system",
89 r"Go to Settings",
90 r"find accounts for another email address",
91 # SPECIAL CASES
92 # Because topics usually are lower-case, this would look weird if it were capitalized
93 r"more topics",
94 # Used alone in a parenthetical where capitalized looks worse.
95 r"^deprecated$",
96 # We want the similar text in the Private Messages section to have the same capitalization.
97 r"more conversations",
98 r"back to streams",
99 # Capital 'i' looks weird in reminders popover
100 r"in 1 hour",
101 r"in 20 minutes",
102 r"in 3 hours",
103 # these are used as topics
104 r"^new streams$",
105 r"^stream events$",
106 # These are used as example short names (e.g. an uncapitalized context):
107 r"^marketing$",
108 r"^cookie$",
109 # Used to refer custom time limits
110 r"\bN\b",
111 # Capital c feels obtrusive in clear status option
112 r"clear",
113 r"group direct messages with \{recipient\}",
114 r"direct messages with \{recipient\}",
115 r"direct messages with yourself",
116 r"GIF",
117 # Emoji name placeholder
118 r"leafy green vegetable",
119 # Subdomain placeholder
120 r"your-organization-url",
121 # Used in invite modal
122 r"or",
123 # Used in GIPHY integration setting. GIFs Rating.
124 r"rated Y",
125 r"rated G",
126 r"rated PG",
127 r"rated PG13",
128 r"rated R",
129 # Used in GIPHY popover.
130 r"GIFs",
131 r"GIPHY",
132 # Used in our case studies
133 r"Technical University of Munich",
134 r"University of California San Diego",
135 # Used in stream creation form
136 r"email hidden",
137 # Use in compose box.
138 r"to send",
139 r"to add a new line",
140 # Used in showing Notification Bot read receipts message
141 "Notification Bot",
142 # Used in presence_enabled setting label
143 r"invisible mode off",
144 # Typeahead suggestions for "Pronouns" custom field type.
145 r"he/him",
146 r"she/her",
147 r"they/them",
148 # Used in message-move-time-limit setting label
149 r"does not apply to moderators and administrators",
150 # Used in message-delete-time-limit setting label
151 r"does not apply to administrators",
152 # Used as indicator with names for guest users.
153 r"guest",
154 # Used in pills for deactivated users.
155 r"deactivated",
156 # This is a reference to a setting/secret and should be lowercase.
157 r"zulip_org_id",
158 ]
159
160 # Sort regexes in descending order of their lengths. As a result, the
161 # longer phrases will be ignored first.
162 IGNORED_PHRASES.sort(key=len, reverse=True)
163
164 # Compile regexes to improve performance. This also extracts the
165 # text using BeautifulSoup and then removes extra whitespaces from
166 # it. This step enables us to add HTML in our regexes directly.
167 COMPILED_IGNORED_PHRASES = [
168 re.compile(" ".join(BeautifulSoup(regex, "lxml").text.split())) for regex in IGNORED_PHRASES
169 ]
170
171 SPLIT_BOUNDARY = "?.!" # Used to split string into sentences.
172 SPLIT_BOUNDARY_REGEX = re.compile(rf"[{SPLIT_BOUNDARY}]")
173
174 # Regexes which check capitalization in sentences.
175 DISALLOWED = [
176 r"^[a-z](?!\})", # Checks if the sentence starts with a lower case character.
177 r"^[A-Z][a-z]+[\sa-z0-9]+[A-Z]", # Checks if an upper case character exists
178 # after a lower case character when the first character is in upper case.
179 ]
180 DISALLOWED_REGEX = re.compile(r"|".join(DISALLOWED))
181
182 BANNED_WORDS = {
183 "realm": "The term realm should not appear in user-facing strings. Use organization instead.",
184 }
185
186
187 def get_safe_phrase(phrase: str) -> str:
188 """
189 Safe phrase is in lower case and doesn't contain characters which can
190 conflict with split boundaries. All conflicting characters are replaced
191 with low dash (_).
192 """
193 phrase = SPLIT_BOUNDARY_REGEX.sub("_", phrase)
194 return phrase.lower()
195
196
197 def replace_with_safe_phrase(matchobj: Match[str]) -> str:
198 """
199 The idea is to convert IGNORED_PHRASES into safe phrases, see
200 `get_safe_phrase()` function. The only exception is when the
201 IGNORED_PHRASE is at the start of the text or after a split
202 boundary; in this case, we change the first letter of the phrase
203 to upper case.
204 """
205 ignored_phrase = matchobj.group(0)
206 safe_string = get_safe_phrase(ignored_phrase)
207
208 start_index = matchobj.start()
209 complete_string = matchobj.string
210
211 is_string_start = start_index == 0
212 # We expect that there will be one space between split boundary
213 # and the next word.
214 punctuation = complete_string[max(start_index - 2, 0)]
215 is_after_split_boundary = punctuation in SPLIT_BOUNDARY
216 if is_string_start or is_after_split_boundary:
217 return safe_string.capitalize()
218
219 return safe_string
220
221
222 def get_safe_text(text: str) -> str:
223 """
224 This returns text which is rendered by BeautifulSoup and is in the
225 form that can be split easily and has all IGNORED_PHRASES processed.
226 """
227 soup = BeautifulSoup(text, "lxml")
228 text = " ".join(soup.text.split()) # Remove extra whitespaces.
229 for phrase_regex in COMPILED_IGNORED_PHRASES:
230 text = phrase_regex.sub(replace_with_safe_phrase, text)
231
232 return text
233
234
235 def is_capitalized(safe_text: str) -> bool:
236 sentences = SPLIT_BOUNDARY_REGEX.split(safe_text)
237 return not any(DISALLOWED_REGEX.search(sentence.strip()) for sentence in sentences)
238
239
240 def check_banned_words(text: str) -> List[str]:
241 lower_cased_text = text.lower()
242 errors = []
243 for word, reason in BANNED_WORDS.items():
244 if word in lower_cased_text:
245 # Hack: Should move this into BANNED_WORDS framework; for
246 # now, just hand-code the skips:
247 if (
248 "realm_name" in lower_cased_text
249 or "realm_uri" in lower_cased_text
250 or "remote_realm_host" in lower_cased_text
251 ):
252 continue
253 kwargs = dict(word=word, text=text, reason=reason)
254 msg = "{word} found in '{text}'. {reason}".format(**kwargs)
255 errors.append(msg)
256
257 return errors
258
259
260 def check_capitalization(strings: List[str]) -> Tuple[List[str], List[str], List[str]]:
261 errors = []
262 ignored = []
263 banned_word_errors = []
264 for text in strings:
265 text = " ".join(text.split()) # Remove extra whitespaces.
266 safe_text = get_safe_text(text)
267 has_ignored_phrase = text != safe_text
268 capitalized = is_capitalized(safe_text)
269 if not capitalized:
270 errors.append(text)
271 elif has_ignored_phrase:
272 ignored.append(text)
273
274 banned_word_errors.extend(check_banned_words(text))
275
276 return sorted(errors), sorted(ignored), sorted(banned_word_errors)
277
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/lib/capitalization.py b/tools/lib/capitalization.py
--- a/tools/lib/capitalization.py
+++ b/tools/lib/capitalization.py
@@ -27,6 +27,7 @@
r"Inbox",
r"IP",
r"JSON",
+ r"Jitsi",
r"Kerberos",
r"LinkedIn",
r"LDAP",
| {"golden_diff": "diff --git a/tools/lib/capitalization.py b/tools/lib/capitalization.py\n--- a/tools/lib/capitalization.py\n+++ b/tools/lib/capitalization.py\n@@ -27,6 +27,7 @@\n r\"Inbox\",\n r\"IP\",\n r\"JSON\",\n+ r\"Jitsi\",\n r\"Kerberos\",\n r\"LinkedIn\",\n r\"LDAP\",\n", "issue": "Add clarification tooltip when settings can't be saved due to invalid Jitsi URL\nIn SETTINGS / ORGANIZATION SETTINGS > Other settings, we disable the \"Save changes\" button when the custom Jitsi URL is invalid. We should add a tooltip do the disabled button to explain why it is disabled: \"Cannot save invalid Jitsi server URL.\"\r\n\r\n<img width=\"809\" alt=\"Screenshot 2023-11-02 at 10 31 14\u202fPM\" src=\"https://github.com/zulip/zulip/assets/2090066/b6bbb302-8b01-41ae-be98-1181497ecbf5\">\r\n\n", "before_files": [{"content": "import re\nfrom typing import List, Match, Tuple\n\nfrom bs4 import BeautifulSoup\n\n# The phrases in this list will be ignored. The longest phrase is\n# tried first; this removes the chance of smaller phrases changing\n# the text before longer phrases are tried.\n# The errors shown by `tools/check-capitalization` can be added to\n# this list without any modification.\nIGNORED_PHRASES = [\n # Proper nouns and acronyms\n r\"API\",\n r\"APNS\",\n r\"Botserver\",\n r\"Cookie Bot\",\n r\"DevAuthBackend\",\n r\"DSN\",\n r\"Esc\",\n r\"GCM\",\n r\"GitHub\",\n r\"Gravatar\",\n r\"Help Center\",\n r\"HTTP\",\n r\"ID\",\n r\"IDs\",\n r\"Inbox\",\n r\"IP\",\n r\"JSON\",\n r\"Kerberos\",\n r\"LinkedIn\",\n r\"LDAP\",\n r\"Markdown\",\n r\"OTP\",\n r\"Pivotal\",\n r\"Recent conversations\",\n r\"DM\",\n r\"DMs\",\n r\"Slack\",\n r\"Google\",\n r\"Terms of Service\",\n r\"Tuesday\",\n r\"URL\",\n r\"UUID\",\n r\"Webathena\",\n r\"WordPress\",\n r\"Zephyr\",\n r\"Zoom\",\n r\"Zulip\",\n r\"Zulip Server\",\n r\"Zulip Account Security\",\n r\"Zulip Security\",\n r\"Zulip Cloud\",\n r\"Zulip Cloud Standard\",\n r\"Zulip Cloud Plus\",\n r\"BigBlueButton\",\n # Code things\n r\"\\.zuliprc\",\n # BeautifulSoup will remove <z-user> which is horribly confusing,\n # so we need more of the sentence.\n r\"<z-user></z-user> will have the same role\",\n r\"<z-user></z-user> will have the same properties\",\n # Things using \"I\"\n r\"I understand\",\n r\"I'm\",\n r\"I've\",\n r\"Topics I participate in\",\n r\"Topics I send a message to\",\n r\"Topics I start\",\n # Specific short words\n r\"beta\",\n r\"and\",\n r\"bot\",\n r\"e\\.g\\.\",\n r\"enabled\",\n r\"signups\",\n # Placeholders\n r\"keyword\",\n r\"streamname\",\n r\"user@example\\.com\",\n r\"example\\.com\",\n r\"acme\",\n # Fragments of larger strings\n r\"is \u2026\",\n r\"your subscriptions on your Streams page\",\n r\"Add global time<br />Everyone sees global times in their own time zone\\.\",\n r\"user\",\n r\"an unknown operating system\",\n r\"Go to Settings\",\n r\"find accounts for another email address\",\n # SPECIAL CASES\n # Because topics usually are lower-case, this would look weird if it were capitalized\n r\"more topics\",\n # Used alone in a parenthetical where capitalized looks worse.\n r\"^deprecated$\",\n # We want the similar text in the Private Messages section to have the same capitalization.\n r\"more conversations\",\n r\"back to streams\",\n # Capital 'i' looks weird in reminders popover\n r\"in 1 hour\",\n r\"in 20 minutes\",\n r\"in 3 hours\",\n # these are used as topics\n r\"^new streams$\",\n r\"^stream events$\",\n # These are used as example short names (e.g. 
an uncapitalized context):\n r\"^marketing$\",\n r\"^cookie$\",\n # Used to refer custom time limits\n r\"\\bN\\b\",\n # Capital c feels obtrusive in clear status option\n r\"clear\",\n r\"group direct messages with \\{recipient\\}\",\n r\"direct messages with \\{recipient\\}\",\n r\"direct messages with yourself\",\n r\"GIF\",\n # Emoji name placeholder\n r\"leafy green vegetable\",\n # Subdomain placeholder\n r\"your-organization-url\",\n # Used in invite modal\n r\"or\",\n # Used in GIPHY integration setting. GIFs Rating.\n r\"rated Y\",\n r\"rated G\",\n r\"rated PG\",\n r\"rated PG13\",\n r\"rated R\",\n # Used in GIPHY popover.\n r\"GIFs\",\n r\"GIPHY\",\n # Used in our case studies\n r\"Technical University of Munich\",\n r\"University of California San Diego\",\n # Used in stream creation form\n r\"email hidden\",\n # Use in compose box.\n r\"to send\",\n r\"to add a new line\",\n # Used in showing Notification Bot read receipts message\n \"Notification Bot\",\n # Used in presence_enabled setting label\n r\"invisible mode off\",\n # Typeahead suggestions for \"Pronouns\" custom field type.\n r\"he/him\",\n r\"she/her\",\n r\"they/them\",\n # Used in message-move-time-limit setting label\n r\"does not apply to moderators and administrators\",\n # Used in message-delete-time-limit setting label\n r\"does not apply to administrators\",\n # Used as indicator with names for guest users.\n r\"guest\",\n # Used in pills for deactivated users.\n r\"deactivated\",\n # This is a reference to a setting/secret and should be lowercase.\n r\"zulip_org_id\",\n]\n\n# Sort regexes in descending order of their lengths. As a result, the\n# longer phrases will be ignored first.\nIGNORED_PHRASES.sort(key=len, reverse=True)\n\n# Compile regexes to improve performance. This also extracts the\n# text using BeautifulSoup and then removes extra whitespaces from\n# it. This step enables us to add HTML in our regexes directly.\nCOMPILED_IGNORED_PHRASES = [\n re.compile(\" \".join(BeautifulSoup(regex, \"lxml\").text.split())) for regex in IGNORED_PHRASES\n]\n\nSPLIT_BOUNDARY = \"?.!\" # Used to split string into sentences.\nSPLIT_BOUNDARY_REGEX = re.compile(rf\"[{SPLIT_BOUNDARY}]\")\n\n# Regexes which check capitalization in sentences.\nDISALLOWED = [\n r\"^[a-z](?!\\})\", # Checks if the sentence starts with a lower case character.\n r\"^[A-Z][a-z]+[\\sa-z0-9]+[A-Z]\", # Checks if an upper case character exists\n # after a lower case character when the first character is in upper case.\n]\nDISALLOWED_REGEX = re.compile(r\"|\".join(DISALLOWED))\n\nBANNED_WORDS = {\n \"realm\": \"The term realm should not appear in user-facing strings. Use organization instead.\",\n}\n\n\ndef get_safe_phrase(phrase: str) -> str:\n \"\"\"\n Safe phrase is in lower case and doesn't contain characters which can\n conflict with split boundaries. All conflicting characters are replaced\n with low dash (_).\n \"\"\"\n phrase = SPLIT_BOUNDARY_REGEX.sub(\"_\", phrase)\n return phrase.lower()\n\n\ndef replace_with_safe_phrase(matchobj: Match[str]) -> str:\n \"\"\"\n The idea is to convert IGNORED_PHRASES into safe phrases, see\n `get_safe_phrase()` function. 
The only exception is when the\n IGNORED_PHRASE is at the start of the text or after a split\n boundary; in this case, we change the first letter of the phrase\n to upper case.\n \"\"\"\n ignored_phrase = matchobj.group(0)\n safe_string = get_safe_phrase(ignored_phrase)\n\n start_index = matchobj.start()\n complete_string = matchobj.string\n\n is_string_start = start_index == 0\n # We expect that there will be one space between split boundary\n # and the next word.\n punctuation = complete_string[max(start_index - 2, 0)]\n is_after_split_boundary = punctuation in SPLIT_BOUNDARY\n if is_string_start or is_after_split_boundary:\n return safe_string.capitalize()\n\n return safe_string\n\n\ndef get_safe_text(text: str) -> str:\n \"\"\"\n This returns text which is rendered by BeautifulSoup and is in the\n form that can be split easily and has all IGNORED_PHRASES processed.\n \"\"\"\n soup = BeautifulSoup(text, \"lxml\")\n text = \" \".join(soup.text.split()) # Remove extra whitespaces.\n for phrase_regex in COMPILED_IGNORED_PHRASES:\n text = phrase_regex.sub(replace_with_safe_phrase, text)\n\n return text\n\n\ndef is_capitalized(safe_text: str) -> bool:\n sentences = SPLIT_BOUNDARY_REGEX.split(safe_text)\n return not any(DISALLOWED_REGEX.search(sentence.strip()) for sentence in sentences)\n\n\ndef check_banned_words(text: str) -> List[str]:\n lower_cased_text = text.lower()\n errors = []\n for word, reason in BANNED_WORDS.items():\n if word in lower_cased_text:\n # Hack: Should move this into BANNED_WORDS framework; for\n # now, just hand-code the skips:\n if (\n \"realm_name\" in lower_cased_text\n or \"realm_uri\" in lower_cased_text\n or \"remote_realm_host\" in lower_cased_text\n ):\n continue\n kwargs = dict(word=word, text=text, reason=reason)\n msg = \"{word} found in '{text}'. {reason}\".format(**kwargs)\n errors.append(msg)\n\n return errors\n\n\ndef check_capitalization(strings: List[str]) -> Tuple[List[str], List[str], List[str]]:\n errors = []\n ignored = []\n banned_word_errors = []\n for text in strings:\n text = \" \".join(text.split()) # Remove extra whitespaces.\n safe_text = get_safe_text(text)\n has_ignored_phrase = text != safe_text\n capitalized = is_capitalized(safe_text)\n if not capitalized:\n errors.append(text)\n elif has_ignored_phrase:\n ignored.append(text)\n\n banned_word_errors.extend(check_banned_words(text))\n\n return sorted(errors), sorted(ignored), sorted(banned_word_errors)\n", "path": "tools/lib/capitalization.py"}], "after_files": [{"content": "import re\nfrom typing import List, Match, Tuple\n\nfrom bs4 import BeautifulSoup\n\n# The phrases in this list will be ignored. 
The longest phrase is\n# tried first; this removes the chance of smaller phrases changing\n# the text before longer phrases are tried.\n# The errors shown by `tools/check-capitalization` can be added to\n# this list without any modification.\nIGNORED_PHRASES = [\n # Proper nouns and acronyms\n r\"API\",\n r\"APNS\",\n r\"Botserver\",\n r\"Cookie Bot\",\n r\"DevAuthBackend\",\n r\"DSN\",\n r\"Esc\",\n r\"GCM\",\n r\"GitHub\",\n r\"Gravatar\",\n r\"Help Center\",\n r\"HTTP\",\n r\"ID\",\n r\"IDs\",\n r\"Inbox\",\n r\"IP\",\n r\"JSON\",\n r\"Jitsi\",\n r\"Kerberos\",\n r\"LinkedIn\",\n r\"LDAP\",\n r\"Markdown\",\n r\"OTP\",\n r\"Pivotal\",\n r\"Recent conversations\",\n r\"DM\",\n r\"DMs\",\n r\"Slack\",\n r\"Google\",\n r\"Terms of Service\",\n r\"Tuesday\",\n r\"URL\",\n r\"UUID\",\n r\"Webathena\",\n r\"WordPress\",\n r\"Zephyr\",\n r\"Zoom\",\n r\"Zulip\",\n r\"Zulip Server\",\n r\"Zulip Account Security\",\n r\"Zulip Security\",\n r\"Zulip Cloud\",\n r\"Zulip Cloud Standard\",\n r\"Zulip Cloud Plus\",\n r\"BigBlueButton\",\n # Code things\n r\"\\.zuliprc\",\n # BeautifulSoup will remove <z-user> which is horribly confusing,\n # so we need more of the sentence.\n r\"<z-user></z-user> will have the same role\",\n r\"<z-user></z-user> will have the same properties\",\n # Things using \"I\"\n r\"I understand\",\n r\"I'm\",\n r\"I've\",\n r\"Topics I participate in\",\n r\"Topics I send a message to\",\n r\"Topics I start\",\n # Specific short words\n r\"beta\",\n r\"and\",\n r\"bot\",\n r\"e\\.g\\.\",\n r\"enabled\",\n r\"signups\",\n # Placeholders\n r\"keyword\",\n r\"streamname\",\n r\"user@example\\.com\",\n r\"example\\.com\",\n r\"acme\",\n # Fragments of larger strings\n r\"is \u2026\",\n r\"your subscriptions on your Streams page\",\n r\"Add global time<br />Everyone sees global times in their own time zone\\.\",\n r\"user\",\n r\"an unknown operating system\",\n r\"Go to Settings\",\n r\"find accounts for another email address\",\n # SPECIAL CASES\n # Because topics usually are lower-case, this would look weird if it were capitalized\n r\"more topics\",\n # Used alone in a parenthetical where capitalized looks worse.\n r\"^deprecated$\",\n # We want the similar text in the Private Messages section to have the same capitalization.\n r\"more conversations\",\n r\"back to streams\",\n # Capital 'i' looks weird in reminders popover\n r\"in 1 hour\",\n r\"in 20 minutes\",\n r\"in 3 hours\",\n # these are used as topics\n r\"^new streams$\",\n r\"^stream events$\",\n # These are used as example short names (e.g. an uncapitalized context):\n r\"^marketing$\",\n r\"^cookie$\",\n # Used to refer custom time limits\n r\"\\bN\\b\",\n # Capital c feels obtrusive in clear status option\n r\"clear\",\n r\"group direct messages with \\{recipient\\}\",\n r\"direct messages with \\{recipient\\}\",\n r\"direct messages with yourself\",\n r\"GIF\",\n # Emoji name placeholder\n r\"leafy green vegetable\",\n # Subdomain placeholder\n r\"your-organization-url\",\n # Used in invite modal\n r\"or\",\n # Used in GIPHY integration setting. 
GIFs Rating.\n r\"rated Y\",\n r\"rated G\",\n r\"rated PG\",\n r\"rated PG13\",\n r\"rated R\",\n # Used in GIPHY popover.\n r\"GIFs\",\n r\"GIPHY\",\n # Used in our case studies\n r\"Technical University of Munich\",\n r\"University of California San Diego\",\n # Used in stream creation form\n r\"email hidden\",\n # Use in compose box.\n r\"to send\",\n r\"to add a new line\",\n # Used in showing Notification Bot read receipts message\n \"Notification Bot\",\n # Used in presence_enabled setting label\n r\"invisible mode off\",\n # Typeahead suggestions for \"Pronouns\" custom field type.\n r\"he/him\",\n r\"she/her\",\n r\"they/them\",\n # Used in message-move-time-limit setting label\n r\"does not apply to moderators and administrators\",\n # Used in message-delete-time-limit setting label\n r\"does not apply to administrators\",\n # Used as indicator with names for guest users.\n r\"guest\",\n # Used in pills for deactivated users.\n r\"deactivated\",\n # This is a reference to a setting/secret and should be lowercase.\n r\"zulip_org_id\",\n]\n\n# Sort regexes in descending order of their lengths. As a result, the\n# longer phrases will be ignored first.\nIGNORED_PHRASES.sort(key=len, reverse=True)\n\n# Compile regexes to improve performance. This also extracts the\n# text using BeautifulSoup and then removes extra whitespaces from\n# it. This step enables us to add HTML in our regexes directly.\nCOMPILED_IGNORED_PHRASES = [\n re.compile(\" \".join(BeautifulSoup(regex, \"lxml\").text.split())) for regex in IGNORED_PHRASES\n]\n\nSPLIT_BOUNDARY = \"?.!\" # Used to split string into sentences.\nSPLIT_BOUNDARY_REGEX = re.compile(rf\"[{SPLIT_BOUNDARY}]\")\n\n# Regexes which check capitalization in sentences.\nDISALLOWED = [\n r\"^[a-z](?!\\})\", # Checks if the sentence starts with a lower case character.\n r\"^[A-Z][a-z]+[\\sa-z0-9]+[A-Z]\", # Checks if an upper case character exists\n # after a lower case character when the first character is in upper case.\n]\nDISALLOWED_REGEX = re.compile(r\"|\".join(DISALLOWED))\n\nBANNED_WORDS = {\n \"realm\": \"The term realm should not appear in user-facing strings. Use organization instead.\",\n}\n\n\ndef get_safe_phrase(phrase: str) -> str:\n \"\"\"\n Safe phrase is in lower case and doesn't contain characters which can\n conflict with split boundaries. All conflicting characters are replaced\n with low dash (_).\n \"\"\"\n phrase = SPLIT_BOUNDARY_REGEX.sub(\"_\", phrase)\n return phrase.lower()\n\n\ndef replace_with_safe_phrase(matchobj: Match[str]) -> str:\n \"\"\"\n The idea is to convert IGNORED_PHRASES into safe phrases, see\n `get_safe_phrase()` function. 
The only exception is when the\n IGNORED_PHRASE is at the start of the text or after a split\n boundary; in this case, we change the first letter of the phrase\n to upper case.\n \"\"\"\n ignored_phrase = matchobj.group(0)\n safe_string = get_safe_phrase(ignored_phrase)\n\n start_index = matchobj.start()\n complete_string = matchobj.string\n\n is_string_start = start_index == 0\n # We expect that there will be one space between split boundary\n # and the next word.\n punctuation = complete_string[max(start_index - 2, 0)]\n is_after_split_boundary = punctuation in SPLIT_BOUNDARY\n if is_string_start or is_after_split_boundary:\n return safe_string.capitalize()\n\n return safe_string\n\n\ndef get_safe_text(text: str) -> str:\n \"\"\"\n This returns text which is rendered by BeautifulSoup and is in the\n form that can be split easily and has all IGNORED_PHRASES processed.\n \"\"\"\n soup = BeautifulSoup(text, \"lxml\")\n text = \" \".join(soup.text.split()) # Remove extra whitespaces.\n for phrase_regex in COMPILED_IGNORED_PHRASES:\n text = phrase_regex.sub(replace_with_safe_phrase, text)\n\n return text\n\n\ndef is_capitalized(safe_text: str) -> bool:\n sentences = SPLIT_BOUNDARY_REGEX.split(safe_text)\n return not any(DISALLOWED_REGEX.search(sentence.strip()) for sentence in sentences)\n\n\ndef check_banned_words(text: str) -> List[str]:\n lower_cased_text = text.lower()\n errors = []\n for word, reason in BANNED_WORDS.items():\n if word in lower_cased_text:\n # Hack: Should move this into BANNED_WORDS framework; for\n # now, just hand-code the skips:\n if (\n \"realm_name\" in lower_cased_text\n or \"realm_uri\" in lower_cased_text\n or \"remote_realm_host\" in lower_cased_text\n ):\n continue\n kwargs = dict(word=word, text=text, reason=reason)\n msg = \"{word} found in '{text}'. {reason}\".format(**kwargs)\n errors.append(msg)\n\n return errors\n\n\ndef check_capitalization(strings: List[str]) -> Tuple[List[str], List[str], List[str]]:\n errors = []\n ignored = []\n banned_word_errors = []\n for text in strings:\n text = \" \".join(text.split()) # Remove extra whitespaces.\n safe_text = get_safe_text(text)\n has_ignored_phrase = text != safe_text\n capitalized = is_capitalized(safe_text)\n if not capitalized:\n errors.append(text)\n elif has_ignored_phrase:\n ignored.append(text)\n\n banned_word_errors.extend(check_banned_words(text))\n\n return sorted(errors), sorted(ignored), sorted(banned_word_errors)\n", "path": "tools/lib/capitalization.py"}]} |
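Note on the record above: the golden diff does not add the tooltip itself; it only whitelists "Jitsi" in `tools/lib/capitalization.py`, presumably so that the new user-facing string "Cannot save invalid Jitsi server URL." passes `tools/check-capitalization`. A minimal sketch of that behaviour (the import path and call are assumed for illustration, not taken from the record):

```python
# Hypothetical usage sketch of the linter helper shown in the record.
from tools.lib.capitalization import check_capitalization  # module path assumed

errors, ignored, banned = check_capitalization(["Cannot save invalid Jitsi server URL."])
# Without r"Jitsi" in IGNORED_PHRASES, the capital "J" after lower-case words matches
# the DISALLOWED pattern and the string is reported in `errors`; with the new ignore
# entry the phrase is lower-cased first and the string lands in `ignored` instead,
# so the capitalization check passes.
```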
gh_patches_debug_37 | rasdani/github-patches | git_diff | liqd__adhocracy4-58 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Extend linting to javascript and jsx files
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `adhocracy4/reports/emails.py`
Content:
```
1 from django.contrib.auth import get_user_model
2 from django.core import urlresolvers
3
4 from adhocracy4 import emails
5
6 User = get_user_model()
7
8
9 class ReportModeratorEmail(emails.ModeratorNotification):
10 template_name = 'a4reports/emails/report_moderators'
11
12
13 class ReportCreatorEmail(emails.Email):
14 template_name = 'a4reports/emails/report_creator'
15
16 def get_receivers(self):
17 return [self.object.content_object.creator]
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/adhocracy4/reports/emails.py b/adhocracy4/reports/emails.py
--- a/adhocracy4/reports/emails.py
+++ b/adhocracy4/reports/emails.py
@@ -1,5 +1,4 @@
from django.contrib.auth import get_user_model
-from django.core import urlresolvers
from adhocracy4 import emails
| {"golden_diff": "diff --git a/adhocracy4/reports/emails.py b/adhocracy4/reports/emails.py\n--- a/adhocracy4/reports/emails.py\n+++ b/adhocracy4/reports/emails.py\n@@ -1,5 +1,4 @@\n from django.contrib.auth import get_user_model\n-from django.core import urlresolvers\n \n from adhocracy4 import emails\n", "issue": "Extend linting to javascript and jsx files\n\n", "before_files": [{"content": "from django.contrib.auth import get_user_model\nfrom django.core import urlresolvers\n\nfrom adhocracy4 import emails\n\nUser = get_user_model()\n\n\nclass ReportModeratorEmail(emails.ModeratorNotification):\n template_name = 'a4reports/emails/report_moderators'\n\n\nclass ReportCreatorEmail(emails.Email):\n template_name = 'a4reports/emails/report_creator'\n\n def get_receivers(self):\n return [self.object.content_object.creator]\n", "path": "adhocracy4/reports/emails.py"}], "after_files": [{"content": "from django.contrib.auth import get_user_model\n\nfrom adhocracy4 import emails\n\nUser = get_user_model()\n\n\nclass ReportModeratorEmail(emails.ModeratorNotification):\n template_name = 'a4reports/emails/report_moderators'\n\n\nclass ReportCreatorEmail(emails.Email):\n template_name = 'a4reports/emails/report_creator'\n\n def get_receivers(self):\n return [self.object.content_object.creator]\n", "path": "adhocracy4/reports/emails.py"}]} |
gh_patches_debug_38 | rasdani/github-patches | git_diff | encode__httpx-362 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Releasing 0.7.3
Hi @encode/httpx-maintainers!
It’s been 21 days since 0.7.2 was released, and we’ve got [a bunch of features](https://github.com/encode/httpx/compare/0.7.2...HEAD) ready for 0.7.3 already, eg:
- Digest auth
- SSLKEYLOGFILE
- Response.elapsed
- A host of bug fixes
So regardless of what gets merged until then I think it’s time to release the next version. :)
As suggested by @sethmlarson I-cant-remember-where I’d like to take on this release. I’ll probably take the opportunity to document the release process as well - #313. 👍
Probably will do tonight.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/__version__.py`
Content:
```
1 __title__ = "httpx"
2 __description__ = "A next generation HTTP client, for Python 3."
3 __version__ = "0.7.2"
4
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/httpx/__version__.py b/httpx/__version__.py
--- a/httpx/__version__.py
+++ b/httpx/__version__.py
@@ -1,3 +1,3 @@
__title__ = "httpx"
__description__ = "A next generation HTTP client, for Python 3."
-__version__ = "0.7.2"
+__version__ = "0.7.3"
| {"golden_diff": "diff --git a/httpx/__version__.py b/httpx/__version__.py\n--- a/httpx/__version__.py\n+++ b/httpx/__version__.py\n@@ -1,3 +1,3 @@\n __title__ = \"httpx\"\n __description__ = \"A next generation HTTP client, for Python 3.\"\n-__version__ = \"0.7.2\"\n+__version__ = \"0.7.3\"\n", "issue": "Releasing 0.7.3\nHi @encode/httpx-maintainers!\r\n\r\nIt\u2019s been 21 days since 0.7.2 was released, and we\u2019ve got [a bunch of features](https://github.com/encode/httpx/compare/0.7.2...HEAD) ready for 0.7.3 already, eg:\r\n\r\n- Digest auth\r\n- SSLKEYLOGFILE\r\n- Response.elapsed\r\n- A host of bug fixes\r\n\r\nSo regardless of what gets merged until then I think it\u2019s time to release the next version. :)\r\n\r\nAs suggested by @sethmlarson I-cant-remember-where I\u2019d like to take on this release. I\u2019ll probably take the opportunity to document the release process as well - #313. \ud83d\udc4d\r\n\r\nProbably will do tonight.\r\n\r\n\n", "before_files": [{"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP client, for Python 3.\"\n__version__ = \"0.7.2\"\n", "path": "httpx/__version__.py"}], "after_files": [{"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP client, for Python 3.\"\n__version__ = \"0.7.3\"\n", "path": "httpx/__version__.py"}]} |
gh_patches_debug_39 | rasdani/github-patches | git_diff | wagtail__wagtail-8800 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
patternlibrary_override_tags breaks all non-development installations
#8665 added a `patternlibrary_override_tags` tag library to wagtail.admin, which depends on the django-pattern-library package listed in our testing_extras dependencies. However, this tag library will be loaded on all environments that have `wagtail.admin` in their INSTALLED_APPS, so any Wagtail installation that doesn't include testing_extras is currently broken.
As a non-frontend dev, I don't know what this tag library is for, and there's no documentation for me to find out. Deleting it and uninstalling django-pattern-library doesn't break any tests, so it seems to me that it doesn't really belong in the testing dependencies. (If it really is a testing dependency, I'd suggest that moving patternlibrary_override_tags.py into one of the test apps in wagtail/test/ would let it do whatever it's meant to do without breaking non-testing environments.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/admin/templatetags/patternlibrary_override_tags.py`
Content:
```
1 from pattern_library.monkey_utils import override_tag
2
3 from wagtail.admin.templatetags.wagtailadmin_tags import register
4
5 override_tag(register, name="test_page_is_public")
6
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/admin/templatetags/patternlibrary_override_tags.py b/wagtail/admin/templatetags/patternlibrary_override_tags.py
deleted file mode 100644
--- a/wagtail/admin/templatetags/patternlibrary_override_tags.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from pattern_library.monkey_utils import override_tag
-
-from wagtail.admin.templatetags.wagtailadmin_tags import register
-
-override_tag(register, name="test_page_is_public")
| {"golden_diff": "diff --git a/wagtail/admin/templatetags/patternlibrary_override_tags.py b/wagtail/admin/templatetags/patternlibrary_override_tags.py\ndeleted file mode 100644\n--- a/wagtail/admin/templatetags/patternlibrary_override_tags.py\n+++ /dev/null\n@@ -1,5 +0,0 @@\n-from pattern_library.monkey_utils import override_tag\n-\n-from wagtail.admin.templatetags.wagtailadmin_tags import register\n-\n-override_tag(register, name=\"test_page_is_public\")\n", "issue": "patternlibrary_override_tags breaks all non-development installations\n#8665 added a `patternlibrary_override_tags` tag library to wagtail.admin, which depends on the django-pattern-library package listed in our testing_extras dependencies. However, this tag library will be loaded on all environments that have `wagtail.admin` in their INSTALLED_APPS, so any Wagtail installation that doesn't include testing_extras is currently broken.\r\n\r\nAs a non-frontend dev, I don't know what this tag library is for, and there's no documentation for me to find out. Deleting it and uninstalling django-pattern-library doesn't break any tests, so it seems to me that it doesn't really belong in the testing dependencies. (If it really is a testing dependency, I'd suggest that moving patternlibrary_override_tags.py into one of the test apps in wagtail/test/ would let it do whatever it's meant to do without breaking non-testing environments.)\n", "before_files": [{"content": "from pattern_library.monkey_utils import override_tag\n\nfrom wagtail.admin.templatetags.wagtailadmin_tags import register\n\noverride_tag(register, name=\"test_page_is_public\")\n", "path": "wagtail/admin/templatetags/patternlibrary_override_tags.py"}], "after_files": [{"content": null, "path": "wagtail/admin/templatetags/patternlibrary_override_tags.py"}]} |
gh_patches_debug_40 | rasdani/github-patches | git_diff | web2py__web2py-2127 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
After updating from 2.18.1 to 2.18.2 the session.flash messages all show as b'<message>'
**Describe the bug**
After updating from 2.18.1 to 2.18.2 the session.flash messages all show as b'<message>'
**To Reproduce**
Just log in on any app that shows session.flash. The 'Hello World' message from the welcome app uses response.flash and not session.flash and thus it does not show the problem.
**Desktop (please complete the following information):**
Windows 7 Pro x64 w/SP1 + all upgrades
Firefox 65.0.1 x64
Python 3.7.1 x86
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gluon/languages.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 | This file is part of the web2py Web Framework
6 | Copyrighted by Massimo Di Pierro <[email protected]>
7 | License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)
8 | Plural subsystem is created by Vladyslav Kozlovskyy (Ukraine) <[email protected]>
9
10 Translation system
11 --------------------------------------------
12 """
13
14 import os
15 import re
16 import sys
17 import pkgutil
18 import logging
19 from cgi import escape
20 from threading import RLock
21
22 from pydal._compat import copyreg, PY2, maketrans, iterkeys, unicodeT, to_unicode, to_bytes, iteritems, to_native, pjoin
23 from pydal.contrib.portalocker import read_locked, LockedFile
24
25 from yatl.sanitizer import xmlescape
26
27 from gluon.fileutils import listdir
28 from gluon.cfs import getcfs
29 from gluon.html import XML, xmlescape
30 from gluon.contrib.markmin.markmin2html import render, markmin_escape
31
32 __all__ = ['translator', 'findT', 'update_all_languages']
33
34 ostat = os.stat
35 oslistdir = os.listdir
36 pdirname = os.path.dirname
37 isdir = os.path.isdir
38
39 DEFAULT_LANGUAGE = 'en'
40 DEFAULT_LANGUAGE_NAME = 'English'
41
42 # DEFAULT PLURAL-FORMS RULES:
43 # language doesn't use plural forms
44 DEFAULT_NPLURALS = 1
45 # only one singular/plural form is used
46 DEFAULT_GET_PLURAL_ID = lambda n: 0
47 # word is unchangeable
48 DEFAULT_CONSTRUCT_PLURAL_FORM = lambda word, plural_id: word
49
50 if PY2:
51 NUMBERS = (int, long, float)
52 from gluon.utf8 import Utf8
53 else:
54 NUMBERS = (int, float)
55 Utf8 = str
56
57 # pattern to find T(blah blah blah) expressions
58 PY_STRING_LITERAL_RE = r'(?<=[^\w]T\()(?P<name>'\
59 + r"[uU]?[rR]?(?:'''(?:[^']|'{1,2}(?!'))*''')|"\
60 + r"(?:'(?:[^'\\]|\\.)*')|" + r'(?:"""(?:[^"]|"{1,2}(?!"))*""")|'\
61 + r'(?:"(?:[^"\\]|\\.)*"))'
62
63 PY_M_STRING_LITERAL_RE = r'(?<=[^\w]T\.M\()(?P<name>'\
64 + r"[uU]?[rR]?(?:'''(?:[^']|'{1,2}(?!'))*''')|"\
65 + r"(?:'(?:[^'\\]|\\.)*')|" + r'(?:"""(?:[^"]|"{1,2}(?!"))*""")|'\
66 + r'(?:"(?:[^"\\]|\\.)*"))'
67
68 regex_translate = re.compile(PY_STRING_LITERAL_RE, re.DOTALL)
69 regex_translate_m = re.compile(PY_M_STRING_LITERAL_RE, re.DOTALL)
70 regex_param = re.compile(r'{(?P<s>.+?)}')
71
72 # pattern for a valid accept_language
73 regex_language = \
74 re.compile('([a-z]{2,3}(?:\-[a-z]{2})?(?:\-[a-z]{2})?)(?:[,;]|$)')
75 regex_langfile = re.compile('^[a-z]{2,3}(-[a-z]{2})?\.py$')
76 regex_backslash = re.compile(r"\\([\\{}%])")
77 regex_plural = re.compile('%({.+?})')
78 regex_plural_dict = re.compile('^{(?P<w>[^()[\]][^()[\]]*?)\((?P<n>[^()\[\]]+)\)}$') # %%{word(varname or number)}
79 regex_plural_tuple = re.compile(
80 '^{(?P<w>[^[\]()]+)(?:\[(?P<i>\d+)\])?}$') # %%{word[index]} or %%{word}
81 regex_plural_file = re.compile('^plural-[a-zA-Z]{2}(-[a-zA-Z]{2})?\.py$')
82
83
84 def is_writable():
85 """ returns True if and only if the filesystem is writable """
86 from gluon.settings import global_settings
87 return not global_settings.web2py_runtime_gae
88
89
90 def safe_eval(text):
91 if text.strip():
92 try:
93 import ast
94 return ast.literal_eval(text)
95 except ImportError:
96 return eval(text, {}, {})
97 return None
98
99 # used as default filter in translator.M()
100
101
102 def markmin(s):
103 def markmin_aux(m):
104 return '{%s}' % markmin_escape(m.group('s'))
105 return render(regex_param.sub(markmin_aux, s),
106 sep='br', autolinks=None, id_prefix='')
107
108 # UTF8 helper functions
109
110
111 def upper_fun(s):
112 return to_bytes(to_unicode(s).upper())
113
114
115 def title_fun(s):
116 return to_bytes(to_unicode(s).title())
117
118
119 def cap_fun(s):
120 return to_bytes(to_unicode(s).capitalize())
121
122
123 ttab_in = maketrans("\\%{}", '\x1c\x1d\x1e\x1f')
124 ttab_out = maketrans('\x1c\x1d\x1e\x1f', "\\%{}")
125
126 # cache of translated messages:
127 # global_language_cache:
128 # { 'languages/xx.py':
129 # ( {"def-message": "xx-message",
130 # ...
131 # "def-message": "xx-message"}, lock_object )
132 # 'languages/yy.py': ( {dict}, lock_object )
133 # ...
134 # }
135
136 global_language_cache = {}
137
138
139 def get_from_cache(cache, val, fun):
140 lang_dict, lock = cache
141 lock.acquire()
142 try:
143 result = lang_dict.get(val)
144 finally:
145 lock.release()
146 if result:
147 return result
148 lock.acquire()
149 try:
150 result = lang_dict.setdefault(val, fun())
151 finally:
152 lock.release()
153 return result
154
155
156 def clear_cache(filename):
157 cache = global_language_cache.setdefault(
158 filename, ({}, RLock()))
159 lang_dict, lock = cache
160 lock.acquire()
161 try:
162 lang_dict.clear()
163 finally:
164 lock.release()
165
166
167 def read_dict_aux(filename):
168 lang_text = read_locked(filename).replace(b'\r\n', b'\n')
169 clear_cache(filename)
170 try:
171 return safe_eval(to_native(lang_text)) or {}
172 except Exception:
173 e = sys.exc_info()[1]
174 status = 'Syntax error in %s (%s)' % (filename, e)
175 logging.error(status)
176 return {'__corrupted__': status}
177
178
179 def read_dict(filename):
180 """ Returns dictionary with translation messages
181 """
182 return getcfs('lang:' + filename, filename,
183 lambda: read_dict_aux(filename))
184
185
186 def read_possible_plural_rules():
187 """
188 Creates list of all possible plural rules files
189 The result is cached in PLURAL_RULES dictionary to increase speed
190 """
191 plurals = {}
192 try:
193 import gluon.contrib.plural_rules as package
194 for importer, modname, ispkg in pkgutil.iter_modules(package.__path__):
195 if len(modname) == 2:
196 module = __import__(package.__name__ + '.' + modname,
197 fromlist=[modname])
198 lang = modname
199 pname = modname + '.py'
200 nplurals = getattr(module, 'nplurals', DEFAULT_NPLURALS)
201 get_plural_id = getattr(
202 module, 'get_plural_id',
203 DEFAULT_GET_PLURAL_ID)
204 construct_plural_form = getattr(
205 module, 'construct_plural_form',
206 DEFAULT_CONSTRUCT_PLURAL_FORM)
207 plurals[lang] = (lang, nplurals, get_plural_id,
208 construct_plural_form)
209 except ImportError:
210 e = sys.exc_info()[1]
211 logging.warn('Unable to import plural rules: %s' % e)
212 return plurals
213
214 PLURAL_RULES = read_possible_plural_rules()
215
216
217 def read_possible_languages_aux(langdir):
218 def get_lang_struct(lang, langcode, langname, langfile_mtime):
219 if lang == 'default':
220 real_lang = langcode.lower()
221 else:
222 real_lang = lang
223 (prules_langcode,
224 nplurals,
225 get_plural_id,
226 construct_plural_form
227 ) = PLURAL_RULES.get(real_lang[:2], ('default',
228 DEFAULT_NPLURALS,
229 DEFAULT_GET_PLURAL_ID,
230 DEFAULT_CONSTRUCT_PLURAL_FORM))
231 if prules_langcode != 'default':
232 (pluraldict_fname,
233 pluraldict_mtime) = plurals.get(real_lang,
234 plurals.get(real_lang[:2],
235 ('plural-%s.py' % real_lang, 0)))
236 else:
237 pluraldict_fname = None
238 pluraldict_mtime = 0
239 return (langcode, # language code from !langcode!
240 langname,
241 # language name in national spelling from !langname!
242 langfile_mtime, # m_time of language file
243 pluraldict_fname, # name of plural dictionary file or None (when default.py is not exist)
244 pluraldict_mtime, # m_time of plural dictionary file or 0 if file is not exist
245 prules_langcode, # code of plural rules language or 'default'
246 nplurals, # nplurals for current language
247 get_plural_id, # get_plural_id() for current language
248 construct_plural_form) # construct_plural_form() for current language
249
250 plurals = {}
251 flist = oslistdir(langdir) if isdir(langdir) else []
252
253 # scan languages directory for plural dict files:
254 for pname in flist:
255 if regex_plural_file.match(pname):
256 plurals[pname[7:-3]] = (pname,
257 ostat(pjoin(langdir, pname)).st_mtime)
258 langs = {}
259 # scan languages directory for langfiles:
260 for fname in flist:
261 if regex_langfile.match(fname) or fname == 'default.py':
262 fname_with_path = pjoin(langdir, fname)
263 d = read_dict(fname_with_path)
264 lang = fname[:-3]
265 langcode = d.get('!langcode!', lang if lang != 'default'
266 else DEFAULT_LANGUAGE)
267 langname = d.get('!langname!', langcode)
268 langfile_mtime = ostat(fname_with_path).st_mtime
269 langs[lang] = get_lang_struct(lang, langcode,
270 langname, langfile_mtime)
271 if 'default' not in langs:
272 # if default.py is not found,
273 # add DEFAULT_LANGUAGE as default language:
274 langs['default'] = get_lang_struct('default', DEFAULT_LANGUAGE,
275 DEFAULT_LANGUAGE_NAME, 0)
276 deflang = langs['default']
277 deflangcode = deflang[0]
278 if deflangcode not in langs:
279 # create language from default.py:
280 langs[deflangcode] = deflang[:2] + (0,) + deflang[3:]
281
282 return langs
283
284
285 def read_possible_languages(langpath):
286 return getcfs('langs:' + langpath, langpath,
287 lambda: read_possible_languages_aux(langpath))
288
289
290 def read_plural_dict_aux(filename):
291 lang_text = read_locked(filename).replace(b'\r\n', b'\n')
292 try:
293 return eval(lang_text) or {}
294 except Exception:
295 e = sys.exc_info()[1]
296 status = 'Syntax error in %s (%s)' % (filename, e)
297 logging.error(status)
298 return {'__corrupted__': status}
299
300
301 def read_plural_dict(filename):
302 return getcfs('plurals:' + filename, filename,
303 lambda: read_plural_dict_aux(filename))
304
305
306 def write_plural_dict(filename, contents):
307 if '__corrupted__' in contents:
308 return
309 fp = None
310 try:
311 fp = LockedFile(filename, 'w')
312 fp.write('#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n{\n# "singular form (0)": ["first plural form (1)", "second plural form (2)", ...],\n')
313 for key in sorted(contents, key=sort_function):
314 forms = '[' + ','.join([repr(Utf8(form))
315 for form in contents[key]]) + ']'
316 fp.write('%s: %s,\n' % (repr(Utf8(key)), forms))
317 fp.write('}\n')
318 except (IOError, OSError):
319 if is_writable():
320 logging.warning('Unable to write to file %s' % filename)
321 return
322 finally:
323 if fp:
324 fp.close()
325
326
327 def sort_function(x):
328 return to_unicode(x, 'utf-8').lower()
329
330
331 def write_dict(filename, contents):
332 if '__corrupted__' in contents:
333 return
334 fp = None
335 try:
336 fp = LockedFile(filename, 'w')
337 fp.write('# -*- coding: utf-8 -*-\n{\n')
338 for key in sorted(contents, key=lambda x: to_unicode(x, 'utf-8').lower()):
339 fp.write('%s: %s,\n' % (repr(Utf8(key)),
340 repr(Utf8(contents[key]))))
341 fp.write('}\n')
342 except (IOError, OSError):
343 if is_writable():
344 logging.warning('Unable to write to file %s' % filename)
345 return
346 finally:
347 if fp:
348 fp.close()
349
350
351 class lazyT(object):
352 """
353 Never to be called explicitly, returned by
354 translator.__call__() or translator.M()
355 """
356 m = s = T = f = t = None
357 M = is_copy = False
358
359 def __init__(
360 self,
361 message,
362 symbols={},
363 T=None,
364 filter=None,
365 ftag=None,
366 M=False
367 ):
368 if isinstance(message, lazyT):
369 self.m = message.m
370 self.s = message.s
371 self.T = message.T
372 self.f = message.f
373 self.t = message.t
374 self.M = message.M
375 self.is_copy = True
376 else:
377 self.m = message
378 self.s = symbols
379 self.T = T
380 self.f = filter
381 self.t = ftag
382 self.M = M
383 self.is_copy = False
384
385 def __repr__(self):
386 return "<lazyT %s>" % (repr(Utf8(self.m)), )
387
388 def __str__(self):
389 return str(self.T.apply_filter(self.m, self.s, self.f, self.t) if self.M else
390 self.T.translate(self.m, self.s))
391
392 def __eq__(self, other):
393 return str(self) == str(other)
394
395 def __ne__(self, other):
396 return str(self) != str(other)
397
398 def __add__(self, other):
399 return '%s%s' % (self, other)
400
401 def __radd__(self, other):
402 return '%s%s' % (other, self)
403
404 def __mul__(self, other):
405 return str(self) * other
406
407 def __cmp__(self, other):
408 return cmp(str(self), str(other))
409
410 def __hash__(self):
411 return hash(str(self))
412
413 def __getattr__(self, name):
414 return getattr(str(self), name)
415
416 def __getitem__(self, i):
417 return str(self)[i]
418
419 def __getslice__(self, i, j):
420 return str(self)[i:j]
421
422 def __iter__(self):
423 for c in str(self):
424 yield c
425
426 def __len__(self):
427 return len(str(self))
428
429 def xml(self):
430 return str(self) if self.M else xmlescape(str(self), quote=False)
431
432 def encode(self, *a, **b):
433 if PY2 and a[0] != 'utf8':
434 return to_unicode(str(self)).encode(*a, **b)
435 else:
436 return str(self)
437
438 def decode(self, *a, **b):
439 if PY2:
440 return str(self).decode(*a, **b)
441 else:
442 return str(self)
443
444 def read(self):
445 return str(self)
446
447 def __mod__(self, symbols):
448 if self.is_copy:
449 return lazyT(self)
450 return lazyT(self.m, symbols, self.T, self.f, self.t, self.M)
451
452
453 def pickle_lazyT(c):
454 return str, (c.xml(),)
455
456 copyreg.pickle(lazyT, pickle_lazyT)
457
458
459 class TranslatorFactory(object):
460 """
461 This class is instantiated by gluon.compileapp.build_environment
462 as the T object
463
464 Example:
465
466 T.force(None) # turns off translation
467 T.force('fr, it') # forces web2py to translate using fr.py or it.py
468
469 T("Hello World") # translates "Hello World" using the selected file
470
471 Note:
472 - there is no need to force since, by default, T uses
473 http_accept_language to determine a translation file.
474 - en and en-en are considered different languages!
475 - if language xx-yy is not found force() probes other similar languages
476 using such algorithm: `xx-yy.py -> xx.py -> xx-yy*.py -> xx*.py`
477 """
478
479 def __init__(self, langpath, http_accept_language):
480 self.langpath = langpath
481 self.http_accept_language = http_accept_language
482 # filled in self.force():
483 # ------------------------
484 # self.cache
485 # self.accepted_language
486 # self.language_file
487 # self.plural_language
488 # self.nplurals
489 # self.get_plural_id
490 # self.construct_plural_form
491 # self.plural_file
492 # self.plural_dict
493 # self.requested_languages
494 # ----------------------------------------
495 # filled in self.set_current_languages():
496 # ----------------------------------------
497 # self.default_language_file
498 # self.default_t
499 # self.current_languages
500 self.set_current_languages()
501 self.lazy = True
502 self.otherTs = {}
503 self.filter = markmin
504 self.ftag = 'markmin'
505 self.ns = None
506 self.is_writable = True
507
508 def get_possible_languages_info(self, lang=None):
509 """
510 Returns info for selected language or dictionary with all
511 possible languages info from `APP/languages/*.py`
512 It Returns:
513
514 - a tuple containing::
515
516 langcode, langname, langfile_mtime,
517 pluraldict_fname, pluraldict_mtime,
518 prules_langcode, nplurals,
519 get_plural_id, construct_plural_form
520
521 or None
522
523 - if *lang* is NOT defined a dictionary with all possible
524 languages::
525
526 { langcode(from filename):
527 ( langcode, # language code from !langcode!
528 langname,
529 # language name in national spelling from !langname!
530 langfile_mtime, # m_time of language file
531 pluraldict_fname,# name of plural dictionary file or None (when default.py is not exist)
532 pluraldict_mtime,# m_time of plural dictionary file or 0 if file is not exist
533 prules_langcode, # code of plural rules language or 'default'
534 nplurals, # nplurals for current language
535 get_plural_id, # get_plural_id() for current language
536 construct_plural_form) # construct_plural_form() for current language
537 }
538
539 Args:
540 lang (str): language
541
542 """
543 info = read_possible_languages(self.langpath)
544 if lang:
545 info = info.get(lang)
546 return info
547
548 def get_possible_languages(self):
549 """ Gets list of all possible languages for current application """
550 return list(set(self.current_languages +
551 [lang for lang in read_possible_languages(self.langpath)
552 if lang != 'default']))
553
554 def set_current_languages(self, *languages):
555 """
556 Sets current AKA "default" languages
557         Setting one of these languages makes the force() function turn
558 translation off
559 """
560 if len(languages) == 1 and isinstance(languages[0], (tuple, list)):
561 languages = languages[0]
562 if not languages or languages[0] is None:
563 # set default language from default.py/DEFAULT_LANGUAGE
564 pl_info = self.get_possible_languages_info('default')
565 if pl_info[2] == 0: # langfile_mtime
566 # if languages/default.py is not found
567 self.default_language_file = self.langpath
568 self.default_t = {}
569 self.current_languages = [DEFAULT_LANGUAGE]
570 else:
571 self.default_language_file = pjoin(self.langpath,
572 'default.py')
573 self.default_t = read_dict(self.default_language_file)
574 self.current_languages = [pl_info[0]] # !langcode!
575 else:
576 self.current_languages = list(languages)
577 self.force(self.http_accept_language)
578
579 def plural(self, word, n):
580 """
581 Gets plural form of word for number *n*
582 invoked from T()/T.M() in `%%{}` tag
583
584 Note:
585 "word" MUST be defined in current language (T.accepted_language)
586
587 Args:
588 word (str): word in singular
589 n (numeric): number plural form created for
590
591 Returns:
592 word (str): word in appropriate singular/plural form
593
594 """
595 if int(n) == 1:
596 return word
597 elif word:
598 id = self.get_plural_id(abs(int(n)))
599 # id = 0 singular form
600 # id = 1 first plural form
601 # id = 2 second plural form
602 # etc.
603 if id != 0:
604 forms = self.plural_dict.get(word, [])
605 if len(forms) >= id:
606 # have this plural form:
607 return forms[id - 1]
608 else:
609 # guessing this plural form
610 forms += [''] * (self.nplurals - len(forms) - 1)
611 form = self.construct_plural_form(word, id)
612 forms[id - 1] = form
613 self.plural_dict[word] = forms
614 if self.is_writable and is_writable() and self.plural_file:
615 write_plural_dict(self.plural_file,
616 self.plural_dict)
617 return form
618 return word
619
620 def force(self, *languages):
621 """
622 Selects language(s) for translation
623
624 if a list of languages is passed as a parameter,
625 the first language from this list that matches the ones
626 from the possible_languages dictionary will be
627 selected
628
629 default language will be selected if none
630 of them matches possible_languages.
631 """
632 pl_info = read_possible_languages(self.langpath)
633 def set_plural(language):
634 """
635 initialize plural forms subsystem
636 """
637 lang_info = pl_info.get(language)
638 if lang_info:
639 (pname,
640 pmtime,
641 self.plural_language,
642 self.nplurals,
643 self.get_plural_id,
644 self.construct_plural_form
645 ) = lang_info[3:]
646 pdict = {}
647 if pname:
648 pname = pjoin(self.langpath, pname)
649 if pmtime != 0:
650 pdict = read_plural_dict(pname)
651 self.plural_file = pname
652 self.plural_dict = pdict
653 else:
654 self.plural_language = 'default'
655 self.nplurals = DEFAULT_NPLURALS
656 self.get_plural_id = DEFAULT_GET_PLURAL_ID
657 self.construct_plural_form = DEFAULT_CONSTRUCT_PLURAL_FORM
658 self.plural_file = None
659 self.plural_dict = {}
660 language = ''
661 if len(languages) == 1 and isinstance(languages[0], str):
662 languages = regex_language.findall(languages[0].lower())
663 elif not languages or languages[0] is None:
664 languages = []
665 self.requested_languages = languages = tuple(languages)
666 if languages:
667 all_languages = set(lang for lang in pl_info
668 if lang != 'default') \
669 | set(self.current_languages)
670 for lang in languages:
671 # compare "aa-bb" | "aa" from *language* parameter
672                 # with strings from langlist using such algorithm:
673 # xx-yy.py -> xx.py -> xx*.py
674 lang5 = lang[:5]
675 if lang5 in all_languages:
676 language = lang5
677 else:
678 lang2 = lang[:2]
679 if len(lang5) > 2 and lang2 in all_languages:
680 language = lang2
681 else:
682 for l in all_languages:
683 if l[:2] == lang2:
684 language = l
685 if language:
686 if language in self.current_languages:
687 break
688 self.language_file = pjoin(self.langpath, language + '.py')
689 self.t = read_dict(self.language_file)
690 self.cache = global_language_cache.setdefault(
691 self.language_file,
692 ({}, RLock()))
693 set_plural(language)
694 self.accepted_language = language
695 return languages
696 self.accepted_language = language
697 if not language:
698 if self.current_languages:
699 self.accepted_language = self.current_languages[0]
700 else:
701 self.accepted_language = DEFAULT_LANGUAGE
702 self.language_file = self.default_language_file
703 self.cache = global_language_cache.setdefault(self.language_file,
704 ({}, RLock()))
705 self.t = self.default_t
706 set_plural(self.accepted_language)
707 return languages
708
709 def __call__(self, message, symbols={}, language=None, lazy=None, ns=None):
710 """
711 get cached translated plain text message with inserted parameters(symbols)
712 if lazy==True lazyT object is returned
713 """
714 if lazy is None:
715 lazy = self.lazy
716 if not language and not ns:
717 if lazy:
718 return lazyT(message, symbols, self)
719 else:
720 return self.translate(message, symbols)
721 else:
722 if ns:
723 if ns != self.ns:
724 self.langpath = os.path.join(self.langpath, ns)
725 if self.ns is None:
726 self.ns = ns
727 otherT = self.__get_otherT__(language, ns)
728 return otherT(message, symbols, lazy=lazy)
729
730 def __get_otherT__(self, language=None, namespace=None):
731 if not language and not namespace:
732 raise Exception('Incorrect parameters')
733
734 if namespace:
735 if language:
736 index = '%s/%s' % (namespace, language)
737 else:
738 index = namespace
739 else:
740 index = language
741 try:
742 otherT = self.otherTs[index]
743 except KeyError:
744 otherT = self.otherTs[index] = TranslatorFactory(self.langpath,
745 self.http_accept_language)
746 if language:
747 otherT.force(language)
748 return otherT
749
750 def apply_filter(self, message, symbols={}, filter=None, ftag=None):
751 def get_tr(message, prefix, filter):
752 s = self.get_t(message, prefix)
753 return filter(s) if filter else self.filter(s)
754 if filter:
755 prefix = '@' + (ftag or 'userdef') + '\x01'
756 else:
757 prefix = '@' + self.ftag + '\x01'
758 message = get_from_cache(
759 self.cache, prefix + message,
760 lambda: get_tr(message, prefix, filter))
761 if symbols or symbols == 0 or symbols == "":
762 if isinstance(symbols, dict):
763 symbols.update(
764 (key, xmlescape(value).translate(ttab_in))
765 for key, value in iteritems(symbols)
766 if not isinstance(value, NUMBERS))
767 else:
768 if not isinstance(symbols, tuple):
769 symbols = (symbols,)
770 symbols = tuple(
771 value if isinstance(value, NUMBERS)
772 else to_native(xmlescape(value)).translate(ttab_in)
773 for value in symbols)
774 message = self.params_substitution(message, symbols)
775 return to_native(XML(message.translate(ttab_out)).xml())
776
777 def M(self, message, symbols={}, language=None,
778 lazy=None, filter=None, ftag=None, ns=None):
779 """
780         Gets cached translated markmin-message with inserted parameters
781 if lazy==True lazyT object is returned
782 """
783 if lazy is None:
784 lazy = self.lazy
785 if not language and not ns:
786 if lazy:
787 return lazyT(message, symbols, self, filter, ftag, True)
788 else:
789 return self.apply_filter(message, symbols, filter, ftag)
790 else:
791 if ns:
792 self.langpath = os.path.join(self.langpath, ns)
793 otherT = self.__get_otherT__(language, ns)
794 return otherT.M(message, symbols, lazy=lazy)
795
796 def get_t(self, message, prefix=''):
797 """
798 Use ## to add a comment into a translation string
799         the comment can be useful to discriminate different possible
800 translations for the same string (for example different locations):
801
802 T(' hello world ') -> ' hello world '
803 T(' hello world ## token') -> ' hello world '
804 T('hello ## world## token') -> 'hello ## world'
805
806 the ## notation is ignored in multiline strings and strings that
807 start with ##. This is needed to allow markmin syntax to be translated
808 """
809 message = to_native(message, 'utf8')
810 prefix = to_native(prefix, 'utf8')
811 key = prefix + message
812 mt = self.t.get(key, None)
813 if mt is not None:
814 return mt
815 # we did not find a translation
816 if message.find('##') > 0:
817 pass
818 if message.find('##') > 0 and not '\n' in message:
819 # remove comments
820 message = message.rsplit('##', 1)[0]
821 # guess translation same as original
822 self.t[key] = mt = self.default_t.get(key, message)
823         # update language file for later translation
824 if self.is_writable and is_writable() and \
825 self.language_file != self.default_language_file:
826 write_dict(self.language_file, self.t)
827 return regex_backslash.sub(
828 lambda m: m.group(1).translate(ttab_in), to_native(mt))
829
830 def params_substitution(self, message, symbols):
831 """
832 Substitutes parameters from symbols into message using %.
833 also parse `%%{}` placeholders for plural-forms processing.
834
835 Returns:
836 string with parameters
837
838 Note:
839 *symbols* MUST BE OR tuple OR dict of parameters!
840 """
841 def sub_plural(m):
842 """String in `%{}` is transformed by this rules:
843 If string starts with `!` or `?` such transformations
844 take place:
845
846 "!string of words" -> "String of word" (Capitalize)
847 "!!string of words" -> "String Of Word" (Title)
848 "!!!string of words" -> "STRING OF WORD" (Upper)
849
850 "?word1?number" -> "word1" or "number"
851 (return word1 if number == 1,
852 return number otherwise)
853 "??number" or "?number" -> "" or "number"
854 (as above with word1 = "")
855
856 "?word1?number?word0" -> "word1" or "number" or "word0"
857 (return word1 if number == 1,
858 return word0 if number == 0,
859 return number otherwise)
860 "?word1?number?" -> "word1" or "number" or ""
861 (as above with word0 = "")
862 "??number?word0" -> "number" or "word0"
863 (as above with word1 = "")
864 "??number?" -> "number" or ""
865 (as above with word1 = word0 = "")
866
867 "?word1?word[number]" -> "word1" or "word"
868 (return word1 if symbols[number] == 1,
869 return word otherwise)
870 "?word1?[number]" -> "" or "word1"
871 (as above with word = "")
872 "??word[number]" or "?word[number]" -> "" or "word"
873 (as above with word1 = "")
874
875 "?word1?word?word0[number]" -> "word1" or "word" or "word0"
876 (return word1 if symbols[number] == 1,
877 return word0 if symbols[number] == 0,
878 return word otherwise)
879 "?word1?word?[number]" -> "word1" or "word" or ""
880 (as above with word0 = "")
881 "??word?word0[number]" -> "" or "word" or "word0"
882 (as above with word1 = "")
883 "??word?[number]" -> "" or "word"
884 (as above with word1 = word0 = "")
885
886 Other strings, (those not starting with `!` or `?`)
887 are processed by self.plural
888 """
889 def sub_tuple(m):
890 """ word
891 !word, !!word, !!!word
892 ?word1?number
893 ??number, ?number
894 ?word1?number?word0
895 ?word1?number?
896 ??number?word0
897 ??number?
898
899 word[number]
900 !word[number], !!word[number], !!!word[number]
901 ?word1?word[number]
902 ?word1?[number]
903 ??word[number], ?word[number]
904 ?word1?word?word0[number]
905 ?word1?word?[number]
906 ??word?word0[number]
907 ??word?[number]
908 """
909 w, i = m.group('w', 'i')
910 c = w[0]
911 if c not in '!?':
912 return self.plural(w, symbols[int(i or 0)])
913 elif c == '?':
914 (p1, sep, p2) = w[1:].partition("?")
915 part1 = p1 if sep else ""
916 (part2, sep, part3) = (p2 if sep else p1).partition("?")
917 if not sep:
918 part3 = part2
919 if i is None:
920 # ?[word]?number[?number] or ?number
921 if not part2:
922 return m.group(0)
923 num = int(part2)
924 else:
925 # ?[word1]?word[?word0][number]
926 num = int(symbols[int(i or 0)])
927 return part1 if num == 1 else part3 if num == 0 else part2
928 elif w.startswith('!!!'):
929 word = w[3:]
930 fun = upper_fun
931 elif w.startswith('!!'):
932 word = w[2:]
933 fun = title_fun
934 else:
935 word = w[1:]
936 fun = cap_fun
937 if i is not None:
938 return to_native(fun(self.plural(word, symbols[int(i)])))
939 return to_native(fun(word))
940
941 def sub_dict(m):
942 """ word(key or num)
943 !word(key or num), !!word(key or num), !!!word(key or num)
944 ?word1?word(key or num)
945 ??word(key or num), ?word(key or num)
946 ?word1?word?word0(key or num)
947 ?word1?word?(key or num)
948 ??word?word0(key or num)
949 ?word1?word?(key or num)
950 ??word?(key or num), ?word?(key or num)
951 """
952 w, n = m.group('w', 'n')
953 c = w[0]
954 n = int(n) if n.isdigit() else symbols[n]
955 if c not in '!?':
956 return self.plural(w, n)
957 elif c == '?':
958 # ?[word1]?word[?word0](key or num), ?[word1]?word(key or num) or ?word(key or num)
959 (p1, sep, p2) = w[1:].partition("?")
960 part1 = p1 if sep else ""
961 (part2, sep, part3) = (p2 if sep else p1).partition("?")
962 if not sep:
963 part3 = part2
964 num = int(n)
965 return part1 if num == 1 else part3 if num == 0 else part2
966 elif w.startswith('!!!'):
967 word = w[3:]
968 fun = upper_fun
969 elif w.startswith('!!'):
970 word = w[2:]
971 fun = title_fun
972 else:
973 word = w[1:]
974 fun = cap_fun
975 s = fun(self.plural(word, n))
976 return s if PY2 else to_unicode(s)
977
978 s = m.group(1)
979 part = regex_plural_tuple.sub(sub_tuple, s)
980 if part == s:
981 part = regex_plural_dict.sub(sub_dict, s)
982 if part == s:
983 return m.group(0)
984 return part
985 message = message % symbols
986 message = regex_plural.sub(sub_plural, message)
987 return message
988
989 def translate(self, message, symbols):
990 """
991 Gets cached translated message with inserted parameters(symbols)
992 """
993 message = get_from_cache(self.cache, message,
994 lambda: self.get_t(message))
995 if symbols or symbols == 0 or symbols == "":
996 if isinstance(symbols, dict):
997 symbols.update(
998 (key, str(value).translate(ttab_in))
999 for key, value in iteritems(symbols)
1000 if not isinstance(value, NUMBERS))
1001 else:
1002 if not isinstance(symbols, tuple):
1003 symbols = (symbols,)
1004 symbols = tuple(
1005 value if isinstance(value, NUMBERS)
1006 else str(value).translate(ttab_in)
1007 for value in symbols)
1008 message = self.params_substitution(message, symbols)
1009 return message.translate(ttab_out)
1010
1011
1012 def findT(path, language=DEFAULT_LANGUAGE):
1013 """
1014 Note:
1015 Must be run by the admin app
1016 """
1017 from gluon.tools import Auth, Crud
1018 lang_file = pjoin(path, 'languages', language + '.py')
1019 sentences = read_dict(lang_file)
1020 mp = pjoin(path, 'models')
1021 cp = pjoin(path, 'controllers')
1022 vp = pjoin(path, 'views')
1023 mop = pjoin(path, 'modules')
1024 def add_message(message):
1025 if not message.startswith('#') and not '\n' in message:
1026 tokens = message.rsplit('##', 1)
1027 else:
1028 # this allows markmin syntax in translations
1029 tokens = [message]
1030 if len(tokens) == 2:
1031 message = tokens[0].strip() + '##' + tokens[1].strip()
1032 if message and not message in sentences:
1033 sentences[message] = message.replace("@markmin\x01", "")
1034 for filename in \
1035 listdir(mp, '^.+\.py$', 0) + listdir(cp, '^.+\.py$', 0)\
1036 + listdir(vp, '^.+\.html$', 0) + listdir(mop, '^.+\.py$', 0):
1037 data = to_native(read_locked(filename))
1038 items = regex_translate.findall(data)
1039 for x in regex_translate_m.findall(data):
1040 if x[0:3] in ["'''", '"""']: items.append("%s@markmin\x01%s" %(x[0:3], x[3:]))
1041 else: items.append("%s@markmin\x01%s" %(x[0], x[1:]))
1042 for item in items:
1043 try:
1044 message = safe_eval(item)
1045 except:
1046                 continue # silently ignore improperly formatted strings
1047 add_message(message)
1048 gluon_msg = [Auth.default_messages, Crud.default_messages]
1049 for item in [x for m in gluon_msg for x in m.values() if x is not None]:
1050 add_message(item)
1051 if not '!langcode!' in sentences:
1052 sentences['!langcode!'] = (
1053 DEFAULT_LANGUAGE if language in ('default', DEFAULT_LANGUAGE) else language)
1054 if not '!langname!' in sentences:
1055 sentences['!langname!'] = (
1056 DEFAULT_LANGUAGE_NAME if language in ('default', DEFAULT_LANGUAGE)
1057 else sentences['!langcode!'])
1058 write_dict(lang_file, sentences)
1059
1060
1061 def update_all_languages(application_path):
1062 """
1063 Note:
1064 Must be run by the admin app
1065 """
1066 path = pjoin(application_path, 'languages/')
1067 for language in oslistdir(path):
1068 if regex_langfile.match(language):
1069 findT(application_path, language[:-3])
1070
1071
1072 def update_from_langfile(target, source, force_update=False):
1073 """this will update untranslated messages in target from source (where both are language files)
1074     this can be used as a first step when creating a language file for a new but very similar language
1075     or if you want to update your app from the welcome app of a newer web2py version
1076     or in non-standard scenarios when you work on target and for any reason have only a partial translation in source
1077 Args:
1078 force_update: if False existing translations remain unchanged, if True existing translations will update from source
1079 """
1080 src = read_dict(source)
1081 sentences = read_dict(target)
1082 for key in sentences:
1083 val = sentences[key]
1084 if not val or val == key or force_update:
1085 new_val = src.get(key)
1086 if new_val and new_val != val:
1087 sentences[key] = new_val
1088 write_dict(target, sentences)
1089
1090
1091 if __name__ == '__main__':
1092 import doctest
1093 doctest.testmod()
1094
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gluon/languages.py b/gluon/languages.py
--- a/gluon/languages.py
+++ b/gluon/languages.py
@@ -451,7 +451,7 @@
def pickle_lazyT(c):
- return str, (c.xml(),)
+ return str, (to_native(c.xml()),)
copyreg.pickle(lazyT, pickle_lazyT)
| {"golden_diff": "diff --git a/gluon/languages.py b/gluon/languages.py\n--- a/gluon/languages.py\n+++ b/gluon/languages.py\n@@ -451,7 +451,7 @@\n \n \n def pickle_lazyT(c):\n- return str, (c.xml(),)\n+ return str, (to_native(c.xml()),)\n \n copyreg.pickle(lazyT, pickle_lazyT)\n", "issue": "After updating from 2.18.1 to 2.18.2 the session.flash messages all show as b'<message>'\n**Describe the bug**\r\nAfter updating from 2.18.1 to 2.18.2 the session.flsh messages all show as b'<message>'\r\n\r\n**To Reproduce**\r\nJust login on any app that shows session.flash. The 'Hello World' message from the welcome app uses response.flash and not session.flash and thus it does not show the problem.\r\n\r\n**Desktop (please complete the following information):**\r\nWindows 7 Pro x64 w/SP1 + all upgrades\r\nFirefox 65.0.1 x64\r\nPython 3.7.1 x86\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\n| This file is part of the web2py Web Framework\n| Copyrighted by Massimo Di Pierro <[email protected]>\n| License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)\n| Plural subsystem is created by Vladyslav Kozlovskyy (Ukraine) <[email protected]>\n\nTranslation system\n--------------------------------------------\n\"\"\"\n\nimport os\nimport re\nimport sys\nimport pkgutil\nimport logging\nfrom cgi import escape\nfrom threading import RLock\n\nfrom pydal._compat import copyreg, PY2, maketrans, iterkeys, unicodeT, to_unicode, to_bytes, iteritems, to_native, pjoin\nfrom pydal.contrib.portalocker import read_locked, LockedFile\n\nfrom yatl.sanitizer import xmlescape\n\nfrom gluon.fileutils import listdir\nfrom gluon.cfs import getcfs\nfrom gluon.html import XML, xmlescape\nfrom gluon.contrib.markmin.markmin2html import render, markmin_escape\n\n__all__ = ['translator', 'findT', 'update_all_languages']\n\nostat = os.stat\noslistdir = os.listdir\npdirname = os.path.dirname\nisdir = os.path.isdir\n\nDEFAULT_LANGUAGE = 'en'\nDEFAULT_LANGUAGE_NAME = 'English'\n\n# DEFAULT PLURAL-FORMS RULES:\n# language doesn't use plural forms\nDEFAULT_NPLURALS = 1\n# only one singular/plural form is used\nDEFAULT_GET_PLURAL_ID = lambda n: 0\n# word is unchangeable\nDEFAULT_CONSTRUCT_PLURAL_FORM = lambda word, plural_id: word\n\nif PY2:\n NUMBERS = (int, long, float)\n from gluon.utf8 import Utf8\nelse:\n NUMBERS = (int, float)\n Utf8 = str\n\n# pattern to find T(blah blah blah) expressions\nPY_STRING_LITERAL_RE = r'(?<=[^\\w]T\\()(?P<name>'\\\n + r\"[uU]?[rR]?(?:'''(?:[^']|'{1,2}(?!'))*''')|\"\\\n + r\"(?:'(?:[^'\\\\]|\\\\.)*')|\" + r'(?:\"\"\"(?:[^\"]|\"{1,2}(?!\"))*\"\"\")|'\\\n + r'(?:\"(?:[^\"\\\\]|\\\\.)*\"))'\n\nPY_M_STRING_LITERAL_RE = r'(?<=[^\\w]T\\.M\\()(?P<name>'\\\n + r\"[uU]?[rR]?(?:'''(?:[^']|'{1,2}(?!'))*''')|\"\\\n + r\"(?:'(?:[^'\\\\]|\\\\.)*')|\" + r'(?:\"\"\"(?:[^\"]|\"{1,2}(?!\"))*\"\"\")|'\\\n + r'(?:\"(?:[^\"\\\\]|\\\\.)*\"))'\n\nregex_translate = re.compile(PY_STRING_LITERAL_RE, re.DOTALL)\nregex_translate_m = re.compile(PY_M_STRING_LITERAL_RE, re.DOTALL)\nregex_param = re.compile(r'{(?P<s>.+?)}')\n\n# pattern for a valid accept_language\nregex_language = \\\n re.compile('([a-z]{2,3}(?:\\-[a-z]{2})?(?:\\-[a-z]{2})?)(?:[,;]|$)')\nregex_langfile = re.compile('^[a-z]{2,3}(-[a-z]{2})?\\.py$')\nregex_backslash = re.compile(r\"\\\\([\\\\{}%])\")\nregex_plural = re.compile('%({.+?})')\nregex_plural_dict = re.compile('^{(?P<w>[^()[\\]][^()[\\]]*?)\\((?P<n>[^()\\[\\]]+)\\)}$') # %%{word(varname or number)}\nregex_plural_tuple = re.compile(\n 
'^{(?P<w>[^[\\]()]+)(?:\\[(?P<i>\\d+)\\])?}$') # %%{word[index]} or %%{word}\nregex_plural_file = re.compile('^plural-[a-zA-Z]{2}(-[a-zA-Z]{2})?\\.py$')\n\n\ndef is_writable():\n \"\"\" returns True if and only if the filesystem is writable \"\"\"\n from gluon.settings import global_settings\n return not global_settings.web2py_runtime_gae\n\n\ndef safe_eval(text):\n if text.strip():\n try:\n import ast\n return ast.literal_eval(text)\n except ImportError:\n return eval(text, {}, {})\n return None\n\n# used as default filter in translator.M()\n\n\ndef markmin(s):\n def markmin_aux(m):\n return '{%s}' % markmin_escape(m.group('s'))\n return render(regex_param.sub(markmin_aux, s),\n sep='br', autolinks=None, id_prefix='')\n\n# UTF8 helper functions\n\n\ndef upper_fun(s):\n return to_bytes(to_unicode(s).upper())\n\n\ndef title_fun(s):\n return to_bytes(to_unicode(s).title())\n\n\ndef cap_fun(s):\n return to_bytes(to_unicode(s).capitalize())\n\n\nttab_in = maketrans(\"\\\\%{}\", '\\x1c\\x1d\\x1e\\x1f')\nttab_out = maketrans('\\x1c\\x1d\\x1e\\x1f', \"\\\\%{}\")\n\n# cache of translated messages:\n# global_language_cache:\n# { 'languages/xx.py':\n# ( {\"def-message\": \"xx-message\",\n# ...\n# \"def-message\": \"xx-message\"}, lock_object )\n# 'languages/yy.py': ( {dict}, lock_object )\n# ...\n# }\n\nglobal_language_cache = {}\n\n\ndef get_from_cache(cache, val, fun):\n lang_dict, lock = cache\n lock.acquire()\n try:\n result = lang_dict.get(val)\n finally:\n lock.release()\n if result:\n return result\n lock.acquire()\n try:\n result = lang_dict.setdefault(val, fun())\n finally:\n lock.release()\n return result\n\n\ndef clear_cache(filename):\n cache = global_language_cache.setdefault(\n filename, ({}, RLock()))\n lang_dict, lock = cache\n lock.acquire()\n try:\n lang_dict.clear()\n finally:\n lock.release()\n\n\ndef read_dict_aux(filename):\n lang_text = read_locked(filename).replace(b'\\r\\n', b'\\n')\n clear_cache(filename)\n try:\n return safe_eval(to_native(lang_text)) or {}\n except Exception:\n e = sys.exc_info()[1]\n status = 'Syntax error in %s (%s)' % (filename, e)\n logging.error(status)\n return {'__corrupted__': status}\n\n\ndef read_dict(filename):\n \"\"\" Returns dictionary with translation messages\n \"\"\"\n return getcfs('lang:' + filename, filename,\n lambda: read_dict_aux(filename))\n\n\ndef read_possible_plural_rules():\n \"\"\"\n Creates list of all possible plural rules files\n The result is cached in PLURAL_RULES dictionary to increase speed\n \"\"\"\n plurals = {}\n try:\n import gluon.contrib.plural_rules as package\n for importer, modname, ispkg in pkgutil.iter_modules(package.__path__):\n if len(modname) == 2:\n module = __import__(package.__name__ + '.' 
+ modname,\n fromlist=[modname])\n lang = modname\n pname = modname + '.py'\n nplurals = getattr(module, 'nplurals', DEFAULT_NPLURALS)\n get_plural_id = getattr(\n module, 'get_plural_id',\n DEFAULT_GET_PLURAL_ID)\n construct_plural_form = getattr(\n module, 'construct_plural_form',\n DEFAULT_CONSTRUCT_PLURAL_FORM)\n plurals[lang] = (lang, nplurals, get_plural_id,\n construct_plural_form)\n except ImportError:\n e = sys.exc_info()[1]\n logging.warn('Unable to import plural rules: %s' % e)\n return plurals\n\nPLURAL_RULES = read_possible_plural_rules()\n\n\ndef read_possible_languages_aux(langdir):\n def get_lang_struct(lang, langcode, langname, langfile_mtime):\n if lang == 'default':\n real_lang = langcode.lower()\n else:\n real_lang = lang\n (prules_langcode,\n nplurals,\n get_plural_id,\n construct_plural_form\n ) = PLURAL_RULES.get(real_lang[:2], ('default',\n DEFAULT_NPLURALS,\n DEFAULT_GET_PLURAL_ID,\n DEFAULT_CONSTRUCT_PLURAL_FORM))\n if prules_langcode != 'default':\n (pluraldict_fname,\n pluraldict_mtime) = plurals.get(real_lang,\n plurals.get(real_lang[:2],\n ('plural-%s.py' % real_lang, 0)))\n else:\n pluraldict_fname = None\n pluraldict_mtime = 0\n return (langcode, # language code from !langcode!\n langname,\n # language name in national spelling from !langname!\n langfile_mtime, # m_time of language file\n pluraldict_fname, # name of plural dictionary file or None (when default.py is not exist)\n pluraldict_mtime, # m_time of plural dictionary file or 0 if file is not exist\n prules_langcode, # code of plural rules language or 'default'\n nplurals, # nplurals for current language\n get_plural_id, # get_plural_id() for current language\n construct_plural_form) # construct_plural_form() for current language\n\n plurals = {}\n flist = oslistdir(langdir) if isdir(langdir) else []\n\n # scan languages directory for plural dict files:\n for pname in flist:\n if regex_plural_file.match(pname):\n plurals[pname[7:-3]] = (pname,\n ostat(pjoin(langdir, pname)).st_mtime)\n langs = {}\n # scan languages directory for langfiles:\n for fname in flist:\n if regex_langfile.match(fname) or fname == 'default.py':\n fname_with_path = pjoin(langdir, fname)\n d = read_dict(fname_with_path)\n lang = fname[:-3]\n langcode = d.get('!langcode!', lang if lang != 'default'\n else DEFAULT_LANGUAGE)\n langname = d.get('!langname!', langcode)\n langfile_mtime = ostat(fname_with_path).st_mtime\n langs[lang] = get_lang_struct(lang, langcode,\n langname, langfile_mtime)\n if 'default' not in langs:\n # if default.py is not found,\n # add DEFAULT_LANGUAGE as default language:\n langs['default'] = get_lang_struct('default', DEFAULT_LANGUAGE,\n DEFAULT_LANGUAGE_NAME, 0)\n deflang = langs['default']\n deflangcode = deflang[0]\n if deflangcode not in langs:\n # create language from default.py:\n langs[deflangcode] = deflang[:2] + (0,) + deflang[3:]\n\n return langs\n\n\ndef read_possible_languages(langpath):\n return getcfs('langs:' + langpath, langpath,\n lambda: read_possible_languages_aux(langpath))\n\n\ndef read_plural_dict_aux(filename):\n lang_text = read_locked(filename).replace(b'\\r\\n', b'\\n')\n try:\n return eval(lang_text) or {}\n except Exception:\n e = sys.exc_info()[1]\n status = 'Syntax error in %s (%s)' % (filename, e)\n logging.error(status)\n return {'__corrupted__': status}\n\n\ndef read_plural_dict(filename):\n return getcfs('plurals:' + filename, filename,\n lambda: read_plural_dict_aux(filename))\n\n\ndef write_plural_dict(filename, contents):\n if '__corrupted__' in contents:\n return\n fp 
= None\n try:\n fp = LockedFile(filename, 'w')\n fp.write('#!/usr/bin/env python\\n# -*- coding: utf-8 -*-\\n{\\n# \"singular form (0)\": [\"first plural form (1)\", \"second plural form (2)\", ...],\\n')\n for key in sorted(contents, key=sort_function):\n forms = '[' + ','.join([repr(Utf8(form))\n for form in contents[key]]) + ']'\n fp.write('%s: %s,\\n' % (repr(Utf8(key)), forms))\n fp.write('}\\n')\n except (IOError, OSError):\n if is_writable():\n logging.warning('Unable to write to file %s' % filename)\n return\n finally:\n if fp:\n fp.close()\n\n\ndef sort_function(x):\n return to_unicode(x, 'utf-8').lower()\n\n\ndef write_dict(filename, contents):\n if '__corrupted__' in contents:\n return\n fp = None\n try:\n fp = LockedFile(filename, 'w')\n fp.write('# -*- coding: utf-8 -*-\\n{\\n')\n for key in sorted(contents, key=lambda x: to_unicode(x, 'utf-8').lower()):\n fp.write('%s: %s,\\n' % (repr(Utf8(key)),\n repr(Utf8(contents[key]))))\n fp.write('}\\n')\n except (IOError, OSError):\n if is_writable():\n logging.warning('Unable to write to file %s' % filename)\n return\n finally:\n if fp:\n fp.close()\n\n\nclass lazyT(object):\n \"\"\"\n Never to be called explicitly, returned by\n translator.__call__() or translator.M()\n \"\"\"\n m = s = T = f = t = None\n M = is_copy = False\n\n def __init__(\n self,\n message,\n symbols={},\n T=None,\n filter=None,\n ftag=None,\n M=False\n ):\n if isinstance(message, lazyT):\n self.m = message.m\n self.s = message.s\n self.T = message.T\n self.f = message.f\n self.t = message.t\n self.M = message.M\n self.is_copy = True\n else:\n self.m = message\n self.s = symbols\n self.T = T\n self.f = filter\n self.t = ftag\n self.M = M\n self.is_copy = False\n\n def __repr__(self):\n return \"<lazyT %s>\" % (repr(Utf8(self.m)), )\n\n def __str__(self):\n return str(self.T.apply_filter(self.m, self.s, self.f, self.t) if self.M else\n self.T.translate(self.m, self.s))\n\n def __eq__(self, other):\n return str(self) == str(other)\n\n def __ne__(self, other):\n return str(self) != str(other)\n\n def __add__(self, other):\n return '%s%s' % (self, other)\n\n def __radd__(self, other):\n return '%s%s' % (other, self)\n\n def __mul__(self, other):\n return str(self) * other\n\n def __cmp__(self, other):\n return cmp(str(self), str(other))\n\n def __hash__(self):\n return hash(str(self))\n\n def __getattr__(self, name):\n return getattr(str(self), name)\n\n def __getitem__(self, i):\n return str(self)[i]\n\n def __getslice__(self, i, j):\n return str(self)[i:j]\n\n def __iter__(self):\n for c in str(self):\n yield c\n\n def __len__(self):\n return len(str(self))\n\n def xml(self):\n return str(self) if self.M else xmlescape(str(self), quote=False)\n\n def encode(self, *a, **b):\n if PY2 and a[0] != 'utf8':\n return to_unicode(str(self)).encode(*a, **b)\n else:\n return str(self)\n\n def decode(self, *a, **b):\n if PY2:\n return str(self).decode(*a, **b)\n else:\n return str(self)\n\n def read(self):\n return str(self)\n\n def __mod__(self, symbols):\n if self.is_copy:\n return lazyT(self)\n return lazyT(self.m, symbols, self.T, self.f, self.t, self.M)\n\n\ndef pickle_lazyT(c):\n return str, (c.xml(),)\n\ncopyreg.pickle(lazyT, pickle_lazyT)\n\n\nclass TranslatorFactory(object):\n \"\"\"\n This class is instantiated by gluon.compileapp.build_environment\n as the T object\n\n Example:\n\n T.force(None) # turns off translation\n T.force('fr, it') # forces web2py to translate using fr.py or it.py\n\n T(\"Hello World\") # translates \"Hello World\" using the selected 
file\n\n Note:\n - there is no need to force since, by default, T uses\n http_accept_language to determine a translation file.\n - en and en-en are considered different languages!\n - if language xx-yy is not found force() probes other similar languages\n using such algorithm: `xx-yy.py -> xx.py -> xx-yy*.py -> xx*.py`\n \"\"\"\n\n def __init__(self, langpath, http_accept_language):\n self.langpath = langpath\n self.http_accept_language = http_accept_language\n # filled in self.force():\n # ------------------------\n # self.cache\n # self.accepted_language\n # self.language_file\n # self.plural_language\n # self.nplurals\n # self.get_plural_id\n # self.construct_plural_form\n # self.plural_file\n # self.plural_dict\n # self.requested_languages\n # ----------------------------------------\n # filled in self.set_current_languages():\n # ----------------------------------------\n # self.default_language_file\n # self.default_t\n # self.current_languages\n self.set_current_languages()\n self.lazy = True\n self.otherTs = {}\n self.filter = markmin\n self.ftag = 'markmin'\n self.ns = None\n self.is_writable = True\n\n def get_possible_languages_info(self, lang=None):\n \"\"\"\n Returns info for selected language or dictionary with all\n possible languages info from `APP/languages/*.py`\n It Returns:\n\n - a tuple containing::\n\n langcode, langname, langfile_mtime,\n pluraldict_fname, pluraldict_mtime,\n prules_langcode, nplurals,\n get_plural_id, construct_plural_form\n\n or None\n\n - if *lang* is NOT defined a dictionary with all possible\n languages::\n\n { langcode(from filename):\n ( langcode, # language code from !langcode!\n langname,\n # language name in national spelling from !langname!\n langfile_mtime, # m_time of language file\n pluraldict_fname,# name of plural dictionary file or None (when default.py is not exist)\n pluraldict_mtime,# m_time of plural dictionary file or 0 if file is not exist\n prules_langcode, # code of plural rules language or 'default'\n nplurals, # nplurals for current language\n get_plural_id, # get_plural_id() for current language\n construct_plural_form) # construct_plural_form() for current language\n }\n\n Args:\n lang (str): language\n\n \"\"\"\n info = read_possible_languages(self.langpath)\n if lang:\n info = info.get(lang)\n return info\n\n def get_possible_languages(self):\n \"\"\" Gets list of all possible languages for current application \"\"\"\n return list(set(self.current_languages +\n [lang for lang in read_possible_languages(self.langpath)\n if lang != 'default']))\n\n def set_current_languages(self, *languages):\n \"\"\"\n Sets current AKA \"default\" languages\n Setting one of this languages makes the force() function to turn\n translation off\n \"\"\"\n if len(languages) == 1 and isinstance(languages[0], (tuple, list)):\n languages = languages[0]\n if not languages or languages[0] is None:\n # set default language from default.py/DEFAULT_LANGUAGE\n pl_info = self.get_possible_languages_info('default')\n if pl_info[2] == 0: # langfile_mtime\n # if languages/default.py is not found\n self.default_language_file = self.langpath\n self.default_t = {}\n self.current_languages = [DEFAULT_LANGUAGE]\n else:\n self.default_language_file = pjoin(self.langpath,\n 'default.py')\n self.default_t = read_dict(self.default_language_file)\n self.current_languages = [pl_info[0]] # !langcode!\n else:\n self.current_languages = list(languages)\n self.force(self.http_accept_language)\n\n def plural(self, word, n):\n \"\"\"\n Gets plural form of word for number 
*n*\n invoked from T()/T.M() in `%%{}` tag\n\n Note:\n \"word\" MUST be defined in current language (T.accepted_language)\n\n Args:\n word (str): word in singular\n n (numeric): number plural form created for\n\n Returns:\n word (str): word in appropriate singular/plural form\n\n \"\"\"\n if int(n) == 1:\n return word\n elif word:\n id = self.get_plural_id(abs(int(n)))\n # id = 0 singular form\n # id = 1 first plural form\n # id = 2 second plural form\n # etc.\n if id != 0:\n forms = self.plural_dict.get(word, [])\n if len(forms) >= id:\n # have this plural form:\n return forms[id - 1]\n else:\n # guessing this plural form\n forms += [''] * (self.nplurals - len(forms) - 1)\n form = self.construct_plural_form(word, id)\n forms[id - 1] = form\n self.plural_dict[word] = forms\n if self.is_writable and is_writable() and self.plural_file:\n write_plural_dict(self.plural_file,\n self.plural_dict)\n return form\n return word\n\n def force(self, *languages):\n \"\"\"\n Selects language(s) for translation\n\n if a list of languages is passed as a parameter,\n the first language from this list that matches the ones\n from the possible_languages dictionary will be\n selected\n\n default language will be selected if none\n of them matches possible_languages.\n \"\"\"\n pl_info = read_possible_languages(self.langpath)\n def set_plural(language):\n \"\"\"\n initialize plural forms subsystem\n \"\"\"\n lang_info = pl_info.get(language)\n if lang_info:\n (pname,\n pmtime,\n self.plural_language,\n self.nplurals,\n self.get_plural_id,\n self.construct_plural_form\n ) = lang_info[3:]\n pdict = {}\n if pname:\n pname = pjoin(self.langpath, pname)\n if pmtime != 0:\n pdict = read_plural_dict(pname)\n self.plural_file = pname\n self.plural_dict = pdict\n else:\n self.plural_language = 'default'\n self.nplurals = DEFAULT_NPLURALS\n self.get_plural_id = DEFAULT_GET_PLURAL_ID\n self.construct_plural_form = DEFAULT_CONSTRUCT_PLURAL_FORM\n self.plural_file = None\n self.plural_dict = {}\n language = ''\n if len(languages) == 1 and isinstance(languages[0], str):\n languages = regex_language.findall(languages[0].lower())\n elif not languages or languages[0] is None:\n languages = []\n self.requested_languages = languages = tuple(languages)\n if languages:\n all_languages = set(lang for lang in pl_info\n if lang != 'default') \\\n | set(self.current_languages)\n for lang in languages:\n # compare \"aa-bb\" | \"aa\" from *language* parameter\n # with strings from langlist using such alghorythm:\n # xx-yy.py -> xx.py -> xx*.py\n lang5 = lang[:5]\n if lang5 in all_languages:\n language = lang5\n else:\n lang2 = lang[:2]\n if len(lang5) > 2 and lang2 in all_languages:\n language = lang2\n else:\n for l in all_languages:\n if l[:2] == lang2:\n language = l\n if language:\n if language in self.current_languages:\n break\n self.language_file = pjoin(self.langpath, language + '.py')\n self.t = read_dict(self.language_file)\n self.cache = global_language_cache.setdefault(\n self.language_file,\n ({}, RLock()))\n set_plural(language)\n self.accepted_language = language\n return languages\n self.accepted_language = language\n if not language:\n if self.current_languages:\n self.accepted_language = self.current_languages[0]\n else:\n self.accepted_language = DEFAULT_LANGUAGE\n self.language_file = self.default_language_file\n self.cache = global_language_cache.setdefault(self.language_file,\n ({}, RLock()))\n self.t = self.default_t\n set_plural(self.accepted_language)\n return languages\n\n def __call__(self, message, 
symbols={}, language=None, lazy=None, ns=None):\n \"\"\"\n get cached translated plain text message with inserted parameters(symbols)\n if lazy==True lazyT object is returned\n \"\"\"\n if lazy is None:\n lazy = self.lazy\n if not language and not ns:\n if lazy:\n return lazyT(message, symbols, self)\n else:\n return self.translate(message, symbols)\n else:\n if ns:\n if ns != self.ns:\n self.langpath = os.path.join(self.langpath, ns)\n if self.ns is None:\n self.ns = ns\n otherT = self.__get_otherT__(language, ns)\n return otherT(message, symbols, lazy=lazy)\n\n def __get_otherT__(self, language=None, namespace=None):\n if not language and not namespace:\n raise Exception('Incorrect parameters')\n\n if namespace:\n if language:\n index = '%s/%s' % (namespace, language)\n else:\n index = namespace\n else:\n index = language\n try:\n otherT = self.otherTs[index]\n except KeyError:\n otherT = self.otherTs[index] = TranslatorFactory(self.langpath,\n self.http_accept_language)\n if language:\n otherT.force(language)\n return otherT\n\n def apply_filter(self, message, symbols={}, filter=None, ftag=None):\n def get_tr(message, prefix, filter):\n s = self.get_t(message, prefix)\n return filter(s) if filter else self.filter(s)\n if filter:\n prefix = '@' + (ftag or 'userdef') + '\\x01'\n else:\n prefix = '@' + self.ftag + '\\x01'\n message = get_from_cache(\n self.cache, prefix + message,\n lambda: get_tr(message, prefix, filter))\n if symbols or symbols == 0 or symbols == \"\":\n if isinstance(symbols, dict):\n symbols.update(\n (key, xmlescape(value).translate(ttab_in))\n for key, value in iteritems(symbols)\n if not isinstance(value, NUMBERS))\n else:\n if not isinstance(symbols, tuple):\n symbols = (symbols,)\n symbols = tuple(\n value if isinstance(value, NUMBERS)\n else to_native(xmlescape(value)).translate(ttab_in)\n for value in symbols)\n message = self.params_substitution(message, symbols)\n return to_native(XML(message.translate(ttab_out)).xml())\n\n def M(self, message, symbols={}, language=None,\n lazy=None, filter=None, ftag=None, ns=None):\n \"\"\"\n Gets cached translated markmin-message with inserted parametes\n if lazy==True lazyT object is returned\n \"\"\"\n if lazy is None:\n lazy = self.lazy\n if not language and not ns:\n if lazy:\n return lazyT(message, symbols, self, filter, ftag, True)\n else:\n return self.apply_filter(message, symbols, filter, ftag)\n else:\n if ns:\n self.langpath = os.path.join(self.langpath, ns)\n otherT = self.__get_otherT__(language, ns)\n return otherT.M(message, symbols, lazy=lazy)\n\n def get_t(self, message, prefix=''):\n \"\"\"\n Use ## to add a comment into a translation string\n the comment can be useful do discriminate different possible\n translations for the same string (for example different locations):\n\n T(' hello world ') -> ' hello world '\n T(' hello world ## token') -> ' hello world '\n T('hello ## world## token') -> 'hello ## world'\n\n the ## notation is ignored in multiline strings and strings that\n start with ##. 
This is needed to allow markmin syntax to be translated\n \"\"\"\n message = to_native(message, 'utf8')\n prefix = to_native(prefix, 'utf8')\n key = prefix + message\n mt = self.t.get(key, None)\n if mt is not None:\n return mt\n # we did not find a translation\n if message.find('##') > 0:\n pass\n if message.find('##') > 0 and not '\\n' in message:\n # remove comments\n message = message.rsplit('##', 1)[0]\n # guess translation same as original\n self.t[key] = mt = self.default_t.get(key, message)\n # update language file for latter translation\n if self.is_writable and is_writable() and \\\n self.language_file != self.default_language_file:\n write_dict(self.language_file, self.t)\n return regex_backslash.sub(\n lambda m: m.group(1).translate(ttab_in), to_native(mt))\n\n def params_substitution(self, message, symbols):\n \"\"\"\n Substitutes parameters from symbols into message using %.\n also parse `%%{}` placeholders for plural-forms processing.\n\n Returns:\n string with parameters\n\n Note:\n *symbols* MUST BE OR tuple OR dict of parameters!\n \"\"\"\n def sub_plural(m):\n \"\"\"String in `%{}` is transformed by this rules:\n If string starts with `!` or `?` such transformations\n take place:\n\n \"!string of words\" -> \"String of word\" (Capitalize)\n \"!!string of words\" -> \"String Of Word\" (Title)\n \"!!!string of words\" -> \"STRING OF WORD\" (Upper)\n\n \"?word1?number\" -> \"word1\" or \"number\"\n (return word1 if number == 1,\n return number otherwise)\n \"??number\" or \"?number\" -> \"\" or \"number\"\n (as above with word1 = \"\")\n\n \"?word1?number?word0\" -> \"word1\" or \"number\" or \"word0\"\n (return word1 if number == 1,\n return word0 if number == 0,\n return number otherwise)\n \"?word1?number?\" -> \"word1\" or \"number\" or \"\"\n (as above with word0 = \"\")\n \"??number?word0\" -> \"number\" or \"word0\"\n (as above with word1 = \"\")\n \"??number?\" -> \"number\" or \"\"\n (as above with word1 = word0 = \"\")\n\n \"?word1?word[number]\" -> \"word1\" or \"word\"\n (return word1 if symbols[number] == 1,\n return word otherwise)\n \"?word1?[number]\" -> \"\" or \"word1\"\n (as above with word = \"\")\n \"??word[number]\" or \"?word[number]\" -> \"\" or \"word\"\n (as above with word1 = \"\")\n\n \"?word1?word?word0[number]\" -> \"word1\" or \"word\" or \"word0\"\n (return word1 if symbols[number] == 1,\n return word0 if symbols[number] == 0,\n return word otherwise)\n \"?word1?word?[number]\" -> \"word1\" or \"word\" or \"\"\n (as above with word0 = \"\")\n \"??word?word0[number]\" -> \"\" or \"word\" or \"word0\"\n (as above with word1 = \"\")\n \"??word?[number]\" -> \"\" or \"word\"\n (as above with word1 = word0 = \"\")\n\n Other strings, (those not starting with `!` or `?`)\n are processed by self.plural\n \"\"\"\n def sub_tuple(m):\n \"\"\" word\n !word, !!word, !!!word\n ?word1?number\n ??number, ?number\n ?word1?number?word0\n ?word1?number?\n ??number?word0\n ??number?\n\n word[number]\n !word[number], !!word[number], !!!word[number]\n ?word1?word[number]\n ?word1?[number]\n ??word[number], ?word[number]\n ?word1?word?word0[number]\n ?word1?word?[number]\n ??word?word0[number]\n ??word?[number]\n \"\"\"\n w, i = m.group('w', 'i')\n c = w[0]\n if c not in '!?':\n return self.plural(w, symbols[int(i or 0)])\n elif c == '?':\n (p1, sep, p2) = w[1:].partition(\"?\")\n part1 = p1 if sep else \"\"\n (part2, sep, part3) = (p2 if sep else p1).partition(\"?\")\n if not sep:\n part3 = part2\n if i is None:\n # ?[word]?number[?number] or ?number\n if not 
part2:\n return m.group(0)\n num = int(part2)\n else:\n # ?[word1]?word[?word0][number]\n num = int(symbols[int(i or 0)])\n return part1 if num == 1 else part3 if num == 0 else part2\n elif w.startswith('!!!'):\n word = w[3:]\n fun = upper_fun\n elif w.startswith('!!'):\n word = w[2:]\n fun = title_fun\n else:\n word = w[1:]\n fun = cap_fun\n if i is not None:\n return to_native(fun(self.plural(word, symbols[int(i)])))\n return to_native(fun(word))\n\n def sub_dict(m):\n \"\"\" word(key or num)\n !word(key or num), !!word(key or num), !!!word(key or num)\n ?word1?word(key or num)\n ??word(key or num), ?word(key or num)\n ?word1?word?word0(key or num)\n ?word1?word?(key or num)\n ??word?word0(key or num)\n ?word1?word?(key or num)\n ??word?(key or num), ?word?(key or num)\n \"\"\"\n w, n = m.group('w', 'n')\n c = w[0]\n n = int(n) if n.isdigit() else symbols[n]\n if c not in '!?':\n return self.plural(w, n)\n elif c == '?':\n # ?[word1]?word[?word0](key or num), ?[word1]?word(key or num) or ?word(key or num)\n (p1, sep, p2) = w[1:].partition(\"?\")\n part1 = p1 if sep else \"\"\n (part2, sep, part3) = (p2 if sep else p1).partition(\"?\")\n if not sep:\n part3 = part2\n num = int(n)\n return part1 if num == 1 else part3 if num == 0 else part2\n elif w.startswith('!!!'):\n word = w[3:]\n fun = upper_fun\n elif w.startswith('!!'):\n word = w[2:]\n fun = title_fun\n else:\n word = w[1:]\n fun = cap_fun\n s = fun(self.plural(word, n))\n return s if PY2 else to_unicode(s)\n\n s = m.group(1)\n part = regex_plural_tuple.sub(sub_tuple, s)\n if part == s:\n part = regex_plural_dict.sub(sub_dict, s)\n if part == s:\n return m.group(0)\n return part\n message = message % symbols\n message = regex_plural.sub(sub_plural, message)\n return message\n\n def translate(self, message, symbols):\n \"\"\"\n Gets cached translated message with inserted parameters(symbols)\n \"\"\"\n message = get_from_cache(self.cache, message,\n lambda: self.get_t(message))\n if symbols or symbols == 0 or symbols == \"\":\n if isinstance(symbols, dict):\n symbols.update(\n (key, str(value).translate(ttab_in))\n for key, value in iteritems(symbols)\n if not isinstance(value, NUMBERS))\n else:\n if not isinstance(symbols, tuple):\n symbols = (symbols,)\n symbols = tuple(\n value if isinstance(value, NUMBERS)\n else str(value).translate(ttab_in)\n for value in symbols)\n message = self.params_substitution(message, symbols)\n return message.translate(ttab_out)\n\n\ndef findT(path, language=DEFAULT_LANGUAGE):\n \"\"\"\n Note:\n Must be run by the admin app\n \"\"\"\n from gluon.tools import Auth, Crud\n lang_file = pjoin(path, 'languages', language + '.py')\n sentences = read_dict(lang_file)\n mp = pjoin(path, 'models')\n cp = pjoin(path, 'controllers')\n vp = pjoin(path, 'views')\n mop = pjoin(path, 'modules')\n def add_message(message):\n if not message.startswith('#') and not '\\n' in message:\n tokens = message.rsplit('##', 1)\n else:\n # this allows markmin syntax in translations\n tokens = [message]\n if len(tokens) == 2:\n message = tokens[0].strip() + '##' + tokens[1].strip()\n if message and not message in sentences:\n sentences[message] = message.replace(\"@markmin\\x01\", \"\")\n for filename in \\\n listdir(mp, '^.+\\.py$', 0) + listdir(cp, '^.+\\.py$', 0)\\\n + listdir(vp, '^.+\\.html$', 0) + listdir(mop, '^.+\\.py$', 0):\n data = to_native(read_locked(filename))\n items = regex_translate.findall(data)\n for x in regex_translate_m.findall(data):\n if x[0:3] in [\"'''\", '\"\"\"']: items.append(\"%s@markmin\\x01%s\" 
%(x[0:3], x[3:]))\n else: items.append(\"%s@markmin\\x01%s\" %(x[0], x[1:]))\n for item in items:\n try:\n message = safe_eval(item)\n except:\n continue # silently ignore inproperly formatted strings\n add_message(message)\n gluon_msg = [Auth.default_messages, Crud.default_messages]\n for item in [x for m in gluon_msg for x in m.values() if x is not None]:\n add_message(item)\n if not '!langcode!' in sentences:\n sentences['!langcode!'] = (\n DEFAULT_LANGUAGE if language in ('default', DEFAULT_LANGUAGE) else language)\n if not '!langname!' in sentences:\n sentences['!langname!'] = (\n DEFAULT_LANGUAGE_NAME if language in ('default', DEFAULT_LANGUAGE)\n else sentences['!langcode!'])\n write_dict(lang_file, sentences)\n\n\ndef update_all_languages(application_path):\n \"\"\"\n Note:\n Must be run by the admin app\n \"\"\"\n path = pjoin(application_path, 'languages/')\n for language in oslistdir(path):\n if regex_langfile.match(language):\n findT(application_path, language[:-3])\n\n\ndef update_from_langfile(target, source, force_update=False):\n \"\"\"this will update untranslated messages in target from source (where both are language files)\n this can be used as first step when creating language file for new but very similar language\n or if you want update your app from welcome app of newer web2py version\n or in non-standard scenarios when you work on target and from any reason you have partial translation in source\n Args:\n force_update: if False existing translations remain unchanged, if True existing translations will update from source\n \"\"\"\n src = read_dict(source)\n sentences = read_dict(target)\n for key in sentences:\n val = sentences[key]\n if not val or val == key or force_update:\n new_val = src.get(key)\n if new_val and new_val != val:\n sentences[key] = new_val\n write_dict(target, sentences)\n\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod()\n", "path": "gluon/languages.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\n| This file is part of the web2py Web Framework\n| Copyrighted by Massimo Di Pierro <[email protected]>\n| License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)\n| Plural subsystem is created by Vladyslav Kozlovskyy (Ukraine) <[email protected]>\n\nTranslation system\n--------------------------------------------\n\"\"\"\n\nimport os\nimport re\nimport sys\nimport pkgutil\nimport logging\nfrom cgi import escape\nfrom threading import RLock\n\nfrom pydal._compat import copyreg, PY2, maketrans, iterkeys, unicodeT, to_unicode, to_bytes, iteritems, to_native, pjoin\nfrom pydal.contrib.portalocker import read_locked, LockedFile\n\nfrom yatl.sanitizer import xmlescape\n\nfrom gluon.fileutils import listdir\nfrom gluon.cfs import getcfs\nfrom gluon.html import XML, xmlescape\nfrom gluon.contrib.markmin.markmin2html import render, markmin_escape\n\n__all__ = ['translator', 'findT', 'update_all_languages']\n\nostat = os.stat\noslistdir = os.listdir\npdirname = os.path.dirname\nisdir = os.path.isdir\n\nDEFAULT_LANGUAGE = 'en'\nDEFAULT_LANGUAGE_NAME = 'English'\n\n# DEFAULT PLURAL-FORMS RULES:\n# language doesn't use plural forms\nDEFAULT_NPLURALS = 1\n# only one singular/plural form is used\nDEFAULT_GET_PLURAL_ID = lambda n: 0\n# word is unchangeable\nDEFAULT_CONSTRUCT_PLURAL_FORM = lambda word, plural_id: word\n\nif PY2:\n NUMBERS = (int, long, float)\n from gluon.utf8 import Utf8\nelse:\n NUMBERS = (int, float)\n Utf8 = str\n\n# pattern to find T(blah blah blah) expressions\nPY_STRING_LITERAL_RE = 
r'(?<=[^\\w]T\\()(?P<name>'\\\n + r\"[uU]?[rR]?(?:'''(?:[^']|'{1,2}(?!'))*''')|\"\\\n + r\"(?:'(?:[^'\\\\]|\\\\.)*')|\" + r'(?:\"\"\"(?:[^\"]|\"{1,2}(?!\"))*\"\"\")|'\\\n + r'(?:\"(?:[^\"\\\\]|\\\\.)*\"))'\n\nPY_M_STRING_LITERAL_RE = r'(?<=[^\\w]T\\.M\\()(?P<name>'\\\n + r\"[uU]?[rR]?(?:'''(?:[^']|'{1,2}(?!'))*''')|\"\\\n + r\"(?:'(?:[^'\\\\]|\\\\.)*')|\" + r'(?:\"\"\"(?:[^\"]|\"{1,2}(?!\"))*\"\"\")|'\\\n + r'(?:\"(?:[^\"\\\\]|\\\\.)*\"))'\n\nregex_translate = re.compile(PY_STRING_LITERAL_RE, re.DOTALL)\nregex_translate_m = re.compile(PY_M_STRING_LITERAL_RE, re.DOTALL)\nregex_param = re.compile(r'{(?P<s>.+?)}')\n\n# pattern for a valid accept_language\nregex_language = \\\n re.compile('([a-z]{2,3}(?:\\-[a-z]{2})?(?:\\-[a-z]{2})?)(?:[,;]|$)')\nregex_langfile = re.compile('^[a-z]{2,3}(-[a-z]{2})?\\.py$')\nregex_backslash = re.compile(r\"\\\\([\\\\{}%])\")\nregex_plural = re.compile('%({.+?})')\nregex_plural_dict = re.compile('^{(?P<w>[^()[\\]][^()[\\]]*?)\\((?P<n>[^()\\[\\]]+)\\)}$') # %%{word(varname or number)}\nregex_plural_tuple = re.compile(\n '^{(?P<w>[^[\\]()]+)(?:\\[(?P<i>\\d+)\\])?}$') # %%{word[index]} or %%{word}\nregex_plural_file = re.compile('^plural-[a-zA-Z]{2}(-[a-zA-Z]{2})?\\.py$')\n\n\ndef is_writable():\n \"\"\" returns True if and only if the filesystem is writable \"\"\"\n from gluon.settings import global_settings\n return not global_settings.web2py_runtime_gae\n\n\ndef safe_eval(text):\n if text.strip():\n try:\n import ast\n return ast.literal_eval(text)\n except ImportError:\n return eval(text, {}, {})\n return None\n\n# used as default filter in translator.M()\n\n\ndef markmin(s):\n def markmin_aux(m):\n return '{%s}' % markmin_escape(m.group('s'))\n return render(regex_param.sub(markmin_aux, s),\n sep='br', autolinks=None, id_prefix='')\n\n# UTF8 helper functions\n\n\ndef upper_fun(s):\n return to_bytes(to_unicode(s).upper())\n\n\ndef title_fun(s):\n return to_bytes(to_unicode(s).title())\n\n\ndef cap_fun(s):\n return to_bytes(to_unicode(s).capitalize())\n\n\nttab_in = maketrans(\"\\\\%{}\", '\\x1c\\x1d\\x1e\\x1f')\nttab_out = maketrans('\\x1c\\x1d\\x1e\\x1f', \"\\\\%{}\")\n\n# cache of translated messages:\n# global_language_cache:\n# { 'languages/xx.py':\n# ( {\"def-message\": \"xx-message\",\n# ...\n# \"def-message\": \"xx-message\"}, lock_object )\n# 'languages/yy.py': ( {dict}, lock_object )\n# ...\n# }\n\nglobal_language_cache = {}\n\n\ndef get_from_cache(cache, val, fun):\n lang_dict, lock = cache\n lock.acquire()\n try:\n result = lang_dict.get(val)\n finally:\n lock.release()\n if result:\n return result\n lock.acquire()\n try:\n result = lang_dict.setdefault(val, fun())\n finally:\n lock.release()\n return result\n\n\ndef clear_cache(filename):\n cache = global_language_cache.setdefault(\n filename, ({}, RLock()))\n lang_dict, lock = cache\n lock.acquire()\n try:\n lang_dict.clear()\n finally:\n lock.release()\n\n\ndef read_dict_aux(filename):\n lang_text = read_locked(filename).replace(b'\\r\\n', b'\\n')\n clear_cache(filename)\n try:\n return safe_eval(to_native(lang_text)) or {}\n except Exception:\n e = sys.exc_info()[1]\n status = 'Syntax error in %s (%s)' % (filename, e)\n logging.error(status)\n return {'__corrupted__': status}\n\n\ndef read_dict(filename):\n \"\"\" Returns dictionary with translation messages\n \"\"\"\n return getcfs('lang:' + filename, filename,\n lambda: read_dict_aux(filename))\n\n\ndef read_possible_plural_rules():\n \"\"\"\n Creates list of all possible plural rules files\n The result is cached in PLURAL_RULES dictionary to 
increase speed\n \"\"\"\n plurals = {}\n try:\n import gluon.contrib.plural_rules as package\n for importer, modname, ispkg in pkgutil.iter_modules(package.__path__):\n if len(modname) == 2:\n module = __import__(package.__name__ + '.' + modname,\n fromlist=[modname])\n lang = modname\n pname = modname + '.py'\n nplurals = getattr(module, 'nplurals', DEFAULT_NPLURALS)\n get_plural_id = getattr(\n module, 'get_plural_id',\n DEFAULT_GET_PLURAL_ID)\n construct_plural_form = getattr(\n module, 'construct_plural_form',\n DEFAULT_CONSTRUCT_PLURAL_FORM)\n plurals[lang] = (lang, nplurals, get_plural_id,\n construct_plural_form)\n except ImportError:\n e = sys.exc_info()[1]\n logging.warn('Unable to import plural rules: %s' % e)\n return plurals\n\nPLURAL_RULES = read_possible_plural_rules()\n\n\ndef read_possible_languages_aux(langdir):\n def get_lang_struct(lang, langcode, langname, langfile_mtime):\n if lang == 'default':\n real_lang = langcode.lower()\n else:\n real_lang = lang\n (prules_langcode,\n nplurals,\n get_plural_id,\n construct_plural_form\n ) = PLURAL_RULES.get(real_lang[:2], ('default',\n DEFAULT_NPLURALS,\n DEFAULT_GET_PLURAL_ID,\n DEFAULT_CONSTRUCT_PLURAL_FORM))\n if prules_langcode != 'default':\n (pluraldict_fname,\n pluraldict_mtime) = plurals.get(real_lang,\n plurals.get(real_lang[:2],\n ('plural-%s.py' % real_lang, 0)))\n else:\n pluraldict_fname = None\n pluraldict_mtime = 0\n return (langcode, # language code from !langcode!\n langname,\n # language name in national spelling from !langname!\n langfile_mtime, # m_time of language file\n pluraldict_fname, # name of plural dictionary file or None (when default.py is not exist)\n pluraldict_mtime, # m_time of plural dictionary file or 0 if file is not exist\n prules_langcode, # code of plural rules language or 'default'\n nplurals, # nplurals for current language\n get_plural_id, # get_plural_id() for current language\n construct_plural_form) # construct_plural_form() for current language\n\n plurals = {}\n flist = oslistdir(langdir) if isdir(langdir) else []\n\n # scan languages directory for plural dict files:\n for pname in flist:\n if regex_plural_file.match(pname):\n plurals[pname[7:-3]] = (pname,\n ostat(pjoin(langdir, pname)).st_mtime)\n langs = {}\n # scan languages directory for langfiles:\n for fname in flist:\n if regex_langfile.match(fname) or fname == 'default.py':\n fname_with_path = pjoin(langdir, fname)\n d = read_dict(fname_with_path)\n lang = fname[:-3]\n langcode = d.get('!langcode!', lang if lang != 'default'\n else DEFAULT_LANGUAGE)\n langname = d.get('!langname!', langcode)\n langfile_mtime = ostat(fname_with_path).st_mtime\n langs[lang] = get_lang_struct(lang, langcode,\n langname, langfile_mtime)\n if 'default' not in langs:\n # if default.py is not found,\n # add DEFAULT_LANGUAGE as default language:\n langs['default'] = get_lang_struct('default', DEFAULT_LANGUAGE,\n DEFAULT_LANGUAGE_NAME, 0)\n deflang = langs['default']\n deflangcode = deflang[0]\n if deflangcode not in langs:\n # create language from default.py:\n langs[deflangcode] = deflang[:2] + (0,) + deflang[3:]\n\n return langs\n\n\ndef read_possible_languages(langpath):\n return getcfs('langs:' + langpath, langpath,\n lambda: read_possible_languages_aux(langpath))\n\n\ndef read_plural_dict_aux(filename):\n lang_text = read_locked(filename).replace(b'\\r\\n', b'\\n')\n try:\n return eval(lang_text) or {}\n except Exception:\n e = sys.exc_info()[1]\n status = 'Syntax error in %s (%s)' % (filename, e)\n logging.error(status)\n return 
{'__corrupted__': status}\n\n\ndef read_plural_dict(filename):\n return getcfs('plurals:' + filename, filename,\n lambda: read_plural_dict_aux(filename))\n\n\ndef write_plural_dict(filename, contents):\n if '__corrupted__' in contents:\n return\n fp = None\n try:\n fp = LockedFile(filename, 'w')\n fp.write('#!/usr/bin/env python\\n# -*- coding: utf-8 -*-\\n{\\n# \"singular form (0)\": [\"first plural form (1)\", \"second plural form (2)\", ...],\\n')\n for key in sorted(contents, key=sort_function):\n forms = '[' + ','.join([repr(Utf8(form))\n for form in contents[key]]) + ']'\n fp.write('%s: %s,\\n' % (repr(Utf8(key)), forms))\n fp.write('}\\n')\n except (IOError, OSError):\n if is_writable():\n logging.warning('Unable to write to file %s' % filename)\n return\n finally:\n if fp:\n fp.close()\n\n\ndef sort_function(x):\n return to_unicode(x, 'utf-8').lower()\n\n\ndef write_dict(filename, contents):\n if '__corrupted__' in contents:\n return\n fp = None\n try:\n fp = LockedFile(filename, 'w')\n fp.write('# -*- coding: utf-8 -*-\\n{\\n')\n for key in sorted(contents, key=lambda x: to_unicode(x, 'utf-8').lower()):\n fp.write('%s: %s,\\n' % (repr(Utf8(key)),\n repr(Utf8(contents[key]))))\n fp.write('}\\n')\n except (IOError, OSError):\n if is_writable():\n logging.warning('Unable to write to file %s' % filename)\n return\n finally:\n if fp:\n fp.close()\n\n\nclass lazyT(object):\n \"\"\"\n Never to be called explicitly, returned by\n translator.__call__() or translator.M()\n \"\"\"\n m = s = T = f = t = None\n M = is_copy = False\n\n def __init__(\n self,\n message,\n symbols={},\n T=None,\n filter=None,\n ftag=None,\n M=False\n ):\n if isinstance(message, lazyT):\n self.m = message.m\n self.s = message.s\n self.T = message.T\n self.f = message.f\n self.t = message.t\n self.M = message.M\n self.is_copy = True\n else:\n self.m = message\n self.s = symbols\n self.T = T\n self.f = filter\n self.t = ftag\n self.M = M\n self.is_copy = False\n\n def __repr__(self):\n return \"<lazyT %s>\" % (repr(Utf8(self.m)), )\n\n def __str__(self):\n return str(self.T.apply_filter(self.m, self.s, self.f, self.t) if self.M else\n self.T.translate(self.m, self.s))\n\n def __eq__(self, other):\n return str(self) == str(other)\n\n def __ne__(self, other):\n return str(self) != str(other)\n\n def __add__(self, other):\n return '%s%s' % (self, other)\n\n def __radd__(self, other):\n return '%s%s' % (other, self)\n\n def __mul__(self, other):\n return str(self) * other\n\n def __cmp__(self, other):\n return cmp(str(self), str(other))\n\n def __hash__(self):\n return hash(str(self))\n\n def __getattr__(self, name):\n return getattr(str(self), name)\n\n def __getitem__(self, i):\n return str(self)[i]\n\n def __getslice__(self, i, j):\n return str(self)[i:j]\n\n def __iter__(self):\n for c in str(self):\n yield c\n\n def __len__(self):\n return len(str(self))\n\n def xml(self):\n return str(self) if self.M else xmlescape(str(self), quote=False)\n\n def encode(self, *a, **b):\n if PY2 and a[0] != 'utf8':\n return to_unicode(str(self)).encode(*a, **b)\n else:\n return str(self)\n\n def decode(self, *a, **b):\n if PY2:\n return str(self).decode(*a, **b)\n else:\n return str(self)\n\n def read(self):\n return str(self)\n\n def __mod__(self, symbols):\n if self.is_copy:\n return lazyT(self)\n return lazyT(self.m, symbols, self.T, self.f, self.t, self.M)\n\n\ndef pickle_lazyT(c):\n return str, (to_native(c.xml()),)\n\ncopyreg.pickle(lazyT, pickle_lazyT)\n\n\nclass TranslatorFactory(object):\n \"\"\"\n This class is 
instantiated by gluon.compileapp.build_environment\n as the T object\n\n Example:\n\n T.force(None) # turns off translation\n T.force('fr, it') # forces web2py to translate using fr.py or it.py\n\n T(\"Hello World\") # translates \"Hello World\" using the selected file\n\n Note:\n - there is no need to force since, by default, T uses\n http_accept_language to determine a translation file.\n - en and en-en are considered different languages!\n - if language xx-yy is not found force() probes other similar languages\n using such algorithm: `xx-yy.py -> xx.py -> xx-yy*.py -> xx*.py`\n \"\"\"\n\n def __init__(self, langpath, http_accept_language):\n self.langpath = langpath\n self.http_accept_language = http_accept_language\n # filled in self.force():\n # ------------------------\n # self.cache\n # self.accepted_language\n # self.language_file\n # self.plural_language\n # self.nplurals\n # self.get_plural_id\n # self.construct_plural_form\n # self.plural_file\n # self.plural_dict\n # self.requested_languages\n # ----------------------------------------\n # filled in self.set_current_languages():\n # ----------------------------------------\n # self.default_language_file\n # self.default_t\n # self.current_languages\n self.set_current_languages()\n self.lazy = True\n self.otherTs = {}\n self.filter = markmin\n self.ftag = 'markmin'\n self.ns = None\n self.is_writable = True\n\n def get_possible_languages_info(self, lang=None):\n \"\"\"\n Returns info for selected language or dictionary with all\n possible languages info from `APP/languages/*.py`\n It Returns:\n\n - a tuple containing::\n\n langcode, langname, langfile_mtime,\n pluraldict_fname, pluraldict_mtime,\n prules_langcode, nplurals,\n get_plural_id, construct_plural_form\n\n or None\n\n - if *lang* is NOT defined a dictionary with all possible\n languages::\n\n { langcode(from filename):\n ( langcode, # language code from !langcode!\n langname,\n # language name in national spelling from !langname!\n langfile_mtime, # m_time of language file\n pluraldict_fname,# name of plural dictionary file or None (when default.py is not exist)\n pluraldict_mtime,# m_time of plural dictionary file or 0 if file is not exist\n prules_langcode, # code of plural rules language or 'default'\n nplurals, # nplurals for current language\n get_plural_id, # get_plural_id() for current language\n construct_plural_form) # construct_plural_form() for current language\n }\n\n Args:\n lang (str): language\n\n \"\"\"\n info = read_possible_languages(self.langpath)\n if lang:\n info = info.get(lang)\n return info\n\n def get_possible_languages(self):\n \"\"\" Gets list of all possible languages for current application \"\"\"\n return list(set(self.current_languages +\n [lang for lang in read_possible_languages(self.langpath)\n if lang != 'default']))\n\n def set_current_languages(self, *languages):\n \"\"\"\n Sets current AKA \"default\" languages\n Setting one of this languages makes the force() function to turn\n translation off\n \"\"\"\n if len(languages) == 1 and isinstance(languages[0], (tuple, list)):\n languages = languages[0]\n if not languages or languages[0] is None:\n # set default language from default.py/DEFAULT_LANGUAGE\n pl_info = self.get_possible_languages_info('default')\n if pl_info[2] == 0: # langfile_mtime\n # if languages/default.py is not found\n self.default_language_file = self.langpath\n self.default_t = {}\n self.current_languages = [DEFAULT_LANGUAGE]\n else:\n self.default_language_file = pjoin(self.langpath,\n 'default.py')\n 
self.default_t = read_dict(self.default_language_file)\n self.current_languages = [pl_info[0]] # !langcode!\n else:\n self.current_languages = list(languages)\n self.force(self.http_accept_language)\n\n def plural(self, word, n):\n \"\"\"\n Gets plural form of word for number *n*\n invoked from T()/T.M() in `%%{}` tag\n\n Note:\n \"word\" MUST be defined in current language (T.accepted_language)\n\n Args:\n word (str): word in singular\n n (numeric): number plural form created for\n\n Returns:\n word (str): word in appropriate singular/plural form\n\n \"\"\"\n if int(n) == 1:\n return word\n elif word:\n id = self.get_plural_id(abs(int(n)))\n # id = 0 singular form\n # id = 1 first plural form\n # id = 2 second plural form\n # etc.\n if id != 0:\n forms = self.plural_dict.get(word, [])\n if len(forms) >= id:\n # have this plural form:\n return forms[id - 1]\n else:\n # guessing this plural form\n forms += [''] * (self.nplurals - len(forms) - 1)\n form = self.construct_plural_form(word, id)\n forms[id - 1] = form\n self.plural_dict[word] = forms\n if self.is_writable and is_writable() and self.plural_file:\n write_plural_dict(self.plural_file,\n self.plural_dict)\n return form\n return word\n\n def force(self, *languages):\n \"\"\"\n Selects language(s) for translation\n\n if a list of languages is passed as a parameter,\n the first language from this list that matches the ones\n from the possible_languages dictionary will be\n selected\n\n default language will be selected if none\n of them matches possible_languages.\n \"\"\"\n pl_info = read_possible_languages(self.langpath)\n def set_plural(language):\n \"\"\"\n initialize plural forms subsystem\n \"\"\"\n lang_info = pl_info.get(language)\n if lang_info:\n (pname,\n pmtime,\n self.plural_language,\n self.nplurals,\n self.get_plural_id,\n self.construct_plural_form\n ) = lang_info[3:]\n pdict = {}\n if pname:\n pname = pjoin(self.langpath, pname)\n if pmtime != 0:\n pdict = read_plural_dict(pname)\n self.plural_file = pname\n self.plural_dict = pdict\n else:\n self.plural_language = 'default'\n self.nplurals = DEFAULT_NPLURALS\n self.get_plural_id = DEFAULT_GET_PLURAL_ID\n self.construct_plural_form = DEFAULT_CONSTRUCT_PLURAL_FORM\n self.plural_file = None\n self.plural_dict = {}\n language = ''\n if len(languages) == 1 and isinstance(languages[0], str):\n languages = regex_language.findall(languages[0].lower())\n elif not languages or languages[0] is None:\n languages = []\n self.requested_languages = languages = tuple(languages)\n if languages:\n all_languages = set(lang for lang in pl_info\n if lang != 'default') \\\n | set(self.current_languages)\n for lang in languages:\n # compare \"aa-bb\" | \"aa\" from *language* parameter\n # with strings from langlist using such alghorythm:\n # xx-yy.py -> xx.py -> xx*.py\n lang5 = lang[:5]\n if lang5 in all_languages:\n language = lang5\n else:\n lang2 = lang[:2]\n if len(lang5) > 2 and lang2 in all_languages:\n language = lang2\n else:\n for l in all_languages:\n if l[:2] == lang2:\n language = l\n if language:\n if language in self.current_languages:\n break\n self.language_file = pjoin(self.langpath, language + '.py')\n self.t = read_dict(self.language_file)\n self.cache = global_language_cache.setdefault(\n self.language_file,\n ({}, RLock()))\n set_plural(language)\n self.accepted_language = language\n return languages\n self.accepted_language = language\n if not language:\n if self.current_languages:\n self.accepted_language = self.current_languages[0]\n else:\n self.accepted_language = 
DEFAULT_LANGUAGE\n self.language_file = self.default_language_file\n self.cache = global_language_cache.setdefault(self.language_file,\n ({}, RLock()))\n self.t = self.default_t\n set_plural(self.accepted_language)\n return languages\n\n def __call__(self, message, symbols={}, language=None, lazy=None, ns=None):\n \"\"\"\n get cached translated plain text message with inserted parameters(symbols)\n if lazy==True lazyT object is returned\n \"\"\"\n if lazy is None:\n lazy = self.lazy\n if not language and not ns:\n if lazy:\n return lazyT(message, symbols, self)\n else:\n return self.translate(message, symbols)\n else:\n if ns:\n if ns != self.ns:\n self.langpath = os.path.join(self.langpath, ns)\n if self.ns is None:\n self.ns = ns\n otherT = self.__get_otherT__(language, ns)\n return otherT(message, symbols, lazy=lazy)\n\n def __get_otherT__(self, language=None, namespace=None):\n if not language and not namespace:\n raise Exception('Incorrect parameters')\n\n if namespace:\n if language:\n index = '%s/%s' % (namespace, language)\n else:\n index = namespace\n else:\n index = language\n try:\n otherT = self.otherTs[index]\n except KeyError:\n otherT = self.otherTs[index] = TranslatorFactory(self.langpath,\n self.http_accept_language)\n if language:\n otherT.force(language)\n return otherT\n\n def apply_filter(self, message, symbols={}, filter=None, ftag=None):\n def get_tr(message, prefix, filter):\n s = self.get_t(message, prefix)\n return filter(s) if filter else self.filter(s)\n if filter:\n prefix = '@' + (ftag or 'userdef') + '\\x01'\n else:\n prefix = '@' + self.ftag + '\\x01'\n message = get_from_cache(\n self.cache, prefix + message,\n lambda: get_tr(message, prefix, filter))\n if symbols or symbols == 0 or symbols == \"\":\n if isinstance(symbols, dict):\n symbols.update(\n (key, xmlescape(value).translate(ttab_in))\n for key, value in iteritems(symbols)\n if not isinstance(value, NUMBERS))\n else:\n if not isinstance(symbols, tuple):\n symbols = (symbols,)\n symbols = tuple(\n value if isinstance(value, NUMBERS)\n else to_native(xmlescape(value)).translate(ttab_in)\n for value in symbols)\n message = self.params_substitution(message, symbols)\n return to_native(XML(message.translate(ttab_out)).xml())\n\n def M(self, message, symbols={}, language=None,\n lazy=None, filter=None, ftag=None, ns=None):\n \"\"\"\n Gets cached translated markmin-message with inserted parametes\n if lazy==True lazyT object is returned\n \"\"\"\n if lazy is None:\n lazy = self.lazy\n if not language and not ns:\n if lazy:\n return lazyT(message, symbols, self, filter, ftag, True)\n else:\n return self.apply_filter(message, symbols, filter, ftag)\n else:\n if ns:\n self.langpath = os.path.join(self.langpath, ns)\n otherT = self.__get_otherT__(language, ns)\n return otherT.M(message, symbols, lazy=lazy)\n\n def get_t(self, message, prefix=''):\n \"\"\"\n Use ## to add a comment into a translation string\n the comment can be useful do discriminate different possible\n translations for the same string (for example different locations):\n\n T(' hello world ') -> ' hello world '\n T(' hello world ## token') -> ' hello world '\n T('hello ## world## token') -> 'hello ## world'\n\n the ## notation is ignored in multiline strings and strings that\n start with ##. 
This is needed to allow markmin syntax to be translated\n \"\"\"\n message = to_native(message, 'utf8')\n prefix = to_native(prefix, 'utf8')\n key = prefix + message\n mt = self.t.get(key, None)\n if mt is not None:\n return mt\n # we did not find a translation\n if message.find('##') > 0:\n pass\n if message.find('##') > 0 and not '\\n' in message:\n # remove comments\n message = message.rsplit('##', 1)[0]\n # guess translation same as original\n self.t[key] = mt = self.default_t.get(key, message)\n # update language file for latter translation\n if self.is_writable and is_writable() and \\\n self.language_file != self.default_language_file:\n write_dict(self.language_file, self.t)\n return regex_backslash.sub(\n lambda m: m.group(1).translate(ttab_in), to_native(mt))\n\n def params_substitution(self, message, symbols):\n \"\"\"\n Substitutes parameters from symbols into message using %.\n also parse `%%{}` placeholders for plural-forms processing.\n\n Returns:\n string with parameters\n\n Note:\n *symbols* MUST BE OR tuple OR dict of parameters!\n \"\"\"\n def sub_plural(m):\n \"\"\"String in `%{}` is transformed by this rules:\n If string starts with `!` or `?` such transformations\n take place:\n\n \"!string of words\" -> \"String of word\" (Capitalize)\n \"!!string of words\" -> \"String Of Word\" (Title)\n \"!!!string of words\" -> \"STRING OF WORD\" (Upper)\n\n \"?word1?number\" -> \"word1\" or \"number\"\n (return word1 if number == 1,\n return number otherwise)\n \"??number\" or \"?number\" -> \"\" or \"number\"\n (as above with word1 = \"\")\n\n \"?word1?number?word0\" -> \"word1\" or \"number\" or \"word0\"\n (return word1 if number == 1,\n return word0 if number == 0,\n return number otherwise)\n \"?word1?number?\" -> \"word1\" or \"number\" or \"\"\n (as above with word0 = \"\")\n \"??number?word0\" -> \"number\" or \"word0\"\n (as above with word1 = \"\")\n \"??number?\" -> \"number\" or \"\"\n (as above with word1 = word0 = \"\")\n\n \"?word1?word[number]\" -> \"word1\" or \"word\"\n (return word1 if symbols[number] == 1,\n return word otherwise)\n \"?word1?[number]\" -> \"\" or \"word1\"\n (as above with word = \"\")\n \"??word[number]\" or \"?word[number]\" -> \"\" or \"word\"\n (as above with word1 = \"\")\n\n \"?word1?word?word0[number]\" -> \"word1\" or \"word\" or \"word0\"\n (return word1 if symbols[number] == 1,\n return word0 if symbols[number] == 0,\n return word otherwise)\n \"?word1?word?[number]\" -> \"word1\" or \"word\" or \"\"\n (as above with word0 = \"\")\n \"??word?word0[number]\" -> \"\" or \"word\" or \"word0\"\n (as above with word1 = \"\")\n \"??word?[number]\" -> \"\" or \"word\"\n (as above with word1 = word0 = \"\")\n\n Other strings, (those not starting with `!` or `?`)\n are processed by self.plural\n \"\"\"\n def sub_tuple(m):\n \"\"\" word\n !word, !!word, !!!word\n ?word1?number\n ??number, ?number\n ?word1?number?word0\n ?word1?number?\n ??number?word0\n ??number?\n\n word[number]\n !word[number], !!word[number], !!!word[number]\n ?word1?word[number]\n ?word1?[number]\n ??word[number], ?word[number]\n ?word1?word?word0[number]\n ?word1?word?[number]\n ??word?word0[number]\n ??word?[number]\n \"\"\"\n w, i = m.group('w', 'i')\n c = w[0]\n if c not in '!?':\n return self.plural(w, symbols[int(i or 0)])\n elif c == '?':\n (p1, sep, p2) = w[1:].partition(\"?\")\n part1 = p1 if sep else \"\"\n (part2, sep, part3) = (p2 if sep else p1).partition(\"?\")\n if not sep:\n part3 = part2\n if i is None:\n # ?[word]?number[?number] or ?number\n if not 
part2:\n return m.group(0)\n num = int(part2)\n else:\n # ?[word1]?word[?word0][number]\n num = int(symbols[int(i or 0)])\n return part1 if num == 1 else part3 if num == 0 else part2\n elif w.startswith('!!!'):\n word = w[3:]\n fun = upper_fun\n elif w.startswith('!!'):\n word = w[2:]\n fun = title_fun\n else:\n word = w[1:]\n fun = cap_fun\n if i is not None:\n return to_native(fun(self.plural(word, symbols[int(i)])))\n return to_native(fun(word))\n\n def sub_dict(m):\n \"\"\" word(key or num)\n !word(key or num), !!word(key or num), !!!word(key or num)\n ?word1?word(key or num)\n ??word(key or num), ?word(key or num)\n ?word1?word?word0(key or num)\n ?word1?word?(key or num)\n ??word?word0(key or num)\n ?word1?word?(key or num)\n ??word?(key or num), ?word?(key or num)\n \"\"\"\n w, n = m.group('w', 'n')\n c = w[0]\n n = int(n) if n.isdigit() else symbols[n]\n if c not in '!?':\n return self.plural(w, n)\n elif c == '?':\n # ?[word1]?word[?word0](key or num), ?[word1]?word(key or num) or ?word(key or num)\n (p1, sep, p2) = w[1:].partition(\"?\")\n part1 = p1 if sep else \"\"\n (part2, sep, part3) = (p2 if sep else p1).partition(\"?\")\n if not sep:\n part3 = part2\n num = int(n)\n return part1 if num == 1 else part3 if num == 0 else part2\n elif w.startswith('!!!'):\n word = w[3:]\n fun = upper_fun\n elif w.startswith('!!'):\n word = w[2:]\n fun = title_fun\n else:\n word = w[1:]\n fun = cap_fun\n s = fun(self.plural(word, n))\n return s if PY2 else to_unicode(s)\n\n s = m.group(1)\n part = regex_plural_tuple.sub(sub_tuple, s)\n if part == s:\n part = regex_plural_dict.sub(sub_dict, s)\n if part == s:\n return m.group(0)\n return part\n message = message % symbols\n message = regex_plural.sub(sub_plural, message)\n return message\n\n def translate(self, message, symbols):\n \"\"\"\n Gets cached translated message with inserted parameters(symbols)\n \"\"\"\n message = get_from_cache(self.cache, message,\n lambda: self.get_t(message))\n if symbols or symbols == 0 or symbols == \"\":\n if isinstance(symbols, dict):\n symbols.update(\n (key, str(value).translate(ttab_in))\n for key, value in iteritems(symbols)\n if not isinstance(value, NUMBERS))\n else:\n if not isinstance(symbols, tuple):\n symbols = (symbols,)\n symbols = tuple(\n value if isinstance(value, NUMBERS)\n else str(value).translate(ttab_in)\n for value in symbols)\n message = self.params_substitution(message, symbols)\n return message.translate(ttab_out)\n\n\ndef findT(path, language=DEFAULT_LANGUAGE):\n \"\"\"\n Note:\n Must be run by the admin app\n \"\"\"\n from gluon.tools import Auth, Crud\n lang_file = pjoin(path, 'languages', language + '.py')\n sentences = read_dict(lang_file)\n mp = pjoin(path, 'models')\n cp = pjoin(path, 'controllers')\n vp = pjoin(path, 'views')\n mop = pjoin(path, 'modules')\n def add_message(message):\n if not message.startswith('#') and not '\\n' in message:\n tokens = message.rsplit('##', 1)\n else:\n # this allows markmin syntax in translations\n tokens = [message]\n if len(tokens) == 2:\n message = tokens[0].strip() + '##' + tokens[1].strip()\n if message and not message in sentences:\n sentences[message] = message.replace(\"@markmin\\x01\", \"\")\n for filename in \\\n listdir(mp, '^.+\\.py$', 0) + listdir(cp, '^.+\\.py$', 0)\\\n + listdir(vp, '^.+\\.html$', 0) + listdir(mop, '^.+\\.py$', 0):\n data = to_native(read_locked(filename))\n items = regex_translate.findall(data)\n for x in regex_translate_m.findall(data):\n if x[0:3] in [\"'''\", '\"\"\"']: items.append(\"%s@markmin\\x01%s\" 
%(x[0:3], x[3:]))\n else: items.append(\"%s@markmin\\x01%s\" %(x[0], x[1:]))\n for item in items:\n try:\n message = safe_eval(item)\n except:\n continue # silently ignore inproperly formatted strings\n add_message(message)\n gluon_msg = [Auth.default_messages, Crud.default_messages]\n for item in [x for m in gluon_msg for x in m.values() if x is not None]:\n add_message(item)\n if not '!langcode!' in sentences:\n sentences['!langcode!'] = (\n DEFAULT_LANGUAGE if language in ('default', DEFAULT_LANGUAGE) else language)\n if not '!langname!' in sentences:\n sentences['!langname!'] = (\n DEFAULT_LANGUAGE_NAME if language in ('default', DEFAULT_LANGUAGE)\n else sentences['!langcode!'])\n write_dict(lang_file, sentences)\n\n\ndef update_all_languages(application_path):\n \"\"\"\n Note:\n Must be run by the admin app\n \"\"\"\n path = pjoin(application_path, 'languages/')\n for language in oslistdir(path):\n if regex_langfile.match(language):\n findT(application_path, language[:-3])\n\n\ndef update_from_langfile(target, source, force_update=False):\n \"\"\"this will update untranslated messages in target from source (where both are language files)\n this can be used as first step when creating language file for new but very similar language\n or if you want update your app from welcome app of newer web2py version\n or in non-standard scenarios when you work on target and from any reason you have partial translation in source\n Args:\n force_update: if False existing translations remain unchanged, if True existing translations will update from source\n \"\"\"\n src = read_dict(source)\n sentences = read_dict(target)\n for key in sentences:\n val = sentences[key]\n if not val or val == key or force_update:\n new_val = src.get(key)\n if new_val and new_val != val:\n sentences[key] = new_val\n write_dict(target, sentences)\n\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod()\n", "path": "gluon/languages.py"}]} |
gh_patches_debug_41 | rasdani/github-patches | git_diff | liqd__a4-opin-614 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
project page header: more vertical space for byline
The byline in the project page's header area, which shows the project's organization, is vertically too close to the headline of the project.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/organisations/views.py`
Content:
```
1 from django.views import generic
2
3 from . import models
4
5
6 class OrganisationDetailView(generic.DetailView):
7 model = models.Organisation
8
9 def visible_projects(self):
10 if self.request.user in self.object.initiators.all():
11 return self.object.project_set.all()
12 else:
13 return self.object.project_set.filter(is_draft=False)
14
15
16 class OrganisationListView(generic.ListView):
17 model = models.Organisation
18 paginate_by = 10
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/euth/organisations/views.py b/euth/organisations/views.py
--- a/euth/organisations/views.py
+++ b/euth/organisations/views.py
@@ -15,4 +15,4 @@
class OrganisationListView(generic.ListView):
model = models.Organisation
- paginate_by = 10
+ paginate_by = 12
| {"golden_diff": "diff --git a/euth/organisations/views.py b/euth/organisations/views.py\n--- a/euth/organisations/views.py\n+++ b/euth/organisations/views.py\n@@ -15,4 +15,4 @@\n \n class OrganisationListView(generic.ListView):\n model = models.Organisation\n- paginate_by = 10\n+ paginate_by = 12\n", "issue": "project page header: more vertical space for byline\nThe byline in the project page\u2019s header area, which show\u2019s the project\u2019s organization is vertically too close to the headline of the project. \r\n\r\n\n", "before_files": [{"content": "from django.views import generic\n\nfrom . import models\n\n\nclass OrganisationDetailView(generic.DetailView):\n model = models.Organisation\n\n def visible_projects(self):\n if self.request.user in self.object.initiators.all():\n return self.object.project_set.all()\n else:\n return self.object.project_set.filter(is_draft=False)\n\n\nclass OrganisationListView(generic.ListView):\n model = models.Organisation\n paginate_by = 10\n", "path": "euth/organisations/views.py"}], "after_files": [{"content": "from django.views import generic\n\nfrom . import models\n\n\nclass OrganisationDetailView(generic.DetailView):\n model = models.Organisation\n\n def visible_projects(self):\n if self.request.user in self.object.initiators.all():\n return self.object.project_set.all()\n else:\n return self.object.project_set.filter(is_draft=False)\n\n\nclass OrganisationListView(generic.ListView):\n model = models.Organisation\n paginate_by = 12\n", "path": "euth/organisations/views.py"}]} |
gh_patches_debug_42 | rasdani/github-patches | git_diff | joke2k__faker-1043 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BBAN for en_GB too short
* Faker version: v2.0.3
* OS: linux
The numeric part of the en_GB BBAN needs to be 14 digits long, but it currently returns only 13, failing further validation.
### Steps to reproduce
Invoke `fake.iban()` or `fake.bban()` with the en_GB locale; an IBAN or BBAN with 1 digit missing is returned.
### Expected behavior
GB ibans should be 22 chars long: https://www.xe.com/ibancalculator/sample/?ibancountry=united kingdom
--- END ISSUE ---
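As a quick aside, a minimal reproduction sketch of the length check described above (assuming the `faker` package is installed; `Faker('en_GB')` and the bank provider's `iban()` are public Faker API, and the expected length of 22 comes from the IBAN reference cited in the issue):

```python
from faker import Faker

fake = Faker("en_GB")
iban = fake.iban()

# A GB IBAN is "GB" + 2 check digits + 4 bank letters + 14 digits
# (sort code + account number) = 22 characters in total.
# With the 13-digit bban_format shown in the file below, the result is one short.
print(iban, len(iban))
assert len(iban) == 22
```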
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/bank/en_GB/__init__.py`
Content:
```
1 from .. import Provider as BankProvider
2
3
4 class Provider(BankProvider):
5 bban_format = '????#############'
6 country_code = 'GB'
7
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/bank/en_GB/__init__.py b/faker/providers/bank/en_GB/__init__.py
--- a/faker/providers/bank/en_GB/__init__.py
+++ b/faker/providers/bank/en_GB/__init__.py
@@ -2,5 +2,5 @@
class Provider(BankProvider):
- bban_format = '????#############'
+ bban_format = '????##############'
country_code = 'GB'
| {"golden_diff": "diff --git a/faker/providers/bank/en_GB/__init__.py b/faker/providers/bank/en_GB/__init__.py\n--- a/faker/providers/bank/en_GB/__init__.py\n+++ b/faker/providers/bank/en_GB/__init__.py\n@@ -2,5 +2,5 @@\n \n \n class Provider(BankProvider):\n- bban_format = '????#############'\n+ bban_format = '????##############'\n country_code = 'GB'\n", "issue": "BBAN for en_GB too short\n* Faker version: v2.0.3\r\n* OS: linux\r\n\r\nNumeric part of the en_GB BBAN needs to be 14 digits long, it currently only returns 13, failing further validation.\r\n\r\n### Steps to reproduce\r\n\r\nInvoke `fake.iban()` or `fake.bban()` with the en_GB locale, an IBAN or BBAN with 1 digit missing is returned.\r\n\r\n### Expected behavior\r\n\r\nGB ibans should be 22 chars long: https://www.xe.com/ibancalculator/sample/?ibancountry=united kingdom\r\n\r\n\n", "before_files": [{"content": "from .. import Provider as BankProvider\n\n\nclass Provider(BankProvider):\n bban_format = '????#############'\n country_code = 'GB'\n", "path": "faker/providers/bank/en_GB/__init__.py"}], "after_files": [{"content": "from .. import Provider as BankProvider\n\n\nclass Provider(BankProvider):\n bban_format = '????##############'\n country_code = 'GB'\n", "path": "faker/providers/bank/en_GB/__init__.py"}]} |
gh_patches_debug_43 | rasdani/github-patches | git_diff | wemake-services__wemake-python-styleguide-188 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feature: forbid `credits()` builtin function
We should add `credits()` as a forbidden function:
```
» python -c 'credits()'
Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands
for supporting Python development. See www.python.org for more information.
```
We need to add it here: https://github.com/wemake-services/wemake-python-styleguide/blob/3cedeb3c13ab6b16980a39edf657ab93d4c1f19e/wemake_python_styleguide/constants.py#L36-L38
--- END ISSUE ---
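As background, here is a generic sketch of how calls to blacklisted builtins can be flagged with the standard `ast` module; this is only an illustration, not the actual wemake-python-styleguide visitor:

```python
import ast

BAD_FUNCTIONS = frozenset({'eval', 'exec', 'copyright', 'help', 'credits'})


class BadFunctionVisitor(ast.NodeVisitor):
    def visit_Call(self, node):
        # Flag plain-name calls such as `credits()`; attribute calls are ignored here.
        if isinstance(node.func, ast.Name) and node.func.id in BAD_FUNCTIONS:
            print('line {0}: forbidden call to {1}()'.format(node.lineno, node.func.id))
        self.generic_visit(node)


BadFunctionVisitor().visit(ast.parse("credits()\n"))
```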
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wemake_python_styleguide/constants.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """
4 This module contains list of white- and black-listed ``python`` members.
5
6 It contains lists of keywords and built-in functions we discourage to use.
7 It also contains some exceptions that we allow to use in our codebase.
8 """
9
10 import re
11 import sys
12
13 # TODO: use consistent `.` for the `#:` comments
14 # TODO: use consistent names: `*_BLACKLIST` and `*_WHITELIST`
15
16 #: List of functions we forbid to use.
17 BAD_FUNCTIONS = frozenset((
18 # Code generation:
19 'eval',
20 'exec',
21 'compile',
22
23 # Magic:
24 'globals',
25 'locals',
26 'vars',
27 'dir',
28
29 # IO:
30 'input',
31
32 # Attribute access:
33 'hasattr',
34 'delattr',
35
36 # Misc:
37 'copyright',
38 'help',
39
40 # Dynamic imports:
41 '__import__',
42
43 # OOP:
44 'staticmethod',
45 ))
46
47 #: List of module metadata we forbid to use.
48 BAD_MODULE_METADATA_VARIABLES = frozenset((
49 '__author__',
50 '__all__',
51 '__version__',
52 '__about__',
53 ))
54
55
56 _BAD_VARIABLE_NAMES = [
57 # Meaningless words:
58 'data',
59 'result',
60 'results',
61 'item',
62 'items',
63 'value',
64 'values',
65 'val',
66 'vals',
67 'var',
68 'vars',
69 'content',
70 'contents',
71 'info',
72 'handle',
73 'handler',
74 'file',
75 'obj',
76 'objects',
77 'objs',
78 'some',
79
80 # Confusables:
81 'no',
82 'true',
83 'false',
84
85 # Names from examples:
86 'foo',
87 'bar',
88 'baz',
89 ]
90
91 if sys.version_info < (3, 7): # pragma: no cover
92 _BAD_VARIABLE_NAMES.extend([
93 # Compatibility with `python3.7`:
94 'async',
95 'await',
96 ])
97
98 #: List of variable names we forbid to use.
99 BAD_VARIABLE_NAMES = frozenset(_BAD_VARIABLE_NAMES)
100
101 #: List of magic methods that are forbiden to use.
102 BAD_MAGIC_METHODS = frozenset((
103 # Since we don't use `del`:
104 '__del__',
105 '__delitem__',
106 '__delete__',
107
108 '__dir__', # since we don't use `dir()`
109 '__delattr__', # since we don't use `delattr()`
110 ))
111
112 #: List of nested classes' names we allow to use.
113 NESTED_CLASSES_WHITELIST = frozenset((
114 'Meta', # django forms, models, drf, etc
115 'Params', # factoryboy specific
116 ))
117
118 #: List of nested functions' names we allow to use.
119 NESTED_FUNCTIONS_WHITELIST = frozenset((
120 'decorator',
121 'factory',
122 ))
123
124 #: List of allowed ``__future__`` imports.
125 FUTURE_IMPORTS_WHITELIST = frozenset((
126 'annotations',
127 'generator_stop',
128 ))
129
130 #: List of blacklisted module names:
131 BAD_MODULE_NAMES = frozenset((
132 'util',
133 'utils',
134 'utilities',
135 'helpers',
136 ))
137
138 #: List of allowed module magic names:
139 MAGIC_MODULE_NAMES_WHITELIST = frozenset((
140 '__init__',
141 '__main__',
142 ))
143
144 #: Regex pattern to name modules:
145 MODULE_NAME_PATTERN = re.compile(r'^_?_?[a-z][a-z\d_]+[a-z\d](__)?$')
146
147 #: Common numbers that are allowed to be used without being called "magic":
148 MAGIC_NUMBERS_WHITELIST = frozenset((
149 0.5,
150 100,
151 1000,
152 1024, # bytes
153 24, # hours
154 60, # seconds, minutes
155 ))
156
157
158 # Internal variables
159 # They are not publicly documented since they are not used by the end user.
160
161 # This variable is used as a default filename, when it is not passed by flake8:
162 STDIN = 'stdin'
163
164 # TODO: rename to `INIT_MODULE`
165 # This variable is used to specify as a placeholder for `__init__.py`:
166 INIT = '__init__'
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wemake_python_styleguide/constants.py b/wemake_python_styleguide/constants.py
--- a/wemake_python_styleguide/constants.py
+++ b/wemake_python_styleguide/constants.py
@@ -36,6 +36,7 @@
# Misc:
'copyright',
'help',
+ 'credits',
# Dynamic imports:
'__import__',
| {"golden_diff": "diff --git a/wemake_python_styleguide/constants.py b/wemake_python_styleguide/constants.py\n--- a/wemake_python_styleguide/constants.py\n+++ b/wemake_python_styleguide/constants.py\n@@ -36,6 +36,7 @@\n # Misc:\n 'copyright',\n 'help',\n+ 'credits',\n \n # Dynamic imports:\n '__import__',\n", "issue": "Feature: forbid `credits()` builtin function\nWe should add `credits()` as a forbidden function:\r\n\r\n```\r\n\u00bb python -c 'credits()'\r\n Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands\r\n for supporting Python development. See www.python.org for more information.\r\n\r\n```\r\n\r\nWe need to add it here: https://github.com/wemake-services/wemake-python-styleguide/blob/3cedeb3c13ab6b16980a39edf657ab93d4c1f19e/wemake_python_styleguide/constants.py#L36-L38\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThis module contains list of white- and black-listed ``python`` members.\n\nIt contains lists of keywords and built-in functions we discourage to use.\nIt also contains some exceptions that we allow to use in our codebase.\n\"\"\"\n\nimport re\nimport sys\n\n# TODO: use consistent `.` for the `#:` comments\n# TODO: use consistent names: `*_BLACKLIST` and `*_WHITELIST`\n\n#: List of functions we forbid to use.\nBAD_FUNCTIONS = frozenset((\n # Code generation:\n 'eval',\n 'exec',\n 'compile',\n\n # Magic:\n 'globals',\n 'locals',\n 'vars',\n 'dir',\n\n # IO:\n 'input',\n\n # Attribute access:\n 'hasattr',\n 'delattr',\n\n # Misc:\n 'copyright',\n 'help',\n\n # Dynamic imports:\n '__import__',\n\n # OOP:\n 'staticmethod',\n))\n\n#: List of module metadata we forbid to use.\nBAD_MODULE_METADATA_VARIABLES = frozenset((\n '__author__',\n '__all__',\n '__version__',\n '__about__',\n))\n\n\n_BAD_VARIABLE_NAMES = [\n # Meaningless words:\n 'data',\n 'result',\n 'results',\n 'item',\n 'items',\n 'value',\n 'values',\n 'val',\n 'vals',\n 'var',\n 'vars',\n 'content',\n 'contents',\n 'info',\n 'handle',\n 'handler',\n 'file',\n 'obj',\n 'objects',\n 'objs',\n 'some',\n\n # Confusables:\n 'no',\n 'true',\n 'false',\n\n # Names from examples:\n 'foo',\n 'bar',\n 'baz',\n]\n\nif sys.version_info < (3, 7): # pragma: no cover\n _BAD_VARIABLE_NAMES.extend([\n # Compatibility with `python3.7`:\n 'async',\n 'await',\n ])\n\n#: List of variable names we forbid to use.\nBAD_VARIABLE_NAMES = frozenset(_BAD_VARIABLE_NAMES)\n\n#: List of magic methods that are forbiden to use.\nBAD_MAGIC_METHODS = frozenset((\n # Since we don't use `del`:\n '__del__',\n '__delitem__',\n '__delete__',\n\n '__dir__', # since we don't use `dir()`\n '__delattr__', # since we don't use `delattr()`\n))\n\n#: List of nested classes' names we allow to use.\nNESTED_CLASSES_WHITELIST = frozenset((\n 'Meta', # django forms, models, drf, etc\n 'Params', # factoryboy specific\n))\n\n#: List of nested functions' names we allow to use.\nNESTED_FUNCTIONS_WHITELIST = frozenset((\n 'decorator',\n 'factory',\n))\n\n#: List of allowed ``__future__`` imports.\nFUTURE_IMPORTS_WHITELIST = frozenset((\n 'annotations',\n 'generator_stop',\n))\n\n#: List of blacklisted module names:\nBAD_MODULE_NAMES = frozenset((\n 'util',\n 'utils',\n 'utilities',\n 'helpers',\n))\n\n#: List of allowed module magic names:\nMAGIC_MODULE_NAMES_WHITELIST = frozenset((\n '__init__',\n '__main__',\n))\n\n#: Regex pattern to name modules:\nMODULE_NAME_PATTERN = re.compile(r'^_?_?[a-z][a-z\\d_]+[a-z\\d](__)?$')\n\n#: Common numbers that are allowed to be used without being called 
\"magic\":\nMAGIC_NUMBERS_WHITELIST = frozenset((\n 0.5,\n 100,\n 1000,\n 1024, # bytes\n 24, # hours\n 60, # seconds, minutes\n))\n\n\n# Internal variables\n# They are not publicly documented since they are not used by the end user.\n\n# This variable is used as a default filename, when it is not passed by flake8:\nSTDIN = 'stdin'\n\n# TODO: rename to `INIT_MODULE`\n# This variable is used to specify as a placeholder for `__init__.py`:\nINIT = '__init__'\n", "path": "wemake_python_styleguide/constants.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThis module contains list of white- and black-listed ``python`` members.\n\nIt contains lists of keywords and built-in functions we discourage to use.\nIt also contains some exceptions that we allow to use in our codebase.\n\"\"\"\n\nimport re\nimport sys\n\n# TODO: use consistent `.` for the `#:` comments\n# TODO: use consistent names: `*_BLACKLIST` and `*_WHITELIST`\n\n#: List of functions we forbid to use.\nBAD_FUNCTIONS = frozenset((\n # Code generation:\n 'eval',\n 'exec',\n 'compile',\n\n # Magic:\n 'globals',\n 'locals',\n 'vars',\n 'dir',\n\n # IO:\n 'input',\n\n # Attribute access:\n 'hasattr',\n 'delattr',\n\n # Misc:\n 'copyright',\n 'help',\n 'credits',\n\n # Dynamic imports:\n '__import__',\n\n # OOP:\n 'staticmethod',\n))\n\n#: List of module metadata we forbid to use.\nBAD_MODULE_METADATA_VARIABLES = frozenset((\n '__author__',\n '__all__',\n '__version__',\n '__about__',\n))\n\n\n_BAD_VARIABLE_NAMES = [\n # Meaningless words:\n 'data',\n 'result',\n 'results',\n 'item',\n 'items',\n 'value',\n 'values',\n 'val',\n 'vals',\n 'var',\n 'vars',\n 'content',\n 'contents',\n 'info',\n 'handle',\n 'handler',\n 'file',\n 'obj',\n 'objects',\n 'objs',\n 'some',\n\n # Confusables:\n 'no',\n 'true',\n 'false',\n\n # Names from examples:\n 'foo',\n 'bar',\n 'baz',\n]\n\nif sys.version_info < (3, 7): # pragma: no cover\n _BAD_VARIABLE_NAMES.extend([\n # Compatibility with `python3.7`:\n 'async',\n 'await',\n ])\n\n#: List of variable names we forbid to use.\nBAD_VARIABLE_NAMES = frozenset(_BAD_VARIABLE_NAMES)\n\n#: List of magic methods that are forbiden to use.\nBAD_MAGIC_METHODS = frozenset((\n # Since we don't use `del`:\n '__del__',\n '__delitem__',\n '__delete__',\n\n '__dir__', # since we don't use `dir()`\n '__delattr__', # since we don't use `delattr()`\n))\n\n#: List of nested classes' names we allow to use.\nNESTED_CLASSES_WHITELIST = frozenset((\n 'Meta', # django forms, models, drf, etc\n 'Params', # factoryboy specific\n))\n\n#: List of nested functions' names we allow to use.\nNESTED_FUNCTIONS_WHITELIST = frozenset((\n 'decorator',\n 'factory',\n))\n\n#: List of allowed ``__future__`` imports.\nFUTURE_IMPORTS_WHITELIST = frozenset((\n 'annotations',\n 'generator_stop',\n))\n\n#: List of blacklisted module names:\nBAD_MODULE_NAMES = frozenset((\n 'util',\n 'utils',\n 'utilities',\n 'helpers',\n))\n\n#: List of allowed module magic names:\nMAGIC_MODULE_NAMES_WHITELIST = frozenset((\n '__init__',\n '__main__',\n))\n\n#: Regex pattern to name modules:\nMODULE_NAME_PATTERN = re.compile(r'^_?_?[a-z][a-z\\d_]+[a-z\\d](__)?$')\n\n#: Common numbers that are allowed to be used without being called \"magic\":\nMAGIC_NUMBERS_WHITELIST = frozenset((\n 0.5,\n 100,\n 1000,\n 1024, # bytes\n 24, # hours\n 60, # seconds, minutes\n))\n\n\n# Internal variables\n# They are not publicly documented since they are not used by the end user.\n\n# This variable is used as a default filename, when it is not passed by flake8:\nSTDIN 
= 'stdin'\n\n# TODO: rename to `INIT_MODULE`\n# This variable is used to specify as a placeholder for `__init__.py`:\nINIT = '__init__'\n", "path": "wemake_python_styleguide/constants.py"}]} |
gh_patches_debug_44 | rasdani/github-patches | git_diff | scikit-hep__pyhf-363 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
consolidation: add simplemodels to __all__
# Description
It would be nice if the snippet in the README could be shorter:
right now this is needed
```
import pyhf
import pyhf.simplemodels
pdf = pyhf.simplemodels.hepdata_like(signal_data=[12.0], bkg_data=[50.0], bkg_uncerts=[3.0])
CLs_obs = pyhf.utils.hypotest(1.0, [51] + pdf.config.auxdata, pdf)
```
whereas if we pre-import `simplemodels` it could be
```
import pyhf
pdf = pyhf.simplemodels.hepdata_like(signal_data=[12.0], bkg_data=[50.0], bkg_uncerts=[3.0])
CLs_obs = pyhf.utils.hypotest(1.0, [51] + pdf.config.auxdata, pdf)
```
since `simplemodels.py` doesn't add much code, I don't think it would slow things down a lot
--- END ISSUE ---
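For background, the mechanism being requested is simply a re-export in the package's `__init__.py`. A self-contained toy (hypothetical `pkg`/`submodule` names, not pyhf's real layout) shows that a single top-level import is then enough:

```python
import os
import sys
import tempfile

# Build a throwaway package whose __init__.py re-exports a submodule.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "pkg")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("from . import submodule\n__all__ = ['submodule']\n")
with open(os.path.join(pkg_dir, "submodule.py"), "w") as f:
    f.write("def hello():\n    return 'hi'\n")

sys.path.insert(0, root)
import pkg  # one import is enough; pkg.submodule is already bound

print(pkg.submodule.hello())  # -> hi
```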
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyhf/__init__.py`
Content:
```
1 from .tensor import BackendRetriever as tensor
2 from .optimize import OptimizerRetriever as optimize
3 from .version import __version__
4 from . import events
5
6 tensorlib = tensor.numpy_backend()
7 default_backend = tensorlib
8 optimizer = optimize.scipy_optimizer()
9 default_optimizer = optimizer
10
11
12 def get_backend():
13 """
14 Get the current backend and the associated optimizer
15
16 Example:
17 >>> import pyhf
18 >>> pyhf.get_backend()
19 (<pyhf.tensor.numpy_backend.numpy_backend object at 0x...>, <pyhf.optimize.opt_scipy.scipy_optimizer object at 0x...>)
20
21 Returns:
22 backend, optimizer
23 """
24 global tensorlib
25 global optimizer
26 return tensorlib, optimizer
27
28
29 @events.register('change_backend')
30 def set_backend(backend, custom_optimizer=None):
31 """
32 Set the backend and the associated optimizer
33
34 Example:
35 >>> import pyhf
36 >>> import tensorflow as tf
37 >>> pyhf.set_backend(pyhf.tensor.tensorflow_backend(session=tf.Session()))
38
39 Args:
40 backend: One of the supported pyhf backends: NumPy,
41 TensorFlow, PyTorch, and MXNet
42
43 Returns:
44 None
45 """
46 global tensorlib
47 global optimizer
48
49 # need to determine if the tensorlib changed or the optimizer changed for events
50 tensorlib_changed = bool(backend.name != tensorlib.name)
51 optimizer_changed = False
52
53 if backend.name == 'tensorflow':
54 new_optimizer = (
55 custom_optimizer if custom_optimizer else optimize.tflow_optimizer(backend)
56 )
57 if tensorlib.name == 'tensorflow':
58 tensorlib_changed |= bool(backend.session != tensorlib.session)
59 elif backend.name == 'pytorch':
60 new_optimizer = (
61 custom_optimizer
62 if custom_optimizer
63 else optimize.pytorch_optimizer(tensorlib=backend)
64 )
65 # TODO: Add support for mxnet_optimizer()
66 # elif tensorlib.name == 'mxnet':
67 # new_optimizer = custom_optimizer if custom_optimizer else mxnet_optimizer()
68 else:
69 new_optimizer = (
70 custom_optimizer if custom_optimizer else optimize.scipy_optimizer()
71 )
72
73 optimizer_changed = bool(optimizer != new_optimizer)
74 # set new backend
75 tensorlib = backend
76 optimizer = new_optimizer
77 # trigger events
78 if tensorlib_changed:
79 events.trigger("tensorlib_changed")()
80 if optimizer_changed:
81 events.trigger("optimizer_changed")()
82
83
84 from .pdf import Model
85
86 __all__ = ['Model', 'utils', 'modifiers', '__version__']
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyhf/__init__.py b/pyhf/__init__.py
--- a/pyhf/__init__.py
+++ b/pyhf/__init__.py
@@ -82,5 +82,6 @@
from .pdf import Model
+from . import simplemodels
-__all__ = ['Model', 'utils', 'modifiers', '__version__']
+__all__ = ['Model', 'utils', 'modifiers', 'simplemodels', '__version__']
| {"golden_diff": "diff --git a/pyhf/__init__.py b/pyhf/__init__.py\n--- a/pyhf/__init__.py\n+++ b/pyhf/__init__.py\n@@ -82,5 +82,6 @@\n \n \n from .pdf import Model\n+from . import simplemodels\n \n-__all__ = ['Model', 'utils', 'modifiers', '__version__']\n+__all__ = ['Model', 'utils', 'modifiers', 'simplemodels', '__version__']\n", "issue": "consolidation: add simplemodls to __all__\n# Description\r\n\r\nIt would be nice if the snippet in the README could be shorter:\r\n\r\nright now this is needed\r\n```\r\nimport pyhf\r\nimport pyhf.simplemodels\r\npdf = pyhf.simplemodels.hepdata_like(signal_data=[12.0], bkg_data=[50.0], bkg_uncerts=[3.0])\r\nCLs_obs = pyhf.utils.hypotest(1.0, [51] + pdf.config.auxdata, pdf)\r\n```\r\n\r\nwhereas if we pre-import `simplemodels` it could be \r\n```\r\nimport pyhf\r\npdf = pyhf.simplemodels.hepdata_like(signal_data=[12.0], bkg_data=[50.0], bkg_uncerts=[3.0])\r\nCLs_obs = pyhf.utils.hypotest(1.0, [51] + pdf.config.auxdata, pdf)\r\n```\r\n\r\nsince `simplemodels.py` doesn't add much code, i don't think it would slow down things a lot\n", "before_files": [{"content": "from .tensor import BackendRetriever as tensor\nfrom .optimize import OptimizerRetriever as optimize\nfrom .version import __version__\nfrom . import events\n\ntensorlib = tensor.numpy_backend()\ndefault_backend = tensorlib\noptimizer = optimize.scipy_optimizer()\ndefault_optimizer = optimizer\n\n\ndef get_backend():\n \"\"\"\n Get the current backend and the associated optimizer\n\n Example:\n >>> import pyhf\n >>> pyhf.get_backend()\n (<pyhf.tensor.numpy_backend.numpy_backend object at 0x...>, <pyhf.optimize.opt_scipy.scipy_optimizer object at 0x...>)\n\n Returns:\n backend, optimizer\n \"\"\"\n global tensorlib\n global optimizer\n return tensorlib, optimizer\n\n\[email protected]('change_backend')\ndef set_backend(backend, custom_optimizer=None):\n \"\"\"\n Set the backend and the associated optimizer\n\n Example:\n >>> import pyhf\n >>> import tensorflow as tf\n >>> pyhf.set_backend(pyhf.tensor.tensorflow_backend(session=tf.Session()))\n\n Args:\n backend: One of the supported pyhf backends: NumPy,\n TensorFlow, PyTorch, and MXNet\n\n Returns:\n None\n \"\"\"\n global tensorlib\n global optimizer\n\n # need to determine if the tensorlib changed or the optimizer changed for events\n tensorlib_changed = bool(backend.name != tensorlib.name)\n optimizer_changed = False\n\n if backend.name == 'tensorflow':\n new_optimizer = (\n custom_optimizer if custom_optimizer else optimize.tflow_optimizer(backend)\n )\n if tensorlib.name == 'tensorflow':\n tensorlib_changed |= bool(backend.session != tensorlib.session)\n elif backend.name == 'pytorch':\n new_optimizer = (\n custom_optimizer\n if custom_optimizer\n else optimize.pytorch_optimizer(tensorlib=backend)\n )\n # TODO: Add support for mxnet_optimizer()\n # elif tensorlib.name == 'mxnet':\n # new_optimizer = custom_optimizer if custom_optimizer else mxnet_optimizer()\n else:\n new_optimizer = (\n custom_optimizer if custom_optimizer else optimize.scipy_optimizer()\n )\n\n optimizer_changed = bool(optimizer != new_optimizer)\n # set new backend\n tensorlib = backend\n optimizer = new_optimizer\n # trigger events\n if tensorlib_changed:\n events.trigger(\"tensorlib_changed\")()\n if optimizer_changed:\n events.trigger(\"optimizer_changed\")()\n\n\nfrom .pdf import Model\n\n__all__ = ['Model', 'utils', 'modifiers', '__version__']\n", "path": "pyhf/__init__.py"}], "after_files": [{"content": "from .tensor import BackendRetriever as tensor\nfrom 
.optimize import OptimizerRetriever as optimize\nfrom .version import __version__\nfrom . import events\n\ntensorlib = tensor.numpy_backend()\ndefault_backend = tensorlib\noptimizer = optimize.scipy_optimizer()\ndefault_optimizer = optimizer\n\n\ndef get_backend():\n \"\"\"\n Get the current backend and the associated optimizer\n\n Example:\n >>> import pyhf\n >>> pyhf.get_backend()\n (<pyhf.tensor.numpy_backend.numpy_backend object at 0x...>, <pyhf.optimize.opt_scipy.scipy_optimizer object at 0x...>)\n\n Returns:\n backend, optimizer\n \"\"\"\n global tensorlib\n global optimizer\n return tensorlib, optimizer\n\n\[email protected]('change_backend')\ndef set_backend(backend, custom_optimizer=None):\n \"\"\"\n Set the backend and the associated optimizer\n\n Example:\n >>> import pyhf\n >>> import tensorflow as tf\n >>> pyhf.set_backend(pyhf.tensor.tensorflow_backend(session=tf.Session()))\n\n Args:\n backend: One of the supported pyhf backends: NumPy,\n TensorFlow, PyTorch, and MXNet\n\n Returns:\n None\n \"\"\"\n global tensorlib\n global optimizer\n\n # need to determine if the tensorlib changed or the optimizer changed for events\n tensorlib_changed = bool(backend.name != tensorlib.name)\n optimizer_changed = False\n\n if backend.name == 'tensorflow':\n new_optimizer = (\n custom_optimizer if custom_optimizer else optimize.tflow_optimizer(backend)\n )\n if tensorlib.name == 'tensorflow':\n tensorlib_changed |= bool(backend.session != tensorlib.session)\n elif backend.name == 'pytorch':\n new_optimizer = (\n custom_optimizer\n if custom_optimizer\n else optimize.pytorch_optimizer(tensorlib=backend)\n )\n # TODO: Add support for mxnet_optimizer()\n # elif tensorlib.name == 'mxnet':\n # new_optimizer = custom_optimizer if custom_optimizer else mxnet_optimizer()\n else:\n new_optimizer = (\n custom_optimizer if custom_optimizer else optimize.scipy_optimizer()\n )\n\n optimizer_changed = bool(optimizer != new_optimizer)\n # set new backend\n tensorlib = backend\n optimizer = new_optimizer\n # trigger events\n if tensorlib_changed:\n events.trigger(\"tensorlib_changed\")()\n if optimizer_changed:\n events.trigger(\"optimizer_changed\")()\n\n\nfrom .pdf import Model\nfrom . import simplemodels\n\n__all__ = ['Model', 'utils', 'modifiers', 'simplemodels', '__version__']\n", "path": "pyhf/__init__.py"}]} |
gh_patches_debug_45 | rasdani/github-patches | git_diff | pandas-dev__pandas-19628 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DateTimeIndex.__iter__().next() rounds time to microseconds, when timezone aware
#### Code Sample
```python
>> import pandas as pd
>> datetimeindex = pd.DatetimeIndex(["2018-02-08 15:00:00.168456358"])
>> datetimeindex
DatetimeIndex(['2018-02-08 15:00:00.168456358'], dtype='datetime64[ns]', freq=None)
>> datetimeindex = datetimeindex.tz_localize(datetime.timezone.utc)
>> datetimeindex
DatetimeIndex(['2018-02-08 15:00:00.168456358+00:00'], dtype='datetime64[ns, UTC+00:00]', freq=None)
>> datetimeindex.__getitem__(0)
Timestamp('2018-02-08 15:00:00.168456358+0000', tz='UTC+00:00')
>> datetimeindex.__iter__().__next__()
Timestamp('2018-02-08 15:00:00.168456+0000', tz='UTC+00:00')
```
#### Problem description
When using a timezone-localized DatetimeIndex with nanosecond precision, __getitem__ behaviour differs from __iter__().__next__ behaviour: when iterating through the DatetimeIndex, the timestamp is rounded to microseconds. This does not happen if the DatetimeIndex has no timezone.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.4.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.0-0.bpo.2-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.22.0
pytest: None
pip: 9.0.1
setuptools: 36.5.0
Cython: None
numpy: 1.14.0
scipy: 1.0.0
pyarrow: None
xarray: None
IPython: 6.2.1
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.0.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pandas/conftest.py`
Content:
```
1 import pytest
2
3 from distutils.version import LooseVersion
4 import numpy
5 import pandas
6 import dateutil
7 import pandas.util._test_decorators as td
8
9
10 def pytest_addoption(parser):
11 parser.addoption("--skip-slow", action="store_true",
12 help="skip slow tests")
13 parser.addoption("--skip-network", action="store_true",
14 help="skip network tests")
15 parser.addoption("--run-high-memory", action="store_true",
16 help="run high memory tests")
17 parser.addoption("--only-slow", action="store_true",
18 help="run only slow tests")
19
20
21 def pytest_runtest_setup(item):
22 if 'slow' in item.keywords and item.config.getoption("--skip-slow"):
23 pytest.skip("skipping due to --skip-slow")
24
25 if 'slow' not in item.keywords and item.config.getoption("--only-slow"):
26 pytest.skip("skipping due to --only-slow")
27
28 if 'network' in item.keywords and item.config.getoption("--skip-network"):
29 pytest.skip("skipping due to --skip-network")
30
31 if 'high_memory' in item.keywords and not item.config.getoption(
32 "--run-high-memory"):
33 pytest.skip(
34 "skipping high memory test since --run-high-memory was not set")
35
36
37 # Configurations for all tests and all test modules
38
39 @pytest.fixture(autouse=True)
40 def configure_tests():
41 pandas.set_option('chained_assignment', 'raise')
42
43
44 # For running doctests: make np and pd names available
45
46 @pytest.fixture(autouse=True)
47 def add_imports(doctest_namespace):
48 doctest_namespace['np'] = numpy
49 doctest_namespace['pd'] = pandas
50
51
52 @pytest.fixture(params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])
53 def spmatrix(request):
54 from scipy import sparse
55 return getattr(sparse, request.param + '_matrix')
56
57
58 @pytest.fixture
59 def ip():
60 """
61 Get an instance of IPython.InteractiveShell.
62
63 Will raise a skip if IPython is not installed.
64 """
65
66 pytest.importorskip('IPython', minversion="6.0.0")
67 from IPython.core.interactiveshell import InteractiveShell
68 return InteractiveShell()
69
70
71 is_dateutil_le_261 = pytest.mark.skipif(
72 LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),
73 reason="dateutil api change version")
74 is_dateutil_gt_261 = pytest.mark.skipif(
75 LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),
76 reason="dateutil stable version")
77
78
79 @pytest.fixture(params=[None, 'gzip', 'bz2', 'zip',
80 pytest.param('xz', marks=td.skip_if_no_lzma)])
81 def compression(request):
82 """
83 Fixture for trying common compression types in compression tests
84 """
85 return request.param
86
87
88 @pytest.fixture(params=[None, 'gzip', 'bz2',
89 pytest.param('xz', marks=td.skip_if_no_lzma)])
90 def compression_no_zip(request):
91 """
92 Fixture for trying common compression types in compression tests
93 except zip
94 """
95 return request.param
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pandas/conftest.py b/pandas/conftest.py
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -93,3 +93,9 @@
except zip
"""
return request.param
+
+
[email protected](scope='module')
+def datetime_tz_utc():
+ from datetime import timezone
+ return timezone.utc
| {"golden_diff": "diff --git a/pandas/conftest.py b/pandas/conftest.py\n--- a/pandas/conftest.py\n+++ b/pandas/conftest.py\n@@ -93,3 +93,9 @@\n except zip\n \"\"\"\n return request.param\n+\n+\[email protected](scope='module')\n+def datetime_tz_utc():\n+ from datetime import timezone\n+ return timezone.utc\n", "issue": "DateTimeIndex.__iter__().next() rounds time to microseconds, when timezone aware\n#### Code Sample\r\n\r\n```python\r\n>> import pandas as pd\r\n>> datetimeindex = pd.DatetimeIndex([\"2018-02-08 15:00:00.168456358\"])\r\n>> datetimeindex\r\nDatetimeIndex(['2018-02-08 15:00:00.168456358'], dtype='datetime64[ns]', freq=None)\r\n>> datetimeindex = datetimeindex.tz_localize(datetime.timezone.utc)\r\n>> datetimeindex\r\nDatetimeIndex(['2018-02-08 15:00:00.168456358+00:00'], dtype='datetime64[ns, UTC+00:00]', freq=None)\r\n>> datetimeindex.__getitem__(0)\r\nTimestamp('2018-02-08 15:00:00.168456358+0000', tz='UTC+00:00')\r\n>> datetimeindex.__iter__().__next__()\r\nTimestamp('2018-02-08 15:00:00.168456+0000', tz='UTC+00:00')\r\n```\r\n#### Problem description\r\n\r\nWhen using localize DateTimeIndex with nanosecond precision, __getitem__ behavious differs from __iter__().__next__ behaviour, as when iterating thought the DateTimeIndex the date is round to microseconds. This doen not happends if the DatetimeIndex has no timezone.\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n<details>\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\npython: 3.4.2.final.0\r\npython-bits: 64\r\nOS: Linux\r\nOS-release: 4.9.0-0.bpo.2-amd64\r\nmachine: x86_64\r\nprocessor: \r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: en_US.UTF-8\r\nLOCALE: en_US.UTF-8\r\n\r\npandas: 0.22.0\r\npytest: None\r\npip: 9.0.1\r\nsetuptools: 36.5.0\r\nCython: None\r\nnumpy: 1.14.0\r\nscipy: 1.0.0\r\npyarrow: None\r\nxarray: None\r\nIPython: 6.2.1\r\nsphinx: None\r\npatsy: None\r\ndateutil: 2.6.1\r\npytz: 2017.3\r\nblosc: None\r\nbottleneck: None\r\ntables: None\r\nnumexpr: None\r\nfeather: None\r\nmatplotlib: 2.0.2\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: None\r\nlxml: None\r\nbs4: None\r\nhtml5lib: None\r\nsqlalchemy: None\r\npymysql: None\r\npsycopg2: None\r\njinja2: None\r\ns3fs: None\r\nfastparquet: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n\r\n</details>\r\n\n", "before_files": [{"content": "import pytest\n\nfrom distutils.version import LooseVersion\nimport numpy\nimport pandas\nimport dateutil\nimport pandas.util._test_decorators as td\n\n\ndef pytest_addoption(parser):\n parser.addoption(\"--skip-slow\", action=\"store_true\",\n help=\"skip slow tests\")\n parser.addoption(\"--skip-network\", action=\"store_true\",\n help=\"skip network tests\")\n parser.addoption(\"--run-high-memory\", action=\"store_true\",\n help=\"run high memory tests\")\n parser.addoption(\"--only-slow\", action=\"store_true\",\n help=\"run only slow tests\")\n\n\ndef pytest_runtest_setup(item):\n if 'slow' in item.keywords and item.config.getoption(\"--skip-slow\"):\n pytest.skip(\"skipping due to --skip-slow\")\n\n if 'slow' not in item.keywords and item.config.getoption(\"--only-slow\"):\n pytest.skip(\"skipping due to --only-slow\")\n\n if 'network' in item.keywords and item.config.getoption(\"--skip-network\"):\n pytest.skip(\"skipping due to --skip-network\")\n\n if 'high_memory' in item.keywords and not item.config.getoption(\n \"--run-high-memory\"):\n pytest.skip(\n \"skipping high memory test since --run-high-memory was not set\")\n\n\n# Configurations for all tests and all test 
modules\n\[email protected](autouse=True)\ndef configure_tests():\n pandas.set_option('chained_assignment', 'raise')\n\n\n# For running doctests: make np and pd names available\n\[email protected](autouse=True)\ndef add_imports(doctest_namespace):\n doctest_namespace['np'] = numpy\n doctest_namespace['pd'] = pandas\n\n\[email protected](params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])\ndef spmatrix(request):\n from scipy import sparse\n return getattr(sparse, request.param + '_matrix')\n\n\[email protected]\ndef ip():\n \"\"\"\n Get an instance of IPython.InteractiveShell.\n\n Will raise a skip if IPython is not installed.\n \"\"\"\n\n pytest.importorskip('IPython', minversion=\"6.0.0\")\n from IPython.core.interactiveshell import InteractiveShell\n return InteractiveShell()\n\n\nis_dateutil_le_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),\n reason=\"dateutil api change version\")\nis_dateutil_gt_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),\n reason=\"dateutil stable version\")\n\n\[email protected](params=[None, 'gzip', 'bz2', 'zip',\n pytest.param('xz', marks=td.skip_if_no_lzma)])\ndef compression(request):\n \"\"\"\n Fixture for trying common compression types in compression tests\n \"\"\"\n return request.param\n\n\[email protected](params=[None, 'gzip', 'bz2',\n pytest.param('xz', marks=td.skip_if_no_lzma)])\ndef compression_no_zip(request):\n \"\"\"\n Fixture for trying common compression types in compression tests\n except zip\n \"\"\"\n return request.param\n", "path": "pandas/conftest.py"}], "after_files": [{"content": "import pytest\n\nfrom distutils.version import LooseVersion\nimport numpy\nimport pandas\nimport dateutil\nimport pandas.util._test_decorators as td\n\n\ndef pytest_addoption(parser):\n parser.addoption(\"--skip-slow\", action=\"store_true\",\n help=\"skip slow tests\")\n parser.addoption(\"--skip-network\", action=\"store_true\",\n help=\"skip network tests\")\n parser.addoption(\"--run-high-memory\", action=\"store_true\",\n help=\"run high memory tests\")\n parser.addoption(\"--only-slow\", action=\"store_true\",\n help=\"run only slow tests\")\n\n\ndef pytest_runtest_setup(item):\n if 'slow' in item.keywords and item.config.getoption(\"--skip-slow\"):\n pytest.skip(\"skipping due to --skip-slow\")\n\n if 'slow' not in item.keywords and item.config.getoption(\"--only-slow\"):\n pytest.skip(\"skipping due to --only-slow\")\n\n if 'network' in item.keywords and item.config.getoption(\"--skip-network\"):\n pytest.skip(\"skipping due to --skip-network\")\n\n if 'high_memory' in item.keywords and not item.config.getoption(\n \"--run-high-memory\"):\n pytest.skip(\n \"skipping high memory test since --run-high-memory was not set\")\n\n\n# Configurations for all tests and all test modules\n\[email protected](autouse=True)\ndef configure_tests():\n pandas.set_option('chained_assignment', 'raise')\n\n\n# For running doctests: make np and pd names available\n\[email protected](autouse=True)\ndef add_imports(doctest_namespace):\n doctest_namespace['np'] = numpy\n doctest_namespace['pd'] = pandas\n\n\[email protected](params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])\ndef spmatrix(request):\n from scipy import sparse\n return getattr(sparse, request.param + '_matrix')\n\n\[email protected]\ndef ip():\n \"\"\"\n Get an instance of IPython.InteractiveShell.\n\n Will raise a skip if IPython is not installed.\n \"\"\"\n\n pytest.importorskip('IPython', minversion=\"6.0.0\")\n 
from IPython.core.interactiveshell import InteractiveShell\n return InteractiveShell()\n\n\nis_dateutil_le_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),\n reason=\"dateutil api change version\")\nis_dateutil_gt_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),\n reason=\"dateutil stable version\")\n\n\[email protected](params=[None, 'gzip', 'bz2', 'zip',\n pytest.param('xz', marks=td.skip_if_no_lzma)])\ndef compression(request):\n \"\"\"\n Fixture for trying common compression types in compression tests\n \"\"\"\n return request.param\n\n\[email protected](params=[None, 'gzip', 'bz2',\n pytest.param('xz', marks=td.skip_if_no_lzma)])\ndef compression_no_zip(request):\n \"\"\"\n Fixture for trying common compression types in compression tests\n except zip\n \"\"\"\n return request.param\n\n\[email protected](scope='module')\ndef datetime_tz_utc():\n from datetime import timezone\n return timezone.utc\n", "path": "pandas/conftest.py"}]} |
gh_patches_debug_46 | rasdani/github-patches | git_diff | webkom__lego-2342 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Phone number not saved from registration form
When creating a new user, LEGO ignores the phone number inserted into the registration form.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lego/apps/users/serializers/registration.py`
Content:
```
1 from django.contrib.auth import password_validation
2 from rest_framework import exceptions, serializers
3
4 from lego.apps.users.models import User
5 from lego.utils.functions import verify_captcha
6
7
8 class RegistrationSerializer(serializers.ModelSerializer):
9 captcha_response = serializers.CharField(required=True)
10
11 def validate_captcha_response(self, captcha_response):
12 if not verify_captcha(captcha_response):
13 raise exceptions.ValidationError("invalid_captcha")
14 return captcha_response
15
16 class Meta:
17 model = User
18 fields = ("email", "captcha_response")
19
20
21 class RegistrationConfirmationSerializer(serializers.ModelSerializer):
22
23 password = serializers.CharField(required=True, write_only=True)
24
25 def validate_username(self, username):
26 username_exists = User.objects.filter(username__iexact=username).exists()
27 if username_exists:
28 raise exceptions.ValidationError("Username exists")
29 return username
30
31 def validate_password(self, password):
32 password_validation.validate_password(password)
33 return password
34
35 class Meta:
36 model = User
37 fields = (
38 "username",
39 "first_name",
40 "last_name",
41 "gender",
42 "password",
43 "allergies",
44 )
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lego/apps/users/serializers/registration.py b/lego/apps/users/serializers/registration.py
--- a/lego/apps/users/serializers/registration.py
+++ b/lego/apps/users/serializers/registration.py
@@ -41,4 +41,5 @@
"gender",
"password",
"allergies",
+ "phone_number",
)
| {"golden_diff": "diff --git a/lego/apps/users/serializers/registration.py b/lego/apps/users/serializers/registration.py\n--- a/lego/apps/users/serializers/registration.py\n+++ b/lego/apps/users/serializers/registration.py\n@@ -41,4 +41,5 @@\n \"gender\",\n \"password\",\n \"allergies\",\n+ \"phone_number\",\n )\n", "issue": "Phone number not saved from registration form\nWhen creating a new user, LEGO ignores the phone number inserted into the registration form.\n", "before_files": [{"content": "from django.contrib.auth import password_validation\nfrom rest_framework import exceptions, serializers\n\nfrom lego.apps.users.models import User\nfrom lego.utils.functions import verify_captcha\n\n\nclass RegistrationSerializer(serializers.ModelSerializer):\n captcha_response = serializers.CharField(required=True)\n\n def validate_captcha_response(self, captcha_response):\n if not verify_captcha(captcha_response):\n raise exceptions.ValidationError(\"invalid_captcha\")\n return captcha_response\n\n class Meta:\n model = User\n fields = (\"email\", \"captcha_response\")\n\n\nclass RegistrationConfirmationSerializer(serializers.ModelSerializer):\n\n password = serializers.CharField(required=True, write_only=True)\n\n def validate_username(self, username):\n username_exists = User.objects.filter(username__iexact=username).exists()\n if username_exists:\n raise exceptions.ValidationError(\"Username exists\")\n return username\n\n def validate_password(self, password):\n password_validation.validate_password(password)\n return password\n\n class Meta:\n model = User\n fields = (\n \"username\",\n \"first_name\",\n \"last_name\",\n \"gender\",\n \"password\",\n \"allergies\",\n )\n", "path": "lego/apps/users/serializers/registration.py"}], "after_files": [{"content": "from django.contrib.auth import password_validation\nfrom rest_framework import exceptions, serializers\n\nfrom lego.apps.users.models import User\nfrom lego.utils.functions import verify_captcha\n\n\nclass RegistrationSerializer(serializers.ModelSerializer):\n captcha_response = serializers.CharField(required=True)\n\n def validate_captcha_response(self, captcha_response):\n if not verify_captcha(captcha_response):\n raise exceptions.ValidationError(\"invalid_captcha\")\n return captcha_response\n\n class Meta:\n model = User\n fields = (\"email\", \"captcha_response\")\n\n\nclass RegistrationConfirmationSerializer(serializers.ModelSerializer):\n\n password = serializers.CharField(required=True, write_only=True)\n\n def validate_username(self, username):\n username_exists = User.objects.filter(username__iexact=username).exists()\n if username_exists:\n raise exceptions.ValidationError(\"Username exists\")\n return username\n\n def validate_password(self, password):\n password_validation.validate_password(password)\n return password\n\n class Meta:\n model = User\n fields = (\n \"username\",\n \"first_name\",\n \"last_name\",\n \"gender\",\n \"password\",\n \"allergies\",\n \"phone_number\",\n )\n", "path": "lego/apps/users/serializers/registration.py"}]} |
gh_patches_debug_47 | rasdani/github-patches | git_diff | bokeh__bokeh-9477 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Non-daemon worker thread prevents gunicorn from shutting down cleanly.
#### ALL software version info (bokeh, python, notebook, OS, browser, any other relevant packages)
bokeh HEAD e605297
gunicorn (version 20.0.4)
Python 3.7.4
macOS 10.14.6
#### Description of expected behavior and the observed behavior
I am learning about embedding Bokeh in a Flask project and tried the sample script flask_gunicorn_embed.py from the macOS terminal. After viewing the working web page in Safari, I then pressed Ctrl-C in the terminal to stop the gunicorn server. The expected behaviour was a clean shutdown of gunicorn, but instead it hangs.
Marking the bk_worker thread as a daemon before starting it resolves the hang.
#### Stack traceback and/or browser JavaScript console output
greent7@avocado:~/development/bokeh/examples/howto/server_embed$ BOKEH_ALLOW_WS_ORIGIN=127.0.0.1:8000 gunicorn -w 4 flask_gunicorn_embed:app
[2019-11-29 01:06:31 -0700] [53812] [INFO] Starting gunicorn 20.0.4
[2019-11-29 01:06:31 -0700] [53812] [INFO] Listening at: http://127.0.0.1:8000 (53812)
[2019-11-29 01:06:31 -0700] [53812] [INFO] Using worker: sync
[2019-11-29 01:06:31 -0700] [53815] [INFO] Booting worker with pid: 53815
[2019-11-29 01:06:32 -0700] [53816] [INFO] Booting worker with pid: 53816
[2019-11-29 01:06:32 -0700] [53817] [INFO] Booting worker with pid: 53817
[2019-11-29 01:06:32 -0700] [53818] [INFO] Booting worker with pid: 53818
^C[2019-11-29 01:06:33 -0700] [53812] [INFO] Handling signal: int
[2019-11-29 01:06:33 -0700] [53818] [INFO] Worker exiting (pid: 53818)
[2019-11-29 01:06:33 -0700] [53815] [INFO] Worker exiting (pid: 53815)
[2019-11-29 01:06:33 -0700] [53817] [INFO] Worker exiting (pid: 53817)
[2019-11-29 01:06:33 -0700] [53816] [INFO] Worker exiting (pid: 53816)
If I hit Ctrl-C again, it continues and exits noisily:
^CException ignored in: <module 'threading' from '/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py'>
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 1308, in _shutdown
lock.acquire()
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 196, in handle_quit
sys.exit(0)
SystemExit: 0
[2019-11-29 01:06:56 -0700] [53812] [INFO] Shutting down: Master
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/howto/server_embed/flask_gunicorn_embed.py`
Content:
```
1 try:
2 import asyncio
3 except ImportError:
4 raise RuntimeError("This example requries Python3 / asyncio")
5
6 from threading import Thread
7
8 from flask import Flask, render_template
9 from tornado.httpserver import HTTPServer
10 from tornado.ioloop import IOLoop
11
12 from bokeh.application import Application
13 from bokeh.application.handlers import FunctionHandler
14 from bokeh.embed import server_document
15 from bokeh.layouts import column
16 from bokeh.models import ColumnDataSource, Slider
17 from bokeh.plotting import figure
18 from bokeh.sampledata.sea_surface_temperature import sea_surface_temperature
19 from bokeh.server.server import BaseServer
20 from bokeh.server.tornado import BokehTornado
21 from bokeh.server.util import bind_sockets
22 from bokeh.themes import Theme
23
24 if __name__ == '__main__':
25 print('This script is intended to be run with gunicorn. e.g.')
26 print()
27 print(' gunicorn -w 4 flask_gunicorn_embed:app')
28 print()
29 print('will start the app on four processes')
30 import sys
31 sys.exit()
32
33
34 app = Flask(__name__)
35
36 def bkapp(doc):
37 df = sea_surface_temperature.copy()
38 source = ColumnDataSource(data=df)
39
40 plot = figure(x_axis_type='datetime', y_range=(0, 25), y_axis_label='Temperature (Celsius)',
41 title="Sea Surface Temperature at 43.18, -70.43")
42 plot.line('time', 'temperature', source=source)
43
44 def callback(attr, old, new):
45 if new == 0:
46 data = df
47 else:
48 data = df.rolling('{0}D'.format(new)).mean()
49 source.data = ColumnDataSource.from_df(data)
50
51 slider = Slider(start=0, end=30, value=0, step=1, title="Smoothing by N Days")
52 slider.on_change('value', callback)
53
54 doc.add_root(column(slider, plot))
55
56 doc.theme = Theme(filename="theme.yaml")
57
58 # can't use shortcuts here, since we are passing to low level BokehTornado
59 bkapp = Application(FunctionHandler(bkapp))
60
61 # This is so that if this app is run using something like "gunicorn -w 4" then
62 # each process will listen on its own port
63 sockets, port = bind_sockets("localhost", 0)
64
65 @app.route('/', methods=['GET'])
66 def bkapp_page():
67 script = server_document('http://localhost:%d/bkapp' % port)
68 return render_template("embed.html", script=script, template="Flask")
69
70 def bk_worker():
71 asyncio.set_event_loop(asyncio.new_event_loop())
72
73 bokeh_tornado = BokehTornado({'/bkapp': bkapp}, extra_websocket_origins=["localhost:8000"])
74 bokeh_http = HTTPServer(bokeh_tornado)
75 bokeh_http.add_sockets(sockets)
76
77 server = BaseServer(IOLoop.current(), bokeh_tornado, bokeh_http)
78 server.start()
79 server.io_loop.start()
80
81 Thread(target=bk_worker).start()
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/howto/server_embed/flask_gunicorn_embed.py b/examples/howto/server_embed/flask_gunicorn_embed.py
--- a/examples/howto/server_embed/flask_gunicorn_embed.py
+++ b/examples/howto/server_embed/flask_gunicorn_embed.py
@@ -78,4 +78,6 @@
server.start()
server.io_loop.start()
-Thread(target=bk_worker).start()
+t = Thread(target=bk_worker)
+t.daemon = True
+t.start()
| {"golden_diff": "diff --git a/examples/howto/server_embed/flask_gunicorn_embed.py b/examples/howto/server_embed/flask_gunicorn_embed.py\n--- a/examples/howto/server_embed/flask_gunicorn_embed.py\n+++ b/examples/howto/server_embed/flask_gunicorn_embed.py\n@@ -78,4 +78,6 @@\n server.start()\n server.io_loop.start()\n \n-Thread(target=bk_worker).start()\n+t = Thread(target=bk_worker)\n+t.daemon = True\n+t.start()\n", "issue": "[BUG] Non-daemon worker thread prevents gunicorn from shutting down cleanly.\n#### ALL software version info (bokeh, python, notebook, OS, browser, any other relevant packages)\r\nbokeh HEAD e605297\r\ngunicorn (version 20.0.4)\r\nPython 3.7.4\r\nmacOS 10.14.6\r\n\r\n#### Description of expected behavior and the observed behavior\r\nI am learning about embedding Bokeh in a Flask project and tried the sample script flask_gunicorn_embed.py from the macOS terminal. After viewing the working web page in Safari, I then pressed Ctrl-C in the terminal to stop the gunicorn server. The expected behaviour was a clean shutdown of gunicorn, but instead it hangs.\r\n\r\nMarking the bk_worker thread as a daemon before starting it resolves the hang.\r\n\r\n#### Stack traceback and/or browser JavaScript console output\r\ngreent7@avocado:~/development/bokeh/examples/howto/server_embed$ BOKEH_ALLOW_WS_ORIGIN=127.0.0.1:8000 gunicorn -w 4 flask_gunicorn_embed:app\r\n[2019-11-29 01:06:31 -0700] [53812] [INFO] Starting gunicorn 20.0.4\r\n[2019-11-29 01:06:31 -0700] [53812] [INFO] Listening at: http://127.0.0.1:8000 (53812)\r\n[2019-11-29 01:06:31 -0700] [53812] [INFO] Using worker: sync\r\n[2019-11-29 01:06:31 -0700] [53815] [INFO] Booting worker with pid: 53815\r\n[2019-11-29 01:06:32 -0700] [53816] [INFO] Booting worker with pid: 53816\r\n[2019-11-29 01:06:32 -0700] [53817] [INFO] Booting worker with pid: 53817\r\n[2019-11-29 01:06:32 -0700] [53818] [INFO] Booting worker with pid: 53818\r\n^C[2019-11-29 01:06:33 -0700] [53812] [INFO] Handling signal: int\r\n[2019-11-29 01:06:33 -0700] [53818] [INFO] Worker exiting (pid: 53818)\r\n[2019-11-29 01:06:33 -0700] [53815] [INFO] Worker exiting (pid: 53815)\r\n[2019-11-29 01:06:33 -0700] [53817] [INFO] Worker exiting (pid: 53817)\r\n[2019-11-29 01:06:33 -0700] [53816] [INFO] Worker exiting (pid: 53816)\r\n\r\nIf I hit Ctrl-C again, it continues and exits noisily:\r\n\r\n^CException ignored in: <module 'threading' from '/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py'>\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py\", line 1308, in _shutdown\r\n lock.acquire()\r\n File \"/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py\", line 196, in handle_quit\r\n sys.exit(0)\r\nSystemExit: 0\r\n[2019-11-29 01:06:56 -0700] [53812] [INFO] Shutting down: Master\r\n\n[BUG] Non-daemon worker thread prevents gunicorn from shutting down cleanly.\n#### ALL software version info (bokeh, python, notebook, OS, browser, any other relevant packages)\r\nbokeh HEAD e605297\r\ngunicorn (version 20.0.4)\r\nPython 3.7.4\r\nmacOS 10.14.6\r\n\r\n#### Description of expected behavior and the observed behavior\r\nI am learning about embedding Bokeh in a Flask project and tried the sample script flask_gunicorn_embed.py from the macOS terminal. After viewing the working web page in Safari, I then pressed Ctrl-C in the terminal to stop the gunicorn server. 
The expected behaviour was a clean shutdown of gunicorn, but instead it hangs.\r\n\r\nMarking the bk_worker thread as a daemon before starting it resolves the hang.\r\n\r\n#### Stack traceback and/or browser JavaScript console output\r\ngreent7@avocado:~/development/bokeh/examples/howto/server_embed$ BOKEH_ALLOW_WS_ORIGIN=127.0.0.1:8000 gunicorn -w 4 flask_gunicorn_embed:app\r\n[2019-11-29 01:06:31 -0700] [53812] [INFO] Starting gunicorn 20.0.4\r\n[2019-11-29 01:06:31 -0700] [53812] [INFO] Listening at: http://127.0.0.1:8000 (53812)\r\n[2019-11-29 01:06:31 -0700] [53812] [INFO] Using worker: sync\r\n[2019-11-29 01:06:31 -0700] [53815] [INFO] Booting worker with pid: 53815\r\n[2019-11-29 01:06:32 -0700] [53816] [INFO] Booting worker with pid: 53816\r\n[2019-11-29 01:06:32 -0700] [53817] [INFO] Booting worker with pid: 53817\r\n[2019-11-29 01:06:32 -0700] [53818] [INFO] Booting worker with pid: 53818\r\n^C[2019-11-29 01:06:33 -0700] [53812] [INFO] Handling signal: int\r\n[2019-11-29 01:06:33 -0700] [53818] [INFO] Worker exiting (pid: 53818)\r\n[2019-11-29 01:06:33 -0700] [53815] [INFO] Worker exiting (pid: 53815)\r\n[2019-11-29 01:06:33 -0700] [53817] [INFO] Worker exiting (pid: 53817)\r\n[2019-11-29 01:06:33 -0700] [53816] [INFO] Worker exiting (pid: 53816)\r\n\r\nIf I hit Ctrl-C again, it continues and exits noisily:\r\n\r\n^CException ignored in: <module 'threading' from '/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py'>\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py\", line 1308, in _shutdown\r\n lock.acquire()\r\n File \"/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py\", line 196, in handle_quit\r\n sys.exit(0)\r\nSystemExit: 0\r\n[2019-11-29 01:06:56 -0700] [53812] [INFO] Shutting down: Master\r\n\n", "before_files": [{"content": "try:\n import asyncio\nexcept ImportError:\n raise RuntimeError(\"This example requries Python3 / asyncio\")\n\nfrom threading import Thread\n\nfrom flask import Flask, render_template\nfrom tornado.httpserver import HTTPServer\nfrom tornado.ioloop import IOLoop\n\nfrom bokeh.application import Application\nfrom bokeh.application.handlers import FunctionHandler\nfrom bokeh.embed import server_document\nfrom bokeh.layouts import column\nfrom bokeh.models import ColumnDataSource, Slider\nfrom bokeh.plotting import figure\nfrom bokeh.sampledata.sea_surface_temperature import sea_surface_temperature\nfrom bokeh.server.server import BaseServer\nfrom bokeh.server.tornado import BokehTornado\nfrom bokeh.server.util import bind_sockets\nfrom bokeh.themes import Theme\n\nif __name__ == '__main__':\n print('This script is intended to be run with gunicorn. 
e.g.')\n print()\n print(' gunicorn -w 4 flask_gunicorn_embed:app')\n print()\n print('will start the app on four processes')\n import sys\n sys.exit()\n\n\napp = Flask(__name__)\n\ndef bkapp(doc):\n df = sea_surface_temperature.copy()\n source = ColumnDataSource(data=df)\n\n plot = figure(x_axis_type='datetime', y_range=(0, 25), y_axis_label='Temperature (Celsius)',\n title=\"Sea Surface Temperature at 43.18, -70.43\")\n plot.line('time', 'temperature', source=source)\n\n def callback(attr, old, new):\n if new == 0:\n data = df\n else:\n data = df.rolling('{0}D'.format(new)).mean()\n source.data = ColumnDataSource.from_df(data)\n\n slider = Slider(start=0, end=30, value=0, step=1, title=\"Smoothing by N Days\")\n slider.on_change('value', callback)\n\n doc.add_root(column(slider, plot))\n\n doc.theme = Theme(filename=\"theme.yaml\")\n\n# can't use shortcuts here, since we are passing to low level BokehTornado\nbkapp = Application(FunctionHandler(bkapp))\n\n# This is so that if this app is run using something like \"gunicorn -w 4\" then\n# each process will listen on its own port\nsockets, port = bind_sockets(\"localhost\", 0)\n\[email protected]('/', methods=['GET'])\ndef bkapp_page():\n script = server_document('http://localhost:%d/bkapp' % port)\n return render_template(\"embed.html\", script=script, template=\"Flask\")\n\ndef bk_worker():\n asyncio.set_event_loop(asyncio.new_event_loop())\n\n bokeh_tornado = BokehTornado({'/bkapp': bkapp}, extra_websocket_origins=[\"localhost:8000\"])\n bokeh_http = HTTPServer(bokeh_tornado)\n bokeh_http.add_sockets(sockets)\n\n server = BaseServer(IOLoop.current(), bokeh_tornado, bokeh_http)\n server.start()\n server.io_loop.start()\n\nThread(target=bk_worker).start()\n", "path": "examples/howto/server_embed/flask_gunicorn_embed.py"}], "after_files": [{"content": "try:\n import asyncio\nexcept ImportError:\n raise RuntimeError(\"This example requries Python3 / asyncio\")\n\nfrom threading import Thread\n\nfrom flask import Flask, render_template\nfrom tornado.httpserver import HTTPServer\nfrom tornado.ioloop import IOLoop\n\nfrom bokeh.application import Application\nfrom bokeh.application.handlers import FunctionHandler\nfrom bokeh.embed import server_document\nfrom bokeh.layouts import column\nfrom bokeh.models import ColumnDataSource, Slider\nfrom bokeh.plotting import figure\nfrom bokeh.sampledata.sea_surface_temperature import sea_surface_temperature\nfrom bokeh.server.server import BaseServer\nfrom bokeh.server.tornado import BokehTornado\nfrom bokeh.server.util import bind_sockets\nfrom bokeh.themes import Theme\n\nif __name__ == '__main__':\n print('This script is intended to be run with gunicorn. 
e.g.')\n print()\n print(' gunicorn -w 4 flask_gunicorn_embed:app')\n print()\n print('will start the app on four processes')\n import sys\n sys.exit()\n\n\napp = Flask(__name__)\n\ndef bkapp(doc):\n df = sea_surface_temperature.copy()\n source = ColumnDataSource(data=df)\n\n plot = figure(x_axis_type='datetime', y_range=(0, 25), y_axis_label='Temperature (Celsius)',\n title=\"Sea Surface Temperature at 43.18, -70.43\")\n plot.line('time', 'temperature', source=source)\n\n def callback(attr, old, new):\n if new == 0:\n data = df\n else:\n data = df.rolling('{0}D'.format(new)).mean()\n source.data = ColumnDataSource.from_df(data)\n\n slider = Slider(start=0, end=30, value=0, step=1, title=\"Smoothing by N Days\")\n slider.on_change('value', callback)\n\n doc.add_root(column(slider, plot))\n\n doc.theme = Theme(filename=\"theme.yaml\")\n\n# can't use shortcuts here, since we are passing to low level BokehTornado\nbkapp = Application(FunctionHandler(bkapp))\n\n# This is so that if this app is run using something like \"gunicorn -w 4\" then\n# each process will listen on its own port\nsockets, port = bind_sockets(\"localhost\", 0)\n\[email protected]('/', methods=['GET'])\ndef bkapp_page():\n script = server_document('http://localhost:%d/bkapp' % port)\n return render_template(\"embed.html\", script=script, template=\"Flask\")\n\ndef bk_worker():\n asyncio.set_event_loop(asyncio.new_event_loop())\n\n bokeh_tornado = BokehTornado({'/bkapp': bkapp}, extra_websocket_origins=[\"localhost:8000\"])\n bokeh_http = HTTPServer(bokeh_tornado)\n bokeh_http.add_sockets(sockets)\n\n server = BaseServer(IOLoop.current(), bokeh_tornado, bokeh_http)\n server.start()\n server.io_loop.start()\n\nt = Thread(target=bk_worker)\nt.daemon = True\nt.start()\n", "path": "examples/howto/server_embed/flask_gunicorn_embed.py"}]} |
gh_patches_debug_48 | rasdani/github-patches | git_diff | django-crispy-forms__django-crispy-forms-1015 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Specify Python version requirement (>=3.x)
https://github.com/django-crispy-forms/django-crispy-forms/blob/ba53410f752402436d84dc8ab00e2b6e1e67a74c/setup.py#L22
The drop of Python 2 support in release 1.9.0 has broken installation of the package for users of Python 2 because it does not specify that Python 3 is required.
The recommendation is specified here, including instructions for `setup.py`:
https://packaging.python.org/guides/dropping-older-python-versions/
Would you mind adding the specification to the package?
This would also mean either re-releasing 1.9.0 :see_no_evil: or update 1.9.0 directly in Pypi with that information (is it possible?) or releasing something like 1.9.0.1 or 1.9.1 and removing 1.9.0 from Pypi...
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import sys
3
4 from setuptools import find_packages, setup
5
6 import crispy_forms
7
8 if sys.argv[-1] == 'publish':
9 if os.system("pip freeze | grep wheel"):
10 print("wheel not installed.\nUse `pip install wheel`.\nExiting.")
11 sys.exit()
12 if os.system("pip freeze | grep twine"):
13 print("twine not installed.\nUse `pip install twine`.\nExiting.")
14 sys.exit()
15 os.system("python setup.py sdist bdist_wheel")
16 os.system("twine upload dist/*")
17 print("You probably want to also tag the version now:")
18 print(" git tag -a {} -m 'version {}'".format(crispy_forms.__version__, crispy_forms.__version__))
19 print(" git push --tags")
20 sys.exit()
21
22 setup(
23 name='django-crispy-forms',
24 version=crispy_forms.__version__,
25 description="Best way to have Django DRY forms",
26 long_description=open('README.rst').read(),
27 classifiers=[
28 "Development Status :: 5 - Production/Stable",
29 "Environment :: Web Environment",
30 "Framework :: Django",
31 "Framework :: Django :: 2.2",
32 "Framework :: Django :: 3.0",
33 "License :: OSI Approved :: MIT License",
34 "Operating System :: OS Independent",
35 "Programming Language :: JavaScript",
36 "Programming Language :: Python :: 3",
37 "Programming Language :: Python :: 3.5",
38 "Programming Language :: Python :: 3.6",
39 "Programming Language :: Python :: 3.7",
40 "Programming Language :: Python :: 3.8",
41 "Topic :: Internet :: WWW/HTTP",
42 "Topic :: Internet :: WWW/HTTP :: Dynamic Content",
43 "Topic :: Software Development :: Libraries :: Python Modules",
44 ],
45 keywords=['forms', 'django', 'crispy', 'DRY'],
46 author='Miguel Araujo',
47 author_email='[email protected]',
48 url='https://github.com/django-crispy-forms/django-crispy-forms',
49 license='MIT',
50 packages=find_packages(exclude=['docs']),
51 include_package_data=True,
52 zip_safe=False,
53 )
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -50,4 +50,5 @@
packages=find_packages(exclude=['docs']),
include_package_data=True,
zip_safe=False,
+ python_requires='>=3.5',
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -50,4 +50,5 @@\n packages=find_packages(exclude=['docs']),\n include_package_data=True,\n zip_safe=False,\n+ python_requires='>=3.5',\n )\n", "issue": "Specify Python version requirement (>=3.x)\nhttps://github.com/django-crispy-forms/django-crispy-forms/blob/ba53410f752402436d84dc8ab00e2b6e1e67a74c/setup.py#L22\r\n\r\nThe drop of Python 2 support in release 1.9.0 has broken installation of the package for users of Python 2 because it does not specify that Python 3 is required.\r\n\r\nThe recommendation is specified here, including instructions for `setup.py`: \r\nhttps://packaging.python.org/guides/dropping-older-python-versions/\r\n\r\nWould you mind adding the specification to the package?\r\nThis would also mean either re-releasing 1.9.0 :see_no_evil: or update 1.9.0 directly in Pypi with that information (is it possible?) or releasing something like 1.9.0.1 or 1.9.1 and removing 1.9.0 from Pypi...\n", "before_files": [{"content": "import os\nimport sys\n\nfrom setuptools import find_packages, setup\n\nimport crispy_forms\n\nif sys.argv[-1] == 'publish':\n if os.system(\"pip freeze | grep wheel\"):\n print(\"wheel not installed.\\nUse `pip install wheel`.\\nExiting.\")\n sys.exit()\n if os.system(\"pip freeze | grep twine\"):\n print(\"twine not installed.\\nUse `pip install twine`.\\nExiting.\")\n sys.exit()\n os.system(\"python setup.py sdist bdist_wheel\")\n os.system(\"twine upload dist/*\")\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a {} -m 'version {}'\".format(crispy_forms.__version__, crispy_forms.__version__))\n print(\" git push --tags\")\n sys.exit()\n\nsetup(\n name='django-crispy-forms',\n version=crispy_forms.__version__,\n description=\"Best way to have Django DRY forms\",\n long_description=open('README.rst').read(),\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Django\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: JavaScript\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: Dynamic Content\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n keywords=['forms', 'django', 'crispy', 'DRY'],\n author='Miguel Araujo',\n author_email='[email protected]',\n url='https://github.com/django-crispy-forms/django-crispy-forms',\n license='MIT',\n packages=find_packages(exclude=['docs']),\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport sys\n\nfrom setuptools import find_packages, setup\n\nimport crispy_forms\n\nif sys.argv[-1] == 'publish':\n if os.system(\"pip freeze | grep wheel\"):\n print(\"wheel not installed.\\nUse `pip install wheel`.\\nExiting.\")\n sys.exit()\n if os.system(\"pip freeze | grep twine\"):\n print(\"twine not installed.\\nUse `pip install twine`.\\nExiting.\")\n sys.exit()\n os.system(\"python setup.py sdist bdist_wheel\")\n os.system(\"twine upload dist/*\")\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a {} -m 'version 
{}'\".format(crispy_forms.__version__, crispy_forms.__version__))\n print(\" git push --tags\")\n sys.exit()\n\nsetup(\n name='django-crispy-forms',\n version=crispy_forms.__version__,\n description=\"Best way to have Django DRY forms\",\n long_description=open('README.rst').read(),\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Django\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: JavaScript\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: Dynamic Content\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n keywords=['forms', 'django', 'crispy', 'DRY'],\n author='Miguel Araujo',\n author_email='[email protected]',\n url='https://github.com/django-crispy-forms/django-crispy-forms',\n license='MIT',\n packages=find_packages(exclude=['docs']),\n include_package_data=True,\n zip_safe=False,\n python_requires='>=3.5',\n)\n", "path": "setup.py"}]} |
gh_patches_debug_49 | rasdani/github-patches | git_diff | inventree__InvenTree-1692 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Table ordering not working for any parameter

--- END ISSUE ---
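Note: the issue above consists only of a title and screenshots, so the server-side meaning of "table ordering" has to be inferred. As a minimal, illustrative sketch (the view name and ordering_fields below are assumptions for illustration, not taken from the issue), a sortable table column maps to an `ordering` query parameter that Django REST Framework's `OrderingFilter` applies to the queryset:
```python
# Hypothetical sketch of the DRF wiring that a table sort relies on;
# the view name and ordering_fields here are illustrative assumptions.
from rest_framework import filters, generics

from .models import Part
from . import serializers as part_serializers


class ExamplePartList(generics.ListAPIView):
    queryset = Part.objects.all()
    serializer_class = part_serializers.PartSerializer

    filter_backends = [filters.OrderingFilter]
    ordering_fields = ['name', 'description']  # only these may appear in ?ordering=


# A column click issues a request such as GET /api/part/?ordering=-name
# If the requested field is missing from ordering_fields (or refers to an
# annotated alias the filter backend does not know about), OrderingFilter
# silently drops it and the returned order never changes.
```
In other words, the symptom "ordering does not work for any parameter" typically points at the list endpoint's ordering configuration rather than at the front-end table itself.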
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `InvenTree/part/api.py`
Content:
```
1 """
2 Provides a JSON API for the Part app
3 """
4
5 # -*- coding: utf-8 -*-
6 from __future__ import unicode_literals
7
8 from django_filters.rest_framework import DjangoFilterBackend
9 from django.http import JsonResponse
10 from django.db.models import Q, F, Count, Min, Max, Avg
11 from django.utils.translation import ugettext_lazy as _
12
13 from rest_framework import status
14 from rest_framework.response import Response
15 from rest_framework import filters, serializers
16 from rest_framework import generics
17
18 from djmoney.money import Money
19 from djmoney.contrib.exchange.models import convert_money
20 from djmoney.contrib.exchange.exceptions import MissingRate
21
22 from django.conf.urls import url, include
23 from django.urls import reverse
24
25 from .models import Part, PartCategory, BomItem
26 from .models import PartParameter, PartParameterTemplate
27 from .models import PartAttachment, PartTestTemplate
28 from .models import PartSellPriceBreak, PartInternalPriceBreak
29 from .models import PartCategoryParameterTemplate
30
31 from common.models import InvenTreeSetting
32 from build.models import Build
33
34 from . import serializers as part_serializers
35
36 from InvenTree.views import TreeSerializer
37 from InvenTree.helpers import str2bool, isNull
38 from InvenTree.api import AttachmentMixin
39
40 from InvenTree.status_codes import BuildStatus
41
42
43 class PartCategoryTree(TreeSerializer):
44
45 title = _("Parts")
46 model = PartCategory
47
48 queryset = PartCategory.objects.all()
49
50 @property
51 def root_url(self):
52 return reverse('part-index')
53
54 def get_items(self):
55 return PartCategory.objects.all().prefetch_related('parts', 'children')
56
57
58 class CategoryList(generics.ListCreateAPIView):
59 """ API endpoint for accessing a list of PartCategory objects.
60
61 - GET: Return a list of PartCategory objects
62 - POST: Create a new PartCategory object
63 """
64
65 queryset = PartCategory.objects.all()
66 serializer_class = part_serializers.CategorySerializer
67
68 def filter_queryset(self, queryset):
69 """
70 Custom filtering:
71 - Allow filtering by "null" parent to retrieve top-level part categories
72 """
73
74 queryset = super().filter_queryset(queryset)
75
76 params = self.request.query_params
77
78 cat_id = params.get('parent', None)
79
80 cascade = str2bool(params.get('cascade', False))
81
82 # Do not filter by category
83 if cat_id is None:
84 pass
85 # Look for top-level categories
86 elif isNull(cat_id):
87
88 if not cascade:
89 queryset = queryset.filter(parent=None)
90
91 else:
92 try:
93 category = PartCategory.objects.get(pk=cat_id)
94
95 if cascade:
96 parents = category.get_descendants(include_self=True)
97 parent_ids = [p.id for p in parents]
98
99 queryset = queryset.filter(parent__in=parent_ids)
100 else:
101 queryset = queryset.filter(parent=category)
102
103 except (ValueError, PartCategory.DoesNotExist):
104 pass
105
106 return queryset
107
108 filter_backends = [
109 DjangoFilterBackend,
110 filters.SearchFilter,
111 filters.OrderingFilter,
112 ]
113
114 filter_fields = [
115 ]
116
117 ordering_fields = [
118 'name',
119 ]
120
121 ordering = 'name'
122
123 search_fields = [
124 'name',
125 'description',
126 ]
127
128
129 class CategoryDetail(generics.RetrieveUpdateDestroyAPIView):
130 """ API endpoint for detail view of a single PartCategory object """
131 serializer_class = part_serializers.CategorySerializer
132 queryset = PartCategory.objects.all()
133
134
135 class CategoryParameters(generics.ListAPIView):
136 """ API endpoint for accessing a list of PartCategoryParameterTemplate objects.
137
138 - GET: Return a list of PartCategoryParameterTemplate objects
139 """
140
141 queryset = PartCategoryParameterTemplate.objects.all()
142 serializer_class = part_serializers.CategoryParameterTemplateSerializer
143
144 def get_queryset(self):
145 """
146 Custom filtering:
147 - Allow filtering by "null" parent to retrieve all categories parameter templates
148 - Allow filtering by category
149 - Allow traversing all parent categories
150 """
151
152 try:
153 cat_id = int(self.kwargs.get('pk', None))
154 except TypeError:
155 cat_id = None
156 fetch_parent = str2bool(self.request.query_params.get('fetch_parent', 'true'))
157
158 queryset = super().get_queryset()
159
160 if isinstance(cat_id, int):
161
162 try:
163 category = PartCategory.objects.get(pk=cat_id)
164 except PartCategory.DoesNotExist:
165 # Return empty queryset
166 return PartCategoryParameterTemplate.objects.none()
167
168 category_list = [cat_id]
169
170 if fetch_parent:
171 parent_categories = category.get_ancestors()
172 for parent in parent_categories:
173 category_list.append(parent.pk)
174
175 queryset = queryset.filter(category__in=category_list)
176
177 return queryset
178
179
180 class PartSalePriceList(generics.ListCreateAPIView):
181 """
182 API endpoint for list view of PartSalePriceBreak model
183 """
184
185 queryset = PartSellPriceBreak.objects.all()
186 serializer_class = part_serializers.PartSalePriceSerializer
187
188 filter_backends = [
189 DjangoFilterBackend
190 ]
191
192 filter_fields = [
193 'part',
194 ]
195
196
197 class PartInternalPriceList(generics.ListCreateAPIView):
198 """
199 API endpoint for list view of PartInternalPriceBreak model
200 """
201
202 queryset = PartInternalPriceBreak.objects.all()
203 serializer_class = part_serializers.PartInternalPriceSerializer
204 permission_required = 'roles.sales_order.show'
205
206 filter_backends = [
207 DjangoFilterBackend
208 ]
209
210 filter_fields = [
211 'part',
212 ]
213
214
215 class PartAttachmentList(generics.ListCreateAPIView, AttachmentMixin):
216 """
217 API endpoint for listing (and creating) a PartAttachment (file upload).
218 """
219
220 queryset = PartAttachment.objects.all()
221 serializer_class = part_serializers.PartAttachmentSerializer
222
223 filter_backends = [
224 DjangoFilterBackend,
225 ]
226
227 filter_fields = [
228 'part',
229 ]
230
231
232 class PartTestTemplateList(generics.ListCreateAPIView):
233 """
234 API endpoint for listing (and creating) a PartTestTemplate.
235 """
236
237 queryset = PartTestTemplate.objects.all()
238 serializer_class = part_serializers.PartTestTemplateSerializer
239
240 def filter_queryset(self, queryset):
241 """
242 Filter the test list queryset.
243
244 If filtering by 'part', we include results for any parts "above" the specified part.
245 """
246
247 queryset = super().filter_queryset(queryset)
248
249 params = self.request.query_params
250
251 part = params.get('part', None)
252
253 # Filter by part
254 if part:
255 try:
256 part = Part.objects.get(pk=part)
257 queryset = queryset.filter(part__in=part.get_ancestors(include_self=True))
258 except (ValueError, Part.DoesNotExist):
259 pass
260
261 # Filter by 'required' status
262 required = params.get('required', None)
263
264 if required is not None:
265 queryset = queryset.filter(required=required)
266
267 return queryset
268
269 filter_backends = [
270 DjangoFilterBackend,
271 filters.OrderingFilter,
272 filters.SearchFilter,
273 ]
274
275
276 class PartThumbs(generics.ListAPIView):
277 """
278 API endpoint for retrieving information on available Part thumbnails
279 """
280
281 queryset = Part.objects.all()
282 serializer_class = part_serializers.PartThumbSerializer
283
284 def get_queryset(self):
285
286 queryset = super().get_queryset()
287
288 # Get all Parts which have an associated image
289 queryset = queryset.exclude(image='')
290
291 return queryset
292
293 def list(self, request, *args, **kwargs):
294 """
295 Serialize the available Part images.
296 - Images may be used for multiple parts!
297 """
298
299 queryset = self.get_queryset()
300
301 # TODO - We should return the thumbnails here, not the full image!
302
303 # Return the most popular parts first
304 data = queryset.values(
305 'image',
306 ).annotate(count=Count('image')).order_by('-count')
307
308 return Response(data)
309
310
311 class PartThumbsUpdate(generics.RetrieveUpdateAPIView):
312 """ API endpoint for updating Part thumbnails"""
313
314 queryset = Part.objects.all()
315 serializer_class = part_serializers.PartThumbSerializerUpdate
316
317 filter_backends = [
318 DjangoFilterBackend
319 ]
320
321
322 class PartDetail(generics.RetrieveUpdateDestroyAPIView):
323 """ API endpoint for detail view of a single Part object """
324
325 queryset = Part.objects.all()
326 serializer_class = part_serializers.PartSerializer
327
328 starred_parts = None
329
330 def get_queryset(self, *args, **kwargs):
331 queryset = super().get_queryset(*args, **kwargs)
332
333 queryset = part_serializers.PartSerializer.prefetch_queryset(queryset)
334 queryset = part_serializers.PartSerializer.annotate_queryset(queryset)
335
336 return queryset
337
338 def get_serializer(self, *args, **kwargs):
339
340 try:
341 kwargs['category_detail'] = str2bool(self.request.query_params.get('category_detail', False))
342 except AttributeError:
343 pass
344
345 # Ensure the request context is passed through
346 kwargs['context'] = self.get_serializer_context()
347
348 # Pass a list of "starred" parts for the current user to the serializer
349 # We do this to reduce the number of database queries required!
350 if self.starred_parts is None and self.request is not None:
351 self.starred_parts = [star.part for star in self.request.user.starred_parts.all()]
352
353 kwargs['starred_parts'] = self.starred_parts
354
355 return self.serializer_class(*args, **kwargs)
356
357 def destroy(self, request, *args, **kwargs):
358 # Retrieve part
359 part = Part.objects.get(pk=int(kwargs['pk']))
360 # Check if inactive
361 if not part.active:
362 # Delete
363 return super(PartDetail, self).destroy(request, *args, **kwargs)
364 else:
365 # Return 405 error
366 message = f'Part \'{part.name}\' (pk = {part.pk}) is active: cannot delete'
367 return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED, data=message)
368
369 def update(self, request, *args, **kwargs):
370 """
371 Custom update functionality for Part instance.
372
373 - If the 'starred' field is provided, update the 'starred' status against the current user
374 """
375
376 if 'starred' in request.data:
377 starred = str2bool(request.data.get('starred', None))
378
379 self.get_object().setStarred(request.user, starred)
380
381 response = super().update(request, *args, **kwargs)
382
383 return response
384
385
386 class PartList(generics.ListCreateAPIView):
387 """ API endpoint for accessing a list of Part objects
388
389 - GET: Return list of objects
390 - POST: Create a new Part object
391
392 The Part object list can be filtered by:
393 - category: Filter by PartCategory reference
394 - cascade: If true, include parts from sub-categories
395 - starred: Is the part "starred" by the current user?
396 - is_template: Is the part a template part?
397 - variant_of: Filter by variant_of Part reference
398 - assembly: Filter by assembly field
399 - component: Filter by component field
400 - trackable: Filter by trackable field
401 - purchaseable: Filter by purchaseable field
402 - salable: Filter by salable field
403 - active: Filter by active field
404 - ancestor: Filter parts by 'ancestor' (template / variant tree)
405 """
406
407 serializer_class = part_serializers.PartSerializer
408
409 queryset = Part.objects.all()
410
411 starred_parts = None
412
413 def get_serializer(self, *args, **kwargs):
414
415 # Ensure the request context is passed through
416 kwargs['context'] = self.get_serializer_context()
417
418 # Pass a list of "starred" parts for the current user to the serializer
419 # We do this to reduce the number of database queries required!
420 if self.starred_parts is None and self.request is not None:
421 self.starred_parts = [star.part for star in self.request.user.starred_parts.all()]
422
423 kwargs['starred_parts'] = self.starred_parts
424
425 return self.serializer_class(*args, **kwargs)
426
427 def list(self, request, *args, **kwargs):
428 """
429 Override the 'list' method, as the PartCategory objects are
430 very expensive to serialize!
431
432 So we will serialize them first, and keep them in memory,
433 so that they do not have to be serialized multiple times...
434 """
435
436 queryset = self.filter_queryset(self.get_queryset())
437
438 page = self.paginate_queryset(queryset)
439
440 if page is not None:
441 serializer = self.get_serializer(page, many=True)
442 else:
443 serializer = self.get_serializer(queryset, many=True)
444
445 data = serializer.data
446
447 # Do we wish to include PartCategory detail?
448 if str2bool(request.query_params.get('category_detail', False)):
449
450 # Work out which part categories we need to query
451 category_ids = set()
452
453 for part in data:
454 cat_id = part['category']
455
456 if cat_id is not None:
457 category_ids.add(cat_id)
458
459 # Fetch only the required PartCategory objects from the database
460 categories = PartCategory.objects.filter(pk__in=category_ids).prefetch_related(
461 'parts',
462 'parent',
463 'children',
464 )
465
466 category_map = {}
467
468 # Serialize each PartCategory object
469 for category in categories:
470 category_map[category.pk] = part_serializers.CategorySerializer(category).data
471
472 for part in data:
473 cat_id = part['category']
474
475 if cat_id is not None and cat_id in category_map.keys():
476 detail = category_map[cat_id]
477 else:
478 detail = None
479
480 part['category_detail'] = detail
481
482 """
483 Determine the response type based on the request.
484 a) For HTTP requests (e.g. via the browseable API) return a DRF response
485 b) For AJAX requests, simply return a JSON rendered response.
486 """
487 if page is not None:
488 return self.get_paginated_response(data)
489 elif request.is_ajax():
490 return JsonResponse(data, safe=False)
491 else:
492 return Response(data)
493
494 def perform_create(self, serializer):
495 """
496 We wish to save the user who created this part!
497
498 Note: Implementation copied from DRF class CreateModelMixin
499 """
500
501 part = serializer.save()
502 part.creation_user = self.request.user
503 part.save()
504
505 def get_queryset(self, *args, **kwargs):
506
507 queryset = super().get_queryset(*args, **kwargs)
508
509 queryset = part_serializers.PartSerializer.prefetch_queryset(queryset)
510 queryset = part_serializers.PartSerializer.annotate_queryset(queryset)
511
512 return queryset
513
514 def filter_queryset(self, queryset):
515 """
516 Perform custom filtering of the queryset.
517 We override the DRF filter_fields here because the filtering logic below is too complex to express with simple filter_fields.
518 """
519
520 params = self.request.query_params
521
522 queryset = super().filter_queryset(queryset)
523
524 # Filter by "uses" query - Limit to parts which use the provided part
525 uses = params.get('uses', None)
526
527 if uses:
528 try:
529 uses = Part.objects.get(pk=uses)
530
531 queryset = queryset.filter(uses.get_used_in_filter())
532
533 except (ValueError, Part.DoesNotExist):
534 pass
535
536 # Filter by 'ancestor'?
537 ancestor = params.get('ancestor', None)
538
539 if ancestor is not None:
540 # If an 'ancestor' part is provided, filter to match only children
541 try:
542 ancestor = Part.objects.get(pk=ancestor)
543 descendants = ancestor.get_descendants(include_self=False)
544 queryset = queryset.filter(pk__in=[d.pk for d in descendants])
545 except (ValueError, Part.DoesNotExist):
546 pass
547
548 # Filter by whether the part has an IPN (internal part number) defined
549 has_ipn = params.get('has_ipn', None)
550
551 if has_ipn is not None:
552 has_ipn = str2bool(has_ipn)
553
554 if has_ipn:
555 queryset = queryset.exclude(IPN='')
556 else:
557 queryset = queryset.filter(IPN='')
558
559 # Filter by whether the BOM has been validated (or not)
560 bom_valid = params.get('bom_valid', None)
561
562 # TODO: Querying bom_valid status may be quite expensive
563 # TODO: (It needs to be profiled!)
564 # TODO: It might be worth caching the bom_valid status to a database column
565
566 if bom_valid is not None:
567
568 bom_valid = str2bool(bom_valid)
569
570 # Limit queryset to active assemblies
571 queryset = queryset.filter(active=True, assembly=True)
572
573 pks = []
574
575 for part in queryset:
576 if part.is_bom_valid() == bom_valid:
577 pks.append(part.pk)
578
579 queryset = queryset.filter(pk__in=pks)
580
581 # Filter by 'starred' parts?
582 starred = params.get('starred', None)
583
584 if starred is not None:
585 starred = str2bool(starred)
586 starred_parts = [star.part.pk for star in self.request.user.starred_parts.all()]
587
588 if starred:
589 queryset = queryset.filter(pk__in=starred_parts)
590 else:
591 queryset = queryset.exclude(pk__in=starred_parts)
592
593 # Cascade? (Default = True)
594 cascade = str2bool(params.get('cascade', True))
595
596 # Does the user wish to filter by category?
597 cat_id = params.get('category', None)
598
599 if cat_id is None:
600 # No category filtering if category is not specified
601 pass
602
603 else:
604 # Category has been specified!
605 if isNull(cat_id):
606 # A 'null' category is the top-level category
607 if cascade is False:
608 # Do not cascade, only list parts in the top-level category
609 queryset = queryset.filter(category=None)
610
611 else:
612 try:
613 category = PartCategory.objects.get(pk=cat_id)
614
615 # If '?cascade=true' then include parts which exist in sub-categories
616 if cascade:
617 queryset = queryset.filter(category__in=category.getUniqueChildren())
618 # Just return parts directly in the requested category
619 else:
620 queryset = queryset.filter(category=cat_id)
621 except (ValueError, PartCategory.DoesNotExist):
622 pass
623
624 # Annotate calculated data to the queryset
625 # (This will be used for further filtering)
626 queryset = part_serializers.PartSerializer.annotate_queryset(queryset)
627
628 # Filter by whether the part has stock
629 has_stock = params.get("has_stock", None)
630
631 if has_stock is not None:
632 has_stock = str2bool(has_stock)
633
634 if has_stock:
635 queryset = queryset.filter(Q(in_stock__gt=0))
636 else:
637 queryset = queryset.filter(Q(in_stock__lte=0))
638
639 # If we are filtering by 'low_stock' status
640 low_stock = params.get('low_stock', None)
641
642 if low_stock is not None:
643 low_stock = str2bool(low_stock)
644
645 if low_stock:
646 # Ignore any parts which do not have a specified 'minimum_stock' level
647 queryset = queryset.exclude(minimum_stock=0)
648 # Filter items which have an 'in_stock' level lower than 'minimum_stock'
649 queryset = queryset.filter(Q(in_stock__lt=F('minimum_stock')))
650 else:
651 # Filter items which have an 'in_stock' level higher than 'minimum_stock'
652 queryset = queryset.filter(Q(in_stock__gte=F('minimum_stock')))
653
654 # Filter by "parts which need stock to complete build"
655 stock_to_build = params.get('stock_to_build', None)
656
657 # TODO: This is super expensive, database query wise...
658 # TODO: Need to figure out a cheaper way of making this filter query
659
660 if stock_to_build is not None:
661 # Get active builds
662 builds = Build.objects.filter(status__in=BuildStatus.ACTIVE_CODES)
663 # Store parts with builds needing stock
664 parts_needed_to_complete_builds = []
665 # Filter required parts
666 for build in builds:
667 parts_needed_to_complete_builds += [part.pk for part in build.required_parts_to_complete_build]
668
669 queryset = queryset.filter(pk__in=parts_needed_to_complete_builds)
670
671 # Optionally limit the maximum number of returned results
672 # e.g. for displaying "recent part" list
673 max_results = params.get('max_results', None)
674
675 if max_results is not None:
676 try:
677 max_results = int(max_results)
678
679 if max_results > 0:
680 queryset = queryset[:max_results]
681
682 except (ValueError):
683 pass
684
685 return queryset
686
687 filter_backends = [
688 DjangoFilterBackend,
689 filters.SearchFilter,
690 filters.OrderingFilter,
691 ]
692
693 filter_fields = [
694 'is_template',
695 'variant_of',
696 'assembly',
697 'component',
698 'trackable',
699 'purchaseable',
700 'salable',
701 'active',
702 ]
703
704 ordering_fields = [
705 'name',
706 'creation_date',
707 'IPN',
708 'in_stock',
709 ]
710
711 # Default ordering
712 ordering = 'name'
713
714 search_fields = [
715 'name',
716 'description',
717 'IPN',
718 'revision',
719 'keywords',
720 'category__name',
721 ]
722
723
724 class PartParameterTemplateList(generics.ListCreateAPIView):
725 """ API endpoint for accessing a list of PartParameterTemplate objects.
726
727 - GET: Return list of PartParameterTemplate objects
728 - POST: Create a new PartParameterTemplate object
729 """
730
731 queryset = PartParameterTemplate.objects.all()
732 serializer_class = part_serializers.PartParameterTemplateSerializer
733
734 filter_backends = [
735 filters.OrderingFilter,
736 ]
737
738 filter_fields = [
739 'name',
740 ]
741
742
743 class PartParameterList(generics.ListCreateAPIView):
744 """ API endpoint for accessing a list of PartParameter objects
745
746 - GET: Return list of PartParameter objects
747 - POST: Create a new PartParameter object
748 """
749
750 queryset = PartParameter.objects.all()
751 serializer_class = part_serializers.PartParameterSerializer
752
753 filter_backends = [
754 DjangoFilterBackend
755 ]
756
757 filter_fields = [
758 'part',
759 'template',
760 ]
761
762
763 class PartParameterDetail(generics.RetrieveUpdateDestroyAPIView):
764 """
765 API endpoint for detail view of a single PartParameter object
766 """
767
768 queryset = PartParameter.objects.all()
769 serializer_class = part_serializers.PartParameterSerializer
770
771
772 class BomList(generics.ListCreateAPIView):
773 """ API endpoint for accessing a list of BomItem objects.
774
775 - GET: Return list of BomItem objects
776 - POST: Create a new BomItem object
777 """
778
779 serializer_class = part_serializers.BomItemSerializer
780
781 def list(self, request, *args, **kwargs):
782
783 queryset = self.filter_queryset(self.get_queryset())
784
785 serializer = self.get_serializer(queryset, many=True)
786
787 data = serializer.data
788
789 if request.is_ajax():
790 return JsonResponse(data, safe=False)
791 else:
792 return Response(data)
793
794 def get_serializer(self, *args, **kwargs):
795
796 # Do we wish to include extra detail?
797 try:
798 kwargs['part_detail'] = str2bool(self.request.GET.get('part_detail', None))
799 except AttributeError:
800 pass
801
802 try:
803 kwargs['sub_part_detail'] = str2bool(self.request.GET.get('sub_part_detail', None))
804 except AttributeError:
805 pass
806
807 # Ensure the request context is passed through!
808 kwargs['context'] = self.get_serializer_context()
809
810 return self.serializer_class(*args, **kwargs)
811
812 def get_queryset(self, *args, **kwargs):
813
814 queryset = BomItem.objects.all()
815
816 queryset = self.get_serializer_class().setup_eager_loading(queryset)
817
818 return queryset
819
820 def filter_queryset(self, queryset):
821
822 queryset = super().filter_queryset(queryset)
823
824 params = self.request.query_params
825
826 # Filter by "optional" status?
827 optional = params.get('optional', None)
828
829 if optional is not None:
830 optional = str2bool(optional)
831
832 queryset = queryset.filter(optional=optional)
833
834 # Filter by "inherited" status
835 inherited = params.get('inherited', None)
836
837 if inherited is not None:
838 inherited = str2bool(inherited)
839
840 queryset = queryset.filter(inherited=inherited)
841
842 # Filter by "allow_variants"
843 variants = params.get("allow_variants", None)
844
845 if variants is not None:
846 variants = str2bool(variants)
847
848 queryset = queryset.filter(allow_variants=variants)
849
850 # Filter by part?
851 part = params.get('part', None)
852
853 if part is not None:
854 """
855 If we are filtering by "part", there are two cases to consider:
856
857 a) Bom items which are defined for *this* part
858 b) Inherited parts which are defined for a *parent* part
859
860 So we need to construct two queries!
861 """
862
863 # First, check that the part is actually valid!
864 try:
865 part = Part.objects.get(pk=part)
866
867 queryset = queryset.filter(part.get_bom_item_filter())
868
869 except (ValueError, Part.DoesNotExist):
870 pass
871
872 # Filter by "active" status of the part
873 part_active = params.get('part_active', None)
874
875 if part_active is not None:
876 part_active = str2bool(part_active)
877 queryset = queryset.filter(part__active=part_active)
878
879 # Filter by "trackable" status of the part
880 part_trackable = params.get('part_trackable', None)
881
882 if part_trackable is not None:
883 part_trackable = str2bool(part_trackable)
884 queryset = queryset.filter(part__trackable=part_trackable)
885
886 # Filter by "trackable" status of the sub-part
887 sub_part_trackable = params.get('sub_part_trackable', None)
888
889 if sub_part_trackable is not None:
890 sub_part_trackable = str2bool(sub_part_trackable)
891 queryset = queryset.filter(sub_part__trackable=sub_part_trackable)
892
893 # Filter by whether the BOM line has been validated
894 validated = params.get('validated', None)
895
896 if validated is not None:
897 validated = str2bool(validated)
898
899 # Work out which lines have actually been validated
900 pks = []
901
902 for bom_item in queryset.all():
903 if bom_item.is_line_valid:
904 pks.append(bom_item.pk)
905
906 if validated:
907 queryset = queryset.filter(pk__in=pks)
908 else:
909 queryset = queryset.exclude(pk__in=pks)
910
911 # Annotate with purchase prices
912 queryset = queryset.annotate(
913 purchase_price_min=Min('sub_part__stock_items__purchase_price'),
914 purchase_price_max=Max('sub_part__stock_items__purchase_price'),
915 purchase_price_avg=Avg('sub_part__stock_items__purchase_price'),
916 )
917
918 # Get values for currencies
919 currencies = queryset.annotate(
920 purchase_price_currency=F('sub_part__stock_items__purchase_price_currency'),
921 ).values('pk', 'sub_part', 'purchase_price_currency')
922
923 def convert_price(price, currency, decimal_places=4):
924 """ Convert price field, returns Money field """
925
926 price_adjusted = None
927
928 # Get default currency from settings
929 default_currency = InvenTreeSetting.get_setting('INVENTREE_DEFAULT_CURRENCY')
930
931 if price:
932 if currency and default_currency:
933 try:
934 # Get adjusted price
935 price_adjusted = convert_money(Money(price, currency), default_currency)
936 except MissingRate:
937 # No conversion rate set
938 price_adjusted = Money(price, currency)
939 else:
940 # Currency exists
941 if currency:
942 price_adjusted = Money(price, currency)
943 # Default currency exists
944 if default_currency:
945 price_adjusted = Money(price, default_currency)
946
947 if price_adjusted and decimal_places:
948 price_adjusted.decimal_places = decimal_places
949
950 return price_adjusted
951
952 # Convert prices to default currency (using backend conversion rates)
953 for bom_item in queryset:
954 # Find associated currency (select first found)
955 purchase_price_currency = None
956 for currency_item in currencies:
957 if currency_item['pk'] == bom_item.pk and currency_item['sub_part'] == bom_item.sub_part.pk:
958 purchase_price_currency = currency_item['purchase_price_currency']
959 break
960 # Convert prices
961 bom_item.purchase_price_min = convert_price(bom_item.purchase_price_min, purchase_price_currency)
962 bom_item.purchase_price_max = convert_price(bom_item.purchase_price_max, purchase_price_currency)
963 bom_item.purchase_price_avg = convert_price(bom_item.purchase_price_avg, purchase_price_currency)
964
965 return queryset
966
967 filter_backends = [
968 DjangoFilterBackend,
969 filters.SearchFilter,
970 filters.OrderingFilter,
971 ]
972
973 filter_fields = [
974 ]
975
976
977 class BomDetail(generics.RetrieveUpdateDestroyAPIView):
978 """ API endpoint for detail view of a single BomItem object """
979
980 queryset = BomItem.objects.all()
981 serializer_class = part_serializers.BomItemSerializer
982
983
984 class BomItemValidate(generics.UpdateAPIView):
985 """ API endpoint for validating a BomItem """
986
987 # Very simple serializers
988 class BomItemValidationSerializer(serializers.Serializer):
989
990 valid = serializers.BooleanField(default=False)
991
992 queryset = BomItem.objects.all()
993 serializer_class = BomItemValidationSerializer
994
995 def update(self, request, *args, **kwargs):
996 """ Perform update request """
997
998 partial = kwargs.pop('partial', False)
999
1000 valid = request.data.get('valid', False)
1001
1002 instance = self.get_object()
1003
1004 serializer = self.get_serializer(instance, data=request.data, partial=partial)
1005 serializer.is_valid(raise_exception=True)
1006
1007 if type(instance) == BomItem:
1008 instance.validate_hash(valid)
1009
1010 return Response(serializer.data)
1011
1012
1013 part_api_urls = [
1014 url(r'^tree/?', PartCategoryTree.as_view(), name='api-part-tree'),
1015
1016 # Base URL for PartCategory API endpoints
1017 url(r'^category/', include([
1018 url(r'^(?P<pk>\d+)/parameters/?', CategoryParameters.as_view(), name='api-part-category-parameters'),
1019 url(r'^(?P<pk>\d+)/?', CategoryDetail.as_view(), name='api-part-category-detail'),
1020 url(r'^$', CategoryList.as_view(), name='api-part-category-list'),
1021 ])),
1022
1023 # Base URL for PartTestTemplate API endpoints
1024 url(r'^test-template/', include([
1025 url(r'^$', PartTestTemplateList.as_view(), name='api-part-test-template-list'),
1026 ])),
1027
1028 # Base URL for PartAttachment API endpoints
1029 url(r'^attachment/', include([
1030 url(r'^$', PartAttachmentList.as_view(), name='api-part-attachment-list'),
1031 ])),
1032
1033 # Base URL for part sale pricing
1034 url(r'^sale-price/', include([
1035 url(r'^.*$', PartSalePriceList.as_view(), name='api-part-sale-price-list'),
1036 ])),
1037
1038 # Base URL for part internal pricing
1039 url(r'^internal-price/', include([
1040 url(r'^.*$', PartInternalPriceList.as_view(), name='api-part-internal-price-list'),
1041 ])),
1042
1043 # Base URL for PartParameter API endpoints
1044 url(r'^parameter/', include([
1045 url(r'^template/$', PartParameterTemplateList.as_view(), name='api-part-param-template-list'),
1046
1047 url(r'^(?P<pk>\d+)/', PartParameterDetail.as_view(), name='api-part-param-detail'),
1048 url(r'^.*$', PartParameterList.as_view(), name='api-part-param-list'),
1049 ])),
1050
1051 url(r'^thumbs/', include([
1052 url(r'^$', PartThumbs.as_view(), name='api-part-thumbs'),
1053 url(r'^(?P<pk>\d+)/?', PartThumbsUpdate.as_view(), name='api-part-thumbs-update'),
1054 ])),
1055
1056 url(r'^(?P<pk>\d+)/?', PartDetail.as_view(), name='api-part-detail'),
1057
1058 url(r'^.*$', PartList.as_view(), name='api-part-list'),
1059 ]
1060
1061 bom_api_urls = [
1062 # BOM Item Detail
1063 url(r'^(?P<pk>\d+)/', include([
1064 url(r'^validate/?', BomItemValidate.as_view(), name='api-bom-item-validate'),
1065 url(r'^.*$', BomDetail.as_view(), name='api-bom-item-detail'),
1066 ])),
1067
1068 # Catch-all
1069 url(r'^.*$', BomList.as_view(), name='api-bom-list'),
1070 ]
1071
```
--- END FILES ---
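
For readers unfamiliar with Django REST Framework's ordering machinery (background only — not part of the repository files above): `filters.OrderingFilter` honours a `?ordering=<field>` query parameter only when that field name is listed in the view's `ordering_fields`; any other value is silently discarded and the view falls back to its default `ordering`. The sketch below is a minimal illustration of that mechanism, assuming a configured Django + DRF project; it borrows the `Part` model and serializer names from the file above purely for familiarity.

```python
# Minimal sketch -- assumes a configured Django + DRF project.
# The model and serializer imports mirror the InvenTree names above,
# but this class is illustrative and not part of the repository.
from rest_framework import filters, generics

from part.models import Part
from part.serializers import PartSerializer


class ExamplePartList(generics.ListAPIView):
    """List view showing how OrderingFilter consumes 'ordering_fields'."""

    queryset = Part.objects.all()
    serializer_class = PartSerializer

    filter_backends = [filters.OrderingFilter]

    # OrderingFilter silently drops any ?ordering= value not listed here,
    # e.g. GET /api/part/?ordering=-category sorts by category (descending)
    # only because 'category' appears in ordering_fields.
    ordering_fields = ['name', 'category']

    # Fallback ordering used when no valid ?ordering= parameter is supplied.
    ordering = 'name'
```

This is the same mechanism the `PartList` view above relies on for table ordering.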
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/InvenTree/part/api.py b/InvenTree/part/api.py
--- a/InvenTree/part/api.py
+++ b/InvenTree/part/api.py
@@ -706,6 +706,7 @@
'creation_date',
'IPN',
'in_stock',
+ 'category',
]
# Default ordering
| {"golden_diff": "diff --git a/InvenTree/part/api.py b/InvenTree/part/api.py\n--- a/InvenTree/part/api.py\n+++ b/InvenTree/part/api.py\n@@ -706,6 +706,7 @@\n 'creation_date',\n 'IPN',\n 'in_stock',\n+ 'category',\n ]\n \n # Default ordering\n", "issue": "Table ordering not working for any parameter\n\r\n\nTable ordering not working for any parameter\n\r\n\n", "before_files": [{"content": "\"\"\"\nProvides a JSON API for the Part app\n\"\"\"\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom django.http import JsonResponse\nfrom django.db.models import Q, F, Count, Min, Max, Avg\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom rest_framework import status\nfrom rest_framework.response import Response\nfrom rest_framework import filters, serializers\nfrom rest_framework import generics\n\nfrom djmoney.money import Money\nfrom djmoney.contrib.exchange.models import convert_money\nfrom djmoney.contrib.exchange.exceptions import MissingRate\n\nfrom django.conf.urls import url, include\nfrom django.urls import reverse\n\nfrom .models import Part, PartCategory, BomItem\nfrom .models import PartParameter, PartParameterTemplate\nfrom .models import PartAttachment, PartTestTemplate\nfrom .models import PartSellPriceBreak, PartInternalPriceBreak\nfrom .models import PartCategoryParameterTemplate\n\nfrom common.models import InvenTreeSetting\nfrom build.models import Build\n\nfrom . import serializers as part_serializers\n\nfrom InvenTree.views import TreeSerializer\nfrom InvenTree.helpers import str2bool, isNull\nfrom InvenTree.api import AttachmentMixin\n\nfrom InvenTree.status_codes import BuildStatus\n\n\nclass PartCategoryTree(TreeSerializer):\n\n title = _(\"Parts\")\n model = PartCategory\n\n queryset = PartCategory.objects.all()\n\n @property\n def root_url(self):\n return reverse('part-index')\n\n def get_items(self):\n return PartCategory.objects.all().prefetch_related('parts', 'children')\n\n\nclass CategoryList(generics.ListCreateAPIView):\n \"\"\" API endpoint for accessing a list of PartCategory objects.\n\n - GET: Return a list of PartCategory objects\n - POST: Create a new PartCategory object\n \"\"\"\n\n queryset = PartCategory.objects.all()\n serializer_class = part_serializers.CategorySerializer\n\n def filter_queryset(self, queryset):\n \"\"\"\n Custom filtering:\n - Allow filtering by \"null\" parent to retrieve top-level part categories\n \"\"\"\n\n queryset = super().filter_queryset(queryset)\n\n params = self.request.query_params\n\n cat_id = params.get('parent', None)\n\n cascade = str2bool(params.get('cascade', False))\n\n # Do not filter by category\n if cat_id is None:\n pass\n # Look for top-level categories\n elif isNull(cat_id):\n\n if not cascade:\n queryset = queryset.filter(parent=None)\n\n else:\n try:\n category = PartCategory.objects.get(pk=cat_id)\n\n if cascade:\n parents = category.get_descendants(include_self=True)\n parent_ids = [p.id for p in parents]\n\n queryset = queryset.filter(parent__in=parent_ids)\n else:\n queryset = queryset.filter(parent=category)\n\n except (ValueError, PartCategory.DoesNotExist):\n pass\n\n return queryset\n\n filter_backends = [\n DjangoFilterBackend,\n filters.SearchFilter,\n filters.OrderingFilter,\n ]\n\n filter_fields = [\n ]\n\n ordering_fields = [\n 'name',\n ]\n\n ordering = 'name'\n\n search_fields = [\n 'name',\n 'description',\n ]\n\n\nclass CategoryDetail(generics.RetrieveUpdateDestroyAPIView):\n \"\"\" API endpoint 
for detail view of a single PartCategory object \"\"\"\n serializer_class = part_serializers.CategorySerializer\n queryset = PartCategory.objects.all()\n\n\nclass CategoryParameters(generics.ListAPIView):\n \"\"\" API endpoint for accessing a list of PartCategoryParameterTemplate objects.\n\n - GET: Return a list of PartCategoryParameterTemplate objects\n \"\"\"\n\n queryset = PartCategoryParameterTemplate.objects.all()\n serializer_class = part_serializers.CategoryParameterTemplateSerializer\n\n def get_queryset(self):\n \"\"\"\n Custom filtering:\n - Allow filtering by \"null\" parent to retrieve all categories parameter templates\n - Allow filtering by category\n - Allow traversing all parent categories\n \"\"\"\n\n try:\n cat_id = int(self.kwargs.get('pk', None))\n except TypeError:\n cat_id = None\n fetch_parent = str2bool(self.request.query_params.get('fetch_parent', 'true'))\n\n queryset = super().get_queryset()\n\n if isinstance(cat_id, int):\n\n try:\n category = PartCategory.objects.get(pk=cat_id)\n except PartCategory.DoesNotExist:\n # Return empty queryset\n return PartCategoryParameterTemplate.objects.none()\n\n category_list = [cat_id]\n\n if fetch_parent:\n parent_categories = category.get_ancestors()\n for parent in parent_categories:\n category_list.append(parent.pk)\n\n queryset = queryset.filter(category__in=category_list)\n\n return queryset\n\n\nclass PartSalePriceList(generics.ListCreateAPIView):\n \"\"\"\n API endpoint for list view of PartSalePriceBreak model\n \"\"\"\n\n queryset = PartSellPriceBreak.objects.all()\n serializer_class = part_serializers.PartSalePriceSerializer\n\n filter_backends = [\n DjangoFilterBackend\n ]\n\n filter_fields = [\n 'part',\n ]\n\n\nclass PartInternalPriceList(generics.ListCreateAPIView):\n \"\"\"\n API endpoint for list view of PartInternalPriceBreak model\n \"\"\"\n\n queryset = PartInternalPriceBreak.objects.all()\n serializer_class = part_serializers.PartInternalPriceSerializer\n permission_required = 'roles.sales_order.show'\n\n filter_backends = [\n DjangoFilterBackend\n ]\n\n filter_fields = [\n 'part',\n ]\n\n\nclass PartAttachmentList(generics.ListCreateAPIView, AttachmentMixin):\n \"\"\"\n API endpoint for listing (and creating) a PartAttachment (file upload).\n \"\"\"\n\n queryset = PartAttachment.objects.all()\n serializer_class = part_serializers.PartAttachmentSerializer\n\n filter_backends = [\n DjangoFilterBackend,\n ]\n\n filter_fields = [\n 'part',\n ]\n\n\nclass PartTestTemplateList(generics.ListCreateAPIView):\n \"\"\"\n API endpoint for listing (and creating) a PartTestTemplate.\n \"\"\"\n\n queryset = PartTestTemplate.objects.all()\n serializer_class = part_serializers.PartTestTemplateSerializer\n\n def filter_queryset(self, queryset):\n \"\"\"\n Filter the test list queryset.\n\n If filtering by 'part', we include results for any parts \"above\" the specified part.\n \"\"\"\n\n queryset = super().filter_queryset(queryset)\n\n params = self.request.query_params\n\n part = params.get('part', None)\n\n # Filter by part\n if part:\n try:\n part = Part.objects.get(pk=part)\n queryset = queryset.filter(part__in=part.get_ancestors(include_self=True))\n except (ValueError, Part.DoesNotExist):\n pass\n\n # Filter by 'required' status\n required = params.get('required', None)\n\n if required is not None:\n queryset = queryset.filter(required=required)\n\n return queryset\n\n filter_backends = [\n DjangoFilterBackend,\n filters.OrderingFilter,\n filters.SearchFilter,\n ]\n\n\nclass PartThumbs(generics.ListAPIView):\n 
\"\"\"\n API endpoint for retrieving information on available Part thumbnails\n \"\"\"\n\n queryset = Part.objects.all()\n serializer_class = part_serializers.PartThumbSerializer\n\n def get_queryset(self):\n\n queryset = super().get_queryset()\n\n # Get all Parts which have an associated image\n queryset = queryset.exclude(image='')\n\n return queryset\n\n def list(self, request, *args, **kwargs):\n \"\"\"\n Serialize the available Part images.\n - Images may be used for multiple parts!\n \"\"\"\n\n queryset = self.get_queryset()\n\n # TODO - We should return the thumbnails here, not the full image!\n\n # Return the most popular parts first\n data = queryset.values(\n 'image',\n ).annotate(count=Count('image')).order_by('-count')\n\n return Response(data)\n\n\nclass PartThumbsUpdate(generics.RetrieveUpdateAPIView):\n \"\"\" API endpoint for updating Part thumbnails\"\"\"\n\n queryset = Part.objects.all()\n serializer_class = part_serializers.PartThumbSerializerUpdate\n\n filter_backends = [\n DjangoFilterBackend\n ]\n\n\nclass PartDetail(generics.RetrieveUpdateDestroyAPIView):\n \"\"\" API endpoint for detail view of a single Part object \"\"\"\n\n queryset = Part.objects.all()\n serializer_class = part_serializers.PartSerializer\n\n starred_parts = None\n\n def get_queryset(self, *args, **kwargs):\n queryset = super().get_queryset(*args, **kwargs)\n\n queryset = part_serializers.PartSerializer.prefetch_queryset(queryset)\n queryset = part_serializers.PartSerializer.annotate_queryset(queryset)\n\n return queryset\n\n def get_serializer(self, *args, **kwargs):\n\n try:\n kwargs['category_detail'] = str2bool(self.request.query_params.get('category_detail', False))\n except AttributeError:\n pass\n\n # Ensure the request context is passed through\n kwargs['context'] = self.get_serializer_context()\n\n # Pass a list of \"starred\" parts fo the current user to the serializer\n # We do this to reduce the number of database queries required!\n if self.starred_parts is None and self.request is not None:\n self.starred_parts = [star.part for star in self.request.user.starred_parts.all()]\n\n kwargs['starred_parts'] = self.starred_parts\n\n return self.serializer_class(*args, **kwargs)\n\n def destroy(self, request, *args, **kwargs):\n # Retrieve part\n part = Part.objects.get(pk=int(kwargs['pk']))\n # Check if inactive\n if not part.active:\n # Delete\n return super(PartDetail, self).destroy(request, *args, **kwargs)\n else:\n # Return 405 error\n message = f'Part \\'{part.name}\\' (pk = {part.pk}) is active: cannot delete'\n return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED, data=message)\n\n def update(self, request, *args, **kwargs):\n \"\"\"\n Custom update functionality for Part instance.\n\n - If the 'starred' field is provided, update the 'starred' status against current user\n \"\"\"\n\n if 'starred' in request.data:\n starred = str2bool(request.data.get('starred', None))\n\n self.get_object().setStarred(request.user, starred)\n\n response = super().update(request, *args, **kwargs)\n\n return response\n\n\nclass PartList(generics.ListCreateAPIView):\n \"\"\" API endpoint for accessing a list of Part objects\n\n - GET: Return list of objects\n - POST: Create a new Part object\n\n The Part object list can be filtered by:\n - category: Filter by PartCategory reference\n - cascade: If true, include parts from sub-categories\n - starred: Is the part \"starred\" by the current user?\n - is_template: Is the part a template part?\n - variant_of: Filter by variant_of Part reference\n - 
assembly: Filter by assembly field\n - component: Filter by component field\n - trackable: Filter by trackable field\n - purchaseable: Filter by purcahseable field\n - salable: Filter by salable field\n - active: Filter by active field\n - ancestor: Filter parts by 'ancestor' (template / variant tree)\n \"\"\"\n\n serializer_class = part_serializers.PartSerializer\n\n queryset = Part.objects.all()\n\n starred_parts = None\n\n def get_serializer(self, *args, **kwargs):\n\n # Ensure the request context is passed through\n kwargs['context'] = self.get_serializer_context()\n\n # Pass a list of \"starred\" parts fo the current user to the serializer\n # We do this to reduce the number of database queries required!\n if self.starred_parts is None and self.request is not None:\n self.starred_parts = [star.part for star in self.request.user.starred_parts.all()]\n\n kwargs['starred_parts'] = self.starred_parts\n\n return self.serializer_class(*args, **kwargs)\n\n def list(self, request, *args, **kwargs):\n \"\"\"\n Overide the 'list' method, as the PartCategory objects are\n very expensive to serialize!\n\n So we will serialize them first, and keep them in memory,\n so that they do not have to be serialized multiple times...\n \"\"\"\n\n queryset = self.filter_queryset(self.get_queryset())\n\n page = self.paginate_queryset(queryset)\n\n if page is not None:\n serializer = self.get_serializer(page, many=True)\n else:\n serializer = self.get_serializer(queryset, many=True)\n\n data = serializer.data\n\n # Do we wish to include PartCategory detail?\n if str2bool(request.query_params.get('category_detail', False)):\n\n # Work out which part categorie we need to query\n category_ids = set()\n\n for part in data:\n cat_id = part['category']\n\n if cat_id is not None:\n category_ids.add(cat_id)\n\n # Fetch only the required PartCategory objects from the database\n categories = PartCategory.objects.filter(pk__in=category_ids).prefetch_related(\n 'parts',\n 'parent',\n 'children',\n )\n\n category_map = {}\n\n # Serialize each PartCategory object\n for category in categories:\n category_map[category.pk] = part_serializers.CategorySerializer(category).data\n\n for part in data:\n cat_id = part['category']\n\n if cat_id is not None and cat_id in category_map.keys():\n detail = category_map[cat_id]\n else:\n detail = None\n\n part['category_detail'] = detail\n\n \"\"\"\n Determine the response type based on the request.\n a) For HTTP requests (e.g. 
via the browseable API) return a DRF response\n b) For AJAX requests, simply return a JSON rendered response.\n \"\"\"\n if page is not None:\n return self.get_paginated_response(data)\n elif request.is_ajax():\n return JsonResponse(data, safe=False)\n else:\n return Response(data)\n\n def perform_create(self, serializer):\n \"\"\"\n We wish to save the user who created this part!\n\n Note: Implementation copied from DRF class CreateModelMixin\n \"\"\"\n\n part = serializer.save()\n part.creation_user = self.request.user\n part.save()\n\n def get_queryset(self, *args, **kwargs):\n\n queryset = super().get_queryset(*args, **kwargs)\n\n queryset = part_serializers.PartSerializer.prefetch_queryset(queryset)\n queryset = part_serializers.PartSerializer.annotate_queryset(queryset)\n\n return queryset\n\n def filter_queryset(self, queryset):\n \"\"\"\n Perform custom filtering of the queryset.\n We overide the DRF filter_fields here because\n \"\"\"\n\n params = self.request.query_params\n\n queryset = super().filter_queryset(queryset)\n\n # Filter by \"uses\" query - Limit to parts which use the provided part\n uses = params.get('uses', None)\n\n if uses:\n try:\n uses = Part.objects.get(pk=uses)\n\n queryset = queryset.filter(uses.get_used_in_filter())\n\n except (ValueError, Part.DoesNotExist):\n pass\n\n # Filter by 'ancestor'?\n ancestor = params.get('ancestor', None)\n\n if ancestor is not None:\n # If an 'ancestor' part is provided, filter to match only children\n try:\n ancestor = Part.objects.get(pk=ancestor)\n descendants = ancestor.get_descendants(include_self=False)\n queryset = queryset.filter(pk__in=[d.pk for d in descendants])\n except (ValueError, Part.DoesNotExist):\n pass\n\n # Filter by whether the part has an IPN (internal part number) defined\n has_ipn = params.get('has_ipn', None)\n\n if has_ipn is not None:\n has_ipn = str2bool(has_ipn)\n\n if has_ipn:\n queryset = queryset.exclude(IPN='')\n else:\n queryset = queryset.filter(IPN='')\n\n # Filter by whether the BOM has been validated (or not)\n bom_valid = params.get('bom_valid', None)\n\n # TODO: Querying bom_valid status may be quite expensive\n # TODO: (It needs to be profiled!)\n # TODO: It might be worth caching the bom_valid status to a database column\n\n if bom_valid is not None:\n\n bom_valid = str2bool(bom_valid)\n\n # Limit queryset to active assemblies\n queryset = queryset.filter(active=True, assembly=True)\n\n pks = []\n\n for part in queryset:\n if part.is_bom_valid() == bom_valid:\n pks.append(part.pk)\n\n queryset = queryset.filter(pk__in=pks)\n\n # Filter by 'starred' parts?\n starred = params.get('starred', None)\n\n if starred is not None:\n starred = str2bool(starred)\n starred_parts = [star.part.pk for star in self.request.user.starred_parts.all()]\n\n if starred:\n queryset = queryset.filter(pk__in=starred_parts)\n else:\n queryset = queryset.exclude(pk__in=starred_parts)\n\n # Cascade? 
(Default = True)\n cascade = str2bool(params.get('cascade', True))\n\n # Does the user wish to filter by category?\n cat_id = params.get('category', None)\n\n if cat_id is None:\n # No category filtering if category is not specified\n pass\n\n else:\n # Category has been specified!\n if isNull(cat_id):\n # A 'null' category is the top-level category\n if cascade is False:\n # Do not cascade, only list parts in the top-level category\n queryset = queryset.filter(category=None)\n\n else:\n try:\n category = PartCategory.objects.get(pk=cat_id)\n\n # If '?cascade=true' then include parts which exist in sub-categories\n if cascade:\n queryset = queryset.filter(category__in=category.getUniqueChildren())\n # Just return parts directly in the requested category\n else:\n queryset = queryset.filter(category=cat_id)\n except (ValueError, PartCategory.DoesNotExist):\n pass\n\n # Annotate calculated data to the queryset\n # (This will be used for further filtering)\n queryset = part_serializers.PartSerializer.annotate_queryset(queryset)\n\n # Filter by whether the part has stock\n has_stock = params.get(\"has_stock\", None)\n\n if has_stock is not None:\n has_stock = str2bool(has_stock)\n\n if has_stock:\n queryset = queryset.filter(Q(in_stock__gt=0))\n else:\n queryset = queryset.filter(Q(in_stock__lte=0))\n\n # If we are filtering by 'low_stock' status\n low_stock = params.get('low_stock', None)\n\n if low_stock is not None:\n low_stock = str2bool(low_stock)\n\n if low_stock:\n # Ignore any parts which do not have a specified 'minimum_stock' level\n queryset = queryset.exclude(minimum_stock=0)\n # Filter items which have an 'in_stock' level lower than 'minimum_stock'\n queryset = queryset.filter(Q(in_stock__lt=F('minimum_stock')))\n else:\n # Filter items which have an 'in_stock' level higher than 'minimum_stock'\n queryset = queryset.filter(Q(in_stock__gte=F('minimum_stock')))\n\n # Filter by \"parts which need stock to complete build\"\n stock_to_build = params.get('stock_to_build', None)\n\n # TODO: This is super expensive, database query wise...\n # TODO: Need to figure out a cheaper way of making this filter query\n\n if stock_to_build is not None:\n # Get active builds\n builds = Build.objects.filter(status__in=BuildStatus.ACTIVE_CODES)\n # Store parts with builds needing stock\n parts_needed_to_complete_builds = []\n # Filter required parts\n for build in builds:\n parts_needed_to_complete_builds += [part.pk for part in build.required_parts_to_complete_build]\n\n queryset = queryset.filter(pk__in=parts_needed_to_complete_builds)\n\n # Optionally limit the maximum number of returned results\n # e.g. 
for displaying \"recent part\" list\n max_results = params.get('max_results', None)\n\n if max_results is not None:\n try:\n max_results = int(max_results)\n\n if max_results > 0:\n queryset = queryset[:max_results]\n\n except (ValueError):\n pass\n\n return queryset\n\n filter_backends = [\n DjangoFilterBackend,\n filters.SearchFilter,\n filters.OrderingFilter,\n ]\n\n filter_fields = [\n 'is_template',\n 'variant_of',\n 'assembly',\n 'component',\n 'trackable',\n 'purchaseable',\n 'salable',\n 'active',\n ]\n\n ordering_fields = [\n 'name',\n 'creation_date',\n 'IPN',\n 'in_stock',\n ]\n\n # Default ordering\n ordering = 'name'\n\n search_fields = [\n 'name',\n 'description',\n 'IPN',\n 'revision',\n 'keywords',\n 'category__name',\n ]\n\n\nclass PartParameterTemplateList(generics.ListCreateAPIView):\n \"\"\" API endpoint for accessing a list of PartParameterTemplate objects.\n\n - GET: Return list of PartParameterTemplate objects\n - POST: Create a new PartParameterTemplate object\n \"\"\"\n\n queryset = PartParameterTemplate.objects.all()\n serializer_class = part_serializers.PartParameterTemplateSerializer\n\n filter_backends = [\n filters.OrderingFilter,\n ]\n\n filter_fields = [\n 'name',\n ]\n\n\nclass PartParameterList(generics.ListCreateAPIView):\n \"\"\" API endpoint for accessing a list of PartParameter objects\n\n - GET: Return list of PartParameter objects\n - POST: Create a new PartParameter object\n \"\"\"\n\n queryset = PartParameter.objects.all()\n serializer_class = part_serializers.PartParameterSerializer\n\n filter_backends = [\n DjangoFilterBackend\n ]\n\n filter_fields = [\n 'part',\n 'template',\n ]\n\n\nclass PartParameterDetail(generics.RetrieveUpdateDestroyAPIView):\n \"\"\"\n API endpoint for detail view of a single PartParameter object\n \"\"\"\n\n queryset = PartParameter.objects.all()\n serializer_class = part_serializers.PartParameterSerializer\n\n\nclass BomList(generics.ListCreateAPIView):\n \"\"\" API endpoint for accessing a list of BomItem objects.\n\n - GET: Return list of BomItem objects\n - POST: Create a new BomItem object\n \"\"\"\n\n serializer_class = part_serializers.BomItemSerializer\n\n def list(self, request, *args, **kwargs):\n\n queryset = self.filter_queryset(self.get_queryset())\n\n serializer = self.get_serializer(queryset, many=True)\n\n data = serializer.data\n\n if request.is_ajax():\n return JsonResponse(data, safe=False)\n else:\n return Response(data)\n\n def get_serializer(self, *args, **kwargs):\n\n # Do we wish to include extra detail?\n try:\n kwargs['part_detail'] = str2bool(self.request.GET.get('part_detail', None))\n except AttributeError:\n pass\n\n try:\n kwargs['sub_part_detail'] = str2bool(self.request.GET.get('sub_part_detail', None))\n except AttributeError:\n pass\n\n # Ensure the request context is passed through!\n kwargs['context'] = self.get_serializer_context()\n\n return self.serializer_class(*args, **kwargs)\n\n def get_queryset(self, *args, **kwargs):\n\n queryset = BomItem.objects.all()\n\n queryset = self.get_serializer_class().setup_eager_loading(queryset)\n\n return queryset\n\n def filter_queryset(self, queryset):\n\n queryset = super().filter_queryset(queryset)\n\n params = self.request.query_params\n\n # Filter by \"optional\" status?\n optional = params.get('optional', None)\n\n if optional is not None:\n optional = str2bool(optional)\n\n queryset = queryset.filter(optional=optional)\n\n # Filter by \"inherited\" status\n inherited = params.get('inherited', None)\n\n if inherited is not None:\n 
inherited = str2bool(inherited)\n\n queryset = queryset.filter(inherited=inherited)\n\n # Filter by \"allow_variants\"\n variants = params.get(\"allow_variants\", None)\n\n if variants is not None:\n variants = str2bool(variants)\n\n queryset = queryset.filter(allow_variants=variants)\n\n # Filter by part?\n part = params.get('part', None)\n\n if part is not None:\n \"\"\"\n If we are filtering by \"part\", there are two cases to consider:\n\n a) Bom items which are defined for *this* part\n b) Inherited parts which are defined for a *parent* part\n\n So we need to construct two queries!\n \"\"\"\n\n # First, check that the part is actually valid!\n try:\n part = Part.objects.get(pk=part)\n\n queryset = queryset.filter(part.get_bom_item_filter())\n\n except (ValueError, Part.DoesNotExist):\n pass\n\n # Filter by \"active\" status of the part\n part_active = params.get('part_active', None)\n\n if part_active is not None:\n part_active = str2bool(part_active)\n queryset = queryset.filter(part__active=part_active)\n\n # Filter by \"trackable\" status of the part\n part_trackable = params.get('part_trackable', None)\n\n if part_trackable is not None:\n part_trackable = str2bool(part_trackable)\n queryset = queryset.filter(part__trackable=part_trackable)\n\n # Filter by \"trackable\" status of the sub-part\n sub_part_trackable = params.get('sub_part_trackable', None)\n\n if sub_part_trackable is not None:\n sub_part_trackable = str2bool(sub_part_trackable)\n queryset = queryset.filter(sub_part__trackable=sub_part_trackable)\n\n # Filter by whether the BOM line has been validated\n validated = params.get('validated', None)\n\n if validated is not None:\n validated = str2bool(validated)\n\n # Work out which lines have actually been validated\n pks = []\n\n for bom_item in queryset.all():\n if bom_item.is_line_valid:\n pks.append(bom_item.pk)\n\n if validated:\n queryset = queryset.filter(pk__in=pks)\n else:\n queryset = queryset.exclude(pk__in=pks)\n\n # Annotate with purchase prices\n queryset = queryset.annotate(\n purchase_price_min=Min('sub_part__stock_items__purchase_price'),\n purchase_price_max=Max('sub_part__stock_items__purchase_price'),\n purchase_price_avg=Avg('sub_part__stock_items__purchase_price'),\n )\n\n # Get values for currencies\n currencies = queryset.annotate(\n purchase_price_currency=F('sub_part__stock_items__purchase_price_currency'),\n ).values('pk', 'sub_part', 'purchase_price_currency')\n\n def convert_price(price, currency, decimal_places=4):\n \"\"\" Convert price field, returns Money field \"\"\"\n\n price_adjusted = None\n\n # Get default currency from settings\n default_currency = InvenTreeSetting.get_setting('INVENTREE_DEFAULT_CURRENCY')\n \n if price:\n if currency and default_currency:\n try:\n # Get adjusted price\n price_adjusted = convert_money(Money(price, currency), default_currency)\n except MissingRate:\n # No conversion rate set\n price_adjusted = Money(price, currency)\n else:\n # Currency exists\n if currency:\n price_adjusted = Money(price, currency)\n # Default currency exists\n if default_currency:\n price_adjusted = Money(price, default_currency)\n\n if price_adjusted and decimal_places:\n price_adjusted.decimal_places = decimal_places\n\n return price_adjusted\n\n # Convert prices to default currency (using backend conversion rates)\n for bom_item in queryset:\n # Find associated currency (select first found)\n purchase_price_currency = None\n for currency_item in currencies:\n if currency_item['pk'] == bom_item.pk and currency_item['sub_part'] == 
bom_item.sub_part.pk:\n purchase_price_currency = currency_item['purchase_price_currency']\n break\n # Convert prices\n bom_item.purchase_price_min = convert_price(bom_item.purchase_price_min, purchase_price_currency)\n bom_item.purchase_price_max = convert_price(bom_item.purchase_price_max, purchase_price_currency)\n bom_item.purchase_price_avg = convert_price(bom_item.purchase_price_avg, purchase_price_currency)\n\n return queryset\n\n filter_backends = [\n DjangoFilterBackend,\n filters.SearchFilter,\n filters.OrderingFilter,\n ]\n\n filter_fields = [\n ]\n\n\nclass BomDetail(generics.RetrieveUpdateDestroyAPIView):\n \"\"\" API endpoint for detail view of a single BomItem object \"\"\"\n\n queryset = BomItem.objects.all()\n serializer_class = part_serializers.BomItemSerializer\n\n\nclass BomItemValidate(generics.UpdateAPIView):\n \"\"\" API endpoint for validating a BomItem \"\"\"\n\n # Very simple serializers\n class BomItemValidationSerializer(serializers.Serializer):\n\n valid = serializers.BooleanField(default=False)\n\n queryset = BomItem.objects.all()\n serializer_class = BomItemValidationSerializer\n\n def update(self, request, *args, **kwargs):\n \"\"\" Perform update request \"\"\"\n\n partial = kwargs.pop('partial', False)\n\n valid = request.data.get('valid', False)\n\n instance = self.get_object()\n\n serializer = self.get_serializer(instance, data=request.data, partial=partial)\n serializer.is_valid(raise_exception=True)\n\n if type(instance) == BomItem:\n instance.validate_hash(valid)\n\n return Response(serializer.data)\n\n\npart_api_urls = [\n url(r'^tree/?', PartCategoryTree.as_view(), name='api-part-tree'),\n\n # Base URL for PartCategory API endpoints\n url(r'^category/', include([\n url(r'^(?P<pk>\\d+)/parameters/?', CategoryParameters.as_view(), name='api-part-category-parameters'),\n url(r'^(?P<pk>\\d+)/?', CategoryDetail.as_view(), name='api-part-category-detail'),\n url(r'^$', CategoryList.as_view(), name='api-part-category-list'),\n ])),\n\n # Base URL for PartTestTemplate API endpoints\n url(r'^test-template/', include([\n url(r'^$', PartTestTemplateList.as_view(), name='api-part-test-template-list'),\n ])),\n\n # Base URL for PartAttachment API endpoints\n url(r'^attachment/', include([\n url(r'^$', PartAttachmentList.as_view(), name='api-part-attachment-list'),\n ])),\n\n # Base URL for part sale pricing\n url(r'^sale-price/', include([\n url(r'^.*$', PartSalePriceList.as_view(), name='api-part-sale-price-list'),\n ])),\n\n # Base URL for part internal pricing\n url(r'^internal-price/', include([\n url(r'^.*$', PartInternalPriceList.as_view(), name='api-part-internal-price-list'),\n ])),\n\n # Base URL for PartParameter API endpoints\n url(r'^parameter/', include([\n url(r'^template/$', PartParameterTemplateList.as_view(), name='api-part-param-template-list'),\n\n url(r'^(?P<pk>\\d+)/', PartParameterDetail.as_view(), name='api-part-param-detail'),\n url(r'^.*$', PartParameterList.as_view(), name='api-part-param-list'),\n ])),\n\n url(r'^thumbs/', include([\n url(r'^$', PartThumbs.as_view(), name='api-part-thumbs'),\n url(r'^(?P<pk>\\d+)/?', PartThumbsUpdate.as_view(), name='api-part-thumbs-update'),\n ])),\n\n url(r'^(?P<pk>\\d+)/?', PartDetail.as_view(), name='api-part-detail'),\n\n url(r'^.*$', PartList.as_view(), name='api-part-list'),\n]\n\nbom_api_urls = [\n # BOM Item Detail\n url(r'^(?P<pk>\\d+)/', include([\n url(r'^validate/?', BomItemValidate.as_view(), name='api-bom-item-validate'),\n url(r'^.*$', BomDetail.as_view(), name='api-bom-item-detail'),\n 
])),\n\n # Catch-all\n url(r'^.*$', BomList.as_view(), name='api-bom-list'),\n]\n", "path": "InvenTree/part/api.py"}], "after_files": [{"content": "\"\"\"\nProvides a JSON API for the Part app\n\"\"\"\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom django.http import JsonResponse\nfrom django.db.models import Q, F, Count, Min, Max, Avg\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom rest_framework import status\nfrom rest_framework.response import Response\nfrom rest_framework import filters, serializers\nfrom rest_framework import generics\n\nfrom djmoney.money import Money\nfrom djmoney.contrib.exchange.models import convert_money\nfrom djmoney.contrib.exchange.exceptions import MissingRate\n\nfrom django.conf.urls import url, include\nfrom django.urls import reverse\n\nfrom .models import Part, PartCategory, BomItem\nfrom .models import PartParameter, PartParameterTemplate\nfrom .models import PartAttachment, PartTestTemplate\nfrom .models import PartSellPriceBreak, PartInternalPriceBreak\nfrom .models import PartCategoryParameterTemplate\n\nfrom common.models import InvenTreeSetting\nfrom build.models import Build\n\nfrom . import serializers as part_serializers\n\nfrom InvenTree.views import TreeSerializer\nfrom InvenTree.helpers import str2bool, isNull\nfrom InvenTree.api import AttachmentMixin\n\nfrom InvenTree.status_codes import BuildStatus\n\n\nclass PartCategoryTree(TreeSerializer):\n\n title = _(\"Parts\")\n model = PartCategory\n\n queryset = PartCategory.objects.all()\n\n @property\n def root_url(self):\n return reverse('part-index')\n\n def get_items(self):\n return PartCategory.objects.all().prefetch_related('parts', 'children')\n\n\nclass CategoryList(generics.ListCreateAPIView):\n \"\"\" API endpoint for accessing a list of PartCategory objects.\n\n - GET: Return a list of PartCategory objects\n - POST: Create a new PartCategory object\n \"\"\"\n\n queryset = PartCategory.objects.all()\n serializer_class = part_serializers.CategorySerializer\n\n def filter_queryset(self, queryset):\n \"\"\"\n Custom filtering:\n - Allow filtering by \"null\" parent to retrieve top-level part categories\n \"\"\"\n\n queryset = super().filter_queryset(queryset)\n\n params = self.request.query_params\n\n cat_id = params.get('parent', None)\n\n cascade = str2bool(params.get('cascade', False))\n\n # Do not filter by category\n if cat_id is None:\n pass\n # Look for top-level categories\n elif isNull(cat_id):\n\n if not cascade:\n queryset = queryset.filter(parent=None)\n\n else:\n try:\n category = PartCategory.objects.get(pk=cat_id)\n\n if cascade:\n parents = category.get_descendants(include_self=True)\n parent_ids = [p.id for p in parents]\n\n queryset = queryset.filter(parent__in=parent_ids)\n else:\n queryset = queryset.filter(parent=category)\n\n except (ValueError, PartCategory.DoesNotExist):\n pass\n\n return queryset\n\n filter_backends = [\n DjangoFilterBackend,\n filters.SearchFilter,\n filters.OrderingFilter,\n ]\n\n filter_fields = [\n ]\n\n ordering_fields = [\n 'name',\n ]\n\n ordering = 'name'\n\n search_fields = [\n 'name',\n 'description',\n ]\n\n\nclass CategoryDetail(generics.RetrieveUpdateDestroyAPIView):\n \"\"\" API endpoint for detail view of a single PartCategory object \"\"\"\n serializer_class = part_serializers.CategorySerializer\n queryset = PartCategory.objects.all()\n\n\nclass CategoryParameters(generics.ListAPIView):\n \"\"\" API endpoint for accessing a 
list of PartCategoryParameterTemplate objects.\n\n - GET: Return a list of PartCategoryParameterTemplate objects\n \"\"\"\n\n queryset = PartCategoryParameterTemplate.objects.all()\n serializer_class = part_serializers.CategoryParameterTemplateSerializer\n\n def get_queryset(self):\n \"\"\"\n Custom filtering:\n - Allow filtering by \"null\" parent to retrieve all categories parameter templates\n - Allow filtering by category\n - Allow traversing all parent categories\n \"\"\"\n\n try:\n cat_id = int(self.kwargs.get('pk', None))\n except TypeError:\n cat_id = None\n fetch_parent = str2bool(self.request.query_params.get('fetch_parent', 'true'))\n\n queryset = super().get_queryset()\n\n if isinstance(cat_id, int):\n\n try:\n category = PartCategory.objects.get(pk=cat_id)\n except PartCategory.DoesNotExist:\n # Return empty queryset\n return PartCategoryParameterTemplate.objects.none()\n\n category_list = [cat_id]\n\n if fetch_parent:\n parent_categories = category.get_ancestors()\n for parent in parent_categories:\n category_list.append(parent.pk)\n\n queryset = queryset.filter(category__in=category_list)\n\n return queryset\n\n\nclass PartSalePriceList(generics.ListCreateAPIView):\n \"\"\"\n API endpoint for list view of PartSalePriceBreak model\n \"\"\"\n\n queryset = PartSellPriceBreak.objects.all()\n serializer_class = part_serializers.PartSalePriceSerializer\n\n filter_backends = [\n DjangoFilterBackend\n ]\n\n filter_fields = [\n 'part',\n ]\n\n\nclass PartInternalPriceList(generics.ListCreateAPIView):\n \"\"\"\n API endpoint for list view of PartInternalPriceBreak model\n \"\"\"\n\n queryset = PartInternalPriceBreak.objects.all()\n serializer_class = part_serializers.PartInternalPriceSerializer\n permission_required = 'roles.sales_order.show'\n\n filter_backends = [\n DjangoFilterBackend\n ]\n\n filter_fields = [\n 'part',\n ]\n\n\nclass PartAttachmentList(generics.ListCreateAPIView, AttachmentMixin):\n \"\"\"\n API endpoint for listing (and creating) a PartAttachment (file upload).\n \"\"\"\n\n queryset = PartAttachment.objects.all()\n serializer_class = part_serializers.PartAttachmentSerializer\n\n filter_backends = [\n DjangoFilterBackend,\n ]\n\n filter_fields = [\n 'part',\n ]\n\n\nclass PartTestTemplateList(generics.ListCreateAPIView):\n \"\"\"\n API endpoint for listing (and creating) a PartTestTemplate.\n \"\"\"\n\n queryset = PartTestTemplate.objects.all()\n serializer_class = part_serializers.PartTestTemplateSerializer\n\n def filter_queryset(self, queryset):\n \"\"\"\n Filter the test list queryset.\n\n If filtering by 'part', we include results for any parts \"above\" the specified part.\n \"\"\"\n\n queryset = super().filter_queryset(queryset)\n\n params = self.request.query_params\n\n part = params.get('part', None)\n\n # Filter by part\n if part:\n try:\n part = Part.objects.get(pk=part)\n queryset = queryset.filter(part__in=part.get_ancestors(include_self=True))\n except (ValueError, Part.DoesNotExist):\n pass\n\n # Filter by 'required' status\n required = params.get('required', None)\n\n if required is not None:\n queryset = queryset.filter(required=required)\n\n return queryset\n\n filter_backends = [\n DjangoFilterBackend,\n filters.OrderingFilter,\n filters.SearchFilter,\n ]\n\n\nclass PartThumbs(generics.ListAPIView):\n \"\"\"\n API endpoint for retrieving information on available Part thumbnails\n \"\"\"\n\n queryset = Part.objects.all()\n serializer_class = part_serializers.PartThumbSerializer\n\n def get_queryset(self):\n\n queryset = super().get_queryset()\n\n 
# Get all Parts which have an associated image\n queryset = queryset.exclude(image='')\n\n return queryset\n\n def list(self, request, *args, **kwargs):\n \"\"\"\n Serialize the available Part images.\n - Images may be used for multiple parts!\n \"\"\"\n\n queryset = self.get_queryset()\n\n # TODO - We should return the thumbnails here, not the full image!\n\n # Return the most popular parts first\n data = queryset.values(\n 'image',\n ).annotate(count=Count('image')).order_by('-count')\n\n return Response(data)\n\n\nclass PartThumbsUpdate(generics.RetrieveUpdateAPIView):\n \"\"\" API endpoint for updating Part thumbnails\"\"\"\n\n queryset = Part.objects.all()\n serializer_class = part_serializers.PartThumbSerializerUpdate\n\n filter_backends = [\n DjangoFilterBackend\n ]\n\n\nclass PartDetail(generics.RetrieveUpdateDestroyAPIView):\n \"\"\" API endpoint for detail view of a single Part object \"\"\"\n\n queryset = Part.objects.all()\n serializer_class = part_serializers.PartSerializer\n\n starred_parts = None\n\n def get_queryset(self, *args, **kwargs):\n queryset = super().get_queryset(*args, **kwargs)\n\n queryset = part_serializers.PartSerializer.prefetch_queryset(queryset)\n queryset = part_serializers.PartSerializer.annotate_queryset(queryset)\n\n return queryset\n\n def get_serializer(self, *args, **kwargs):\n\n try:\n kwargs['category_detail'] = str2bool(self.request.query_params.get('category_detail', False))\n except AttributeError:\n pass\n\n # Ensure the request context is passed through\n kwargs['context'] = self.get_serializer_context()\n\n # Pass a list of \"starred\" parts fo the current user to the serializer\n # We do this to reduce the number of database queries required!\n if self.starred_parts is None and self.request is not None:\n self.starred_parts = [star.part for star in self.request.user.starred_parts.all()]\n\n kwargs['starred_parts'] = self.starred_parts\n\n return self.serializer_class(*args, **kwargs)\n\n def destroy(self, request, *args, **kwargs):\n # Retrieve part\n part = Part.objects.get(pk=int(kwargs['pk']))\n # Check if inactive\n if not part.active:\n # Delete\n return super(PartDetail, self).destroy(request, *args, **kwargs)\n else:\n # Return 405 error\n message = f'Part \\'{part.name}\\' (pk = {part.pk}) is active: cannot delete'\n return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED, data=message)\n\n def update(self, request, *args, **kwargs):\n \"\"\"\n Custom update functionality for Part instance.\n\n - If the 'starred' field is provided, update the 'starred' status against current user\n \"\"\"\n\n if 'starred' in request.data:\n starred = str2bool(request.data.get('starred', None))\n\n self.get_object().setStarred(request.user, starred)\n\n response = super().update(request, *args, **kwargs)\n\n return response\n\n\nclass PartList(generics.ListCreateAPIView):\n \"\"\" API endpoint for accessing a list of Part objects\n\n - GET: Return list of objects\n - POST: Create a new Part object\n\n The Part object list can be filtered by:\n - category: Filter by PartCategory reference\n - cascade: If true, include parts from sub-categories\n - starred: Is the part \"starred\" by the current user?\n - is_template: Is the part a template part?\n - variant_of: Filter by variant_of Part reference\n - assembly: Filter by assembly field\n - component: Filter by component field\n - trackable: Filter by trackable field\n - purchaseable: Filter by purcahseable field\n - salable: Filter by salable field\n - active: Filter by active field\n - ancestor: 
Filter parts by 'ancestor' (template / variant tree)\n \"\"\"\n\n serializer_class = part_serializers.PartSerializer\n\n queryset = Part.objects.all()\n\n starred_parts = None\n\n def get_serializer(self, *args, **kwargs):\n\n # Ensure the request context is passed through\n kwargs['context'] = self.get_serializer_context()\n\n # Pass a list of \"starred\" parts fo the current user to the serializer\n # We do this to reduce the number of database queries required!\n if self.starred_parts is None and self.request is not None:\n self.starred_parts = [star.part for star in self.request.user.starred_parts.all()]\n\n kwargs['starred_parts'] = self.starred_parts\n\n return self.serializer_class(*args, **kwargs)\n\n def list(self, request, *args, **kwargs):\n \"\"\"\n Overide the 'list' method, as the PartCategory objects are\n very expensive to serialize!\n\n So we will serialize them first, and keep them in memory,\n so that they do not have to be serialized multiple times...\n \"\"\"\n\n queryset = self.filter_queryset(self.get_queryset())\n\n page = self.paginate_queryset(queryset)\n\n if page is not None:\n serializer = self.get_serializer(page, many=True)\n else:\n serializer = self.get_serializer(queryset, many=True)\n\n data = serializer.data\n\n # Do we wish to include PartCategory detail?\n if str2bool(request.query_params.get('category_detail', False)):\n\n # Work out which part categorie we need to query\n category_ids = set()\n\n for part in data:\n cat_id = part['category']\n\n if cat_id is not None:\n category_ids.add(cat_id)\n\n # Fetch only the required PartCategory objects from the database\n categories = PartCategory.objects.filter(pk__in=category_ids).prefetch_related(\n 'parts',\n 'parent',\n 'children',\n )\n\n category_map = {}\n\n # Serialize each PartCategory object\n for category in categories:\n category_map[category.pk] = part_serializers.CategorySerializer(category).data\n\n for part in data:\n cat_id = part['category']\n\n if cat_id is not None and cat_id in category_map.keys():\n detail = category_map[cat_id]\n else:\n detail = None\n\n part['category_detail'] = detail\n\n \"\"\"\n Determine the response type based on the request.\n a) For HTTP requests (e.g. 
via the browseable API) return a DRF response\n b) For AJAX requests, simply return a JSON rendered response.\n \"\"\"\n if page is not None:\n return self.get_paginated_response(data)\n elif request.is_ajax():\n return JsonResponse(data, safe=False)\n else:\n return Response(data)\n\n def perform_create(self, serializer):\n \"\"\"\n We wish to save the user who created this part!\n\n Note: Implementation copied from DRF class CreateModelMixin\n \"\"\"\n\n part = serializer.save()\n part.creation_user = self.request.user\n part.save()\n\n def get_queryset(self, *args, **kwargs):\n\n queryset = super().get_queryset(*args, **kwargs)\n\n queryset = part_serializers.PartSerializer.prefetch_queryset(queryset)\n queryset = part_serializers.PartSerializer.annotate_queryset(queryset)\n\n return queryset\n\n def filter_queryset(self, queryset):\n \"\"\"\n Perform custom filtering of the queryset.\n We overide the DRF filter_fields here because\n \"\"\"\n\n params = self.request.query_params\n\n queryset = super().filter_queryset(queryset)\n\n # Filter by \"uses\" query - Limit to parts which use the provided part\n uses = params.get('uses', None)\n\n if uses:\n try:\n uses = Part.objects.get(pk=uses)\n\n queryset = queryset.filter(uses.get_used_in_filter())\n\n except (ValueError, Part.DoesNotExist):\n pass\n\n # Filter by 'ancestor'?\n ancestor = params.get('ancestor', None)\n\n if ancestor is not None:\n # If an 'ancestor' part is provided, filter to match only children\n try:\n ancestor = Part.objects.get(pk=ancestor)\n descendants = ancestor.get_descendants(include_self=False)\n queryset = queryset.filter(pk__in=[d.pk for d in descendants])\n except (ValueError, Part.DoesNotExist):\n pass\n\n # Filter by whether the part has an IPN (internal part number) defined\n has_ipn = params.get('has_ipn', None)\n\n if has_ipn is not None:\n has_ipn = str2bool(has_ipn)\n\n if has_ipn:\n queryset = queryset.exclude(IPN='')\n else:\n queryset = queryset.filter(IPN='')\n\n # Filter by whether the BOM has been validated (or not)\n bom_valid = params.get('bom_valid', None)\n\n # TODO: Querying bom_valid status may be quite expensive\n # TODO: (It needs to be profiled!)\n # TODO: It might be worth caching the bom_valid status to a database column\n\n if bom_valid is not None:\n\n bom_valid = str2bool(bom_valid)\n\n # Limit queryset to active assemblies\n queryset = queryset.filter(active=True, assembly=True)\n\n pks = []\n\n for part in queryset:\n if part.is_bom_valid() == bom_valid:\n pks.append(part.pk)\n\n queryset = queryset.filter(pk__in=pks)\n\n # Filter by 'starred' parts?\n starred = params.get('starred', None)\n\n if starred is not None:\n starred = str2bool(starred)\n starred_parts = [star.part.pk for star in self.request.user.starred_parts.all()]\n\n if starred:\n queryset = queryset.filter(pk__in=starred_parts)\n else:\n queryset = queryset.exclude(pk__in=starred_parts)\n\n # Cascade? 
(Default = True)\n cascade = str2bool(params.get('cascade', True))\n\n # Does the user wish to filter by category?\n cat_id = params.get('category', None)\n\n if cat_id is None:\n # No category filtering if category is not specified\n pass\n\n else:\n # Category has been specified!\n if isNull(cat_id):\n # A 'null' category is the top-level category\n if cascade is False:\n # Do not cascade, only list parts in the top-level category\n queryset = queryset.filter(category=None)\n\n else:\n try:\n category = PartCategory.objects.get(pk=cat_id)\n\n # If '?cascade=true' then include parts which exist in sub-categories\n if cascade:\n queryset = queryset.filter(category__in=category.getUniqueChildren())\n # Just return parts directly in the requested category\n else:\n queryset = queryset.filter(category=cat_id)\n except (ValueError, PartCategory.DoesNotExist):\n pass\n\n # Annotate calculated data to the queryset\n # (This will be used for further filtering)\n queryset = part_serializers.PartSerializer.annotate_queryset(queryset)\n\n # Filter by whether the part has stock\n has_stock = params.get(\"has_stock\", None)\n\n if has_stock is not None:\n has_stock = str2bool(has_stock)\n\n if has_stock:\n queryset = queryset.filter(Q(in_stock__gt=0))\n else:\n queryset = queryset.filter(Q(in_stock__lte=0))\n\n # If we are filtering by 'low_stock' status\n low_stock = params.get('low_stock', None)\n\n if low_stock is not None:\n low_stock = str2bool(low_stock)\n\n if low_stock:\n # Ignore any parts which do not have a specified 'minimum_stock' level\n queryset = queryset.exclude(minimum_stock=0)\n # Filter items which have an 'in_stock' level lower than 'minimum_stock'\n queryset = queryset.filter(Q(in_stock__lt=F('minimum_stock')))\n else:\n # Filter items which have an 'in_stock' level higher than 'minimum_stock'\n queryset = queryset.filter(Q(in_stock__gte=F('minimum_stock')))\n\n # Filter by \"parts which need stock to complete build\"\n stock_to_build = params.get('stock_to_build', None)\n\n # TODO: This is super expensive, database query wise...\n # TODO: Need to figure out a cheaper way of making this filter query\n\n if stock_to_build is not None:\n # Get active builds\n builds = Build.objects.filter(status__in=BuildStatus.ACTIVE_CODES)\n # Store parts with builds needing stock\n parts_needed_to_complete_builds = []\n # Filter required parts\n for build in builds:\n parts_needed_to_complete_builds += [part.pk for part in build.required_parts_to_complete_build]\n\n queryset = queryset.filter(pk__in=parts_needed_to_complete_builds)\n\n # Optionally limit the maximum number of returned results\n # e.g. 
for displaying \"recent part\" list\n max_results = params.get('max_results', None)\n\n if max_results is not None:\n try:\n max_results = int(max_results)\n\n if max_results > 0:\n queryset = queryset[:max_results]\n\n except (ValueError):\n pass\n\n return queryset\n\n filter_backends = [\n DjangoFilterBackend,\n filters.SearchFilter,\n filters.OrderingFilter,\n ]\n\n filter_fields = [\n 'is_template',\n 'variant_of',\n 'assembly',\n 'component',\n 'trackable',\n 'purchaseable',\n 'salable',\n 'active',\n ]\n\n ordering_fields = [\n 'name',\n 'creation_date',\n 'IPN',\n 'in_stock',\n 'category',\n ]\n\n # Default ordering\n ordering = 'name'\n\n search_fields = [\n 'name',\n 'description',\n 'IPN',\n 'revision',\n 'keywords',\n 'category__name',\n ]\n\n\nclass PartParameterTemplateList(generics.ListCreateAPIView):\n \"\"\" API endpoint for accessing a list of PartParameterTemplate objects.\n\n - GET: Return list of PartParameterTemplate objects\n - POST: Create a new PartParameterTemplate object\n \"\"\"\n\n queryset = PartParameterTemplate.objects.all()\n serializer_class = part_serializers.PartParameterTemplateSerializer\n\n filter_backends = [\n filters.OrderingFilter,\n ]\n\n filter_fields = [\n 'name',\n ]\n\n\nclass PartParameterList(generics.ListCreateAPIView):\n \"\"\" API endpoint for accessing a list of PartParameter objects\n\n - GET: Return list of PartParameter objects\n - POST: Create a new PartParameter object\n \"\"\"\n\n queryset = PartParameter.objects.all()\n serializer_class = part_serializers.PartParameterSerializer\n\n filter_backends = [\n DjangoFilterBackend\n ]\n\n filter_fields = [\n 'part',\n 'template',\n ]\n\n\nclass PartParameterDetail(generics.RetrieveUpdateDestroyAPIView):\n \"\"\"\n API endpoint for detail view of a single PartParameter object\n \"\"\"\n\n queryset = PartParameter.objects.all()\n serializer_class = part_serializers.PartParameterSerializer\n\n\nclass BomList(generics.ListCreateAPIView):\n \"\"\" API endpoint for accessing a list of BomItem objects.\n\n - GET: Return list of BomItem objects\n - POST: Create a new BomItem object\n \"\"\"\n\n serializer_class = part_serializers.BomItemSerializer\n\n def list(self, request, *args, **kwargs):\n\n queryset = self.filter_queryset(self.get_queryset())\n\n serializer = self.get_serializer(queryset, many=True)\n\n data = serializer.data\n\n if request.is_ajax():\n return JsonResponse(data, safe=False)\n else:\n return Response(data)\n\n def get_serializer(self, *args, **kwargs):\n\n # Do we wish to include extra detail?\n try:\n kwargs['part_detail'] = str2bool(self.request.GET.get('part_detail', None))\n except AttributeError:\n pass\n\n try:\n kwargs['sub_part_detail'] = str2bool(self.request.GET.get('sub_part_detail', None))\n except AttributeError:\n pass\n\n # Ensure the request context is passed through!\n kwargs['context'] = self.get_serializer_context()\n\n return self.serializer_class(*args, **kwargs)\n\n def get_queryset(self, *args, **kwargs):\n\n queryset = BomItem.objects.all()\n\n queryset = self.get_serializer_class().setup_eager_loading(queryset)\n\n return queryset\n\n def filter_queryset(self, queryset):\n\n queryset = super().filter_queryset(queryset)\n\n params = self.request.query_params\n\n # Filter by \"optional\" status?\n optional = params.get('optional', None)\n\n if optional is not None:\n optional = str2bool(optional)\n\n queryset = queryset.filter(optional=optional)\n\n # Filter by \"inherited\" status\n inherited = params.get('inherited', None)\n\n if inherited is not 
None:\n inherited = str2bool(inherited)\n\n queryset = queryset.filter(inherited=inherited)\n\n # Filter by \"allow_variants\"\n variants = params.get(\"allow_variants\", None)\n\n if variants is not None:\n variants = str2bool(variants)\n\n queryset = queryset.filter(allow_variants=variants)\n\n # Filter by part?\n part = params.get('part', None)\n\n if part is not None:\n \"\"\"\n If we are filtering by \"part\", there are two cases to consider:\n\n a) Bom items which are defined for *this* part\n b) Inherited parts which are defined for a *parent* part\n\n So we need to construct two queries!\n \"\"\"\n\n # First, check that the part is actually valid!\n try:\n part = Part.objects.get(pk=part)\n\n queryset = queryset.filter(part.get_bom_item_filter())\n\n except (ValueError, Part.DoesNotExist):\n pass\n\n # Filter by \"active\" status of the part\n part_active = params.get('part_active', None)\n\n if part_active is not None:\n part_active = str2bool(part_active)\n queryset = queryset.filter(part__active=part_active)\n\n # Filter by \"trackable\" status of the part\n part_trackable = params.get('part_trackable', None)\n\n if part_trackable is not None:\n part_trackable = str2bool(part_trackable)\n queryset = queryset.filter(part__trackable=part_trackable)\n\n # Filter by \"trackable\" status of the sub-part\n sub_part_trackable = params.get('sub_part_trackable', None)\n\n if sub_part_trackable is not None:\n sub_part_trackable = str2bool(sub_part_trackable)\n queryset = queryset.filter(sub_part__trackable=sub_part_trackable)\n\n # Filter by whether the BOM line has been validated\n validated = params.get('validated', None)\n\n if validated is not None:\n validated = str2bool(validated)\n\n # Work out which lines have actually been validated\n pks = []\n\n for bom_item in queryset.all():\n if bom_item.is_line_valid:\n pks.append(bom_item.pk)\n\n if validated:\n queryset = queryset.filter(pk__in=pks)\n else:\n queryset = queryset.exclude(pk__in=pks)\n\n # Annotate with purchase prices\n queryset = queryset.annotate(\n purchase_price_min=Min('sub_part__stock_items__purchase_price'),\n purchase_price_max=Max('sub_part__stock_items__purchase_price'),\n purchase_price_avg=Avg('sub_part__stock_items__purchase_price'),\n )\n\n # Get values for currencies\n currencies = queryset.annotate(\n purchase_price_currency=F('sub_part__stock_items__purchase_price_currency'),\n ).values('pk', 'sub_part', 'purchase_price_currency')\n\n def convert_price(price, currency, decimal_places=4):\n \"\"\" Convert price field, returns Money field \"\"\"\n\n price_adjusted = None\n\n # Get default currency from settings\n default_currency = InvenTreeSetting.get_setting('INVENTREE_DEFAULT_CURRENCY')\n \n if price:\n if currency and default_currency:\n try:\n # Get adjusted price\n price_adjusted = convert_money(Money(price, currency), default_currency)\n except MissingRate:\n # No conversion rate set\n price_adjusted = Money(price, currency)\n else:\n # Currency exists\n if currency:\n price_adjusted = Money(price, currency)\n # Default currency exists\n if default_currency:\n price_adjusted = Money(price, default_currency)\n\n if price_adjusted and decimal_places:\n price_adjusted.decimal_places = decimal_places\n\n return price_adjusted\n\n # Convert prices to default currency (using backend conversion rates)\n for bom_item in queryset:\n # Find associated currency (select first found)\n purchase_price_currency = None\n for currency_item in currencies:\n if currency_item['pk'] == bom_item.pk and 
currency_item['sub_part'] == bom_item.sub_part.pk:\n purchase_price_currency = currency_item['purchase_price_currency']\n break\n # Convert prices\n bom_item.purchase_price_min = convert_price(bom_item.purchase_price_min, purchase_price_currency)\n bom_item.purchase_price_max = convert_price(bom_item.purchase_price_max, purchase_price_currency)\n bom_item.purchase_price_avg = convert_price(bom_item.purchase_price_avg, purchase_price_currency)\n\n return queryset\n\n filter_backends = [\n DjangoFilterBackend,\n filters.SearchFilter,\n filters.OrderingFilter,\n ]\n\n filter_fields = [\n ]\n\n\nclass BomDetail(generics.RetrieveUpdateDestroyAPIView):\n \"\"\" API endpoint for detail view of a single BomItem object \"\"\"\n\n queryset = BomItem.objects.all()\n serializer_class = part_serializers.BomItemSerializer\n\n\nclass BomItemValidate(generics.UpdateAPIView):\n \"\"\" API endpoint for validating a BomItem \"\"\"\n\n # Very simple serializers\n class BomItemValidationSerializer(serializers.Serializer):\n\n valid = serializers.BooleanField(default=False)\n\n queryset = BomItem.objects.all()\n serializer_class = BomItemValidationSerializer\n\n def update(self, request, *args, **kwargs):\n \"\"\" Perform update request \"\"\"\n\n partial = kwargs.pop('partial', False)\n\n valid = request.data.get('valid', False)\n\n instance = self.get_object()\n\n serializer = self.get_serializer(instance, data=request.data, partial=partial)\n serializer.is_valid(raise_exception=True)\n\n if type(instance) == BomItem:\n instance.validate_hash(valid)\n\n return Response(serializer.data)\n\n\npart_api_urls = [\n url(r'^tree/?', PartCategoryTree.as_view(), name='api-part-tree'),\n\n # Base URL for PartCategory API endpoints\n url(r'^category/', include([\n url(r'^(?P<pk>\\d+)/parameters/?', CategoryParameters.as_view(), name='api-part-category-parameters'),\n url(r'^(?P<pk>\\d+)/?', CategoryDetail.as_view(), name='api-part-category-detail'),\n url(r'^$', CategoryList.as_view(), name='api-part-category-list'),\n ])),\n\n # Base URL for PartTestTemplate API endpoints\n url(r'^test-template/', include([\n url(r'^$', PartTestTemplateList.as_view(), name='api-part-test-template-list'),\n ])),\n\n # Base URL for PartAttachment API endpoints\n url(r'^attachment/', include([\n url(r'^$', PartAttachmentList.as_view(), name='api-part-attachment-list'),\n ])),\n\n # Base URL for part sale pricing\n url(r'^sale-price/', include([\n url(r'^.*$', PartSalePriceList.as_view(), name='api-part-sale-price-list'),\n ])),\n\n # Base URL for part internal pricing\n url(r'^internal-price/', include([\n url(r'^.*$', PartInternalPriceList.as_view(), name='api-part-internal-price-list'),\n ])),\n\n # Base URL for PartParameter API endpoints\n url(r'^parameter/', include([\n url(r'^template/$', PartParameterTemplateList.as_view(), name='api-part-param-template-list'),\n\n url(r'^(?P<pk>\\d+)/', PartParameterDetail.as_view(), name='api-part-param-detail'),\n url(r'^.*$', PartParameterList.as_view(), name='api-part-param-list'),\n ])),\n\n url(r'^thumbs/', include([\n url(r'^$', PartThumbs.as_view(), name='api-part-thumbs'),\n url(r'^(?P<pk>\\d+)/?', PartThumbsUpdate.as_view(), name='api-part-thumbs-update'),\n ])),\n\n url(r'^(?P<pk>\\d+)/?', PartDetail.as_view(), name='api-part-detail'),\n\n url(r'^.*$', PartList.as_view(), name='api-part-list'),\n]\n\nbom_api_urls = [\n # BOM Item Detail\n url(r'^(?P<pk>\\d+)/', include([\n url(r'^validate/?', BomItemValidate.as_view(), name='api-bom-item-validate'),\n url(r'^.*$', BomDetail.as_view(), 
name='api-bom-item-detail'),\n ])),\n\n # Catch-all\n url(r'^.*$', BomList.as_view(), name='api-bom-list'),\n]\n", "path": "InvenTree/part/api.py"}]} |
gh_patches_debug_50 | rasdani/github-patches | git_diff | PokemonGoF__PokemonGo-Bot-3951 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bot fails to start: UnicodeEncodeError 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)
### Expected Behavior
Bot is able to start.
### Actual Behavior
Bot fails to start.
The names of some monsters are given in Japanese characters. I'm not sure, but this might be what causes the error.
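
A minimal sketch of what I suspect is happening under Python 2 (the name `u'ゼニガメ'` and the nickname `u'Zeni_98'` below are only placeholders, not values from my locale file; the template mirrors the one used by the NicknamePokemon task):

```python
# -*- coding: utf-8 -*-
# Placeholder values; the real ones come from the locale JSON and the bot's template.
name = u'ゼニガメ'        # localized Pokemon name (unicode)
nickname = u'Zeni_98'     # nickname built from "{name:.8s}_{iv_pct}"

# Byte-string template (the module has no `unicode_literals`): str.format() tries
# to encode the unicode name with the default ASCII codec and fails.
template = 'Pokemon {old_name} renamed to {current_name}'
try:
    template.format(old_name=name, current_name=nickname)
except UnicodeEncodeError as err:
    print(err)  # 'ascii' codec can't encode characters in position 0-3 ...

# A unicode template (what `from __future__ import unicode_literals` would give)
# formats the same values without an error.
msg = u'Pokemon {old_name} renamed to {current_name}'.format(
    old_name=name, current_name=nickname)
print(repr(msg))
```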
### Your config.json (remove your credentials and any other private info)
```
{
"auth_service": "google",
"username": "xxx",
"password": "xxx",
"location": "xxx,xxx",
"gmapkey": "xxx",
"tasks": [
{
"type": "HandleSoftBan"
},
{
"type": "CollectLevelUpReward"
},
{
"type": "IncubateEggs",
"config": {
"longer_eggs_first": true
}
},
{
"type": "NicknamePokemon",
"config": {
"nickname_template": "{name:.8s}_{iv_pct}"
}
},
{
"type": "TransferPokemon"
},
{
"type": "EvolvePokemon",
"config": {
"evolve_all": "none",
"first_evolve_by": "iv",
"evolve_above_cp": 500,
"evolve_above_iv": 0.8,
"logic": "or",
"evolve_speed": 20,
"use_lucky_egg": false
}
},
{
"type": "RecycleItems",
"config": {
"item_filter": {
"Pokeball": { "keep" : 110 },
"Greatball": { "keep" : 150 },
"Ultraball": { "keep" : 150 },
"Potion": { "keep" : 20 },
"Super Potion": { "keep" : 30 },
"Hyper Potion": { "keep" : 40 },
"Revive": { "keep" : 40 },
"Razz Berry": { "keep" : 120 }
}
}
},
{
"type": "CatchVisiblePokemon"
},
{
"type": "CatchLuredPokemon"
},
{
"type": "SpinFort"
},
{
"type": "MoveToFort",
"config": {
"lure_attraction": true,
"lure_max_distance": 2000
}
},
{
"type": "FollowSpiral",
"config": {
"diameter": 4,
"step_size": 70
}
}
],
"map_object_cache_time": 5,
"forts": {
"avoid_circles": true,
"max_circle_size": 50
},
"websocket_server": false,
"walk": 4.16,
"action_wait_min": 1,
"action_wait_max": 4,
"debug": false,
"test": false,
"health_record": true,
"location_cache": true,
"distance_unit": "km",
"reconnecting_timeout": 15,
"evolve_captured": "NONE",
"catch_randomize_reticle_factor": 1.0,
"catch_randomize_spin_factor": 1.0,
"catch": {
"any": {"catch_above_cp": 0, "catch_above_iv": 0, "logic": "or"},
"// Example of always catching Rattata:": {},
"// Rattata": { "always_catch" : true },
"// Legendary pokemons (Goes under S-Tier)": {},
"Lapras": { "always_catch": true },
"Moltres": { "always_catch": true },
"Zapdos": { "always_catch": true },
"Articuno": { "always_catch": true },
"// always catch": {},
"Charmander": { "always_catch": true },
"Squirtle": { "always_catch": true },
"Pikachu": { "always_catch": true },
"Eevee": { "always_catch": true },
"Dragonite": { "always_catch": true },
"Dragonair": { "always_catch": true },
"Dratini": { "always_catch": true },
"// never catch": {},
"Caterpie": {"never_catch": true},
"Weedle": {"never_catch": true},
"Pidgey": {"never_catch": true},
"Rattata": {"never_catch": true},
"Psyduck": {"never_catch": true},
"Slowpoke": {"never_catch": true}
},
"release": {
"any": {"keep_best_iv": 2, "logic": "or"},
"Exeggcutor": { "never_release" : true },
"Gyarados": { "never_release" : true },
"Lapras": { "never_release" : true },
"Vaporeon": { "never_release" : true },
"Jolteon": { "never_release" : true },
"Flareon": { "never_release" : true },
"Snorlax": { "never_release" : true },
"Dragonite": { "never_release" : true },
"// any": {"keep_best_cp": 2, "keep_best_iv": 2, "logic": "or"},
"// any": {"release_below_cp": 0, "release_below_iv": 0, "logic": "or"},
"// Example of always releasing Rattata:": {},
"// Rattata": {"always_release": true},
"// Example of keeping 3 stronger (based on CP) Pidgey:": {},
"// Pidgey": {"keep_best_cp": 3},
"// Example of keeping 2 stronger (based on IV) Zubat:": {},
"// Zubat": {"keep_best_iv": 2},
"// Also, it is working with any": {},
"// any": {"keep_best_iv": 3},
"// Example of keeping the 2 strongest (based on CP) and 3 best (based on IV) Zubat:": {},
"// Zubat": {"keep_best_cp": 2, "keep_best_iv": 3}
},
"vips" : {
"Any pokemon put here directly force to use Berry & Best Ball to capture, to secure the capture rate!": {},
"any": {"catch_above_cp": 1200, "catch_above_iv": 0.9, "logic": "or" },
"Lapras": {},
"Moltres": {},
"Zapdos": {},
"Articuno": {},
"// S-Tier pokemons (if pokemon can be evolved into tier, list the representative)": {},
"Mewtwo": {},
"Dragonite": {},
"Snorlax": {},
"// Mew evolves to Mewtwo": {},
"Mew": {},
"Arcanine": {},
"Vaporeon": {},
"Gyarados": {},
"Exeggutor": {},
"Muk": {},
"Weezing": {},
"Flareon": {}
}
}
```
### Steps to Reproduce
2016-08-15 10:38:47,935 [ cli] [INFO] PokemonGO Bot v1.0
2016-08-15 10:38:47,936 [ cli] [INFO] No config argument specified, checking for /configs/config.json
2016-08-15 10:38:47,939 [ cli] [WARNING] The evolve_captured argument is no longer supported. Please use the EvolvePokemon task instead
2016-08-15 10:38:47,940 [ cli] [INFO] Configuration initialized
2016-08-15 10:38:47,940 [pokemongo_bot.health_record.bot_event] [INFO] Health check is enabled. For more information:
2016-08-15 10:38:47,940 [pokemongo_bot.health_record.bot_event] [INFO] https://github.com/PokemonGoF/PokemonGo-Bot/tree/dev#analytics
2016-08-15 10:38:47,945 [requests.packages.urllib3.connectionpool] [INFO] Starting new HTTP connection (1): www.google-analytics.com
2016-08-15 10:38:48,039 [PokemonGoBot] [INFO] [set_start_location] Setting start location.
2016-08-15 10:38:48,048 [PokemonGoBot] [INFO] [x] Coordinates found in passed in location, not geocoding.
2016-08-15 10:38:48,049 [PokemonGoBot] [INFO] [location_found] Location found: xxx, xxx (xxx,xxx, 0.0)
2016-08-15 10:38:48,049 [PokemonGoBot] [INFO] [position_update] Now at (xxx, xxx, 0)
2016-08-15 10:38:48,049 [PokemonGoBot] [INFO] [login_started] Login procedure started.
2016-08-15 10:38:50,020 [PokemonGoBot] [INFO] [login_successful] Login successful.
2016-08-15 10:38:52,387 [PokemonGoBot] [INFO]
2016-08-15 10:38:52,387 [PokemonGoBot] [INFO] --- sunnyfortune ---
2016-08-15 10:38:52,387 [PokemonGoBot] [INFO] Level: 24 (Next Level: 69740 XP) (Total: 640260 XP)
2016-08-15 10:38:52,387 [PokemonGoBot] [INFO] Pokemon Captured: 1688 | Pokestops Visited: 1917
2016-08-15 10:38:52,388 [PokemonGoBot] [INFO] Pokemon Bag: 194/250
2016-08-15 10:38:52,389 [PokemonGoBot] [INFO] Items: 689/700
2016-08-15 10:38:52,389 [PokemonGoBot] [INFO] Stardust: 247878 | Pokecoins: 70
2016-08-15 10:38:52,389 [PokemonGoBot] [INFO] PokeBalls: 96 | GreatBalls: 154 | UltraBalls: 150 | MasterBalls: 0
2016-08-15 10:38:52,390 [PokemonGoBot] [INFO] RazzBerries: 124 | BlukBerries: 0 | NanabBerries: 0
2016-08-15 10:38:52,390 [PokemonGoBot] [INFO] LuckyEgg: 6 | Incubator: 8 | TroyDisk: 11
2016-08-15 10:38:52,390 [PokemonGoBot] [INFO] Potion: 23 | SuperPotion: 30 | HyperPotion: 41 | MaxPotion: 0
2016-08-15 10:38:52,390 [PokemonGoBot] [INFO] Incense: 4 | IncenseSpicy: 0 | IncenseCool: 0
2016-08-15 10:38:52,390 [PokemonGoBot] [INFO] Revive: 40 | MaxRevive: 0
2016-08-15 10:38:52,390 [PokemonGoBot] [INFO]
2016-08-15 10:38:52,391 [PokemonGoBot] [INFO] Found encrypt.so! Platform: linux2 Encrypt.so directory: /home/sunny/project/PokemonGo-Bot
2016-08-15 10:38:52,391 [PokemonGoBot] [INFO]
2016-08-15 10:38:53,321 [PokemonGoBot] [INFO] [bot_start] Starting bot...
2016-08-15 10:38:53,637 [CollectLevelUpReward] [INFO] [level_up_reward] Received level up reward: []
2016-08-15 10:38:53,638 [IncubateEggs] [INFO] [next_egg_incubates] Next egg incubates in 0.13 km
2016-08-15 10:38:56,931 [ cli] [INFO]
2016-08-15 10:38:56,931 [ cli] [INFO] Ran for 0:00:09
2016-08-15 10:38:56,932 [ cli] [INFO] Total XP Earned: 0 Average: 0.00/h
2016-08-15 10:38:56,932 [ cli] [INFO] Travelled 0.00km
2016-08-15 10:38:56,932 [ cli] [INFO] Visited 0 stops
2016-08-15 10:38:56,932 [ cli] [INFO] Encountered 0 pokemon, 0 caught, 0 released, 0 evolved, 0 never seen before
2016-08-15 10:38:56,932 [ cli] [INFO] Threw 0 pokeballs
2016-08-15 10:38:56,933 [ cli] [INFO] Earned 0 Stardust
2016-08-15 10:38:56,933 [ cli] [INFO]
2016-08-15 10:38:56,933 [ cli] [INFO] Highest CP Pokemon:
2016-08-15 10:38:56,933 [ cli] [INFO] Most Perfect Pokemon:
Traceback (most recent call last):
File "pokecli.py", line 578, in <module>
main()
File "pokecli.py", line 103, in main
bot.tick()
File "/home/sunny/project/PokemonGo-Bot/pokemongo_bot/**init**.py", line 482, in tick
if worker.work() == WorkerResult.RUNNING:
File "/home/sunny/project/PokemonGo-Bot/pokemongo_bot/cell_workers/nickname_pokemon.py", line 204, in work
self._nickname_pokemon(pokemon)
File "/home/sunny/project/PokemonGo-Bot/pokemongo_bot/cell_workers/nickname_pokemon.py", line 271, in _nickname_pokemon
data={'old_name': old_nickname, 'current_name': new_nickname}
File "/home/sunny/project/PokemonGo-Bot/pokemongo_bot/base_task.py", line 28, in emit_event
data=data
File "/home/sunny/project/PokemonGo-Bot/pokemongo_bot/event_manager.py", line 61, in emit
formatted_msg = formatted.format(**data)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)
2016-08-15 10:38:56,954 [sentry.errors] [ERROR] Sentry responded with an error: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128) (url: https://app.getsentry.com/api/90254/store/)
Traceback (most recent call last):
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/raven/transport/threaded.py", line 174, in send_sync
super(ThreadedHTTPTransport, self).send(data, headers)
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/raven/transport/http.py", line 47, in send
ca_certs=self.ca_certs,
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/raven/utils/http.py", line 66, in urlopen
return opener.open(url, data, timeout)
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/future/backports/urllib/request.py", line 494, in open
response = self._open(req, data)
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/future/backports/urllib/request.py", line 512, in _open
'_open', req)
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/future/backports/urllib/request.py", line 466, in _call_chain
result = func(*args)
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/raven/utils/http.py", line 46, in https_open
return self.do_open(ValidHTTPSConnection, req)
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/future/backports/urllib/request.py", line 1284, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/httplib.py", line 1057, in request
self._send_request(method, url, body, headers)
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/httplib.py", line 1097, in _send_request
self.endheaders(body)
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/httplib.py", line 1053, in endheaders
self._send_output(message_body)
File "/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/httplib.py", line 895, in _send_output
msg += message_body
UnicodeDecodeError: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128)
2016-08-15 10:38:56,958 [sentry.errors.uncaught] [ERROR] [u"UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)", u' File "pokecli.py", line 578, in <module>', u' File "pokecli.py", line 103, in main', u' File "pokemongo_bot/__init__.py", line 482, in tick', u' File "pokemongo_bot/cell_workers/nickname_pokemon.py", line 204, in work', u' File "pokemongo_bot/cell_workers/nickname_pokemon.py", line 271, in _nickname_pokemon', u' File "pokemongo_bot/base_task.py", line 28, in emit_event', u' File "pokemongo_bot/event_manager.py", line 61, in emit']
### Other Information
OS: Ubuntu 14.04 LTS
Git Commit: 5c9cdb53e69b5069cee6fe100d39e3cf5d63539c
Python Version: Python 2.7.12 :: Continuum Analytics, Inc.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pokemongo_bot/cell_workers/nickname_pokemon.py`
Content:
```
1 import os
2 import json
3 from pokemongo_bot.base_task import BaseTask
4 from pokemongo_bot.human_behaviour import sleep
5 from pokemongo_bot.inventory import pokemons, Pokemon, Attack
6
7 import re
8
9
10 DEFAULT_IGNORE_FAVORITES = False
11 DEFAULT_GOOD_ATTACK_THRESHOLD = 0.7
12 DEFAULT_TEMPLATE = '{name}'
13
14 MAXIMUM_NICKNAME_LENGTH = 12
15
16
17 class NicknamePokemon(BaseTask):
18 SUPPORTED_TASK_API_VERSION = 1
19
20 """
21 Nickname user pokemons according to the specified template
22
23
24 PARAMETERS:
25
26 dont_nickname_favorite (default: False)
27 Prevents renaming of favorited pokemons
28
29 good_attack_threshold (default: 0.7)
30 Threshold for perfection of the attack in it's type (0.0-1.0)
31 after which attack will be treated as good.
32 Used for {fast_attack_char}, {charged_attack_char}, {attack_code}
33 templates
34
35 nickname_template (default: '{name}')
36 Template for nickname generation.
37 Empty template or any resulting in the simple pokemon name
38 (e.g. '', '{name}', ...) will revert all pokemon to their original
39 names (as if they had no nickname).
40
41 Niantic imposes a 12-character limit on all pokemon nicknames, so
42 any new nickname will be truncated to 12 characters if over that limit.
43 Thus, it is up to the user to exercise judgment on what template will
44 best suit their need with this constraint in mind.
45
46 You can use full force of the Python [Format String syntax](https://docs.python.org/2.7/library/string.html#formatstrings)
47 For example, using `{name:.8s}` causes the Pokemon name to never take up
48 more than 8 characters in the nickname. This would help guarantee that
49 a template like `{name:.8s}_{iv_pct}` never goes over the 12-character
50 limit.
51
52
53 **NOTE:** If you experience frequent `Pokemon not found` error messages,
54 this is because the inventory cache has not been updated after a pokemon
55 was released. This can be remedied by placing the `NicknamePokemon` task
56 above the `TransferPokemon` task in your `config.json` file.
57
58
59 EXAMPLE CONFIG:
60 {
61 "type": "NicknamePokemon",
62 "config": {
63 "enabled": true,
64 "dont_nickname_favorite": false,
65 "good_attack_threshold": 0.7,
66 "nickname_template": "{iv_pct}_{iv_ads}"
67 }
68 }
69
70
71 SUPPORTED PATTERN KEYS:
72
73 {name} Pokemon name (e.g. Articuno)
74 {id} Pokemon ID/Number (1-151)
75 {cp} Combat Points (10-4145)
76
77 # Individial Values
78 {iv_attack} Individial Attack (0-15) of the current specific pokemon
79 {iv_defense} Individial Defense (0-15) of the current specific pokemon
80 {iv_stamina} Individial Stamina (0-15) of the current specific pokemon
81 {iv_ads} Joined IV values (e.g. 4/12/9)
82 {iv_sum} Sum of the Individial Values (0-45)
83 {iv_pct} IV perfection (in 000-100 format - 3 chars)
84 {iv_pct2} IV perfection (in 00-99 format - 2 chars)
85 So 99 is best (it's a 100% perfection)
86 {iv_pct1} IV perfection (in 0-9 format - 1 char)
87 {iv_ads_hex} Joined IV values in HEX (e.g. 4C9)
88
89 # Basic Values of the pokemon (identical for all of one kind)
90 {base_attack} Basic Attack (40-284) of the current pokemon kind
91 {base_defense} Basic Defense (54-242) of the current pokemon kind
92 {base_stamina} Basic Stamina (20-500) of the current pokemon kind
93 {base_ads} Joined Basic Values (e.g. 125/93/314)
94
95 # Final Values of the pokemon (Base Values + Individial Values)
96 {attack} Basic Attack + Individial Attack
97 {defense} Basic Defense + Individial Defense
98 {stamina} Basic Stamina + Individial Stamina
99 {sum_ads} Joined Final Values (e.g. 129/97/321)
100
101 # IV CP perfection - it's a kind of IV perfection percent
102 # but calculated using weight of each IV in its contribution
103 # to CP of the best evolution of current pokemon.
104 # So it tends to be more accurate than simple IV perfection.
105 {ivcp_pct} IV CP perfection (in 000-100 format - 3 chars)
106 {ivcp_pct2} IV CP perfection (in 00-99 format - 2 chars)
107 So 99 is best (it's a 100% perfection)
108 {ivcp_pct1} IV CP perfection (in 0-9 format - 1 char)
109
110 # Character codes for fast/charged attack types.
111 # If attack is good character is uppecased, otherwise lowercased.
112 # Use 'good_attack_threshold' option for customization
113 #
114 # It's an effective way to represent type with one character.
115 # If first char of the type name is unique - use it,
116 # in other case suitable substitute used
117 #
118 # Type codes:
119 # Bug: 'B'
120 # Dark: 'K'
121 # Dragon: 'D'
122 # Electric: 'E'
123 # Fairy: 'Y'
124 # Fighting: 'T'
125 # Fire: 'F'
126 # Flying: 'L'
127 # Ghost: 'H'
128 # Grass: 'A'
129 # Ground: 'G'
130 # Ice: 'I'
131 # Normal: 'N'
132 # Poison: 'P'
133 # Psychic: 'C'
134 # Rock: 'R'
135 # Steel: 'S'
136 # Water: 'W'
137 #
138 {fast_attack_char} One character code for fast attack type
139 (e.g. 'F' for good Fire or 's' for bad
140 Steel attack)
141 {charged_attack_char} One character code for charged attack type
142 (e.g. 'n' for bad Normal or 'I' for good
143 Ice attack)
144 {attack_code} Joined 2 character code for both attacks
145 (e.g. 'Lh' for pokemon with good Flying
146 and weak Ghost attacks)
147
148 # Moveset perfection percents for attack and for defense
149 # Calculated for current pokemon only, not between all pokemons
150 # So perfect moveset can be weak if pokemon is weak (e.g. Caterpie)
151 {attack_pct} Moveset perfection for attack (in 000-100 format - 3 chars)
152 {defense_pct} Moveset perfection for defense (in 000-100 format - 3 chars)
153 {attack_pct2} Moveset perfection for attack (in 00-99 format - 2 chars)
154 {defense_pct2} Moveset perfection for defense (in 00-99 format - 2 chars)
155 {attack_pct1} Moveset perfection for attack (in 0-9 format - 1 char)
156 {defense_pct1} Moveset perfection for defense (in 0-9 format - 1 char)
157
158 # Special case: pokemon object.
159 # You can access any available pokemon info via it.
160 # Examples:
161 # '{pokemon.ivcp:.2%}' -> '47.00%'
162 # '{pokemon.fast_attack}' -> 'Wing Attack'
163 # '{pokemon.fast_attack.type}' -> 'Flying'
164 # '{pokemon.fast_attack.dps:.2f}' -> '10.91'
165 # '{pokemon.fast_attack.dps:.0f}' -> '11'
166 # '{pokemon.charged_attack}' -> 'Ominous Wind'
167 {pokemon} Pokemon instance (see inventory.py for class sources)
168
169
170 EXAMPLES:
171
172 1. "nickname_template": "{ivcp_pct}_{iv_pct}_{iv_ads}"
173
174 Golbat with IV (attack: 9, defense: 4 and stamina: 8) will result in:
175 '48_46_9/4/8'
176
177 2. "nickname_template": "{attack_code}{attack_pct1}{defense_pct1}{ivcp_pct1}{name}"
178
179 Same Golbat (with attacks Wing Attack & Ominous Wind) will have nickname:
180 'Lh474Golbat'
181
182 See /tests/nickname_test.py for more examples.
183 """
184
185 # noinspection PyAttributeOutsideInit
186 def initialize(self):
187 self.ignore_favorites = self.config.get(
188 'dont_nickname_favorite', DEFAULT_IGNORE_FAVORITES)
189 self.good_attack_threshold = self.config.get(
190 'good_attack_threshold', DEFAULT_GOOD_ATTACK_THRESHOLD)
191 self.template = self.config.get(
192 'nickname_template', DEFAULT_TEMPLATE)
193
194 self.translate = None
195 locale = self.config.get('locale', 'en')
196 if locale != 'en':
197 fn = 'data/locales/{}.json'.format(locale)
198 if os.path.isfile(fn):
199 self.translate = json.load(open(fn))
200
201 def work(self):
202 """
203 Iterate over all user pokemons and nickname if needed
204 """
205 for pokemon in pokemons().all(): # type: Pokemon
206 if not pokemon.is_favorite or not self.ignore_favorites:
207 self._nickname_pokemon(pokemon)
208
209 def _localize(self, string):
210 if self.translate and string in self.translate:
211 return self.translate[string]
212 else:
213 return string
214
215 def _nickname_pokemon(self, pokemon):
216 # type: (Pokemon) -> None
217 """
218 Nicknaming process
219 """
220
221 # We need id of the specific pokemon unstance to be able to rename it
222 instance_id = pokemon.id
223 if not instance_id:
224 self.emit_event(
225 'api_error',
226 formatted='Failed to get pokemon name, will not rename.'
227 )
228 return
229
230 # Generate new nickname
231 old_nickname = pokemon.nickname
232 try:
233 new_nickname = self._generate_new_nickname(pokemon, self.template)
234 except KeyError as bad_key:
235 self.emit_event(
236 'config_error',
237 formatted="Unable to nickname {} due to bad template ({})"
238 .format(old_nickname, bad_key)
239 )
240 return
241
242 # Skip if pokemon is already well named
243 if pokemon.nickname_raw == new_nickname:
244 return
245
246 # Send request
247 response = self.bot.api.nickname_pokemon(
248 pokemon_id=instance_id, nickname=new_nickname)
249 sleep(1.2) # wait a bit after request
250
251 # Check result
252 try:
253 result = reduce(dict.__getitem__, ["responses", "NICKNAME_POKEMON"],
254 response)['result']
255 except KeyError:
256 self.emit_event(
257 'api_error',
258 formatted='Attempt to nickname received bad response from server.'
259 )
260 return
261
262 # Nickname unset
263 if result == 0:
264 self.emit_event(
265 'unset_pokemon_nickname',
266 formatted="Pokemon {old_name} nickname unset.",
267 data={'old_name': old_nickname}
268 )
269 pokemon.update_nickname(new_nickname)
270 elif result == 1:
271 self.emit_event(
272 'rename_pokemon',
273 formatted="Pokemon {old_name} renamed to {current_name}",
274 data={'old_name': old_nickname, 'current_name': new_nickname}
275 )
276 pokemon.update_nickname(new_nickname)
277 elif result == 2:
278 self.emit_event(
279 'pokemon_nickname_invalid',
280 formatted="Nickname {nickname} is invalid",
281 data={'nickname': new_nickname}
282 )
283 else:
284 self.emit_event(
285 'api_error',
286 formatted='Attempt to nickname received unexpected result'
287 ' from server ({}).'.format(result)
288 )
289
290 def _generate_new_nickname(self, pokemon, template):
291 # type: (Pokemon, string) -> string
292 """
293 New nickname generation
294 """
295
296 # Filter template
297 # only convert the keys to lowercase, leaving the format specifier alone
298 template = re.sub(r"{[\w_\d]*", lambda x:x.group(0).lower(), template).strip()
299
300 # Individial Values of the current specific pokemon (different for each)
301 iv_attack = pokemon.iv_attack
302 iv_defense = pokemon.iv_defense
303 iv_stamina = pokemon.iv_stamina
304 iv_list = [iv_attack, iv_defense, iv_stamina]
305 iv_sum = sum(iv_list)
306 iv_pct = iv_sum / 45.0
307
308 # Basic Values of the pokemon (identical for all of one kind)
309 base_attack = pokemon.static.base_attack
310 base_defense = pokemon.static.base_defense
311 base_stamina = pokemon.static.base_stamina
312
313 # Final Values of the pokemon
314 attack = base_attack + iv_attack
315 defense = base_defense + iv_defense
316 stamina = base_stamina + iv_stamina
317
318 # One character codes for fast/charged attack types
319 # If attack is good then character is uppecased, otherwise lowercased
320 fast_attack_char = self.attack_char(pokemon.fast_attack)
321 charged_attack_char = self.attack_char(pokemon.charged_attack)
322 # 2 characters code for both attacks of the pokemon
323 attack_code = fast_attack_char + charged_attack_char
324
325 moveset = pokemon.moveset
326
327 pokemon.name = self._localize(pokemon.name)
328
329 #
330 # Generate new nickname
331 #
332 new_name = template.format(
333 # Pokemon
334 pokemon=pokemon,
335 # Pokemon name
336 name=pokemon.name,
337 # Pokemon ID/Number
338 id=int(pokemon.pokemon_id),
339 # Combat Points
340 cp=int(pokemon.cp),
341
342 # Individial Values of the current specific pokemon
343 iv_attack=iv_attack,
344 iv_defense=iv_defense,
345 iv_stamina=iv_stamina,
346 # Joined IV values like: 4/12/9
347 iv_ads='/'.join(map(str, iv_list)),
348 # Joined IV values in HEX like: 4C9
349 iv_ads_hex = ''.join(map(lambda x: format(x, 'X'), iv_list)),
350 # Sum of the Individial Values
351 iv_sum=iv_sum,
352 # IV perfection (in 000-100 format - 3 chars)
353 iv_pct="{:03.0f}".format(iv_pct * 100),
354 # IV perfection (in 00-99 format - 2 chars)
355 # 99 is best (it's a 100% perfection)
356 iv_pct2="{:02.0f}".format(iv_pct * 99),
357 # IV perfection (in 0-9 format - 1 char)
358 # 9 is best (it's a 100% perfection)
359 iv_pct1=int(round(iv_pct * 9)),
360
361 # Basic Values of the pokemon (identical for all of one kind)
362 base_attack=base_attack,
363 base_defense=base_defense,
364 base_stamina=base_stamina,
365 # Joined Base Values like: 125/93/314
366 base_ads='/'.join(map(str, [base_attack, base_defense, base_stamina])),
367
368 # Final Values of the pokemon (Base Values + Individial Values)
369 attack=attack,
370 defense=defense,
371 stamina=stamina,
372 # Joined Final Values like: 129/97/321
373 sum_ads='/'.join(map(str, [attack, defense, stamina])),
374
375 # IV CP perfection (in 000-100 format - 3 chars)
376 # It's a kind of IV perfection percent but calculated
377 # using weight of each IV in its contribution to CP of the best
378 # evolution of current pokemon
379 # So it tends to be more accurate than simple IV perfection
380 ivcp_pct="{:03.0f}".format(pokemon.ivcp * 100),
381 # IV CP perfection (in 00-99 format - 2 chars)
382 ivcp_pct2="{:02.0f}".format(pokemon.ivcp * 99),
383 # IV CP perfection (in 0-9 format - 1 char)
384 ivcp_pct1=int(round(pokemon.ivcp * 9)),
385
386 # One character code for fast attack type
387 # If attack is good character is uppecased, otherwise lowercased
388 fast_attack_char=fast_attack_char,
389 # One character code for charged attack type
390 charged_attack_char=charged_attack_char,
391 # 2 characters code for both attacks of the pokemon
392 attack_code=attack_code,
393
394 # Moveset perfection for attack and for defense (in 000-100 format)
395 # Calculated for current pokemon only, not between all pokemons
396 # So perfect moveset can be weak if pokemon is weak (e.g. Caterpie)
397 attack_pct="{:03.0f}".format(moveset.attack_perfection * 100),
398 defense_pct="{:03.0f}".format(moveset.defense_perfection * 100),
399
400 # Moveset perfection (in 00-99 format - 2 chars)
401 attack_pct2="{:02.0f}".format(moveset.attack_perfection * 99),
402 defense_pct2="{:02.0f}".format(moveset.defense_perfection * 99),
403
404 # Moveset perfection (in 0-9 format - 1 char)
405 attack_pct1=int(round(moveset.attack_perfection * 9)),
406 defense_pct1=int(round(moveset.defense_perfection * 9)),
407 )
408
409 # Use empty result for unsetting nickname
410 # So original pokemon name will be shown to user
411 if new_name == pokemon.name:
412 new_name = ''
413
414 # 12 is a max allowed length for the nickname
415 return new_name[:MAXIMUM_NICKNAME_LENGTH]
416
417 def attack_char(self, attack):
418 # type: (Attack) -> string
419 """
420 One character code for attack type
421 If attack is good then character is uppecased, otherwise lowercased
422
423 Type codes:
424
425 Bug: 'B'
426 Dark: 'K'
427 Dragon: 'D'
428 Electric: 'E'
429 Fairy: 'Y'
430 Fighting: 'T'
431 Fire: 'F'
432 Flying: 'L'
433 Ghost: 'H'
434 Grass: 'A'
435 Ground: 'G'
436 Ice: 'I'
437 Normal: 'N'
438 Poison: 'P'
439 Psychic: 'C'
440 Rock: 'R'
441 Steel: 'S'
442 Water: 'W'
443
444 it's an effective way to represent type with one character
445 if first char is unique - use it, in other case suitable substitute used
446 """
447 char = attack.type.as_one_char.upper()
448 if attack.rate_in_type < self.good_attack_threshold:
449 char = char.lower()
450 return char
451
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pokemongo_bot/cell_workers/nickname_pokemon.py b/pokemongo_bot/cell_workers/nickname_pokemon.py
--- a/pokemongo_bot/cell_workers/nickname_pokemon.py
+++ b/pokemongo_bot/cell_workers/nickname_pokemon.py
@@ -1,3 +1,6 @@
+# -*- coding: utf-8 -*-
+from __future__ import unicode_literals
+
import os
import json
from pokemongo_bot.base_task import BaseTask
| {"golden_diff": "diff --git a/pokemongo_bot/cell_workers/nickname_pokemon.py b/pokemongo_bot/cell_workers/nickname_pokemon.py\n--- a/pokemongo_bot/cell_workers/nickname_pokemon.py\n+++ b/pokemongo_bot/cell_workers/nickname_pokemon.py\n@@ -1,3 +1,6 @@\n+# -*- coding: utf-8 -*-\n+from __future__ import unicode_literals\n+\n import os\n import json\n from pokemongo_bot.base_task import BaseTask\n", "issue": "Bot fails to start: UnicodeEncodeError 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)\n### Expected Behavior\n\nBot is able to start.\n### Actual Behavior\n\nBot fails to start.\n\nThe names of some monsters are specified by Japanese characters. I'm not sure but it might cause this error.\n### Your config.json (remove your credentials and any other private info)\n\n```\n{\n \"auth_service\": \"google\",\n \"username\": \"xxx\",\n \"password\": \"xxx\",\n \"location\": \"xxx,xxx\",\n \"gmapkey\": \"xxx\",\n \"tasks\": [\n {\n \"type\": \"HandleSoftBan\"\n },\n {\n \"type\": \"CollectLevelUpReward\"\n },\n {\n \"type\": \"IncubateEggs\",\n \"config\": {\n \"longer_eggs_first\": true\n }\n },\n {\n \"type\": \"NicknamePokemon\",\n \"config\": {\n \"nickname_template\": \"{name:.8s}_{iv_pct}\"\n }\n },\n {\n \"type\": \"TransferPokemon\"\n },\n {\n \"type\": \"EvolvePokemon\",\n \"config\": {\n \"evolve_all\": \"none\",\n \"first_evolve_by\": \"iv\",\n \"evolve_above_cp\": 500,\n \"evolve_above_iv\": 0.8,\n \"logic\": \"or\",\n \"evolve_speed\": 20,\n \"use_lucky_egg\": false\n }\n },\n {\n \"type\": \"RecycleItems\",\n \"config\": {\n \"item_filter\": {\n \"Pokeball\": { \"keep\" : 110 },\n \"Greatball\": { \"keep\" : 150 },\n \"Ultraball\": { \"keep\" : 150 },\n \"Potion\": { \"keep\" : 20 },\n \"Super Potion\": { \"keep\" : 30 },\n \"Hyper Potion\": { \"keep\" : 40 },\n \"Revive\": { \"keep\" : 40 },\n \"Razz Berry\": { \"keep\" : 120 }\n }\n }\n },\n {\n \"type\": \"CatchVisiblePokemon\"\n },\n {\n \"type\": \"CatchLuredPokemon\"\n },\n {\n \"type\": \"SpinFort\"\n },\n {\n \"type\": \"MoveToFort\",\n \"config\": {\n \"lure_attraction\": true,\n \"lure_max_distance\": 2000\n }\n },\n {\n \"type\": \"FollowSpiral\",\n \"config\": {\n \"diameter\": 4,\n \"step_size\": 70\n }\n }\n ],\n \"map_object_cache_time\": 5,\n \"forts\": {\n \"avoid_circles\": true,\n \"max_circle_size\": 50\n },\n \"websocket_server\": false,\n \"walk\": 4.16,\n \"action_wait_min\": 1,\n \"action_wait_max\": 4,\n \"debug\": false,\n \"test\": false,\n \"health_record\": true,\n \"location_cache\": true,\n \"distance_unit\": \"km\",\n \"reconnecting_timeout\": 15,\n \"evolve_captured\": \"NONE\",\n \"catch_randomize_reticle_factor\": 1.0,\n \"catch_randomize_spin_factor\": 1.0,\n \"catch\": {\n \"any\": {\"catch_above_cp\": 0, \"catch_above_iv\": 0, \"logic\": \"or\"},\n\n \"// Example of always catching Rattata:\": {},\n \"// Rattata\": { \"always_catch\" : true },\n\n \"// Legendary pokemons (Goes under S-Tier)\": {},\n \"Lapras\": { \"always_catch\": true },\n \"Moltres\": { \"always_catch\": true },\n \"Zapdos\": { \"always_catch\": true },\n \"Articuno\": { \"always_catch\": true },\n\n \"// always catch\": {},\n \"Charmander\": { \"always_catch\": true },\n \"Squirtle\": { \"always_catch\": true },\n \"Pikachu\": { \"always_catch\": true },\n \"Eevee\": { \"always_catch\": true },\n \"Dragonite\": { \"always_catch\": true },\n \"Dragonair\": { \"always_catch\": true },\n \"Dratini\": { \"always_catch\": true },\n\n \"// never catch\": {},\n \"Caterpie\": {\"never_catch\": 
true},\n \"Weedle\": {\"never_catch\": true},\n \"Pidgey\": {\"never_catch\": true},\n \"Rattata\": {\"never_catch\": true},\n \"Psyduck\": {\"never_catch\": true},\n \"Slowpoke\": {\"never_catch\": true}\n },\n \"release\": {\n \"any\": {\"keep_best_iv\": 2, \"logic\": \"or\"},\n \"Exeggcutor\": { \"never_release\" : true },\n \"Gyarados\": { \"never_release\" : true },\n \"Lapras\": { \"never_release\" : true },\n \"Vaporeon\": { \"never_release\" : true },\n \"Jolteon\": { \"never_release\" : true },\n \"Flareon\": { \"never_release\" : true },\n \"Snorlax\": { \"never_release\" : true },\n \"Dragonite\": { \"never_release\" : true },\n \"// any\": {\"keep_best_cp\": 2, \"keep_best_iv\": 2, \"logic\": \"or\"},\n \"// any\": {\"release_below_cp\": 0, \"release_below_iv\": 0, \"logic\": \"or\"},\n \"// Example of always releasing Rattata:\": {},\n \"// Rattata\": {\"always_release\": true},\n \"// Example of keeping 3 stronger (based on CP) Pidgey:\": {},\n \"// Pidgey\": {\"keep_best_cp\": 3},\n \"// Example of keeping 2 stronger (based on IV) Zubat:\": {},\n \"// Zubat\": {\"keep_best_iv\": 2},\n \"// Also, it is working with any\": {},\n \"// any\": {\"keep_best_iv\": 3},\n \"// Example of keeping the 2 strongest (based on CP) and 3 best (based on IV) Zubat:\": {},\n \"// Zubat\": {\"keep_best_cp\": 2, \"keep_best_iv\": 3}\n },\n \"vips\" : {\n \"Any pokemon put here directly force to use Berry & Best Ball to capture, to secure the capture rate!\": {},\n \"any\": {\"catch_above_cp\": 1200, \"catch_above_iv\": 0.9, \"logic\": \"or\" },\n \"Lapras\": {},\n \"Moltres\": {},\n \"Zapdos\": {},\n \"Articuno\": {},\n\n \"// S-Tier pokemons (if pokemon can be evolved into tier, list the representative)\": {},\n \"Mewtwo\": {},\n \"Dragonite\": {},\n \"Snorlax\": {},\n \"// Mew evolves to Mewtwo\": {},\n \"Mew\": {},\n \"Arcanine\": {},\n \"Vaporeon\": {},\n \"Gyarados\": {},\n \"Exeggutor\": {},\n \"Muk\": {},\n \"Weezing\": {},\n \"Flareon\": {}\n\n }\n}\n```\n### Steps to Reproduce\n\n2016-08-15 10:38:47,935 [ cli] [INFO] PokemonGO Bot v1.0\n2016-08-15 10:38:47,936 [ cli] [INFO] No config argument specified, checking for /configs/config.json\n2016-08-15 10:38:47,939 [ cli] [WARNING] The evolve_captured argument is no longer supported. Please use the EvolvePokemon task instead\n2016-08-15 10:38:47,940 [ cli] [INFO] Configuration initialized\n2016-08-15 10:38:47,940 [pokemongo_bot.health_record.bot_event] [INFO] Health check is enabled. 
For more information:\n2016-08-15 10:38:47,940 [pokemongo_bot.health_record.bot_event] [INFO] https://github.com/PokemonGoF/PokemonGo-Bot/tree/dev#analytics\n2016-08-15 10:38:47,945 [requests.packages.urllib3.connectionpool] [INFO] Starting new HTTP connection (1): www.google-analytics.com\n2016-08-15 10:38:48,039 [PokemonGoBot] [INFO] [set_start_location] Setting start location.\n2016-08-15 10:38:48,048 [PokemonGoBot] [INFO] [x] Coordinates found in passed in location, not geocoding.\n2016-08-15 10:38:48,049 [PokemonGoBot] [INFO] [location_found] Location found: xxx, xxx (xxx,xxx, 0.0)\n2016-08-15 10:38:48,049 [PokemonGoBot] [INFO] [position_update] Now at (xxx, xxx, 0)\n2016-08-15 10:38:48,049 [PokemonGoBot] [INFO] [login_started] Login procedure started.\n2016-08-15 10:38:50,020 [PokemonGoBot] [INFO] [login_successful] Login successful.\n2016-08-15 10:38:52,387 [PokemonGoBot] [INFO] \n2016-08-15 10:38:52,387 [PokemonGoBot] [INFO] --- sunnyfortune ---\n2016-08-15 10:38:52,387 [PokemonGoBot] [INFO] Level: 24 (Next Level: 69740 XP) (Total: 640260 XP)\n2016-08-15 10:38:52,387 [PokemonGoBot] [INFO] Pokemon Captured: 1688 | Pokestops Visited: 1917\n2016-08-15 10:38:52,388 [PokemonGoBot] [INFO] Pokemon Bag: 194/250\n2016-08-15 10:38:52,389 [PokemonGoBot] [INFO] Items: 689/700\n2016-08-15 10:38:52,389 [PokemonGoBot] [INFO] Stardust: 247878 | Pokecoins: 70\n2016-08-15 10:38:52,389 [PokemonGoBot] [INFO] PokeBalls: 96 | GreatBalls: 154 | UltraBalls: 150 | MasterBalls: 0\n2016-08-15 10:38:52,390 [PokemonGoBot] [INFO] RazzBerries: 124 | BlukBerries: 0 | NanabBerries: 0\n2016-08-15 10:38:52,390 [PokemonGoBot] [INFO] LuckyEgg: 6 | Incubator: 8 | TroyDisk: 11\n2016-08-15 10:38:52,390 [PokemonGoBot] [INFO] Potion: 23 | SuperPotion: 30 | HyperPotion: 41 | MaxPotion: 0\n2016-08-15 10:38:52,390 [PokemonGoBot] [INFO] Incense: 4 | IncenseSpicy: 0 | IncenseCool: 0\n2016-08-15 10:38:52,390 [PokemonGoBot] [INFO] Revive: 40 | MaxRevive: 0\n2016-08-15 10:38:52,390 [PokemonGoBot] [INFO] \n2016-08-15 10:38:52,391 [PokemonGoBot] [INFO] Found encrypt.so! 
Platform: linux2 Encrypt.so directory: /home/sunny/project/PokemonGo-Bot\n2016-08-15 10:38:52,391 [PokemonGoBot] [INFO] \n2016-08-15 10:38:53,321 [PokemonGoBot] [INFO] [bot_start] Starting bot...\n2016-08-15 10:38:53,637 [CollectLevelUpReward] [INFO] [level_up_reward] Received level up reward: []\n2016-08-15 10:38:53,638 [IncubateEggs] [INFO] [next_egg_incubates] Next egg incubates in 0.13 km\n2016-08-15 10:38:56,931 [ cli] [INFO] \n2016-08-15 10:38:56,931 [ cli] [INFO] Ran for 0:00:09\n2016-08-15 10:38:56,932 [ cli] [INFO] Total XP Earned: 0 Average: 0.00/h\n2016-08-15 10:38:56,932 [ cli] [INFO] Travelled 0.00km\n2016-08-15 10:38:56,932 [ cli] [INFO] Visited 0 stops\n2016-08-15 10:38:56,932 [ cli] [INFO] Encountered 0 pokemon, 0 caught, 0 released, 0 evolved, 0 never seen before\n2016-08-15 10:38:56,932 [ cli] [INFO] Threw 0 pokeballs\n2016-08-15 10:38:56,933 [ cli] [INFO] Earned 0 Stardust\n2016-08-15 10:38:56,933 [ cli] [INFO] \n2016-08-15 10:38:56,933 [ cli] [INFO] Highest CP Pokemon: \n2016-08-15 10:38:56,933 [ cli] [INFO] Most Perfect Pokemon: \nTraceback (most recent call last):\n File \"pokecli.py\", line 578, in <module>\n main()\n File \"pokecli.py\", line 103, in main\n bot.tick()\n File \"/home/sunny/project/PokemonGo-Bot/pokemongo_bot/**init**.py\", line 482, in tick\n if worker.work() == WorkerResult.RUNNING:\n File \"/home/sunny/project/PokemonGo-Bot/pokemongo_bot/cell_workers/nickname_pokemon.py\", line 204, in work\n self._nickname_pokemon(pokemon)\n File \"/home/sunny/project/PokemonGo-Bot/pokemongo_bot/cell_workers/nickname_pokemon.py\", line 271, in _nickname_pokemon\n data={'old_name': old_nickname, 'current_name': new_nickname}\n File \"/home/sunny/project/PokemonGo-Bot/pokemongo_bot/base_task.py\", line 28, in emit_event\n data=data\n File \"/home/sunny/project/PokemonGo-Bot/pokemongo_bot/event_manager.py\", line 61, in emit\n formatted_msg = formatted.format(*_data)\nUnicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)\n2016-08-15 10:38:56,954 [sentry.errors] [ERROR] Sentry responded with an error: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128) (url: https://app.getsentry.com/api/90254/store/)\nTraceback (most recent call last):\n File \"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/raven/transport/threaded.py\", line 174, in send_sync\n super(ThreadedHTTPTransport, self).send(data, headers)\n File \"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/raven/transport/http.py\", line 47, in send\n ca_certs=self.ca_certs,\n File \"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/raven/utils/http.py\", line 66, in urlopen\n return opener.open(url, data, timeout)\n File \"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/future/backports/urllib/request.py\", line 494, in open\n response = self._open(req, data)\n File \"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/future/backports/urllib/request.py\", line 512, in _open\n '_open', req)\n File \"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/future/backports/urllib/request.py\", line 466, in _call_chain\n result = func(_args)\n File \"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/raven/utils/http.py\", line 46, in https_open\n return self.do_open(ValidHTTPSConnection, req)\n File 
\"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/site-packages/future/backports/urllib/request.py\", line 1284, in do_open\n h.request(req.get_method(), req.selector, req.data, headers)\n File \"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/httplib.py\", line 1057, in request\n self._send_request(method, url, body, headers)\n File \"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/httplib.py\", line 1097, in _send_request\n self.endheaders(body)\n File \"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/httplib.py\", line 1053, in endheaders\n self._send_output(message_body)\n File \"/home/sunny/.pyenv/versions/anaconda2-2.5.0/envs/poke/lib/python2.7/httplib.py\", line 895, in _send_output\n msg += message_body\nUnicodeDecodeError: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128)\n2016-08-15 10:38:56,958 [sentry.errors.uncaught] [ERROR] [u\"UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)\", u' File \"pokecli.py\", line 578, in <module>', u' File \"pokecli.py\", line 103, in main', u' File \"pokemongo_bot/__init__.py\", line 482, in tick', u' File \"pokemongo_bot/cell_workers/nickname_pokemon.py\", line 204, in work', u' File \"pokemongo_bot/cell_workers/nickname_pokemon.py\", line 271, in _nickname_pokemon', u' File \"pokemongo_bot/base_task.py\", line 28, in emit_event', u' File \"pokemongo_bot/event_manager.py\", line 61, in emit']\n### Other Information\n\nOS:ubuntu 14.04 LTS\nGit Commit: 5c9cdb53e69b5069cee6fe100d39e3cf5d63539c\nPython Version: Python 2.7.12 :: Continuum Analytics, Inc.\n\n", "before_files": [{"content": "import os\nimport json\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot.human_behaviour import sleep\nfrom pokemongo_bot.inventory import pokemons, Pokemon, Attack\n\nimport re\n\n\nDEFAULT_IGNORE_FAVORITES = False\nDEFAULT_GOOD_ATTACK_THRESHOLD = 0.7\nDEFAULT_TEMPLATE = '{name}'\n\nMAXIMUM_NICKNAME_LENGTH = 12\n\n\nclass NicknamePokemon(BaseTask):\n SUPPORTED_TASK_API_VERSION = 1\n\n \"\"\"\n Nickname user pokemons according to the specified template\n\n\n PARAMETERS:\n\n dont_nickname_favorite (default: False)\n Prevents renaming of favorited pokemons\n\n good_attack_threshold (default: 0.7)\n Threshold for perfection of the attack in it's type (0.0-1.0)\n after which attack will be treated as good.\n Used for {fast_attack_char}, {charged_attack_char}, {attack_code}\n templates\n\n nickname_template (default: '{name}')\n Template for nickname generation.\n Empty template or any resulting in the simple pokemon name\n (e.g. '', '{name}', ...) will revert all pokemon to their original\n names (as if they had no nickname).\n\n Niantic imposes a 12-character limit on all pokemon nicknames, so\n any new nickname will be truncated to 12 characters if over that limit.\n Thus, it is up to the user to exercise judgment on what template will\n best suit their need with this constraint in mind.\n\n You can use full force of the Python [Format String syntax](https://docs.python.org/2.7/library/string.html#formatstrings)\n For example, using `{name:.8s}` causes the Pokemon name to never take up\n more than 8 characters in the nickname. 
This would help guarantee that\n a template like `{name:.8s}_{iv_pct}` never goes over the 12-character\n limit.\n\n\n **NOTE:** If you experience frequent `Pokemon not found` error messages,\n this is because the inventory cache has not been updated after a pokemon\n was released. This can be remedied by placing the `NicknamePokemon` task\n above the `TransferPokemon` task in your `config.json` file.\n\n\n EXAMPLE CONFIG:\n {\n \"type\": \"NicknamePokemon\",\n \"config\": {\n \"enabled\": true,\n \"dont_nickname_favorite\": false,\n \"good_attack_threshold\": 0.7,\n \"nickname_template\": \"{iv_pct}_{iv_ads}\"\n }\n }\n\n\n SUPPORTED PATTERN KEYS:\n\n {name} Pokemon name (e.g. Articuno)\n {id} Pokemon ID/Number (1-151)\n {cp} Combat Points (10-4145)\n\n # Individial Values\n {iv_attack} Individial Attack (0-15) of the current specific pokemon\n {iv_defense} Individial Defense (0-15) of the current specific pokemon\n {iv_stamina} Individial Stamina (0-15) of the current specific pokemon\n {iv_ads} Joined IV values (e.g. 4/12/9)\n {iv_sum} Sum of the Individial Values (0-45)\n {iv_pct} IV perfection (in 000-100 format - 3 chars)\n {iv_pct2} IV perfection (in 00-99 format - 2 chars)\n So 99 is best (it's a 100% perfection)\n {iv_pct1} IV perfection (in 0-9 format - 1 char)\n {iv_ads_hex} Joined IV values in HEX (e.g. 4C9)\n\n # Basic Values of the pokemon (identical for all of one kind)\n {base_attack} Basic Attack (40-284) of the current pokemon kind\n {base_defense} Basic Defense (54-242) of the current pokemon kind\n {base_stamina} Basic Stamina (20-500) of the current pokemon kind\n {base_ads} Joined Basic Values (e.g. 125/93/314)\n\n # Final Values of the pokemon (Base Values + Individial Values)\n {attack} Basic Attack + Individial Attack\n {defense} Basic Defense + Individial Defense\n {stamina} Basic Stamina + Individial Stamina\n {sum_ads} Joined Final Values (e.g. 129/97/321)\n\n # IV CP perfection - it's a kind of IV perfection percent\n # but calculated using weight of each IV in its contribution\n # to CP of the best evolution of current pokemon.\n # So it tends to be more accurate than simple IV perfection.\n {ivcp_pct} IV CP perfection (in 000-100 format - 3 chars)\n {ivcp_pct2} IV CP perfection (in 00-99 format - 2 chars)\n So 99 is best (it's a 100% perfection)\n {ivcp_pct1} IV CP perfection (in 0-9 format - 1 char)\n\n # Character codes for fast/charged attack types.\n # If attack is good character is uppecased, otherwise lowercased.\n # Use 'good_attack_threshold' option for customization\n #\n # It's an effective way to represent type with one character.\n # If first char of the type name is unique - use it,\n # in other case suitable substitute used\n #\n # Type codes:\n # Bug: 'B'\n # Dark: 'K'\n # Dragon: 'D'\n # Electric: 'E'\n # Fairy: 'Y'\n # Fighting: 'T'\n # Fire: 'F'\n # Flying: 'L'\n # Ghost: 'H'\n # Grass: 'A'\n # Ground: 'G'\n # Ice: 'I'\n # Normal: 'N'\n # Poison: 'P'\n # Psychic: 'C'\n # Rock: 'R'\n # Steel: 'S'\n # Water: 'W'\n #\n {fast_attack_char} One character code for fast attack type\n (e.g. 'F' for good Fire or 's' for bad\n Steel attack)\n {charged_attack_char} One character code for charged attack type\n (e.g. 'n' for bad Normal or 'I' for good\n Ice attack)\n {attack_code} Joined 2 character code for both attacks\n (e.g. 
'Lh' for pokemon with good Flying\n and weak Ghost attacks)\n\n # Moveset perfection percents for attack and for defense\n # Calculated for current pokemon only, not between all pokemons\n # So perfect moveset can be weak if pokemon is weak (e.g. Caterpie)\n {attack_pct} Moveset perfection for attack (in 000-100 format - 3 chars)\n {defense_pct} Moveset perfection for defense (in 000-100 format - 3 chars)\n {attack_pct2} Moveset perfection for attack (in 00-99 format - 2 chars)\n {defense_pct2} Moveset perfection for defense (in 00-99 format - 2 chars)\n {attack_pct1} Moveset perfection for attack (in 0-9 format - 1 char)\n {defense_pct1} Moveset perfection for defense (in 0-9 format - 1 char)\n\n # Special case: pokemon object.\n # You can access any available pokemon info via it.\n # Examples:\n # '{pokemon.ivcp:.2%}' -> '47.00%'\n # '{pokemon.fast_attack}' -> 'Wing Attack'\n # '{pokemon.fast_attack.type}' -> 'Flying'\n # '{pokemon.fast_attack.dps:.2f}' -> '10.91'\n # '{pokemon.fast_attack.dps:.0f}' -> '11'\n # '{pokemon.charged_attack}' -> 'Ominous Wind'\n {pokemon} Pokemon instance (see inventory.py for class sources)\n\n\n EXAMPLES:\n\n 1. \"nickname_template\": \"{ivcp_pct}_{iv_pct}_{iv_ads}\"\n\n Golbat with IV (attack: 9, defense: 4 and stamina: 8) will result in:\n '48_46_9/4/8'\n\n 2. \"nickname_template\": \"{attack_code}{attack_pct1}{defense_pct1}{ivcp_pct1}{name}\"\n\n Same Golbat (with attacks Wing Attack & Ominous Wind) will have nickname:\n 'Lh474Golbat'\n\n See /tests/nickname_test.py for more examples.\n \"\"\"\n\n # noinspection PyAttributeOutsideInit\n def initialize(self):\n self.ignore_favorites = self.config.get(\n 'dont_nickname_favorite', DEFAULT_IGNORE_FAVORITES)\n self.good_attack_threshold = self.config.get(\n 'good_attack_threshold', DEFAULT_GOOD_ATTACK_THRESHOLD)\n self.template = self.config.get(\n 'nickname_template', DEFAULT_TEMPLATE)\n\n self.translate = None\n locale = self.config.get('locale', 'en')\n if locale != 'en':\n fn = 'data/locales/{}.json'.format(locale)\n if os.path.isfile(fn):\n self.translate = json.load(open(fn))\n\n def work(self):\n \"\"\"\n Iterate over all user pokemons and nickname if needed\n \"\"\"\n for pokemon in pokemons().all(): # type: Pokemon\n if not pokemon.is_favorite or not self.ignore_favorites:\n self._nickname_pokemon(pokemon)\n\n def _localize(self, string):\n if self.translate and string in self.translate:\n return self.translate[string]\n else:\n return string\n\n def _nickname_pokemon(self, pokemon):\n # type: (Pokemon) -> None\n \"\"\"\n Nicknaming process\n \"\"\"\n\n # We need id of the specific pokemon unstance to be able to rename it\n instance_id = pokemon.id\n if not instance_id:\n self.emit_event(\n 'api_error',\n formatted='Failed to get pokemon name, will not rename.'\n )\n return\n\n # Generate new nickname\n old_nickname = pokemon.nickname\n try:\n new_nickname = self._generate_new_nickname(pokemon, self.template)\n except KeyError as bad_key:\n self.emit_event(\n 'config_error',\n formatted=\"Unable to nickname {} due to bad template ({})\"\n .format(old_nickname, bad_key)\n )\n return\n\n # Skip if pokemon is already well named\n if pokemon.nickname_raw == new_nickname:\n return\n\n # Send request\n response = self.bot.api.nickname_pokemon(\n pokemon_id=instance_id, nickname=new_nickname)\n sleep(1.2) # wait a bit after request\n\n # Check result\n try:\n result = reduce(dict.__getitem__, [\"responses\", \"NICKNAME_POKEMON\"],\n response)['result']\n except KeyError:\n self.emit_event(\n 'api_error',\n 
formatted='Attempt to nickname received bad response from server.'\n )\n return\n\n # Nickname unset\n if result == 0:\n self.emit_event(\n 'unset_pokemon_nickname',\n formatted=\"Pokemon {old_name} nickname unset.\",\n data={'old_name': old_nickname}\n )\n pokemon.update_nickname(new_nickname)\n elif result == 1:\n self.emit_event(\n 'rename_pokemon',\n formatted=\"Pokemon {old_name} renamed to {current_name}\",\n data={'old_name': old_nickname, 'current_name': new_nickname}\n )\n pokemon.update_nickname(new_nickname)\n elif result == 2:\n self.emit_event(\n 'pokemon_nickname_invalid',\n formatted=\"Nickname {nickname} is invalid\",\n data={'nickname': new_nickname}\n )\n else:\n self.emit_event(\n 'api_error',\n formatted='Attempt to nickname received unexpected result'\n ' from server ({}).'.format(result)\n )\n\n def _generate_new_nickname(self, pokemon, template):\n # type: (Pokemon, string) -> string\n \"\"\"\n New nickname generation\n \"\"\"\n\n # Filter template\n # only convert the keys to lowercase, leaving the format specifier alone\n template = re.sub(r\"{[\\w_\\d]*\", lambda x:x.group(0).lower(), template).strip()\n\n # Individial Values of the current specific pokemon (different for each)\n iv_attack = pokemon.iv_attack\n iv_defense = pokemon.iv_defense\n iv_stamina = pokemon.iv_stamina\n iv_list = [iv_attack, iv_defense, iv_stamina]\n iv_sum = sum(iv_list)\n iv_pct = iv_sum / 45.0\n\n # Basic Values of the pokemon (identical for all of one kind)\n base_attack = pokemon.static.base_attack\n base_defense = pokemon.static.base_defense\n base_stamina = pokemon.static.base_stamina\n\n # Final Values of the pokemon\n attack = base_attack + iv_attack\n defense = base_defense + iv_defense\n stamina = base_stamina + iv_stamina\n\n # One character codes for fast/charged attack types\n # If attack is good then character is uppecased, otherwise lowercased\n fast_attack_char = self.attack_char(pokemon.fast_attack)\n charged_attack_char = self.attack_char(pokemon.charged_attack)\n # 2 characters code for both attacks of the pokemon\n attack_code = fast_attack_char + charged_attack_char\n\n moveset = pokemon.moveset\n\n pokemon.name = self._localize(pokemon.name)\n\n #\n # Generate new nickname\n #\n new_name = template.format(\n # Pokemon\n pokemon=pokemon,\n # Pokemon name\n name=pokemon.name,\n # Pokemon ID/Number\n id=int(pokemon.pokemon_id),\n # Combat Points\n cp=int(pokemon.cp),\n\n # Individial Values of the current specific pokemon\n iv_attack=iv_attack,\n iv_defense=iv_defense,\n iv_stamina=iv_stamina,\n # Joined IV values like: 4/12/9\n iv_ads='/'.join(map(str, iv_list)),\n # Joined IV values in HEX like: 4C9\n iv_ads_hex = ''.join(map(lambda x: format(x, 'X'), iv_list)),\n # Sum of the Individial Values\n iv_sum=iv_sum,\n # IV perfection (in 000-100 format - 3 chars)\n iv_pct=\"{:03.0f}\".format(iv_pct * 100),\n # IV perfection (in 00-99 format - 2 chars)\n # 99 is best (it's a 100% perfection)\n iv_pct2=\"{:02.0f}\".format(iv_pct * 99),\n # IV perfection (in 0-9 format - 1 char)\n # 9 is best (it's a 100% perfection)\n iv_pct1=int(round(iv_pct * 9)),\n\n # Basic Values of the pokemon (identical for all of one kind)\n base_attack=base_attack,\n base_defense=base_defense,\n base_stamina=base_stamina,\n # Joined Base Values like: 125/93/314\n base_ads='/'.join(map(str, [base_attack, base_defense, base_stamina])),\n\n # Final Values of the pokemon (Base Values + Individial Values)\n attack=attack,\n defense=defense,\n stamina=stamina,\n # Joined Final Values like: 129/97/321\n 
sum_ads='/'.join(map(str, [attack, defense, stamina])),\n\n # IV CP perfection (in 000-100 format - 3 chars)\n # It's a kind of IV perfection percent but calculated\n # using weight of each IV in its contribution to CP of the best\n # evolution of current pokemon\n # So it tends to be more accurate than simple IV perfection\n ivcp_pct=\"{:03.0f}\".format(pokemon.ivcp * 100),\n # IV CP perfection (in 00-99 format - 2 chars)\n ivcp_pct2=\"{:02.0f}\".format(pokemon.ivcp * 99),\n # IV CP perfection (in 0-9 format - 1 char)\n ivcp_pct1=int(round(pokemon.ivcp * 9)),\n\n # One character code for fast attack type\n # If attack is good character is uppecased, otherwise lowercased\n fast_attack_char=fast_attack_char,\n # One character code for charged attack type\n charged_attack_char=charged_attack_char,\n # 2 characters code for both attacks of the pokemon\n attack_code=attack_code,\n\n # Moveset perfection for attack and for defense (in 000-100 format)\n # Calculated for current pokemon only, not between all pokemons\n # So perfect moveset can be weak if pokemon is weak (e.g. Caterpie)\n attack_pct=\"{:03.0f}\".format(moveset.attack_perfection * 100),\n defense_pct=\"{:03.0f}\".format(moveset.defense_perfection * 100),\n\n # Moveset perfection (in 00-99 format - 2 chars)\n attack_pct2=\"{:02.0f}\".format(moveset.attack_perfection * 99),\n defense_pct2=\"{:02.0f}\".format(moveset.defense_perfection * 99),\n\n # Moveset perfection (in 0-9 format - 1 char)\n attack_pct1=int(round(moveset.attack_perfection * 9)),\n defense_pct1=int(round(moveset.defense_perfection * 9)),\n )\n\n # Use empty result for unsetting nickname\n # So original pokemon name will be shown to user\n if new_name == pokemon.name:\n new_name = ''\n\n # 12 is a max allowed length for the nickname\n return new_name[:MAXIMUM_NICKNAME_LENGTH]\n\n def attack_char(self, attack):\n # type: (Attack) -> string\n \"\"\"\n One character code for attack type\n If attack is good then character is uppecased, otherwise lowercased\n\n Type codes:\n\n Bug: 'B'\n Dark: 'K'\n Dragon: 'D'\n Electric: 'E'\n Fairy: 'Y'\n Fighting: 'T'\n Fire: 'F'\n Flying: 'L'\n Ghost: 'H'\n Grass: 'A'\n Ground: 'G'\n Ice: 'I'\n Normal: 'N'\n Poison: 'P'\n Psychic: 'C'\n Rock: 'R'\n Steel: 'S'\n Water: 'W'\n\n it's an effective way to represent type with one character\n if first char is unique - use it, in other case suitable substitute used\n \"\"\"\n char = attack.type.as_one_char.upper()\n if attack.rate_in_type < self.good_attack_threshold:\n char = char.lower()\n return char\n", "path": "pokemongo_bot/cell_workers/nickname_pokemon.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nimport os\nimport json\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot.human_behaviour import sleep\nfrom pokemongo_bot.inventory import pokemons, Pokemon, Attack\n\nimport re\n\n\nDEFAULT_IGNORE_FAVORITES = False\nDEFAULT_GOOD_ATTACK_THRESHOLD = 0.7\nDEFAULT_TEMPLATE = '{name}'\n\nMAXIMUM_NICKNAME_LENGTH = 12\n\n\nclass NicknamePokemon(BaseTask):\n SUPPORTED_TASK_API_VERSION = 1\n\n \"\"\"\n Nickname user pokemons according to the specified template\n\n\n PARAMETERS:\n\n dont_nickname_favorite (default: False)\n Prevents renaming of favorited pokemons\n\n good_attack_threshold (default: 0.7)\n Threshold for perfection of the attack in it's type (0.0-1.0)\n after which attack will be treated as good.\n Used for {fast_attack_char}, {charged_attack_char}, {attack_code}\n templates\n\n nickname_template (default: 
'{name}')\n Template for nickname generation.\n Empty template or any resulting in the simple pokemon name\n (e.g. '', '{name}', ...) will revert all pokemon to their original\n names (as if they had no nickname).\n\n Niantic imposes a 12-character limit on all pokemon nicknames, so\n any new nickname will be truncated to 12 characters if over that limit.\n Thus, it is up to the user to exercise judgment on what template will\n best suit their need with this constraint in mind.\n\n You can use full force of the Python [Format String syntax](https://docs.python.org/2.7/library/string.html#formatstrings)\n For example, using `{name:.8s}` causes the Pokemon name to never take up\n more than 8 characters in the nickname. This would help guarantee that\n a template like `{name:.8s}_{iv_pct}` never goes over the 12-character\n limit.\n\n\n **NOTE:** If you experience frequent `Pokemon not found` error messages,\n this is because the inventory cache has not been updated after a pokemon\n was released. This can be remedied by placing the `NicknamePokemon` task\n above the `TransferPokemon` task in your `config.json` file.\n\n\n EXAMPLE CONFIG:\n {\n \"type\": \"NicknamePokemon\",\n \"config\": {\n \"enabled\": true,\n \"dont_nickname_favorite\": false,\n \"good_attack_threshold\": 0.7,\n \"nickname_template\": \"{iv_pct}_{iv_ads}\"\n }\n }\n\n\n SUPPORTED PATTERN KEYS:\n\n {name} Pokemon name (e.g. Articuno)\n {id} Pokemon ID/Number (1-151)\n {cp} Combat Points (10-4145)\n\n # Individial Values\n {iv_attack} Individial Attack (0-15) of the current specific pokemon\n {iv_defense} Individial Defense (0-15) of the current specific pokemon\n {iv_stamina} Individial Stamina (0-15) of the current specific pokemon\n {iv_ads} Joined IV values (e.g. 4/12/9)\n {iv_sum} Sum of the Individial Values (0-45)\n {iv_pct} IV perfection (in 000-100 format - 3 chars)\n {iv_pct2} IV perfection (in 00-99 format - 2 chars)\n So 99 is best (it's a 100% perfection)\n {iv_pct1} IV perfection (in 0-9 format - 1 char)\n {iv_ads_hex} Joined IV values in HEX (e.g. 4C9)\n\n # Basic Values of the pokemon (identical for all of one kind)\n {base_attack} Basic Attack (40-284) of the current pokemon kind\n {base_defense} Basic Defense (54-242) of the current pokemon kind\n {base_stamina} Basic Stamina (20-500) of the current pokemon kind\n {base_ads} Joined Basic Values (e.g. 125/93/314)\n\n # Final Values of the pokemon (Base Values + Individial Values)\n {attack} Basic Attack + Individial Attack\n {defense} Basic Defense + Individial Defense\n {stamina} Basic Stamina + Individial Stamina\n {sum_ads} Joined Final Values (e.g. 
129/97/321)\n\n # IV CP perfection - it's a kind of IV perfection percent\n # but calculated using weight of each IV in its contribution\n # to CP of the best evolution of current pokemon.\n # So it tends to be more accurate than simple IV perfection.\n {ivcp_pct} IV CP perfection (in 000-100 format - 3 chars)\n {ivcp_pct2} IV CP perfection (in 00-99 format - 2 chars)\n So 99 is best (it's a 100% perfection)\n {ivcp_pct1} IV CP perfection (in 0-9 format - 1 char)\n\n # Character codes for fast/charged attack types.\n # If attack is good character is uppecased, otherwise lowercased.\n # Use 'good_attack_threshold' option for customization\n #\n # It's an effective way to represent type with one character.\n # If first char of the type name is unique - use it,\n # in other case suitable substitute used\n #\n # Type codes:\n # Bug: 'B'\n # Dark: 'K'\n # Dragon: 'D'\n # Electric: 'E'\n # Fairy: 'Y'\n # Fighting: 'T'\n # Fire: 'F'\n # Flying: 'L'\n # Ghost: 'H'\n # Grass: 'A'\n # Ground: 'G'\n # Ice: 'I'\n # Normal: 'N'\n # Poison: 'P'\n # Psychic: 'C'\n # Rock: 'R'\n # Steel: 'S'\n # Water: 'W'\n #\n {fast_attack_char} One character code for fast attack type\n (e.g. 'F' for good Fire or 's' for bad\n Steel attack)\n {charged_attack_char} One character code for charged attack type\n (e.g. 'n' for bad Normal or 'I' for good\n Ice attack)\n {attack_code} Joined 2 character code for both attacks\n (e.g. 'Lh' for pokemon with good Flying\n and weak Ghost attacks)\n\n # Moveset perfection percents for attack and for defense\n # Calculated for current pokemon only, not between all pokemons\n # So perfect moveset can be weak if pokemon is weak (e.g. Caterpie)\n {attack_pct} Moveset perfection for attack (in 000-100 format - 3 chars)\n {defense_pct} Moveset perfection for defense (in 000-100 format - 3 chars)\n {attack_pct2} Moveset perfection for attack (in 00-99 format - 2 chars)\n {defense_pct2} Moveset perfection for defense (in 00-99 format - 2 chars)\n {attack_pct1} Moveset perfection for attack (in 0-9 format - 1 char)\n {defense_pct1} Moveset perfection for defense (in 0-9 format - 1 char)\n\n # Special case: pokemon object.\n # You can access any available pokemon info via it.\n # Examples:\n # '{pokemon.ivcp:.2%}' -> '47.00%'\n # '{pokemon.fast_attack}' -> 'Wing Attack'\n # '{pokemon.fast_attack.type}' -> 'Flying'\n # '{pokemon.fast_attack.dps:.2f}' -> '10.91'\n # '{pokemon.fast_attack.dps:.0f}' -> '11'\n # '{pokemon.charged_attack}' -> 'Ominous Wind'\n {pokemon} Pokemon instance (see inventory.py for class sources)\n\n\n EXAMPLES:\n\n 1. \"nickname_template\": \"{ivcp_pct}_{iv_pct}_{iv_ads}\"\n\n Golbat with IV (attack: 9, defense: 4 and stamina: 8) will result in:\n '48_46_9/4/8'\n\n 2. 
\"nickname_template\": \"{attack_code}{attack_pct1}{defense_pct1}{ivcp_pct1}{name}\"\n\n Same Golbat (with attacks Wing Attack & Ominous Wind) will have nickname:\n 'Lh474Golbat'\n\n See /tests/nickname_test.py for more examples.\n \"\"\"\n\n # noinspection PyAttributeOutsideInit\n def initialize(self):\n self.ignore_favorites = self.config.get(\n 'dont_nickname_favorite', DEFAULT_IGNORE_FAVORITES)\n self.good_attack_threshold = self.config.get(\n 'good_attack_threshold', DEFAULT_GOOD_ATTACK_THRESHOLD)\n self.template = self.config.get(\n 'nickname_template', DEFAULT_TEMPLATE)\n\n self.translate = None\n locale = self.config.get('locale', 'en')\n if locale != 'en':\n fn = 'data/locales/{}.json'.format(locale)\n if os.path.isfile(fn):\n self.translate = json.load(open(fn))\n\n def work(self):\n \"\"\"\n Iterate over all user pokemons and nickname if needed\n \"\"\"\n for pokemon in pokemons().all(): # type: Pokemon\n if not pokemon.is_favorite or not self.ignore_favorites:\n self._nickname_pokemon(pokemon)\n\n def _localize(self, string):\n if self.translate and string in self.translate:\n return self.translate[string]\n else:\n return string\n\n def _nickname_pokemon(self, pokemon):\n # type: (Pokemon) -> None\n \"\"\"\n Nicknaming process\n \"\"\"\n\n # We need id of the specific pokemon unstance to be able to rename it\n instance_id = pokemon.id\n if not instance_id:\n self.emit_event(\n 'api_error',\n formatted='Failed to get pokemon name, will not rename.'\n )\n return\n\n # Generate new nickname\n old_nickname = pokemon.nickname\n try:\n new_nickname = self._generate_new_nickname(pokemon, self.template)\n except KeyError as bad_key:\n self.emit_event(\n 'config_error',\n formatted=\"Unable to nickname {} due to bad template ({})\"\n .format(old_nickname, bad_key)\n )\n return\n\n # Skip if pokemon is already well named\n if pokemon.nickname_raw == new_nickname:\n return\n\n # Send request\n response = self.bot.api.nickname_pokemon(\n pokemon_id=instance_id, nickname=new_nickname)\n sleep(1.2) # wait a bit after request\n\n # Check result\n try:\n result = reduce(dict.__getitem__, [\"responses\", \"NICKNAME_POKEMON\"],\n response)['result']\n except KeyError:\n self.emit_event(\n 'api_error',\n formatted='Attempt to nickname received bad response from server.'\n )\n return\n\n # Nickname unset\n if result == 0:\n self.emit_event(\n 'unset_pokemon_nickname',\n formatted=\"Pokemon {old_name} nickname unset.\",\n data={'old_name': old_nickname}\n )\n pokemon.update_nickname(new_nickname)\n elif result == 1:\n self.emit_event(\n 'rename_pokemon',\n formatted=\"Pokemon {old_name} renamed to {current_name}\",\n data={'old_name': old_nickname, 'current_name': new_nickname}\n )\n pokemon.update_nickname(new_nickname)\n elif result == 2:\n self.emit_event(\n 'pokemon_nickname_invalid',\n formatted=\"Nickname {nickname} is invalid\",\n data={'nickname': new_nickname}\n )\n else:\n self.emit_event(\n 'api_error',\n formatted='Attempt to nickname received unexpected result'\n ' from server ({}).'.format(result)\n )\n\n def _generate_new_nickname(self, pokemon, template):\n # type: (Pokemon, string) -> string\n \"\"\"\n New nickname generation\n \"\"\"\n\n # Filter template\n # only convert the keys to lowercase, leaving the format specifier alone\n template = re.sub(r\"{[\\w_\\d]*\", lambda x:x.group(0).lower(), template).strip()\n\n # Individial Values of the current specific pokemon (different for each)\n iv_attack = pokemon.iv_attack\n iv_defense = pokemon.iv_defense\n iv_stamina = 
pokemon.iv_stamina\n iv_list = [iv_attack, iv_defense, iv_stamina]\n iv_sum = sum(iv_list)\n iv_pct = iv_sum / 45.0\n\n # Basic Values of the pokemon (identical for all of one kind)\n base_attack = pokemon.static.base_attack\n base_defense = pokemon.static.base_defense\n base_stamina = pokemon.static.base_stamina\n\n # Final Values of the pokemon\n attack = base_attack + iv_attack\n defense = base_defense + iv_defense\n stamina = base_stamina + iv_stamina\n\n # One character codes for fast/charged attack types\n # If attack is good then character is uppecased, otherwise lowercased\n fast_attack_char = self.attack_char(pokemon.fast_attack)\n charged_attack_char = self.attack_char(pokemon.charged_attack)\n # 2 characters code for both attacks of the pokemon\n attack_code = fast_attack_char + charged_attack_char\n\n moveset = pokemon.moveset\n\n pokemon.name = self._localize(pokemon.name)\n\n #\n # Generate new nickname\n #\n new_name = template.format(\n # Pokemon\n pokemon=pokemon,\n # Pokemon name\n name=pokemon.name,\n # Pokemon ID/Number\n id=int(pokemon.pokemon_id),\n # Combat Points\n cp=int(pokemon.cp),\n\n # Individial Values of the current specific pokemon\n iv_attack=iv_attack,\n iv_defense=iv_defense,\n iv_stamina=iv_stamina,\n # Joined IV values like: 4/12/9\n iv_ads='/'.join(map(str, iv_list)),\n # Joined IV values in HEX like: 4C9\n iv_ads_hex = ''.join(map(lambda x: format(x, 'X'), iv_list)),\n # Sum of the Individial Values\n iv_sum=iv_sum,\n # IV perfection (in 000-100 format - 3 chars)\n iv_pct=\"{:03.0f}\".format(iv_pct * 100),\n # IV perfection (in 00-99 format - 2 chars)\n # 99 is best (it's a 100% perfection)\n iv_pct2=\"{:02.0f}\".format(iv_pct * 99),\n # IV perfection (in 0-9 format - 1 char)\n # 9 is best (it's a 100% perfection)\n iv_pct1=int(round(iv_pct * 9)),\n\n # Basic Values of the pokemon (identical for all of one kind)\n base_attack=base_attack,\n base_defense=base_defense,\n base_stamina=base_stamina,\n # Joined Base Values like: 125/93/314\n base_ads='/'.join(map(str, [base_attack, base_defense, base_stamina])),\n\n # Final Values of the pokemon (Base Values + Individial Values)\n attack=attack,\n defense=defense,\n stamina=stamina,\n # Joined Final Values like: 129/97/321\n sum_ads='/'.join(map(str, [attack, defense, stamina])),\n\n # IV CP perfection (in 000-100 format - 3 chars)\n # It's a kind of IV perfection percent but calculated\n # using weight of each IV in its contribution to CP of the best\n # evolution of current pokemon\n # So it tends to be more accurate than simple IV perfection\n ivcp_pct=\"{:03.0f}\".format(pokemon.ivcp * 100),\n # IV CP perfection (in 00-99 format - 2 chars)\n ivcp_pct2=\"{:02.0f}\".format(pokemon.ivcp * 99),\n # IV CP perfection (in 0-9 format - 1 char)\n ivcp_pct1=int(round(pokemon.ivcp * 9)),\n\n # One character code for fast attack type\n # If attack is good character is uppecased, otherwise lowercased\n fast_attack_char=fast_attack_char,\n # One character code for charged attack type\n charged_attack_char=charged_attack_char,\n # 2 characters code for both attacks of the pokemon\n attack_code=attack_code,\n\n # Moveset perfection for attack and for defense (in 000-100 format)\n # Calculated for current pokemon only, not between all pokemons\n # So perfect moveset can be weak if pokemon is weak (e.g. 
Caterpie)\n attack_pct=\"{:03.0f}\".format(moveset.attack_perfection * 100),\n defense_pct=\"{:03.0f}\".format(moveset.defense_perfection * 100),\n\n # Moveset perfection (in 00-99 format - 2 chars)\n attack_pct2=\"{:02.0f}\".format(moveset.attack_perfection * 99),\n defense_pct2=\"{:02.0f}\".format(moveset.defense_perfection * 99),\n\n # Moveset perfection (in 0-9 format - 1 char)\n attack_pct1=int(round(moveset.attack_perfection * 9)),\n defense_pct1=int(round(moveset.defense_perfection * 9)),\n )\n\n # Use empty result for unsetting nickname\n # So original pokemon name will be shown to user\n if new_name == pokemon.name:\n new_name = ''\n\n # 12 is a max allowed length for the nickname\n return new_name[:MAXIMUM_NICKNAME_LENGTH]\n\n def attack_char(self, attack):\n # type: (Attack) -> string\n \"\"\"\n One character code for attack type\n If attack is good then character is uppecased, otherwise lowercased\n\n Type codes:\n\n Bug: 'B'\n Dark: 'K'\n Dragon: 'D'\n Electric: 'E'\n Fairy: 'Y'\n Fighting: 'T'\n Fire: 'F'\n Flying: 'L'\n Ghost: 'H'\n Grass: 'A'\n Ground: 'G'\n Ice: 'I'\n Normal: 'N'\n Poison: 'P'\n Psychic: 'C'\n Rock: 'R'\n Steel: 'S'\n Water: 'W'\n\n it's an effective way to represent type with one character\n if first char is unique - use it, in other case suitable substitute used\n \"\"\"\n char = attack.type.as_one_char.upper()\n if attack.rate_in_type < self.good_attack_threshold:\n char = char.lower()\n return char\n", "path": "pokemongo_bot/cell_workers/nickname_pokemon.py"}]} |
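The patch above fixes the crash by adding a UTF-8 coding declaration and `from __future__ import unicode_literals` to `nickname_pokemon.py`. A minimal sketch of the failure mode it removes, under Python 2.7 as shown in the log (the nickname value below is illustrative, not taken from the report):

```python
# -*- coding: utf-8 -*-
# Python 2.7 behaviour: str.format() on a byte-string template tries to
# encode unicode arguments with the default 'ascii' codec.
byte_template = 'Pokemon {old_name} renamed to {current_name}'
unicode_template = u'Pokemon {old_name} renamed to {current_name}'
nickname = u'ポッポ'  # non-ASCII pokemon name

try:
    byte_template.format(old_name=nickname, current_name=u'Pidgey_46')
except UnicodeEncodeError as exc:
    # e.g. UnicodeEncodeError: 'ascii' codec can't encode characters ...
    print('byte-string template fails: %s' % exc)

# With `from __future__ import unicode_literals`, every literal in the module
# (including the event format templates) is already unicode, so formatting
# stays in unicode and no implicit ASCII encoding happens.
print(unicode_template.format(old_name=nickname, current_name=u'Pidgey_46'))
```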
gh_patches_debug_51 | rasdani/github-patches | git_diff | pydantic__pydantic-1618 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NameError: name 'SchemaExtraCallable' is not defined
# Bug
https://github.com/pawamoy/pytkdocs/pull/41/checks?check_run_id=747827745
```
pydantic version: 1.5.1
pydantic compiled: False
install path: /home/pawamoy/.cache/pypoetry/virtualenvs/pytkdocs-LMVK1zAi-py3.7/lib/python3.7/site-packages/pydantic
python version: 3.7.5 (default, Apr 27 2020, 16:40:42) [GCC 9.3.0]
platform: Linux-5.6.15-arch1-1-x86_64-with-arch
optional deps. installed: ['typing-extensions']
```
```py
>>> import typing
>>> import pydantic
>>>
>>> class M(pydantic.BaseModel):
... a: int
...
>>> typing.get_type_hints(M.__config__)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pawamoy/.basher-packages/pyenv/pyenv/versions/3.7.5/lib/python3.7/typing.py", line 976, in get_type_hints
value = _eval_type(value, base_globals, localns)
File "/home/pawamoy/.basher-packages/pyenv/pyenv/versions/3.7.5/lib/python3.7/typing.py", line 265, in _eval_type
ev_args = tuple(_eval_type(a, globalns, localns) for a in t.__args__)
File "/home/pawamoy/.basher-packages/pyenv/pyenv/versions/3.7.5/lib/python3.7/typing.py", line 265, in <genexpr>
ev_args = tuple(_eval_type(a, globalns, localns) for a in t.__args__)
File "/home/pawamoy/.basher-packages/pyenv/pyenv/versions/3.7.5/lib/python3.7/typing.py", line 263, in _eval_type
return t._evaluate(globalns, localns)
File "/home/pawamoy/.basher-packages/pyenv/pyenv/versions/3.7.5/lib/python3.7/typing.py", line 467, in _evaluate
eval(self.__forward_code__, globalns, localns),
File "<string>", line 1, in <module>
NameError: name 'SchemaExtraCallable' is not defined
```
Now you could wonder, *"why are you doing this?"*, and you would be right to do so. Well, I'm writing a package that automatically introspects classes and all their members to output structured data in JSON (it's `pytkdocs`, used by `mkdocstrings` to bring autodoc to MkDocs, and `pytkdocs` tries to support Pydantic models).
I'm just reporting for the sake of it. Maybe there's an easy fix? Maybe it's a bug in Python's `typing`? Maybe it's expected because `SchemaExtraCallable` is a forward ref in this context?
Anyway, I'm catching the error for now, so it's fine if you want to close the issue :slightly_smiling_face:
--- END ISSUE ---
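The failure reduces to `typing.get_type_hints()` evaluating a string annotation whose target exists only under `typing.TYPE_CHECKING`: `BaseConfig.schema_extra` refers to `'SchemaExtraCallable'`, which is never defined at runtime. A minimal, pydantic-free sketch of that mechanism, plus the kind of defensive catch the reporter mentions (the class and helper names here are illustrative, not from pydantic or pytkdocs):

```python
from typing import TYPE_CHECKING, Optional, get_type_hints

if TYPE_CHECKING:
    class OnlyForTypeCheckers:  # visible to type checkers, absent at runtime
        ...


class Config:
    # String annotation pointing at a TYPE_CHECKING-only name, analogous to
    # BaseConfig.schema_extra referring to 'SchemaExtraCallable'.
    extra: 'Optional[OnlyForTypeCheckers]' = None


def safe_type_hints(obj):
    """Best-effort get_type_hints(): fall back to the raw __annotations__
    mapping when a forward reference cannot be resolved at runtime."""
    try:
        return get_type_hints(obj)
    except NameError:
        return dict(getattr(obj, '__annotations__', {}))


print(safe_type_hints(Config))  # {'extra': 'Optional[OnlyForTypeCheckers]'}
```

Inside pydantic itself, giving the forward-referenced name a runtime definition so that the annotation resolves would be one way to address it at the source.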
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pydantic/main.py`
Content:
```
1 import json
2 import sys
3 import warnings
4 from abc import ABCMeta
5 from copy import deepcopy
6 from enum import Enum
7 from functools import partial
8 from pathlib import Path
9 from types import FunctionType
10 from typing import (
11 TYPE_CHECKING,
12 AbstractSet,
13 Any,
14 Callable,
15 Dict,
16 List,
17 Mapping,
18 Optional,
19 Tuple,
20 Type,
21 TypeVar,
22 Union,
23 cast,
24 no_type_check,
25 overload,
26 )
27
28 from .class_validators import ROOT_KEY, ValidatorGroup, extract_root_validators, extract_validators, inherit_validators
29 from .error_wrappers import ErrorWrapper, ValidationError
30 from .errors import ConfigError, DictError, ExtraError, MissingError
31 from .fields import SHAPE_MAPPING, ModelField, Undefined
32 from .json import custom_pydantic_encoder, pydantic_encoder
33 from .parse import Protocol, load_file, load_str_bytes
34 from .schema import model_schema
35 from .types import PyObject, StrBytes
36 from .typing import AnyCallable, AnyType, ForwardRef, is_classvar, resolve_annotations, update_field_forward_refs
37 from .utils import (
38 ClassAttribute,
39 GetterDict,
40 Representation,
41 ValueItems,
42 generate_model_signature,
43 lenient_issubclass,
44 sequence_like,
45 validate_field_name,
46 )
47
48 if TYPE_CHECKING:
49 import typing_extensions
50 from inspect import Signature
51 from .class_validators import ValidatorListDict
52 from .types import ModelOrDc
53 from .typing import CallableGenerator, TupleGenerator, DictStrAny, DictAny, SetStr
54 from .typing import AbstractSetIntStr, MappingIntStrAny, ReprArgs # noqa: F401
55
56 ConfigType = Type['BaseConfig']
57 Model = TypeVar('Model', bound='BaseModel')
58
59 class SchemaExtraCallable(typing_extensions.Protocol):
60 @overload
61 def __call__(self, schema: Dict[str, Any]) -> None:
62 pass
63
64 @overload # noqa: F811
65 def __call__(self, schema: Dict[str, Any], model_class: Type['Model']) -> None: # noqa: F811
66 pass
67
68
69 try:
70 import cython # type: ignore
71 except ImportError:
72 compiled: bool = False
73 else: # pragma: no cover
74 try:
75 compiled = cython.compiled
76 except AttributeError:
77 compiled = False
78
79 __all__ = 'BaseConfig', 'BaseModel', 'Extra', 'compiled', 'create_model', 'validate_model'
80
81
82 class Extra(str, Enum):
83 allow = 'allow'
84 ignore = 'ignore'
85 forbid = 'forbid'
86
87
88 class BaseConfig:
89 title = None
90 anystr_strip_whitespace = False
91 min_anystr_length = None
92 max_anystr_length = None
93 validate_all = False
94 extra = Extra.ignore
95 allow_mutation = True
96 allow_population_by_field_name = False
97 use_enum_values = False
98 fields: Dict[str, Union[str, Dict[str, str]]] = {}
99 validate_assignment = False
100 error_msg_templates: Dict[str, str] = {}
101 arbitrary_types_allowed = False
102 orm_mode: bool = False
103 getter_dict: Type[GetterDict] = GetterDict
104 alias_generator: Optional[Callable[[str], str]] = None
105 keep_untouched: Tuple[type, ...] = ()
106 schema_extra: Union[Dict[str, Any], 'SchemaExtraCallable'] = {}
107 json_loads: Callable[[str], Any] = json.loads
108 json_dumps: Callable[..., str] = json.dumps
109 json_encoders: Dict[AnyType, AnyCallable] = {}
110
111 @classmethod
112 def get_field_info(cls, name: str) -> Dict[str, Any]:
113 fields_value = cls.fields.get(name)
114
115 if isinstance(fields_value, str):
116 field_info: Dict[str, Any] = {'alias': fields_value}
117 elif isinstance(fields_value, dict):
118 field_info = fields_value
119 else:
120 field_info = {}
121
122 if 'alias' in field_info:
123 field_info.setdefault('alias_priority', 2)
124
125 if field_info.get('alias_priority', 0) <= 1 and cls.alias_generator:
126 alias = cls.alias_generator(name)
127 if not isinstance(alias, str):
128 raise TypeError(f'Config.alias_generator must return str, not {alias.__class__}')
129 field_info.update(alias=alias, alias_priority=1)
130 return field_info
131
132 @classmethod
133 def prepare_field(cls, field: 'ModelField') -> None:
134 """
135 Optional hook to check or modify fields during model creation.
136 """
137 pass
138
139
140 def inherit_config(self_config: 'ConfigType', parent_config: 'ConfigType') -> 'ConfigType':
141 if not self_config:
142 base_classes = (parent_config,)
143 elif self_config == parent_config:
144 base_classes = (self_config,)
145 else:
146 base_classes = self_config, parent_config # type: ignore
147 return type('Config', base_classes, {})
148
149
150 EXTRA_LINK = 'https://pydantic-docs.helpmanual.io/usage/model_config/'
151
152
153 def prepare_config(config: Type[BaseConfig], cls_name: str) -> None:
154 if not isinstance(config.extra, Extra):
155 try:
156 config.extra = Extra(config.extra)
157 except ValueError:
158 raise ValueError(f'"{cls_name}": {config.extra} is not a valid value for "extra"')
159
160 if hasattr(config, 'allow_population_by_alias'):
161 warnings.warn(
162 f'{cls_name}: "allow_population_by_alias" is deprecated and replaced by "allow_population_by_field_name"',
163 DeprecationWarning,
164 )
165 config.allow_population_by_field_name = config.allow_population_by_alias # type: ignore
166
167 if hasattr(config, 'case_insensitive') and any('BaseSettings.Config' in c.__qualname__ for c in config.__mro__):
168 warnings.warn(
169 f'{cls_name}: "case_insensitive" is deprecated on BaseSettings config and replaced by '
170 f'"case_sensitive" (default False)',
171 DeprecationWarning,
172 )
173 config.case_sensitive = not config.case_insensitive # type: ignore
174
175
176 def is_valid_field(name: str) -> bool:
177 if not name.startswith('_'):
178 return True
179 return ROOT_KEY == name
180
181
182 def validate_custom_root_type(fields: Dict[str, ModelField]) -> None:
183 if len(fields) > 1:
184 raise ValueError('__root__ cannot be mixed with other fields')
185
186
187 UNTOUCHED_TYPES = FunctionType, property, type, classmethod, staticmethod
188
189 # Note `ModelMetaclass` refers to `BaseModel`, but is also used to *create* `BaseModel`, so we need to add this extra
190 # (somewhat hacky) boolean to keep track of whether we've created the `BaseModel` class yet, and therefore whether it's
191 # safe to refer to it. If it *hasn't* been created, we assume that the `__new__` call we're in the middle of is for
192 # the `BaseModel` class, since that's defined immediately after the metaclass.
193 _is_base_model_class_defined = False
194
195
196 class ModelMetaclass(ABCMeta):
197 @no_type_check # noqa C901
198 def __new__(mcs, name, bases, namespace, **kwargs): # noqa C901
199 fields: Dict[str, ModelField] = {}
200 config = BaseConfig
201 validators: 'ValidatorListDict' = {}
202 fields_defaults: Dict[str, Any] = {}
203
204 pre_root_validators, post_root_validators = [], []
205 for base in reversed(bases):
206 if _is_base_model_class_defined and issubclass(base, BaseModel) and base != BaseModel:
207 fields.update(deepcopy(base.__fields__))
208 config = inherit_config(base.__config__, config)
209 validators = inherit_validators(base.__validators__, validators)
210 pre_root_validators += base.__pre_root_validators__
211 post_root_validators += base.__post_root_validators__
212
213 config = inherit_config(namespace.get('Config'), config)
214 validators = inherit_validators(extract_validators(namespace), validators)
215 vg = ValidatorGroup(validators)
216
217 for f in fields.values():
218 if not f.required:
219 fields_defaults[f.name] = f.default
220
221 f.set_config(config)
222 extra_validators = vg.get_validators(f.name)
223 if extra_validators:
224 f.class_validators.update(extra_validators)
225 # re-run prepare to add extra validators
226 f.populate_validators()
227
228 prepare_config(config, name)
229
230 class_vars = set()
231 if (namespace.get('__module__'), namespace.get('__qualname__')) != ('pydantic.main', 'BaseModel'):
232 annotations = resolve_annotations(namespace.get('__annotations__', {}), namespace.get('__module__', None))
233 untouched_types = UNTOUCHED_TYPES + config.keep_untouched
234 # annotation only fields need to come first in fields
235 for ann_name, ann_type in annotations.items():
236 if is_classvar(ann_type):
237 class_vars.add(ann_name)
238 elif is_valid_field(ann_name):
239 validate_field_name(bases, ann_name)
240 value = namespace.get(ann_name, Undefined)
241 if (
242 isinstance(value, untouched_types)
243 and ann_type != PyObject
244 and not lenient_issubclass(getattr(ann_type, '__origin__', None), Type)
245 ):
246 continue
247 fields[ann_name] = inferred = ModelField.infer(
248 name=ann_name,
249 value=value,
250 annotation=ann_type,
251 class_validators=vg.get_validators(ann_name),
252 config=config,
253 )
254 if not inferred.required:
255 fields_defaults[ann_name] = inferred.default
256
257 for var_name, value in namespace.items():
258 if (
259 var_name not in annotations
260 and is_valid_field(var_name)
261 and not isinstance(value, untouched_types)
262 and var_name not in class_vars
263 ):
264 validate_field_name(bases, var_name)
265 inferred = ModelField.infer(
266 name=var_name,
267 value=value,
268 annotation=annotations.get(var_name),
269 class_validators=vg.get_validators(var_name),
270 config=config,
271 )
272 if var_name in fields and inferred.type_ != fields[var_name].type_:
273 raise TypeError(
274 f'The type of {name}.{var_name} differs from the new default value; '
275 f'if you wish to change the type of this field, please use a type annotation'
276 )
277 fields[var_name] = inferred
278 if not inferred.required:
279 fields_defaults[var_name] = inferred.default
280
281 _custom_root_type = ROOT_KEY in fields
282 if _custom_root_type:
283 validate_custom_root_type(fields)
284 vg.check_for_unused()
285 if config.json_encoders:
286 json_encoder = partial(custom_pydantic_encoder, config.json_encoders)
287 else:
288 json_encoder = pydantic_encoder
289 pre_rv_new, post_rv_new = extract_root_validators(namespace)
290 new_namespace = {
291 '__config__': config,
292 '__fields__': fields,
293 '__field_defaults__': fields_defaults,
294 '__validators__': vg.validators,
295 '__pre_root_validators__': pre_root_validators + pre_rv_new,
296 '__post_root_validators__': post_root_validators + post_rv_new,
297 '__schema_cache__': {},
298 '__json_encoder__': staticmethod(json_encoder),
299 '__custom_root_type__': _custom_root_type,
300 **{n: v for n, v in namespace.items() if n not in fields},
301 }
302
303 cls = super().__new__(mcs, name, bases, new_namespace, **kwargs)
304 # set __signature__ attr only for model class, but not for its instances
305 cls.__signature__ = ClassAttribute('__signature__', generate_model_signature(cls.__init__, fields, config))
306 return cls
307
308
309 class BaseModel(Representation, metaclass=ModelMetaclass):
310 if TYPE_CHECKING:
311 # populated by the metaclass, defined here to help IDEs only
312 __fields__: Dict[str, ModelField] = {}
313 __field_defaults__: Dict[str, Any] = {}
314 __validators__: Dict[str, AnyCallable] = {}
315 __pre_root_validators__: List[AnyCallable]
316 __post_root_validators__: List[Tuple[bool, AnyCallable]]
317 __config__: Type[BaseConfig] = BaseConfig
318 __root__: Any = None
319 __json_encoder__: Callable[[Any], Any] = lambda x: x
320 __schema_cache__: 'DictAny' = {}
321 __custom_root_type__: bool = False
322 __signature__: 'Signature'
323
324 Config = BaseConfig
325 __slots__ = ('__dict__', '__fields_set__')
326 __doc__ = '' # Null out the Representation docstring
327
328 def __init__(__pydantic_self__, **data: Any) -> None:
329 """
330 Create a new model by parsing and validating input data from keyword arguments.
331
332 Raises ValidationError if the input data cannot be parsed to form a valid model.
333 """
334 # Uses something other than `self` the first arg to allow "self" as a settable attribute
335 if TYPE_CHECKING:
336 __pydantic_self__.__dict__: Dict[str, Any] = {}
337 __pydantic_self__.__fields_set__: 'SetStr' = set()
338 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
339 if validation_error:
340 raise validation_error
341 object.__setattr__(__pydantic_self__, '__dict__', values)
342 object.__setattr__(__pydantic_self__, '__fields_set__', fields_set)
343
344 @no_type_check
345 def __setattr__(self, name, value):
346 if self.__config__.extra is not Extra.allow and name not in self.__fields__:
347 raise ValueError(f'"{self.__class__.__name__}" object has no field "{name}"')
348 elif not self.__config__.allow_mutation:
349 raise TypeError(f'"{self.__class__.__name__}" is immutable and does not support item assignment')
350 elif self.__config__.validate_assignment:
351 known_field = self.__fields__.get(name, None)
352 if known_field:
353 value, error_ = known_field.validate(value, self.dict(exclude={name}), loc=name, cls=self.__class__)
354 if error_:
355 raise ValidationError([error_], self.__class__)
356 self.__dict__[name] = value
357 self.__fields_set__.add(name)
358
359 def __getstate__(self) -> 'DictAny':
360 return {'__dict__': self.__dict__, '__fields_set__': self.__fields_set__}
361
362 def __setstate__(self, state: 'DictAny') -> None:
363 object.__setattr__(self, '__dict__', state['__dict__'])
364 object.__setattr__(self, '__fields_set__', state['__fields_set__'])
365
366 def dict(
367 self,
368 *,
369 include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,
370 exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,
371 by_alias: bool = False,
372 skip_defaults: bool = None,
373 exclude_unset: bool = False,
374 exclude_defaults: bool = False,
375 exclude_none: bool = False,
376 ) -> 'DictStrAny':
377 """
378 Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
379
380 """
381 if skip_defaults is not None:
382 warnings.warn(
383 f'{self.__class__.__name__}.dict(): "skip_defaults" is deprecated and replaced by "exclude_unset"',
384 DeprecationWarning,
385 )
386 exclude_unset = skip_defaults
387
388 return dict(
389 self._iter(
390 to_dict=True,
391 by_alias=by_alias,
392 include=include,
393 exclude=exclude,
394 exclude_unset=exclude_unset,
395 exclude_defaults=exclude_defaults,
396 exclude_none=exclude_none,
397 )
398 )
399
400 def json(
401 self,
402 *,
403 include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,
404 exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,
405 by_alias: bool = False,
406 skip_defaults: bool = None,
407 exclude_unset: bool = False,
408 exclude_defaults: bool = False,
409 exclude_none: bool = False,
410 encoder: Optional[Callable[[Any], Any]] = None,
411 **dumps_kwargs: Any,
412 ) -> str:
413 """
414 Generate a JSON representation of the model, `include` and `exclude` arguments as per `dict()`.
415
416 `encoder` is an optional function to supply as `default` to json.dumps(), other arguments as per `json.dumps()`.
417 """
418 if skip_defaults is not None:
419 warnings.warn(
420 f'{self.__class__.__name__}.json(): "skip_defaults" is deprecated and replaced by "exclude_unset"',
421 DeprecationWarning,
422 )
423 exclude_unset = skip_defaults
424 encoder = cast(Callable[[Any], Any], encoder or self.__json_encoder__)
425 data = self.dict(
426 include=include,
427 exclude=exclude,
428 by_alias=by_alias,
429 exclude_unset=exclude_unset,
430 exclude_defaults=exclude_defaults,
431 exclude_none=exclude_none,
432 )
433 if self.__custom_root_type__:
434 data = data[ROOT_KEY]
435 return self.__config__.json_dumps(data, default=encoder, **dumps_kwargs)
436
437 @classmethod
438 def parse_obj(cls: Type['Model'], obj: Any) -> 'Model':
439 if cls.__custom_root_type__ and (
440 not (isinstance(obj, dict) and obj.keys() == {ROOT_KEY}) or cls.__fields__[ROOT_KEY].shape == SHAPE_MAPPING
441 ):
442 obj = {ROOT_KEY: obj}
443 elif not isinstance(obj, dict):
444 try:
445 obj = dict(obj)
446 except (TypeError, ValueError) as e:
447 exc = TypeError(f'{cls.__name__} expected dict not {obj.__class__.__name__}')
448 raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e
449 return cls(**obj)
450
451 @classmethod
452 def parse_raw(
453 cls: Type['Model'],
454 b: StrBytes,
455 *,
456 content_type: str = None,
457 encoding: str = 'utf8',
458 proto: Protocol = None,
459 allow_pickle: bool = False,
460 ) -> 'Model':
461 try:
462 obj = load_str_bytes(
463 b,
464 proto=proto,
465 content_type=content_type,
466 encoding=encoding,
467 allow_pickle=allow_pickle,
468 json_loads=cls.__config__.json_loads,
469 )
470 except (ValueError, TypeError, UnicodeDecodeError) as e:
471 raise ValidationError([ErrorWrapper(e, loc=ROOT_KEY)], cls)
472 return cls.parse_obj(obj)
473
474 @classmethod
475 def parse_file(
476 cls: Type['Model'],
477 path: Union[str, Path],
478 *,
479 content_type: str = None,
480 encoding: str = 'utf8',
481 proto: Protocol = None,
482 allow_pickle: bool = False,
483 ) -> 'Model':
484 obj = load_file(
485 path,
486 proto=proto,
487 content_type=content_type,
488 encoding=encoding,
489 allow_pickle=allow_pickle,
490 json_loads=cls.__config__.json_loads,
491 )
492 return cls.parse_obj(obj)
493
494 @classmethod
495 def from_orm(cls: Type['Model'], obj: Any) -> 'Model':
496 if not cls.__config__.orm_mode:
497 raise ConfigError('You must have the config attribute orm_mode=True to use from_orm')
498 obj = cls._decompose_class(obj)
499 m = cls.__new__(cls)
500 values, fields_set, validation_error = validate_model(cls, obj)
501 if validation_error:
502 raise validation_error
503 object.__setattr__(m, '__dict__', values)
504 object.__setattr__(m, '__fields_set__', fields_set)
505 return m
506
507 @classmethod
508 def construct(cls: Type['Model'], _fields_set: Optional['SetStr'] = None, **values: Any) -> 'Model':
509 """
510 Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
511 Default values are respected, but no other validation is performed.
512 """
513 m = cls.__new__(cls)
514 object.__setattr__(m, '__dict__', {**deepcopy(cls.__field_defaults__), **values})
515 if _fields_set is None:
516 _fields_set = set(values.keys())
517 object.__setattr__(m, '__fields_set__', _fields_set)
518 return m
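
    # Illustrative usage sketch (hypothetical `User` model with fields
    # `id: int` and `name: str = 'Jane'`):
    #
    #     u = User.construct(id=42)   # no validation is run
    #     u.name                      # 'Jane' -- defaults are respected
    #     u.__fields_set__            # {'id'}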
519
520 def copy(
521 self: 'Model',
522 *,
523 include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,
524 exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,
525 update: 'DictStrAny' = None,
526 deep: bool = False,
527 ) -> 'Model':
528 """
529 Duplicate a model, optionally choose which fields to include, exclude and change.
530
531 :param include: fields to include in new model
532 :param exclude: fields to exclude from new model, as with values this takes precedence over include
533 :param update: values to change/add in the new model. Note: the data is not validated before creating
534 the new model: you should trust this data
535 :param deep: set to `True` to make a deep copy of the model
536 :return: new model instance
537 """
538
539 v = dict(
540 self._iter(to_dict=False, by_alias=False, include=include, exclude=exclude, exclude_unset=False),
541 **(update or {}),
542 )
543
544 if deep:
545 v = deepcopy(v)
546
547 cls = self.__class__
548 m = cls.__new__(cls)
549 object.__setattr__(m, '__dict__', v)
550 object.__setattr__(m, '__fields_set__', self.__fields_set__.copy())
551 return m
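
    # Illustrative usage sketch (hypothetical instance `u` with a `name` field):
    #
    #     u2 = u.copy(update={'name': 'Joe'}, deep=True)
    #     u2 is not u    # True -- a new instance is returned
    #     u2.name        # 'Joe' -- merged in without validation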
552
553 @classmethod
554 def schema(cls, by_alias: bool = True) -> 'DictStrAny':
555 cached = cls.__schema_cache__.get(by_alias)
556 if cached is not None:
557 return cached
558 s = model_schema(cls, by_alias=by_alias)
559 cls.__schema_cache__[by_alias] = s
560 return s
561
562 @classmethod
563 def schema_json(cls, *, by_alias: bool = True, **dumps_kwargs: Any) -> str:
564 from .json import pydantic_encoder
565
566 return cls.__config__.json_dumps(cls.schema(by_alias=by_alias), default=pydantic_encoder, **dumps_kwargs)
567
568 @classmethod
569 def __get_validators__(cls) -> 'CallableGenerator':
570 yield cls.validate
571
572 @classmethod
573 def validate(cls: Type['Model'], value: Any) -> 'Model':
574 if isinstance(value, dict):
575 return cls(**value)
576 elif isinstance(value, cls):
577 return value.copy()
578 elif cls.__config__.orm_mode:
579 return cls.from_orm(value)
580 elif cls.__custom_root_type__:
581 return cls.parse_obj(value)
582 else:
583 try:
584 value_as_dict = dict(value)
585 except (TypeError, ValueError) as e:
586 raise DictError() from e
587 return cls(**value_as_dict)
588
589 @classmethod
590 def _decompose_class(cls: Type['Model'], obj: Any) -> GetterDict:
591 return cls.__config__.getter_dict(obj)
592
593 @classmethod
594 @no_type_check
595 def _get_value(
596 cls,
597 v: Any,
598 to_dict: bool,
599 by_alias: bool,
600 include: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],
601 exclude: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],
602 exclude_unset: bool,
603 exclude_defaults: bool,
604 exclude_none: bool,
605 ) -> Any:
606
607 if isinstance(v, BaseModel):
608 if to_dict:
609 return v.dict(
610 by_alias=by_alias,
611 exclude_unset=exclude_unset,
612 exclude_defaults=exclude_defaults,
613 include=include,
614 exclude=exclude,
615 exclude_none=exclude_none,
616 )
617 else:
618 return v.copy(include=include, exclude=exclude)
619
620 value_exclude = ValueItems(v, exclude) if exclude else None
621 value_include = ValueItems(v, include) if include else None
622
623 if isinstance(v, dict):
624 return {
625 k_: cls._get_value(
626 v_,
627 to_dict=to_dict,
628 by_alias=by_alias,
629 exclude_unset=exclude_unset,
630 exclude_defaults=exclude_defaults,
631 include=value_include and value_include.for_element(k_),
632 exclude=value_exclude and value_exclude.for_element(k_),
633 exclude_none=exclude_none,
634 )
635 for k_, v_ in v.items()
636 if (not value_exclude or not value_exclude.is_excluded(k_))
637 and (not value_include or value_include.is_included(k_))
638 }
639
640 elif sequence_like(v):
641 return v.__class__(
642 cls._get_value(
643 v_,
644 to_dict=to_dict,
645 by_alias=by_alias,
646 exclude_unset=exclude_unset,
647 exclude_defaults=exclude_defaults,
648 include=value_include and value_include.for_element(i),
649 exclude=value_exclude and value_exclude.for_element(i),
650 exclude_none=exclude_none,
651 )
652 for i, v_ in enumerate(v)
653 if (not value_exclude or not value_exclude.is_excluded(i))
654 and (not value_include or value_include.is_included(i))
655 )
656
657 else:
658 return v
659
660 @classmethod
661 def update_forward_refs(cls, **localns: Any) -> None:
662 """
663 Try to update ForwardRefs on fields based on this Model, globalns and localns.
664 """
665 globalns = sys.modules[cls.__module__].__dict__.copy()
666 globalns.setdefault(cls.__name__, cls)
667 for f in cls.__fields__.values():
668 update_field_forward_refs(f, globalns=globalns, localns=localns)
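
    # Illustrative usage sketch (hypothetical self-referencing model):
    #
    #     class Node(BaseModel):
    #         value: int
    #         children: List['Node'] = []
    #
    #     Node.update_forward_refs()   # resolves the 'Node' forward reference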
669
670 def __iter__(self) -> 'TupleGenerator':
671 """
672 so `dict(model)` works
673 """
674 yield from self.__dict__.items()
675
676 def _iter(
677 self,
678 to_dict: bool = False,
679 by_alias: bool = False,
680 include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,
681 exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,
682 exclude_unset: bool = False,
683 exclude_defaults: bool = False,
684 exclude_none: bool = False,
685 ) -> 'TupleGenerator':
686
687 allowed_keys = self._calculate_keys(include=include, exclude=exclude, exclude_unset=exclude_unset)
688 if allowed_keys is None and not (to_dict or by_alias or exclude_unset or exclude_defaults or exclude_none):
689 # huge boost for plain _iter()
690 yield from self.__dict__.items()
691 return
692
693 value_exclude = ValueItems(self, exclude) if exclude else None
694 value_include = ValueItems(self, include) if include else None
695
696 for field_key, v in self.__dict__.items():
697 if (
698 (allowed_keys is not None and field_key not in allowed_keys)
699 or (exclude_none and v is None)
700 or (exclude_defaults and self.__field_defaults__.get(field_key, _missing) == v)
701 ):
702 continue
703 if by_alias and field_key in self.__fields__:
704 dict_key = self.__fields__[field_key].alias
705 else:
706 dict_key = field_key
707 if to_dict or value_include or value_exclude:
708 v = self._get_value(
709 v,
710 to_dict=to_dict,
711 by_alias=by_alias,
712 include=value_include and value_include.for_element(field_key),
713 exclude=value_exclude and value_exclude.for_element(field_key),
714 exclude_unset=exclude_unset,
715 exclude_defaults=exclude_defaults,
716 exclude_none=exclude_none,
717 )
718 yield dict_key, v
719
720 def _calculate_keys(
721 self,
722 include: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],
723 exclude: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],
724 exclude_unset: bool,
725 update: Optional['DictStrAny'] = None,
726 ) -> Optional[AbstractSet[str]]:
727 if include is None and exclude is None and exclude_unset is False:
728 return None
729
730 keys: AbstractSet[str]
731 if exclude_unset:
732 keys = self.__fields_set__.copy()
733 else:
734 keys = self.__dict__.keys()
735
736 if include is not None:
737 if isinstance(include, Mapping):
738 keys &= include.keys()
739 else:
740 keys &= include
741
742 if update:
743 keys -= update.keys()
744
745 if exclude:
746 if isinstance(exclude, Mapping):
747 keys -= {k for k, v in exclude.items() if v is ...}
748 else:
749 keys -= exclude
750
751 return keys
752
753 def __eq__(self, other: Any) -> bool:
754 if isinstance(other, BaseModel):
755 return self.dict() == other.dict()
756 else:
757 return self.dict() == other
758
759 def __repr_args__(self) -> 'ReprArgs':
760 return self.__dict__.items() # type: ignore
761
762 @property
763 def fields(self) -> Dict[str, ModelField]:
764 warnings.warn('`fields` attribute is deprecated, use `__fields__` instead', DeprecationWarning)
765 return self.__fields__
766
767 def to_string(self, pretty: bool = False) -> str:
768 warnings.warn('`model.to_string()` method is deprecated, use `str(model)` instead', DeprecationWarning)
769 return str(self)
770
771 @property
772 def __values__(self) -> 'DictStrAny':
773 warnings.warn('`__values__` attribute is deprecated, use `__dict__` instead', DeprecationWarning)
774 return self.__dict__
775
776
777 _is_base_model_class_defined = True
778
779
780 def create_model(
781 __model_name: str,
782 *,
783 __config__: Type[BaseConfig] = None,
784 __base__: Type[BaseModel] = None,
785 __module__: Optional[str] = None,
786 __validators__: Dict[str, classmethod] = None,
787 **field_definitions: Any,
788 ) -> Type[BaseModel]:
789 """
790 Dynamically create a model.
791 :param __model_name: name of the created model
792 :param __config__: config class to use for the new model
793 :param __base__: base class for the new model to inherit from
794 :param __validators__: a dict of method names and @validator class methods
795 :param **field_definitions: fields of the model (or extra fields if a base is supplied) in the format
796         `<name>=(<type>, <default value>)` or `<name>=<default value>`, e.g. `foobar=(str, ...)` or `foobar=123`
797 """
798 if __base__:
799 if __config__ is not None:
800 raise ConfigError('to avoid confusion __config__ and __base__ cannot be used together')
801 else:
802 __base__ = BaseModel
803
804 fields = {}
805 annotations = {}
806
807 for f_name, f_def in field_definitions.items():
808 if not is_valid_field(f_name):
809 warnings.warn(f'fields may not start with an underscore, ignoring "{f_name}"', RuntimeWarning)
810 if isinstance(f_def, tuple):
811 try:
812 f_annotation, f_value = f_def
813 except ValueError as e:
814 raise ConfigError(
815 'field definitions should either be a tuple of (<type>, <default>) or just a '
816 'default value, unfortunately this means tuples as '
817 'default values are not allowed'
818 ) from e
819 else:
820 f_annotation, f_value = None, f_def
821
822 if f_annotation:
823 annotations[f_name] = f_annotation
824 fields[f_name] = f_value
825
826 namespace: 'DictStrAny' = {'__annotations__': annotations, '__module__': __module__}
827 if __validators__:
828 namespace.update(__validators__)
829 namespace.update(fields)
830 if __config__:
831 namespace['Config'] = inherit_config(__config__, BaseConfig)
832
833 return type(__model_name, (__base__,), namespace)
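
# Illustrative usage sketch (hypothetical names): `foo` becomes a required `str`
# field, `bar` defaults to 123 with its type inferred as `int`:
#
#     Foo = create_model('Foo', foo=(str, ...), bar=123)
#     Foo(foo='hello').bar   # 123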
834
835
836 _missing = object()
837
838
839 def validate_model( # noqa: C901 (ignore complexity)
840 model: Type[BaseModel], input_data: 'DictStrAny', cls: 'ModelOrDc' = None
841 ) -> Tuple['DictStrAny', 'SetStr', Optional[ValidationError]]:
842 """
843 validate data against a model.
844 """
845 values = {}
846 errors = []
847 # input_data names, possibly alias
848 names_used = set()
849 # field names, never aliases
850 fields_set = set()
851 config = model.__config__
852 check_extra = config.extra is not Extra.ignore
853 cls_ = cls or model
854
855 for validator in model.__pre_root_validators__:
856 try:
857 input_data = validator(cls_, input_data)
858 except (ValueError, TypeError, AssertionError) as exc:
859 return {}, set(), ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls_)
860
861 for name, field in model.__fields__.items():
862 if field.type_.__class__ == ForwardRef:
863 raise ConfigError(
864 f'field "{field.name}" not yet prepared so type is still a ForwardRef, '
865 f'you might need to call {cls_.__name__}.update_forward_refs().'
866 )
867
868 value = input_data.get(field.alias, _missing)
869 using_name = False
870 if value is _missing and config.allow_population_by_field_name and field.alt_alias:
871 value = input_data.get(field.name, _missing)
872 using_name = True
873
874 if value is _missing:
875 if field.required:
876 errors.append(ErrorWrapper(MissingError(), loc=field.alias))
877 continue
878
879 value = field.get_default()
880
881 if not config.validate_all and not field.validate_always:
882 values[name] = value
883 continue
884 else:
885 fields_set.add(name)
886 if check_extra:
887 names_used.add(field.name if using_name else field.alias)
888
889 v_, errors_ = field.validate(value, values, loc=field.alias, cls=cls_)
890 if isinstance(errors_, ErrorWrapper):
891 errors.append(errors_)
892 elif isinstance(errors_, list):
893 errors.extend(errors_)
894 else:
895 values[name] = v_
896
897 if check_extra:
898 if isinstance(input_data, GetterDict):
899 extra = input_data.extra_keys() - names_used
900 else:
901 extra = input_data.keys() - names_used
902 if extra:
903 fields_set |= extra
904 if config.extra is Extra.allow:
905 for f in extra:
906 values[f] = input_data[f]
907 else:
908 for f in sorted(extra):
909 errors.append(ErrorWrapper(ExtraError(), loc=f))
910
911 for skip_on_failure, validator in model.__post_root_validators__:
912 if skip_on_failure and errors:
913 continue
914 try:
915 values = validator(cls_, values)
916 except (ValueError, TypeError, AssertionError) as exc:
917 errors.append(ErrorWrapper(exc, loc=ROOT_KEY))
918 break
919
920 if errors:
921 return values, fields_set, ValidationError(errors, cls_)
922 else:
923 return values, fields_set, None
924
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pydantic/main.py b/pydantic/main.py
--- a/pydantic/main.py
+++ b/pydantic/main.py
@@ -66,6 +66,10 @@
 pass
 
 
+else:
+ SchemaExtraCallable = Callable[..., None]
+
+
try:
import cython # type: ignore
except ImportError:
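
A quick way to sanity-check the patch is the reproduction from the issue itself: once the module-level alias exists, `typing.get_type_hints` can resolve the `'SchemaExtraCallable'` forward reference on the model config. The snippet below is a minimal sketch and assumes a pydantic checkout that includes this patch:

```python
import typing

import pydantic


class M(pydantic.BaseModel):
    a: int


# Before the patch this raised "NameError: name 'SchemaExtraCallable' is not defined";
# with the runtime alias in place the hints resolve normally.
print(typing.get_type_hints(M.__config__))
```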
| {"golden_diff": "diff --git a/pydantic/main.py b/pydantic/main.py\n--- a/pydantic/main.py\n+++ b/pydantic/main.py\n@@ -66,6 +66,10 @@\n pass\n \n \n+else:\n+ SchemaExtraCallable = Callable[..., None]\n+\n+\n try:\n import cython # type: ignore\n except ImportError:\n", "issue": "NameError: name 'SchemaExtraCallable' is not defined\n# Bug\r\n\r\nhttps://github.com/pawamoy/pytkdocs/pull/41/checks?check_run_id=747827745\r\n\r\n```\r\n pydantic version: 1.5.1\r\n pydantic compiled: False\r\n install path: /home/pawamoy/.cache/pypoetry/virtualenvs/pytkdocs-LMVK1zAi-py3.7/lib/python3.7/site-packages/pydantic\r\n python version: 3.7.5 (default, Apr 27 2020, 16:40:42) [GCC 9.3.0]\r\n platform: Linux-5.6.15-arch1-1-x86_64-with-arch\r\n optional deps. installed: ['typing-extensions']\r\n```\r\n\r\n```py\r\n>>> import typing\r\n>>> import pydantic\r\n>>> \r\n>>> class M(pydantic.BaseModel):\r\n... a: int\r\n... \r\n>>> typing.get_type_hints(M.__config__)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/pawamoy/.basher-packages/pyenv/pyenv/versions/3.7.5/lib/python3.7/typing.py\", line 976, in get_type_hints\r\n value = _eval_type(value, base_globals, localns)\r\n File \"/home/pawamoy/.basher-packages/pyenv/pyenv/versions/3.7.5/lib/python3.7/typing.py\", line 265, in _eval_type\r\n ev_args = tuple(_eval_type(a, globalns, localns) for a in t.__args__)\r\n File \"/home/pawamoy/.basher-packages/pyenv/pyenv/versions/3.7.5/lib/python3.7/typing.py\", line 265, in <genexpr>\r\n ev_args = tuple(_eval_type(a, globalns, localns) for a in t.__args__)\r\n File \"/home/pawamoy/.basher-packages/pyenv/pyenv/versions/3.7.5/lib/python3.7/typing.py\", line 263, in _eval_type\r\n return t._evaluate(globalns, localns)\r\n File \"/home/pawamoy/.basher-packages/pyenv/pyenv/versions/3.7.5/lib/python3.7/typing.py\", line 467, in _evaluate\r\n eval(self.__forward_code__, globalns, localns),\r\n File \"<string>\", line 1, in <module>\r\nNameError: name 'SchemaExtraCallable' is not defined\r\n```\r\n\r\nNow you could wonder, *\"why are you doing this?\"*, and you would be right to do so. Well, I'm writing a package that automatically introspect classes and all their members to output structured data in JSON (it's `pytkdocs`, used by `mkdocstrings` to bring autodoc for MkDocs, and `pytkdocs` tries to support Pydantic models).\r\n\r\nI'm just reporting for the sake of it. Maybe there's an easy fix? Maybe it's a bug in Python's `typing`? 
Maybe it's expected because `SchemaExtraCallable` is a forward ref in this context?\r\n\r\nAnyway, I'm catching the error for now, so it's fine if you want to close the issue :slightly_smiling_face: \n", "before_files": [{"content": "import json\nimport sys\nimport warnings\nfrom abc import ABCMeta\nfrom copy import deepcopy\nfrom enum import Enum\nfrom functools import partial\nfrom pathlib import Path\nfrom types import FunctionType\nfrom typing import (\n TYPE_CHECKING,\n AbstractSet,\n Any,\n Callable,\n Dict,\n List,\n Mapping,\n Optional,\n Tuple,\n Type,\n TypeVar,\n Union,\n cast,\n no_type_check,\n overload,\n)\n\nfrom .class_validators import ROOT_KEY, ValidatorGroup, extract_root_validators, extract_validators, inherit_validators\nfrom .error_wrappers import ErrorWrapper, ValidationError\nfrom .errors import ConfigError, DictError, ExtraError, MissingError\nfrom .fields import SHAPE_MAPPING, ModelField, Undefined\nfrom .json import custom_pydantic_encoder, pydantic_encoder\nfrom .parse import Protocol, load_file, load_str_bytes\nfrom .schema import model_schema\nfrom .types import PyObject, StrBytes\nfrom .typing import AnyCallable, AnyType, ForwardRef, is_classvar, resolve_annotations, update_field_forward_refs\nfrom .utils import (\n ClassAttribute,\n GetterDict,\n Representation,\n ValueItems,\n generate_model_signature,\n lenient_issubclass,\n sequence_like,\n validate_field_name,\n)\n\nif TYPE_CHECKING:\n import typing_extensions\n from inspect import Signature\n from .class_validators import ValidatorListDict\n from .types import ModelOrDc\n from .typing import CallableGenerator, TupleGenerator, DictStrAny, DictAny, SetStr\n from .typing import AbstractSetIntStr, MappingIntStrAny, ReprArgs # noqa: F401\n\n ConfigType = Type['BaseConfig']\n Model = TypeVar('Model', bound='BaseModel')\n\n class SchemaExtraCallable(typing_extensions.Protocol):\n @overload\n def __call__(self, schema: Dict[str, Any]) -> None:\n pass\n\n @overload # noqa: F811\n def __call__(self, schema: Dict[str, Any], model_class: Type['Model']) -> None: # noqa: F811\n pass\n\n\ntry:\n import cython # type: ignore\nexcept ImportError:\n compiled: bool = False\nelse: # pragma: no cover\n try:\n compiled = cython.compiled\n except AttributeError:\n compiled = False\n\n__all__ = 'BaseConfig', 'BaseModel', 'Extra', 'compiled', 'create_model', 'validate_model'\n\n\nclass Extra(str, Enum):\n allow = 'allow'\n ignore = 'ignore'\n forbid = 'forbid'\n\n\nclass BaseConfig:\n title = None\n anystr_strip_whitespace = False\n min_anystr_length = None\n max_anystr_length = None\n validate_all = False\n extra = Extra.ignore\n allow_mutation = True\n allow_population_by_field_name = False\n use_enum_values = False\n fields: Dict[str, Union[str, Dict[str, str]]] = {}\n validate_assignment = False\n error_msg_templates: Dict[str, str] = {}\n arbitrary_types_allowed = False\n orm_mode: bool = False\n getter_dict: Type[GetterDict] = GetterDict\n alias_generator: Optional[Callable[[str], str]] = None\n keep_untouched: Tuple[type, ...] 
= ()\n schema_extra: Union[Dict[str, Any], 'SchemaExtraCallable'] = {}\n json_loads: Callable[[str], Any] = json.loads\n json_dumps: Callable[..., str] = json.dumps\n json_encoders: Dict[AnyType, AnyCallable] = {}\n\n @classmethod\n def get_field_info(cls, name: str) -> Dict[str, Any]:\n fields_value = cls.fields.get(name)\n\n if isinstance(fields_value, str):\n field_info: Dict[str, Any] = {'alias': fields_value}\n elif isinstance(fields_value, dict):\n field_info = fields_value\n else:\n field_info = {}\n\n if 'alias' in field_info:\n field_info.setdefault('alias_priority', 2)\n\n if field_info.get('alias_priority', 0) <= 1 and cls.alias_generator:\n alias = cls.alias_generator(name)\n if not isinstance(alias, str):\n raise TypeError(f'Config.alias_generator must return str, not {alias.__class__}')\n field_info.update(alias=alias, alias_priority=1)\n return field_info\n\n @classmethod\n def prepare_field(cls, field: 'ModelField') -> None:\n \"\"\"\n Optional hook to check or modify fields during model creation.\n \"\"\"\n pass\n\n\ndef inherit_config(self_config: 'ConfigType', parent_config: 'ConfigType') -> 'ConfigType':\n if not self_config:\n base_classes = (parent_config,)\n elif self_config == parent_config:\n base_classes = (self_config,)\n else:\n base_classes = self_config, parent_config # type: ignore\n return type('Config', base_classes, {})\n\n\nEXTRA_LINK = 'https://pydantic-docs.helpmanual.io/usage/model_config/'\n\n\ndef prepare_config(config: Type[BaseConfig], cls_name: str) -> None:\n if not isinstance(config.extra, Extra):\n try:\n config.extra = Extra(config.extra)\n except ValueError:\n raise ValueError(f'\"{cls_name}\": {config.extra} is not a valid value for \"extra\"')\n\n if hasattr(config, 'allow_population_by_alias'):\n warnings.warn(\n f'{cls_name}: \"allow_population_by_alias\" is deprecated and replaced by \"allow_population_by_field_name\"',\n DeprecationWarning,\n )\n config.allow_population_by_field_name = config.allow_population_by_alias # type: ignore\n\n if hasattr(config, 'case_insensitive') and any('BaseSettings.Config' in c.__qualname__ for c in config.__mro__):\n warnings.warn(\n f'{cls_name}: \"case_insensitive\" is deprecated on BaseSettings config and replaced by '\n f'\"case_sensitive\" (default False)',\n DeprecationWarning,\n )\n config.case_sensitive = not config.case_insensitive # type: ignore\n\n\ndef is_valid_field(name: str) -> bool:\n if not name.startswith('_'):\n return True\n return ROOT_KEY == name\n\n\ndef validate_custom_root_type(fields: Dict[str, ModelField]) -> None:\n if len(fields) > 1:\n raise ValueError('__root__ cannot be mixed with other fields')\n\n\nUNTOUCHED_TYPES = FunctionType, property, type, classmethod, staticmethod\n\n# Note `ModelMetaclass` refers to `BaseModel`, but is also used to *create* `BaseModel`, so we need to add this extra\n# (somewhat hacky) boolean to keep track of whether we've created the `BaseModel` class yet, and therefore whether it's\n# safe to refer to it. 
If it *hasn't* been created, we assume that the `__new__` call we're in the middle of is for\n# the `BaseModel` class, since that's defined immediately after the metaclass.\n_is_base_model_class_defined = False\n\n\nclass ModelMetaclass(ABCMeta):\n @no_type_check # noqa C901\n def __new__(mcs, name, bases, namespace, **kwargs): # noqa C901\n fields: Dict[str, ModelField] = {}\n config = BaseConfig\n validators: 'ValidatorListDict' = {}\n fields_defaults: Dict[str, Any] = {}\n\n pre_root_validators, post_root_validators = [], []\n for base in reversed(bases):\n if _is_base_model_class_defined and issubclass(base, BaseModel) and base != BaseModel:\n fields.update(deepcopy(base.__fields__))\n config = inherit_config(base.__config__, config)\n validators = inherit_validators(base.__validators__, validators)\n pre_root_validators += base.__pre_root_validators__\n post_root_validators += base.__post_root_validators__\n\n config = inherit_config(namespace.get('Config'), config)\n validators = inherit_validators(extract_validators(namespace), validators)\n vg = ValidatorGroup(validators)\n\n for f in fields.values():\n if not f.required:\n fields_defaults[f.name] = f.default\n\n f.set_config(config)\n extra_validators = vg.get_validators(f.name)\n if extra_validators:\n f.class_validators.update(extra_validators)\n # re-run prepare to add extra validators\n f.populate_validators()\n\n prepare_config(config, name)\n\n class_vars = set()\n if (namespace.get('__module__'), namespace.get('__qualname__')) != ('pydantic.main', 'BaseModel'):\n annotations = resolve_annotations(namespace.get('__annotations__', {}), namespace.get('__module__', None))\n untouched_types = UNTOUCHED_TYPES + config.keep_untouched\n # annotation only fields need to come first in fields\n for ann_name, ann_type in annotations.items():\n if is_classvar(ann_type):\n class_vars.add(ann_name)\n elif is_valid_field(ann_name):\n validate_field_name(bases, ann_name)\n value = namespace.get(ann_name, Undefined)\n if (\n isinstance(value, untouched_types)\n and ann_type != PyObject\n and not lenient_issubclass(getattr(ann_type, '__origin__', None), Type)\n ):\n continue\n fields[ann_name] = inferred = ModelField.infer(\n name=ann_name,\n value=value,\n annotation=ann_type,\n class_validators=vg.get_validators(ann_name),\n config=config,\n )\n if not inferred.required:\n fields_defaults[ann_name] = inferred.default\n\n for var_name, value in namespace.items():\n if (\n var_name not in annotations\n and is_valid_field(var_name)\n and not isinstance(value, untouched_types)\n and var_name not in class_vars\n ):\n validate_field_name(bases, var_name)\n inferred = ModelField.infer(\n name=var_name,\n value=value,\n annotation=annotations.get(var_name),\n class_validators=vg.get_validators(var_name),\n config=config,\n )\n if var_name in fields and inferred.type_ != fields[var_name].type_:\n raise TypeError(\n f'The type of {name}.{var_name} differs from the new default value; '\n f'if you wish to change the type of this field, please use a type annotation'\n )\n fields[var_name] = inferred\n if not inferred.required:\n fields_defaults[var_name] = inferred.default\n\n _custom_root_type = ROOT_KEY in fields\n if _custom_root_type:\n validate_custom_root_type(fields)\n vg.check_for_unused()\n if config.json_encoders:\n json_encoder = partial(custom_pydantic_encoder, config.json_encoders)\n else:\n json_encoder = pydantic_encoder\n pre_rv_new, post_rv_new = extract_root_validators(namespace)\n new_namespace = {\n '__config__': config,\n 
'__fields__': fields,\n '__field_defaults__': fields_defaults,\n '__validators__': vg.validators,\n '__pre_root_validators__': pre_root_validators + pre_rv_new,\n '__post_root_validators__': post_root_validators + post_rv_new,\n '__schema_cache__': {},\n '__json_encoder__': staticmethod(json_encoder),\n '__custom_root_type__': _custom_root_type,\n **{n: v for n, v in namespace.items() if n not in fields},\n }\n\n cls = super().__new__(mcs, name, bases, new_namespace, **kwargs)\n # set __signature__ attr only for model class, but not for its instances\n cls.__signature__ = ClassAttribute('__signature__', generate_model_signature(cls.__init__, fields, config))\n return cls\n\n\nclass BaseModel(Representation, metaclass=ModelMetaclass):\n if TYPE_CHECKING:\n # populated by the metaclass, defined here to help IDEs only\n __fields__: Dict[str, ModelField] = {}\n __field_defaults__: Dict[str, Any] = {}\n __validators__: Dict[str, AnyCallable] = {}\n __pre_root_validators__: List[AnyCallable]\n __post_root_validators__: List[Tuple[bool, AnyCallable]]\n __config__: Type[BaseConfig] = BaseConfig\n __root__: Any = None\n __json_encoder__: Callable[[Any], Any] = lambda x: x\n __schema_cache__: 'DictAny' = {}\n __custom_root_type__: bool = False\n __signature__: 'Signature'\n\n Config = BaseConfig\n __slots__ = ('__dict__', '__fields_set__')\n __doc__ = '' # Null out the Representation docstring\n\n def __init__(__pydantic_self__, **data: Any) -> None:\n \"\"\"\n Create a new model by parsing and validating input data from keyword arguments.\n\n Raises ValidationError if the input data cannot be parsed to form a valid model.\n \"\"\"\n # Uses something other than `self` the first arg to allow \"self\" as a settable attribute\n if TYPE_CHECKING:\n __pydantic_self__.__dict__: Dict[str, Any] = {}\n __pydantic_self__.__fields_set__: 'SetStr' = set()\n values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)\n if validation_error:\n raise validation_error\n object.__setattr__(__pydantic_self__, '__dict__', values)\n object.__setattr__(__pydantic_self__, '__fields_set__', fields_set)\n\n @no_type_check\n def __setattr__(self, name, value):\n if self.__config__.extra is not Extra.allow and name not in self.__fields__:\n raise ValueError(f'\"{self.__class__.__name__}\" object has no field \"{name}\"')\n elif not self.__config__.allow_mutation:\n raise TypeError(f'\"{self.__class__.__name__}\" is immutable and does not support item assignment')\n elif self.__config__.validate_assignment:\n known_field = self.__fields__.get(name, None)\n if known_field:\n value, error_ = known_field.validate(value, self.dict(exclude={name}), loc=name, cls=self.__class__)\n if error_:\n raise ValidationError([error_], self.__class__)\n self.__dict__[name] = value\n self.__fields_set__.add(name)\n\n def __getstate__(self) -> 'DictAny':\n return {'__dict__': self.__dict__, '__fields_set__': self.__fields_set__}\n\n def __setstate__(self, state: 'DictAny') -> None:\n object.__setattr__(self, '__dict__', state['__dict__'])\n object.__setattr__(self, '__fields_set__', state['__fields_set__'])\n\n def dict(\n self,\n *,\n include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n by_alias: bool = False,\n skip_defaults: bool = None,\n exclude_unset: bool = False,\n exclude_defaults: bool = False,\n exclude_none: bool = False,\n ) -> 'DictStrAny':\n \"\"\"\n Generate a dictionary representation of the model, optionally specifying 
which fields to include or exclude.\n\n \"\"\"\n if skip_defaults is not None:\n warnings.warn(\n f'{self.__class__.__name__}.dict(): \"skip_defaults\" is deprecated and replaced by \"exclude_unset\"',\n DeprecationWarning,\n )\n exclude_unset = skip_defaults\n\n return dict(\n self._iter(\n to_dict=True,\n by_alias=by_alias,\n include=include,\n exclude=exclude,\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n exclude_none=exclude_none,\n )\n )\n\n def json(\n self,\n *,\n include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n by_alias: bool = False,\n skip_defaults: bool = None,\n exclude_unset: bool = False,\n exclude_defaults: bool = False,\n exclude_none: bool = False,\n encoder: Optional[Callable[[Any], Any]] = None,\n **dumps_kwargs: Any,\n ) -> str:\n \"\"\"\n Generate a JSON representation of the model, `include` and `exclude` arguments as per `dict()`.\n\n `encoder` is an optional function to supply as `default` to json.dumps(), other arguments as per `json.dumps()`.\n \"\"\"\n if skip_defaults is not None:\n warnings.warn(\n f'{self.__class__.__name__}.json(): \"skip_defaults\" is deprecated and replaced by \"exclude_unset\"',\n DeprecationWarning,\n )\n exclude_unset = skip_defaults\n encoder = cast(Callable[[Any], Any], encoder or self.__json_encoder__)\n data = self.dict(\n include=include,\n exclude=exclude,\n by_alias=by_alias,\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n exclude_none=exclude_none,\n )\n if self.__custom_root_type__:\n data = data[ROOT_KEY]\n return self.__config__.json_dumps(data, default=encoder, **dumps_kwargs)\n\n @classmethod\n def parse_obj(cls: Type['Model'], obj: Any) -> 'Model':\n if cls.__custom_root_type__ and (\n not (isinstance(obj, dict) and obj.keys() == {ROOT_KEY}) or cls.__fields__[ROOT_KEY].shape == SHAPE_MAPPING\n ):\n obj = {ROOT_KEY: obj}\n elif not isinstance(obj, dict):\n try:\n obj = dict(obj)\n except (TypeError, ValueError) as e:\n exc = TypeError(f'{cls.__name__} expected dict not {obj.__class__.__name__}')\n raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e\n return cls(**obj)\n\n @classmethod\n def parse_raw(\n cls: Type['Model'],\n b: StrBytes,\n *,\n content_type: str = None,\n encoding: str = 'utf8',\n proto: Protocol = None,\n allow_pickle: bool = False,\n ) -> 'Model':\n try:\n obj = load_str_bytes(\n b,\n proto=proto,\n content_type=content_type,\n encoding=encoding,\n allow_pickle=allow_pickle,\n json_loads=cls.__config__.json_loads,\n )\n except (ValueError, TypeError, UnicodeDecodeError) as e:\n raise ValidationError([ErrorWrapper(e, loc=ROOT_KEY)], cls)\n return cls.parse_obj(obj)\n\n @classmethod\n def parse_file(\n cls: Type['Model'],\n path: Union[str, Path],\n *,\n content_type: str = None,\n encoding: str = 'utf8',\n proto: Protocol = None,\n allow_pickle: bool = False,\n ) -> 'Model':\n obj = load_file(\n path,\n proto=proto,\n content_type=content_type,\n encoding=encoding,\n allow_pickle=allow_pickle,\n json_loads=cls.__config__.json_loads,\n )\n return cls.parse_obj(obj)\n\n @classmethod\n def from_orm(cls: Type['Model'], obj: Any) -> 'Model':\n if not cls.__config__.orm_mode:\n raise ConfigError('You must have the config attribute orm_mode=True to use from_orm')\n obj = cls._decompose_class(obj)\n m = cls.__new__(cls)\n values, fields_set, validation_error = validate_model(cls, obj)\n if validation_error:\n raise validation_error\n object.__setattr__(m, '__dict__', 
values)\n object.__setattr__(m, '__fields_set__', fields_set)\n return m\n\n @classmethod\n def construct(cls: Type['Model'], _fields_set: Optional['SetStr'] = None, **values: Any) -> 'Model':\n \"\"\"\n Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\n Default values are respected, but no other validation is performed.\n \"\"\"\n m = cls.__new__(cls)\n object.__setattr__(m, '__dict__', {**deepcopy(cls.__field_defaults__), **values})\n if _fields_set is None:\n _fields_set = set(values.keys())\n object.__setattr__(m, '__fields_set__', _fields_set)\n return m\n\n def copy(\n self: 'Model',\n *,\n include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n update: 'DictStrAny' = None,\n deep: bool = False,\n ) -> 'Model':\n \"\"\"\n Duplicate a model, optionally choose which fields to include, exclude and change.\n\n :param include: fields to include in new model\n :param exclude: fields to exclude from new model, as with values this takes precedence over include\n :param update: values to change/add in the new model. Note: the data is not validated before creating\n the new model: you should trust this data\n :param deep: set to `True` to make a deep copy of the model\n :return: new model instance\n \"\"\"\n\n v = dict(\n self._iter(to_dict=False, by_alias=False, include=include, exclude=exclude, exclude_unset=False),\n **(update or {}),\n )\n\n if deep:\n v = deepcopy(v)\n\n cls = self.__class__\n m = cls.__new__(cls)\n object.__setattr__(m, '__dict__', v)\n object.__setattr__(m, '__fields_set__', self.__fields_set__.copy())\n return m\n\n @classmethod\n def schema(cls, by_alias: bool = True) -> 'DictStrAny':\n cached = cls.__schema_cache__.get(by_alias)\n if cached is not None:\n return cached\n s = model_schema(cls, by_alias=by_alias)\n cls.__schema_cache__[by_alias] = s\n return s\n\n @classmethod\n def schema_json(cls, *, by_alias: bool = True, **dumps_kwargs: Any) -> str:\n from .json import pydantic_encoder\n\n return cls.__config__.json_dumps(cls.schema(by_alias=by_alias), default=pydantic_encoder, **dumps_kwargs)\n\n @classmethod\n def __get_validators__(cls) -> 'CallableGenerator':\n yield cls.validate\n\n @classmethod\n def validate(cls: Type['Model'], value: Any) -> 'Model':\n if isinstance(value, dict):\n return cls(**value)\n elif isinstance(value, cls):\n return value.copy()\n elif cls.__config__.orm_mode:\n return cls.from_orm(value)\n elif cls.__custom_root_type__:\n return cls.parse_obj(value)\n else:\n try:\n value_as_dict = dict(value)\n except (TypeError, ValueError) as e:\n raise DictError() from e\n return cls(**value_as_dict)\n\n @classmethod\n def _decompose_class(cls: Type['Model'], obj: Any) -> GetterDict:\n return cls.__config__.getter_dict(obj)\n\n @classmethod\n @no_type_check\n def _get_value(\n cls,\n v: Any,\n to_dict: bool,\n by_alias: bool,\n include: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],\n exclude: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],\n exclude_unset: bool,\n exclude_defaults: bool,\n exclude_none: bool,\n ) -> Any:\n\n if isinstance(v, BaseModel):\n if to_dict:\n return v.dict(\n by_alias=by_alias,\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n include=include,\n exclude=exclude,\n exclude_none=exclude_none,\n )\n else:\n return v.copy(include=include, exclude=exclude)\n\n value_exclude = ValueItems(v, exclude) if exclude else None\n value_include = ValueItems(v, include) if 
include else None\n\n if isinstance(v, dict):\n return {\n k_: cls._get_value(\n v_,\n to_dict=to_dict,\n by_alias=by_alias,\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n include=value_include and value_include.for_element(k_),\n exclude=value_exclude and value_exclude.for_element(k_),\n exclude_none=exclude_none,\n )\n for k_, v_ in v.items()\n if (not value_exclude or not value_exclude.is_excluded(k_))\n and (not value_include or value_include.is_included(k_))\n }\n\n elif sequence_like(v):\n return v.__class__(\n cls._get_value(\n v_,\n to_dict=to_dict,\n by_alias=by_alias,\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n include=value_include and value_include.for_element(i),\n exclude=value_exclude and value_exclude.for_element(i),\n exclude_none=exclude_none,\n )\n for i, v_ in enumerate(v)\n if (not value_exclude or not value_exclude.is_excluded(i))\n and (not value_include or value_include.is_included(i))\n )\n\n else:\n return v\n\n @classmethod\n def update_forward_refs(cls, **localns: Any) -> None:\n \"\"\"\n Try to update ForwardRefs on fields based on this Model, globalns and localns.\n \"\"\"\n globalns = sys.modules[cls.__module__].__dict__.copy()\n globalns.setdefault(cls.__name__, cls)\n for f in cls.__fields__.values():\n update_field_forward_refs(f, globalns=globalns, localns=localns)\n\n def __iter__(self) -> 'TupleGenerator':\n \"\"\"\n so `dict(model)` works\n \"\"\"\n yield from self.__dict__.items()\n\n def _iter(\n self,\n to_dict: bool = False,\n by_alias: bool = False,\n include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n exclude_unset: bool = False,\n exclude_defaults: bool = False,\n exclude_none: bool = False,\n ) -> 'TupleGenerator':\n\n allowed_keys = self._calculate_keys(include=include, exclude=exclude, exclude_unset=exclude_unset)\n if allowed_keys is None and not (to_dict or by_alias or exclude_unset or exclude_defaults or exclude_none):\n # huge boost for plain _iter()\n yield from self.__dict__.items()\n return\n\n value_exclude = ValueItems(self, exclude) if exclude else None\n value_include = ValueItems(self, include) if include else None\n\n for field_key, v in self.__dict__.items():\n if (\n (allowed_keys is not None and field_key not in allowed_keys)\n or (exclude_none and v is None)\n or (exclude_defaults and self.__field_defaults__.get(field_key, _missing) == v)\n ):\n continue\n if by_alias and field_key in self.__fields__:\n dict_key = self.__fields__[field_key].alias\n else:\n dict_key = field_key\n if to_dict or value_include or value_exclude:\n v = self._get_value(\n v,\n to_dict=to_dict,\n by_alias=by_alias,\n include=value_include and value_include.for_element(field_key),\n exclude=value_exclude and value_exclude.for_element(field_key),\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n exclude_none=exclude_none,\n )\n yield dict_key, v\n\n def _calculate_keys(\n self,\n include: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],\n exclude: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],\n exclude_unset: bool,\n update: Optional['DictStrAny'] = None,\n ) -> Optional[AbstractSet[str]]:\n if include is None and exclude is None and exclude_unset is False:\n return None\n\n keys: AbstractSet[str]\n if exclude_unset:\n keys = self.__fields_set__.copy()\n else:\n keys = self.__dict__.keys()\n\n if include is not None:\n if isinstance(include, Mapping):\n keys &= include.keys()\n else:\n 
keys &= include\n\n if update:\n keys -= update.keys()\n\n if exclude:\n if isinstance(exclude, Mapping):\n keys -= {k for k, v in exclude.items() if v is ...}\n else:\n keys -= exclude\n\n return keys\n\n def __eq__(self, other: Any) -> bool:\n if isinstance(other, BaseModel):\n return self.dict() == other.dict()\n else:\n return self.dict() == other\n\n def __repr_args__(self) -> 'ReprArgs':\n return self.__dict__.items() # type: ignore\n\n @property\n def fields(self) -> Dict[str, ModelField]:\n warnings.warn('`fields` attribute is deprecated, use `__fields__` instead', DeprecationWarning)\n return self.__fields__\n\n def to_string(self, pretty: bool = False) -> str:\n warnings.warn('`model.to_string()` method is deprecated, use `str(model)` instead', DeprecationWarning)\n return str(self)\n\n @property\n def __values__(self) -> 'DictStrAny':\n warnings.warn('`__values__` attribute is deprecated, use `__dict__` instead', DeprecationWarning)\n return self.__dict__\n\n\n_is_base_model_class_defined = True\n\n\ndef create_model(\n __model_name: str,\n *,\n __config__: Type[BaseConfig] = None,\n __base__: Type[BaseModel] = None,\n __module__: Optional[str] = None,\n __validators__: Dict[str, classmethod] = None,\n **field_definitions: Any,\n) -> Type[BaseModel]:\n \"\"\"\n Dynamically create a model.\n :param __model_name: name of the created model\n :param __config__: config class to use for the new model\n :param __base__: base class for the new model to inherit from\n :param __validators__: a dict of method names and @validator class methods\n :param **field_definitions: fields of the model (or extra fields if a base is supplied) in the format\n `<name>=(<type>, <default default>)` or `<name>=<default value> eg. `foobar=(str, ...)` or `foobar=123`\n \"\"\"\n if __base__:\n if __config__ is not None:\n raise ConfigError('to avoid confusion __config__ and __base__ cannot be used together')\n else:\n __base__ = BaseModel\n\n fields = {}\n annotations = {}\n\n for f_name, f_def in field_definitions.items():\n if not is_valid_field(f_name):\n warnings.warn(f'fields may not start with an underscore, ignoring \"{f_name}\"', RuntimeWarning)\n if isinstance(f_def, tuple):\n try:\n f_annotation, f_value = f_def\n except ValueError as e:\n raise ConfigError(\n 'field definitions should either be a tuple of (<type>, <default>) or just a '\n 'default value, unfortunately this means tuples as '\n 'default values are not allowed'\n ) from e\n else:\n f_annotation, f_value = None, f_def\n\n if f_annotation:\n annotations[f_name] = f_annotation\n fields[f_name] = f_value\n\n namespace: 'DictStrAny' = {'__annotations__': annotations, '__module__': __module__}\n if __validators__:\n namespace.update(__validators__)\n namespace.update(fields)\n if __config__:\n namespace['Config'] = inherit_config(__config__, BaseConfig)\n\n return type(__model_name, (__base__,), namespace)\n\n\n_missing = object()\n\n\ndef validate_model( # noqa: C901 (ignore complexity)\n model: Type[BaseModel], input_data: 'DictStrAny', cls: 'ModelOrDc' = None\n) -> Tuple['DictStrAny', 'SetStr', Optional[ValidationError]]:\n \"\"\"\n validate data against a model.\n \"\"\"\n values = {}\n errors = []\n # input_data names, possibly alias\n names_used = set()\n # field names, never aliases\n fields_set = set()\n config = model.__config__\n check_extra = config.extra is not Extra.ignore\n cls_ = cls or model\n\n for validator in model.__pre_root_validators__:\n try:\n input_data = validator(cls_, input_data)\n except (ValueError, TypeError, 
AssertionError) as exc:\n return {}, set(), ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls_)\n\n for name, field in model.__fields__.items():\n if field.type_.__class__ == ForwardRef:\n raise ConfigError(\n f'field \"{field.name}\" not yet prepared so type is still a ForwardRef, '\n f'you might need to call {cls_.__name__}.update_forward_refs().'\n )\n\n value = input_data.get(field.alias, _missing)\n using_name = False\n if value is _missing and config.allow_population_by_field_name and field.alt_alias:\n value = input_data.get(field.name, _missing)\n using_name = True\n\n if value is _missing:\n if field.required:\n errors.append(ErrorWrapper(MissingError(), loc=field.alias))\n continue\n\n value = field.get_default()\n\n if not config.validate_all and not field.validate_always:\n values[name] = value\n continue\n else:\n fields_set.add(name)\n if check_extra:\n names_used.add(field.name if using_name else field.alias)\n\n v_, errors_ = field.validate(value, values, loc=field.alias, cls=cls_)\n if isinstance(errors_, ErrorWrapper):\n errors.append(errors_)\n elif isinstance(errors_, list):\n errors.extend(errors_)\n else:\n values[name] = v_\n\n if check_extra:\n if isinstance(input_data, GetterDict):\n extra = input_data.extra_keys() - names_used\n else:\n extra = input_data.keys() - names_used\n if extra:\n fields_set |= extra\n if config.extra is Extra.allow:\n for f in extra:\n values[f] = input_data[f]\n else:\n for f in sorted(extra):\n errors.append(ErrorWrapper(ExtraError(), loc=f))\n\n for skip_on_failure, validator in model.__post_root_validators__:\n if skip_on_failure and errors:\n continue\n try:\n values = validator(cls_, values)\n except (ValueError, TypeError, AssertionError) as exc:\n errors.append(ErrorWrapper(exc, loc=ROOT_KEY))\n break\n\n if errors:\n return values, fields_set, ValidationError(errors, cls_)\n else:\n return values, fields_set, None\n", "path": "pydantic/main.py"}], "after_files": [{"content": "import json\nimport sys\nimport warnings\nfrom abc import ABCMeta\nfrom copy import deepcopy\nfrom enum import Enum\nfrom functools import partial\nfrom pathlib import Path\nfrom types import FunctionType\nfrom typing import (\n TYPE_CHECKING,\n AbstractSet,\n Any,\n Callable,\n Dict,\n List,\n Mapping,\n Optional,\n Tuple,\n Type,\n TypeVar,\n Union,\n cast,\n no_type_check,\n overload,\n)\n\nfrom .class_validators import ROOT_KEY, ValidatorGroup, extract_root_validators, extract_validators, inherit_validators\nfrom .error_wrappers import ErrorWrapper, ValidationError\nfrom .errors import ConfigError, DictError, ExtraError, MissingError\nfrom .fields import SHAPE_MAPPING, ModelField, Undefined\nfrom .json import custom_pydantic_encoder, pydantic_encoder\nfrom .parse import Protocol, load_file, load_str_bytes\nfrom .schema import model_schema\nfrom .types import PyObject, StrBytes\nfrom .typing import AnyCallable, AnyType, ForwardRef, is_classvar, resolve_annotations, update_field_forward_refs\nfrom .utils import (\n ClassAttribute,\n GetterDict,\n Representation,\n ValueItems,\n generate_model_signature,\n lenient_issubclass,\n sequence_like,\n validate_field_name,\n)\n\nif TYPE_CHECKING:\n import typing_extensions\n from inspect import Signature\n from .class_validators import ValidatorListDict\n from .types import ModelOrDc\n from .typing import CallableGenerator, TupleGenerator, DictStrAny, DictAny, SetStr\n from .typing import AbstractSetIntStr, MappingIntStrAny, ReprArgs # noqa: F401\n\n ConfigType = Type['BaseConfig']\n Model = TypeVar('Model', 
bound='BaseModel')\n\n class SchemaExtraCallable(typing_extensions.Protocol):\n @overload\n def __call__(self, schema: Dict[str, Any]) -> None:\n pass\n\n @overload # noqa: F811\n def __call__(self, schema: Dict[str, Any], model_class: Type['Model']) -> None: # noqa: F811\n pass\n\n\nelse:\n SchemaExtraCallable = Callable[..., None]\n\n\ntry:\n import cython # type: ignore\nexcept ImportError:\n compiled: bool = False\nelse: # pragma: no cover\n try:\n compiled = cython.compiled\n except AttributeError:\n compiled = False\n\n__all__ = 'BaseConfig', 'BaseModel', 'Extra', 'compiled', 'create_model', 'validate_model'\n\n\nclass Extra(str, Enum):\n allow = 'allow'\n ignore = 'ignore'\n forbid = 'forbid'\n\n\nclass BaseConfig:\n title = None\n anystr_strip_whitespace = False\n min_anystr_length = None\n max_anystr_length = None\n validate_all = False\n extra = Extra.ignore\n allow_mutation = True\n allow_population_by_field_name = False\n use_enum_values = False\n fields: Dict[str, Union[str, Dict[str, str]]] = {}\n validate_assignment = False\n error_msg_templates: Dict[str, str] = {}\n arbitrary_types_allowed = False\n orm_mode: bool = False\n getter_dict: Type[GetterDict] = GetterDict\n alias_generator: Optional[Callable[[str], str]] = None\n keep_untouched: Tuple[type, ...] = ()\n schema_extra: Union[Dict[str, Any], 'SchemaExtraCallable'] = {}\n json_loads: Callable[[str], Any] = json.loads\n json_dumps: Callable[..., str] = json.dumps\n json_encoders: Dict[AnyType, AnyCallable] = {}\n\n @classmethod\n def get_field_info(cls, name: str) -> Dict[str, Any]:\n fields_value = cls.fields.get(name)\n\n if isinstance(fields_value, str):\n field_info: Dict[str, Any] = {'alias': fields_value}\n elif isinstance(fields_value, dict):\n field_info = fields_value\n else:\n field_info = {}\n\n if 'alias' in field_info:\n field_info.setdefault('alias_priority', 2)\n\n if field_info.get('alias_priority', 0) <= 1 and cls.alias_generator:\n alias = cls.alias_generator(name)\n if not isinstance(alias, str):\n raise TypeError(f'Config.alias_generator must return str, not {alias.__class__}')\n field_info.update(alias=alias, alias_priority=1)\n return field_info\n\n @classmethod\n def prepare_field(cls, field: 'ModelField') -> None:\n \"\"\"\n Optional hook to check or modify fields during model creation.\n \"\"\"\n pass\n\n\ndef inherit_config(self_config: 'ConfigType', parent_config: 'ConfigType') -> 'ConfigType':\n if not self_config:\n base_classes = (parent_config,)\n elif self_config == parent_config:\n base_classes = (self_config,)\n else:\n base_classes = self_config, parent_config # type: ignore\n return type('Config', base_classes, {})\n\n\nEXTRA_LINK = 'https://pydantic-docs.helpmanual.io/usage/model_config/'\n\n\ndef prepare_config(config: Type[BaseConfig], cls_name: str) -> None:\n if not isinstance(config.extra, Extra):\n try:\n config.extra = Extra(config.extra)\n except ValueError:\n raise ValueError(f'\"{cls_name}\": {config.extra} is not a valid value for \"extra\"')\n\n if hasattr(config, 'allow_population_by_alias'):\n warnings.warn(\n f'{cls_name}: \"allow_population_by_alias\" is deprecated and replaced by \"allow_population_by_field_name\"',\n DeprecationWarning,\n )\n config.allow_population_by_field_name = config.allow_population_by_alias # type: ignore\n\n if hasattr(config, 'case_insensitive') and any('BaseSettings.Config' in c.__qualname__ for c in config.__mro__):\n warnings.warn(\n f'{cls_name}: \"case_insensitive\" is deprecated on BaseSettings config and replaced by '\n 
f'\"case_sensitive\" (default False)',\n DeprecationWarning,\n )\n config.case_sensitive = not config.case_insensitive # type: ignore\n\n\ndef is_valid_field(name: str) -> bool:\n if not name.startswith('_'):\n return True\n return ROOT_KEY == name\n\n\ndef validate_custom_root_type(fields: Dict[str, ModelField]) -> None:\n if len(fields) > 1:\n raise ValueError('__root__ cannot be mixed with other fields')\n\n\nUNTOUCHED_TYPES = FunctionType, property, type, classmethod, staticmethod\n\n# Note `ModelMetaclass` refers to `BaseModel`, but is also used to *create* `BaseModel`, so we need to add this extra\n# (somewhat hacky) boolean to keep track of whether we've created the `BaseModel` class yet, and therefore whether it's\n# safe to refer to it. If it *hasn't* been created, we assume that the `__new__` call we're in the middle of is for\n# the `BaseModel` class, since that's defined immediately after the metaclass.\n_is_base_model_class_defined = False\n\n\nclass ModelMetaclass(ABCMeta):\n @no_type_check # noqa C901\n def __new__(mcs, name, bases, namespace, **kwargs): # noqa C901\n fields: Dict[str, ModelField] = {}\n config = BaseConfig\n validators: 'ValidatorListDict' = {}\n fields_defaults: Dict[str, Any] = {}\n\n pre_root_validators, post_root_validators = [], []\n for base in reversed(bases):\n if _is_base_model_class_defined and issubclass(base, BaseModel) and base != BaseModel:\n fields.update(deepcopy(base.__fields__))\n config = inherit_config(base.__config__, config)\n validators = inherit_validators(base.__validators__, validators)\n pre_root_validators += base.__pre_root_validators__\n post_root_validators += base.__post_root_validators__\n\n config = inherit_config(namespace.get('Config'), config)\n validators = inherit_validators(extract_validators(namespace), validators)\n vg = ValidatorGroup(validators)\n\n for f in fields.values():\n if not f.required:\n fields_defaults[f.name] = f.default\n\n f.set_config(config)\n extra_validators = vg.get_validators(f.name)\n if extra_validators:\n f.class_validators.update(extra_validators)\n # re-run prepare to add extra validators\n f.populate_validators()\n\n prepare_config(config, name)\n\n class_vars = set()\n if (namespace.get('__module__'), namespace.get('__qualname__')) != ('pydantic.main', 'BaseModel'):\n annotations = resolve_annotations(namespace.get('__annotations__', {}), namespace.get('__module__', None))\n untouched_types = UNTOUCHED_TYPES + config.keep_untouched\n # annotation only fields need to come first in fields\n for ann_name, ann_type in annotations.items():\n if is_classvar(ann_type):\n class_vars.add(ann_name)\n elif is_valid_field(ann_name):\n validate_field_name(bases, ann_name)\n value = namespace.get(ann_name, Undefined)\n if (\n isinstance(value, untouched_types)\n and ann_type != PyObject\n and not lenient_issubclass(getattr(ann_type, '__origin__', None), Type)\n ):\n continue\n fields[ann_name] = inferred = ModelField.infer(\n name=ann_name,\n value=value,\n annotation=ann_type,\n class_validators=vg.get_validators(ann_name),\n config=config,\n )\n if not inferred.required:\n fields_defaults[ann_name] = inferred.default\n\n for var_name, value in namespace.items():\n if (\n var_name not in annotations\n and is_valid_field(var_name)\n and not isinstance(value, untouched_types)\n and var_name not in class_vars\n ):\n validate_field_name(bases, var_name)\n inferred = ModelField.infer(\n name=var_name,\n value=value,\n annotation=annotations.get(var_name),\n class_validators=vg.get_validators(var_name),\n 
config=config,\n )\n if var_name in fields and inferred.type_ != fields[var_name].type_:\n raise TypeError(\n f'The type of {name}.{var_name} differs from the new default value; '\n f'if you wish to change the type of this field, please use a type annotation'\n )\n fields[var_name] = inferred\n if not inferred.required:\n fields_defaults[var_name] = inferred.default\n\n _custom_root_type = ROOT_KEY in fields\n if _custom_root_type:\n validate_custom_root_type(fields)\n vg.check_for_unused()\n if config.json_encoders:\n json_encoder = partial(custom_pydantic_encoder, config.json_encoders)\n else:\n json_encoder = pydantic_encoder\n pre_rv_new, post_rv_new = extract_root_validators(namespace)\n new_namespace = {\n '__config__': config,\n '__fields__': fields,\n '__field_defaults__': fields_defaults,\n '__validators__': vg.validators,\n '__pre_root_validators__': pre_root_validators + pre_rv_new,\n '__post_root_validators__': post_root_validators + post_rv_new,\n '__schema_cache__': {},\n '__json_encoder__': staticmethod(json_encoder),\n '__custom_root_type__': _custom_root_type,\n **{n: v for n, v in namespace.items() if n not in fields},\n }\n\n cls = super().__new__(mcs, name, bases, new_namespace, **kwargs)\n # set __signature__ attr only for model class, but not for its instances\n cls.__signature__ = ClassAttribute('__signature__', generate_model_signature(cls.__init__, fields, config))\n return cls\n\n\nclass BaseModel(Representation, metaclass=ModelMetaclass):\n if TYPE_CHECKING:\n # populated by the metaclass, defined here to help IDEs only\n __fields__: Dict[str, ModelField] = {}\n __field_defaults__: Dict[str, Any] = {}\n __validators__: Dict[str, AnyCallable] = {}\n __pre_root_validators__: List[AnyCallable]\n __post_root_validators__: List[Tuple[bool, AnyCallable]]\n __config__: Type[BaseConfig] = BaseConfig\n __root__: Any = None\n __json_encoder__: Callable[[Any], Any] = lambda x: x\n __schema_cache__: 'DictAny' = {}\n __custom_root_type__: bool = False\n __signature__: 'Signature'\n\n Config = BaseConfig\n __slots__ = ('__dict__', '__fields_set__')\n __doc__ = '' # Null out the Representation docstring\n\n def __init__(__pydantic_self__, **data: Any) -> None:\n \"\"\"\n Create a new model by parsing and validating input data from keyword arguments.\n\n Raises ValidationError if the input data cannot be parsed to form a valid model.\n \"\"\"\n # Uses something other than `self` the first arg to allow \"self\" as a settable attribute\n if TYPE_CHECKING:\n __pydantic_self__.__dict__: Dict[str, Any] = {}\n __pydantic_self__.__fields_set__: 'SetStr' = set()\n values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)\n if validation_error:\n raise validation_error\n object.__setattr__(__pydantic_self__, '__dict__', values)\n object.__setattr__(__pydantic_self__, '__fields_set__', fields_set)\n\n @no_type_check\n def __setattr__(self, name, value):\n if self.__config__.extra is not Extra.allow and name not in self.__fields__:\n raise ValueError(f'\"{self.__class__.__name__}\" object has no field \"{name}\"')\n elif not self.__config__.allow_mutation:\n raise TypeError(f'\"{self.__class__.__name__}\" is immutable and does not support item assignment')\n elif self.__config__.validate_assignment:\n known_field = self.__fields__.get(name, None)\n if known_field:\n value, error_ = known_field.validate(value, self.dict(exclude={name}), loc=name, cls=self.__class__)\n if error_:\n raise ValidationError([error_], self.__class__)\n self.__dict__[name] = value\n 
self.__fields_set__.add(name)\n\n def __getstate__(self) -> 'DictAny':\n return {'__dict__': self.__dict__, '__fields_set__': self.__fields_set__}\n\n def __setstate__(self, state: 'DictAny') -> None:\n object.__setattr__(self, '__dict__', state['__dict__'])\n object.__setattr__(self, '__fields_set__', state['__fields_set__'])\n\n def dict(\n self,\n *,\n include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n by_alias: bool = False,\n skip_defaults: bool = None,\n exclude_unset: bool = False,\n exclude_defaults: bool = False,\n exclude_none: bool = False,\n ) -> 'DictStrAny':\n \"\"\"\n Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.\n\n \"\"\"\n if skip_defaults is not None:\n warnings.warn(\n f'{self.__class__.__name__}.dict(): \"skip_defaults\" is deprecated and replaced by \"exclude_unset\"',\n DeprecationWarning,\n )\n exclude_unset = skip_defaults\n\n return dict(\n self._iter(\n to_dict=True,\n by_alias=by_alias,\n include=include,\n exclude=exclude,\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n exclude_none=exclude_none,\n )\n )\n\n def json(\n self,\n *,\n include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n by_alias: bool = False,\n skip_defaults: bool = None,\n exclude_unset: bool = False,\n exclude_defaults: bool = False,\n exclude_none: bool = False,\n encoder: Optional[Callable[[Any], Any]] = None,\n **dumps_kwargs: Any,\n ) -> str:\n \"\"\"\n Generate a JSON representation of the model, `include` and `exclude` arguments as per `dict()`.\n\n `encoder` is an optional function to supply as `default` to json.dumps(), other arguments as per `json.dumps()`.\n \"\"\"\n if skip_defaults is not None:\n warnings.warn(\n f'{self.__class__.__name__}.json(): \"skip_defaults\" is deprecated and replaced by \"exclude_unset\"',\n DeprecationWarning,\n )\n exclude_unset = skip_defaults\n encoder = cast(Callable[[Any], Any], encoder or self.__json_encoder__)\n data = self.dict(\n include=include,\n exclude=exclude,\n by_alias=by_alias,\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n exclude_none=exclude_none,\n )\n if self.__custom_root_type__:\n data = data[ROOT_KEY]\n return self.__config__.json_dumps(data, default=encoder, **dumps_kwargs)\n\n @classmethod\n def parse_obj(cls: Type['Model'], obj: Any) -> 'Model':\n if cls.__custom_root_type__ and (\n not (isinstance(obj, dict) and obj.keys() == {ROOT_KEY}) or cls.__fields__[ROOT_KEY].shape == SHAPE_MAPPING\n ):\n obj = {ROOT_KEY: obj}\n elif not isinstance(obj, dict):\n try:\n obj = dict(obj)\n except (TypeError, ValueError) as e:\n exc = TypeError(f'{cls.__name__} expected dict not {obj.__class__.__name__}')\n raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e\n return cls(**obj)\n\n @classmethod\n def parse_raw(\n cls: Type['Model'],\n b: StrBytes,\n *,\n content_type: str = None,\n encoding: str = 'utf8',\n proto: Protocol = None,\n allow_pickle: bool = False,\n ) -> 'Model':\n try:\n obj = load_str_bytes(\n b,\n proto=proto,\n content_type=content_type,\n encoding=encoding,\n allow_pickle=allow_pickle,\n json_loads=cls.__config__.json_loads,\n )\n except (ValueError, TypeError, UnicodeDecodeError) as e:\n raise ValidationError([ErrorWrapper(e, loc=ROOT_KEY)], cls)\n return cls.parse_obj(obj)\n\n @classmethod\n def parse_file(\n cls: Type['Model'],\n path: 
Union[str, Path],\n *,\n content_type: str = None,\n encoding: str = 'utf8',\n proto: Protocol = None,\n allow_pickle: bool = False,\n ) -> 'Model':\n obj = load_file(\n path,\n proto=proto,\n content_type=content_type,\n encoding=encoding,\n allow_pickle=allow_pickle,\n json_loads=cls.__config__.json_loads,\n )\n return cls.parse_obj(obj)\n\n @classmethod\n def from_orm(cls: Type['Model'], obj: Any) -> 'Model':\n if not cls.__config__.orm_mode:\n raise ConfigError('You must have the config attribute orm_mode=True to use from_orm')\n obj = cls._decompose_class(obj)\n m = cls.__new__(cls)\n values, fields_set, validation_error = validate_model(cls, obj)\n if validation_error:\n raise validation_error\n object.__setattr__(m, '__dict__', values)\n object.__setattr__(m, '__fields_set__', fields_set)\n return m\n\n @classmethod\n def construct(cls: Type['Model'], _fields_set: Optional['SetStr'] = None, **values: Any) -> 'Model':\n \"\"\"\n Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.\n Default values are respected, but no other validation is performed.\n \"\"\"\n m = cls.__new__(cls)\n object.__setattr__(m, '__dict__', {**deepcopy(cls.__field_defaults__), **values})\n if _fields_set is None:\n _fields_set = set(values.keys())\n object.__setattr__(m, '__fields_set__', _fields_set)\n return m\n\n def copy(\n self: 'Model',\n *,\n include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n update: 'DictStrAny' = None,\n deep: bool = False,\n ) -> 'Model':\n \"\"\"\n Duplicate a model, optionally choose which fields to include, exclude and change.\n\n :param include: fields to include in new model\n :param exclude: fields to exclude from new model, as with values this takes precedence over include\n :param update: values to change/add in the new model. 
Note: the data is not validated before creating\n the new model: you should trust this data\n :param deep: set to `True` to make a deep copy of the model\n :return: new model instance\n \"\"\"\n\n v = dict(\n self._iter(to_dict=False, by_alias=False, include=include, exclude=exclude, exclude_unset=False),\n **(update or {}),\n )\n\n if deep:\n v = deepcopy(v)\n\n cls = self.__class__\n m = cls.__new__(cls)\n object.__setattr__(m, '__dict__', v)\n object.__setattr__(m, '__fields_set__', self.__fields_set__.copy())\n return m\n\n @classmethod\n def schema(cls, by_alias: bool = True) -> 'DictStrAny':\n cached = cls.__schema_cache__.get(by_alias)\n if cached is not None:\n return cached\n s = model_schema(cls, by_alias=by_alias)\n cls.__schema_cache__[by_alias] = s\n return s\n\n @classmethod\n def schema_json(cls, *, by_alias: bool = True, **dumps_kwargs: Any) -> str:\n from .json import pydantic_encoder\n\n return cls.__config__.json_dumps(cls.schema(by_alias=by_alias), default=pydantic_encoder, **dumps_kwargs)\n\n @classmethod\n def __get_validators__(cls) -> 'CallableGenerator':\n yield cls.validate\n\n @classmethod\n def validate(cls: Type['Model'], value: Any) -> 'Model':\n if isinstance(value, dict):\n return cls(**value)\n elif isinstance(value, cls):\n return value.copy()\n elif cls.__config__.orm_mode:\n return cls.from_orm(value)\n elif cls.__custom_root_type__:\n return cls.parse_obj(value)\n else:\n try:\n value_as_dict = dict(value)\n except (TypeError, ValueError) as e:\n raise DictError() from e\n return cls(**value_as_dict)\n\n @classmethod\n def _decompose_class(cls: Type['Model'], obj: Any) -> GetterDict:\n return cls.__config__.getter_dict(obj)\n\n @classmethod\n @no_type_check\n def _get_value(\n cls,\n v: Any,\n to_dict: bool,\n by_alias: bool,\n include: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],\n exclude: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],\n exclude_unset: bool,\n exclude_defaults: bool,\n exclude_none: bool,\n ) -> Any:\n\n if isinstance(v, BaseModel):\n if to_dict:\n return v.dict(\n by_alias=by_alias,\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n include=include,\n exclude=exclude,\n exclude_none=exclude_none,\n )\n else:\n return v.copy(include=include, exclude=exclude)\n\n value_exclude = ValueItems(v, exclude) if exclude else None\n value_include = ValueItems(v, include) if include else None\n\n if isinstance(v, dict):\n return {\n k_: cls._get_value(\n v_,\n to_dict=to_dict,\n by_alias=by_alias,\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n include=value_include and value_include.for_element(k_),\n exclude=value_exclude and value_exclude.for_element(k_),\n exclude_none=exclude_none,\n )\n for k_, v_ in v.items()\n if (not value_exclude or not value_exclude.is_excluded(k_))\n and (not value_include or value_include.is_included(k_))\n }\n\n elif sequence_like(v):\n return v.__class__(\n cls._get_value(\n v_,\n to_dict=to_dict,\n by_alias=by_alias,\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n include=value_include and value_include.for_element(i),\n exclude=value_exclude and value_exclude.for_element(i),\n exclude_none=exclude_none,\n )\n for i, v_ in enumerate(v)\n if (not value_exclude or not value_exclude.is_excluded(i))\n and (not value_include or value_include.is_included(i))\n )\n\n else:\n return v\n\n @classmethod\n def update_forward_refs(cls, **localns: Any) -> None:\n \"\"\"\n Try to update ForwardRefs on fields based on this Model, globalns and 
localns.\n \"\"\"\n globalns = sys.modules[cls.__module__].__dict__.copy()\n globalns.setdefault(cls.__name__, cls)\n for f in cls.__fields__.values():\n update_field_forward_refs(f, globalns=globalns, localns=localns)\n\n def __iter__(self) -> 'TupleGenerator':\n \"\"\"\n so `dict(model)` works\n \"\"\"\n yield from self.__dict__.items()\n\n def _iter(\n self,\n to_dict: bool = False,\n by_alias: bool = False,\n include: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n exclude: Union['AbstractSetIntStr', 'MappingIntStrAny'] = None,\n exclude_unset: bool = False,\n exclude_defaults: bool = False,\n exclude_none: bool = False,\n ) -> 'TupleGenerator':\n\n allowed_keys = self._calculate_keys(include=include, exclude=exclude, exclude_unset=exclude_unset)\n if allowed_keys is None and not (to_dict or by_alias or exclude_unset or exclude_defaults or exclude_none):\n # huge boost for plain _iter()\n yield from self.__dict__.items()\n return\n\n value_exclude = ValueItems(self, exclude) if exclude else None\n value_include = ValueItems(self, include) if include else None\n\n for field_key, v in self.__dict__.items():\n if (\n (allowed_keys is not None and field_key not in allowed_keys)\n or (exclude_none and v is None)\n or (exclude_defaults and self.__field_defaults__.get(field_key, _missing) == v)\n ):\n continue\n if by_alias and field_key in self.__fields__:\n dict_key = self.__fields__[field_key].alias\n else:\n dict_key = field_key\n if to_dict or value_include or value_exclude:\n v = self._get_value(\n v,\n to_dict=to_dict,\n by_alias=by_alias,\n include=value_include and value_include.for_element(field_key),\n exclude=value_exclude and value_exclude.for_element(field_key),\n exclude_unset=exclude_unset,\n exclude_defaults=exclude_defaults,\n exclude_none=exclude_none,\n )\n yield dict_key, v\n\n def _calculate_keys(\n self,\n include: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],\n exclude: Optional[Union['AbstractSetIntStr', 'MappingIntStrAny']],\n exclude_unset: bool,\n update: Optional['DictStrAny'] = None,\n ) -> Optional[AbstractSet[str]]:\n if include is None and exclude is None and exclude_unset is False:\n return None\n\n keys: AbstractSet[str]\n if exclude_unset:\n keys = self.__fields_set__.copy()\n else:\n keys = self.__dict__.keys()\n\n if include is not None:\n if isinstance(include, Mapping):\n keys &= include.keys()\n else:\n keys &= include\n\n if update:\n keys -= update.keys()\n\n if exclude:\n if isinstance(exclude, Mapping):\n keys -= {k for k, v in exclude.items() if v is ...}\n else:\n keys -= exclude\n\n return keys\n\n def __eq__(self, other: Any) -> bool:\n if isinstance(other, BaseModel):\n return self.dict() == other.dict()\n else:\n return self.dict() == other\n\n def __repr_args__(self) -> 'ReprArgs':\n return self.__dict__.items() # type: ignore\n\n @property\n def fields(self) -> Dict[str, ModelField]:\n warnings.warn('`fields` attribute is deprecated, use `__fields__` instead', DeprecationWarning)\n return self.__fields__\n\n def to_string(self, pretty: bool = False) -> str:\n warnings.warn('`model.to_string()` method is deprecated, use `str(model)` instead', DeprecationWarning)\n return str(self)\n\n @property\n def __values__(self) -> 'DictStrAny':\n warnings.warn('`__values__` attribute is deprecated, use `__dict__` instead', DeprecationWarning)\n return self.__dict__\n\n\n_is_base_model_class_defined = True\n\n\ndef create_model(\n __model_name: str,\n *,\n __config__: Type[BaseConfig] = None,\n __base__: Type[BaseModel] = None,\n 
__module__: Optional[str] = None,\n __validators__: Dict[str, classmethod] = None,\n **field_definitions: Any,\n) -> Type[BaseModel]:\n \"\"\"\n Dynamically create a model.\n :param __model_name: name of the created model\n :param __config__: config class to use for the new model\n :param __base__: base class for the new model to inherit from\n :param __validators__: a dict of method names and @validator class methods\n :param **field_definitions: fields of the model (or extra fields if a base is supplied) in the format\n `<name>=(<type>, <default default>)` or `<name>=<default value> eg. `foobar=(str, ...)` or `foobar=123`\n \"\"\"\n if __base__:\n if __config__ is not None:\n raise ConfigError('to avoid confusion __config__ and __base__ cannot be used together')\n else:\n __base__ = BaseModel\n\n fields = {}\n annotations = {}\n\n for f_name, f_def in field_definitions.items():\n if not is_valid_field(f_name):\n warnings.warn(f'fields may not start with an underscore, ignoring \"{f_name}\"', RuntimeWarning)\n if isinstance(f_def, tuple):\n try:\n f_annotation, f_value = f_def\n except ValueError as e:\n raise ConfigError(\n 'field definitions should either be a tuple of (<type>, <default>) or just a '\n 'default value, unfortunately this means tuples as '\n 'default values are not allowed'\n ) from e\n else:\n f_annotation, f_value = None, f_def\n\n if f_annotation:\n annotations[f_name] = f_annotation\n fields[f_name] = f_value\n\n namespace: 'DictStrAny' = {'__annotations__': annotations, '__module__': __module__}\n if __validators__:\n namespace.update(__validators__)\n namespace.update(fields)\n if __config__:\n namespace['Config'] = inherit_config(__config__, BaseConfig)\n\n return type(__model_name, (__base__,), namespace)\n\n\n_missing = object()\n\n\ndef validate_model( # noqa: C901 (ignore complexity)\n model: Type[BaseModel], input_data: 'DictStrAny', cls: 'ModelOrDc' = None\n) -> Tuple['DictStrAny', 'SetStr', Optional[ValidationError]]:\n \"\"\"\n validate data against a model.\n \"\"\"\n values = {}\n errors = []\n # input_data names, possibly alias\n names_used = set()\n # field names, never aliases\n fields_set = set()\n config = model.__config__\n check_extra = config.extra is not Extra.ignore\n cls_ = cls or model\n\n for validator in model.__pre_root_validators__:\n try:\n input_data = validator(cls_, input_data)\n except (ValueError, TypeError, AssertionError) as exc:\n return {}, set(), ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls_)\n\n for name, field in model.__fields__.items():\n if field.type_.__class__ == ForwardRef:\n raise ConfigError(\n f'field \"{field.name}\" not yet prepared so type is still a ForwardRef, '\n f'you might need to call {cls_.__name__}.update_forward_refs().'\n )\n\n value = input_data.get(field.alias, _missing)\n using_name = False\n if value is _missing and config.allow_population_by_field_name and field.alt_alias:\n value = input_data.get(field.name, _missing)\n using_name = True\n\n if value is _missing:\n if field.required:\n errors.append(ErrorWrapper(MissingError(), loc=field.alias))\n continue\n\n value = field.get_default()\n\n if not config.validate_all and not field.validate_always:\n values[name] = value\n continue\n else:\n fields_set.add(name)\n if check_extra:\n names_used.add(field.name if using_name else field.alias)\n\n v_, errors_ = field.validate(value, values, loc=field.alias, cls=cls_)\n if isinstance(errors_, ErrorWrapper):\n errors.append(errors_)\n elif isinstance(errors_, list):\n errors.extend(errors_)\n 
else:\n values[name] = v_\n\n if check_extra:\n if isinstance(input_data, GetterDict):\n extra = input_data.extra_keys() - names_used\n else:\n extra = input_data.keys() - names_used\n if extra:\n fields_set |= extra\n if config.extra is Extra.allow:\n for f in extra:\n values[f] = input_data[f]\n else:\n for f in sorted(extra):\n errors.append(ErrorWrapper(ExtraError(), loc=f))\n\n for skip_on_failure, validator in model.__post_root_validators__:\n if skip_on_failure and errors:\n continue\n try:\n values = validator(cls_, values)\n except (ValueError, TypeError, AssertionError) as exc:\n errors.append(ErrorWrapper(exc, loc=ROOT_KEY))\n break\n\n if errors:\n return values, fields_set, ValidationError(errors, cls_)\n else:\n return values, fields_set, None\n", "path": "pydantic/main.py"}]} |
gh_patches_debug_52 | rasdani/github-patches | git_diff | e2nIEE__pandapower-2263 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] __format_version__ not increased.
### Issue Description
The `__format_version__` in `_version.py` has not been increased even though the format was changed!
This is an issue in the develop branch, **not** in master!
In my fork I made an update to many test cases since I changed the format, so I saved many networks as files; they contain the current format_version (2.14.0). After merging the current version of develop I got some tests that suddenly failed even though my code should not mess with them. So I did a little digging and found that the expected and actual results differ in the `net.res_switch_est` DataFrame. This is because the expected result only contains the old columns while the actual result contains the updated columns.
This is because the expected results are loaded from file using the `pandapower.from_json` function, and since the stored format version is the same as the current format version in `_version.py`, the conversion to the newest format is not done. So the network is returned as loaded from file.
The actual results, however, are the product of a conversion from a different network type. So they are the output of a converter that creates a new pandapowerNet. These then contain all new columns.
If new columns are added, `__format_version__` should be incremented at least in the bugfix number. But I would expect that this constitutes at least a minor release, as a new format version most likely breaks backwards compatibility. On a bugfix version I would expect to be able to go backwards and forwards without issue, but this is not the case if the format version changes! A 2.13.1 network should successfully load on 2.13.0, but this will not work if new columns are added. So this change should be reflected by an increase of the format version to at least 2.15.0, in my opinion.
The breaking commit is 516f8af, as it changed the format without changing the format version.
--- END ISSUE ---
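A minimal sketch of the version gate the reporter describes (not part of the original issue and not pandapower's actual implementation; the helper name, the constant, and the use of `packaging.version` are illustrative assumptions):
```python
from packaging.version import Version  # assumed dependency, used only for the comparison

CURRENT_FORMAT_VERSION = "2.15.0"  # illustrative stand-in for __format_version__ in _version.py

def needs_format_conversion(stored_version: str) -> bool:
    """Return True if a network saved with `stored_version` should be converted.

    Mirrors the gate described above: when the stored format version already equals
    the current one, no conversion runs, so columns added without bumping
    __format_version__ are silently missing after loading from JSON.
    """
    return Version(stored_version) < Version(CURRENT_FORMAT_VERSION)

print(needs_format_conversion("2.14.0"))  # True once the constant is bumped to 2.15.0
```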
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pandapower/_version.py`
Content:
```
1 import importlib.metadata
2
3 __version__ = importlib.metadata.version("pandapower")
4 __format_version__ = "2.14.0"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pandapower/_version.py b/pandapower/_version.py
--- a/pandapower/_version.py
+++ b/pandapower/_version.py
@@ -1,4 +1,4 @@
import importlib.metadata
__version__ = importlib.metadata.version("pandapower")
-__format_version__ = "2.14.0"
+__format_version__ = "2.15.0"
| {"golden_diff": "diff --git a/pandapower/_version.py b/pandapower/_version.py\n--- a/pandapower/_version.py\n+++ b/pandapower/_version.py\n@@ -1,4 +1,4 @@\n import importlib.metadata\n \n __version__ = importlib.metadata.version(\"pandapower\")\n-__format_version__ = \"2.14.0\"\n+__format_version__ = \"2.15.0\"\n", "issue": "[bug] __format_version__ not increased.\n### Issue Description\r\n\r\nThe `__format_version__` in `_version.py` has not been increased eventhough the format got changed!\r\n\r\nThis is an issue in the develop branch **not** in master!\r\n\r\nIn my fork I made an update to many test cases since I changed the format, so I saved many networks in as files, they contain the current format_version (2.14.0). After merging the current version of develop I got some tests that suddenly failed eventhough my code should not mess with them. So I did a little diging and found that the expected and actual results differ in `net.res_switch_est` DataFrame. This is because the expected result only contains the old columns while the actual result contains the updated columns.\r\n\r\nThis is because the expected results are loaded form file using the `pandapower.from_json` function and since then format version is the same as the current format verison in `_version.py` the conversion to the newest format is not done. So the network is returned as loaded from file.\r\nThe actual results however are a product of a conversion from a different network type. So they are the output of a converter that creates a new pandapowerNet. These then contain all new columns.\r\n\r\nIf new columns are added `__format_version__` should be incremented at least in the bugfix number. But I would expect that this constitutes at least a minor release as a new format version most likely breaks backwards compatibility. On a bugfix version I would expect I go backwards and forwards without issue. But this is not the case if the format version changes! A 2.13.1 Network should sucessfully load on 2.13.0 but this will not work if new columns are added. So this change should be reflected by an increase of the format verison to at least 2.15.0 in my opinion.\r\n\r\nThe breaking commit is 516f8af as it changed the format without changeing the format version.\r\n\n", "before_files": [{"content": "import importlib.metadata\n\n__version__ = importlib.metadata.version(\"pandapower\")\n__format_version__ = \"2.14.0\"\n", "path": "pandapower/_version.py"}], "after_files": [{"content": "import importlib.metadata\n\n__version__ = importlib.metadata.version(\"pandapower\")\n__format_version__ = \"2.15.0\"\n", "path": "pandapower/_version.py"}]} |
gh_patches_debug_53 | rasdani/github-patches | git_diff | facebookresearch__hydra-2242 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Colorlog plugin generates `.log` file in cwd instead of output dir
# 🐛 Bug
I'm using Hydra v1.2 with `chdir` set to false.
When I don't use the colorlog plugin, the `.log` file with Python logs gets generated in my output directory (as expected).
But when I attach the colorlog plugin with:
```yaml
defaults:
  - override hydra/hydra_logging: colorlog
  - override hydra/job_logging: colorlog
```
The `.log` file gets generated in the current working directory instead.
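A small diagnostic sketch (not from the original report) that uses only the standard `logging` API to confirm where the file actually lands; it can be called from inside the Hydra-decorated main function:
```python
import logging

def report_log_destinations() -> None:
    # Print the absolute path of every FileHandler attached to the root logger;
    # with the colorlog overrides above, this is where the .log file is written.
    for handler in logging.getLogger().handlers:
        if isinstance(handler, logging.FileHandler):
            print(handler.baseFilename)
```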
## Checklist
- [x] I checked on the latest version of Hydra
- [ ] I created a minimal repro (See [this](https://stackoverflow.com/help/minimal-reproducible-example) for tips).
## Expected Behavior
I would expect the `.log` file to always be saved in the output directory by default.
## System information
- **Hydra Version** : 1.2
- **Python version** : 3.10
- **Virtual environment type and version** :
- **Operating system** : linux
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/hydra_colorlog/hydra_plugins/hydra_colorlog/__init__.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
3 __version__ = "1.2.0"
4
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/hydra_colorlog/hydra_plugins/hydra_colorlog/__init__.py b/plugins/hydra_colorlog/hydra_plugins/hydra_colorlog/__init__.py
--- a/plugins/hydra_colorlog/hydra_plugins/hydra_colorlog/__init__.py
+++ b/plugins/hydra_colorlog/hydra_plugins/hydra_colorlog/__init__.py
@@ -1,3 +1,3 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-__version__ = "1.2.0"
+__version__ = "1.2.1"
| {"golden_diff": "diff --git a/plugins/hydra_colorlog/hydra_plugins/hydra_colorlog/__init__.py b/plugins/hydra_colorlog/hydra_plugins/hydra_colorlog/__init__.py\n--- a/plugins/hydra_colorlog/hydra_plugins/hydra_colorlog/__init__.py\n+++ b/plugins/hydra_colorlog/hydra_plugins/hydra_colorlog/__init__.py\n@@ -1,3 +1,3 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n \n-__version__ = \"1.2.0\"\n+__version__ = \"1.2.1\"\n", "issue": "[Bug] Colorlog plugin generates `.log` file in cwd instead of output dir\n# \ud83d\udc1b Bug\r\nI'm using hydra v1.2 with `chdir` set to false.\r\n\r\nWhen I don't use colorlog plugin, the `.log` file with python logs gets generated in my output directory (as expected).\r\n\r\nBut when I attach colorlog plugin with:\r\n```yaml\r\ndefaults:\r\n - override hydra/hydra_logging: colorlog\r\n - override hydra/job_logging: colorlog\r\n```\r\nThe `.log` file gets generated in current working directory\r\n\r\n## Checklist\r\n- [x] I checked on the latest version of Hydra\r\n- [ ] I created a minimal repro (See [this](https://stackoverflow.com/help/minimal-reproducible-example) for tips).\r\n\r\n## Expected Behavior\r\nI would expect the `.log` file to be always saved in output directory by default.\r\n\r\n## System information\r\n- **Hydra Version** : 1.2\r\n- **Python version** : 3.10\r\n- **Virtual environment type and version** : \r\n- **Operating system** : linux\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n__version__ = \"1.2.0\"\n", "path": "plugins/hydra_colorlog/hydra_plugins/hydra_colorlog/__init__.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n__version__ = \"1.2.1\"\n", "path": "plugins/hydra_colorlog/hydra_plugins/hydra_colorlog/__init__.py"}]} |
gh_patches_debug_54 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-3557 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Test failure in opentelemetry-sdk on Python 3.12
**Describe your environment**
Running in a fresh checkout of `main`, https://github.com/open-telemetry/opentelemetry-python/commit/3f459d3a19fa6c4bbdeb9012c4a34f714d8cca1a, on Fedora Linux 38, x86_64, with
- `python3.11 -VV` = `Python 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)]`
- `python3.12 -VV` = `Python 3.12.0b3 (main, Jun 21 2023, 00:00:00) [GCC 13.1.1 20230614 (Red Hat 13.1.1-4)]`
This should be platform-independent.
**Steps to reproduce**
The version pins in `dev-requirements.txt` cause a lot of problems when trying to test with Python 3.12. We will bypass them all and test without `tox`.
```
gh repo clone open-telemetry/opentelemetry-python
cd opentelemetry-python
python3.12 -m venv _e
. _e/bin/activate
cd opentelemetry-semantic-conventions
pip install -e .
cd ../opentelemetry-api
pip install -e .
cd ../opentelemetry-sdk
pip install -e .
cd ../tests/opentelemetry-test-utils/
pip install -e .
cd ../../opentelemetry-sdk
pip install pytest pytest-benchmark flaky
python -m pytest
```
**What is the expected behavior?**
If you repeat the above with `python3.11` instead of `python3.12`, or run `tox -e py311-opentelemetry-sdk`:
(lots of output, `DeprecationWarnings`, and so on)
```
======================= 377 passed, 9 warnings in 16.09s ========================
```
**What is the actual behavior?**
```
=================================== FAILURES ====================================
______________ TestLoggingHandler.test_log_record_user_attributes _______________
self = <tests.logs.test_handler.TestLoggingHandler testMethod=test_log_record_user_attributes>
    def test_log_record_user_attributes(self):
        """Attributes can be injected into logs by adding them to the LogRecord"""
        emitter_provider_mock = Mock(spec=LoggerProvider)
        emitter_mock = APIGetLogger(
            __name__, logger_provider=emitter_provider_mock
        )
        logger = get_logger(logger_provider=emitter_provider_mock)
        # Assert emit gets called for warning message
        logger.warning("Warning message", extra={"http.status_code": 200})
        args, _ = emitter_mock.emit.call_args_list[0]
        log_record = args[0]

        self.assertIsNotNone(log_record)
>       self.assertEqual(log_record.attributes, {"http.status_code": 200})
E       AssertionError: {'taskName': None, 'http.status_code': 200} != {'http.status_code': 200}
E       - {'http.status_code': 200, 'taskName': None}
E       + {'http.status_code': 200}

tests/logs/test_handler.py:93: AssertionError
------------------------------- Captured log call -------------------------------
WARNING tests.logs.test_handler:test_handler.py:88 Warning message
```
```
================== 1 failed, 376 passed, 17 warnings in 16.26s ==================
```
**Additional context**
We first encountered this in the Python 3.12 mass rebuild in Fedora Linux in preparation for the release of Fedora 39 this fall. Downstream issue: https://bugzilla.redhat.com/show_bug.cgi?id=2220378
I plan to skip this test in the Fedora Linux package for now; I don't expect to spend more time looking for the root cause.
--- END ISSUE ---
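To make the mechanism concrete (this snippet is not from the issue; it is an illustrative reproduction): Python 3.12 added a `taskName` attribute to `logging.LogRecord`, so an "extra attributes" filter built from a pre-3.12 list of reserved names now lets `taskName` through.
```python
import logging
import sys

# Reserved LogRecord attribute names as they stood before Python 3.12 -- no "taskName".
PRE_312_RESERVED = {
    "args", "asctime", "created", "exc_info", "exc_text", "filename",
    "funcName", "levelname", "levelno", "lineno", "message", "module",
    "msecs", "msg", "name", "pathname", "process", "processName",
    "relativeCreated", "stack_info", "thread", "threadName",
}

record = logging.LogRecord("demo", logging.WARNING, __file__, 1, "Warning message", None, None)
extras = {k: v for k, v in vars(record).items() if k not in PRE_312_RESERVED}
print(sys.version_info[:2], extras)  # {} on 3.11; {'taskName': None} on 3.12+, matching the failure above
```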
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import abc
16 import atexit
17 import concurrent.futures
18 import json
19 import logging
20 import threading
21 import traceback
22 from os import environ
23 from time import time_ns
24 from typing import Any, Callable, Optional, Tuple, Union # noqa
25
26 from opentelemetry._logs import Logger as APILogger
27 from opentelemetry._logs import LoggerProvider as APILoggerProvider
28 from opentelemetry._logs import LogRecord as APILogRecord
29 from opentelemetry._logs import (
30 NoOpLogger,
31 SeverityNumber,
32 get_logger,
33 get_logger_provider,
34 std_to_otel,
35 )
36 from opentelemetry.attributes import BoundedAttributes
37 from opentelemetry.sdk.environment_variables import (
38 OTEL_ATTRIBUTE_COUNT_LIMIT,
39 OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,
40 )
41 from opentelemetry.sdk.resources import Resource
42 from opentelemetry.sdk.util import ns_to_iso_str
43 from opentelemetry.sdk.util.instrumentation import InstrumentationScope
44 from opentelemetry.semconv.trace import SpanAttributes
45 from opentelemetry.trace import (
46 format_span_id,
47 format_trace_id,
48 get_current_span,
49 )
50 from opentelemetry.trace.span import TraceFlags
51 from opentelemetry.util.types import Attributes
52
53 _logger = logging.getLogger(__name__)
54
55 _DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT = 128
56 _ENV_VALUE_UNSET = ""
57
58
59 class LogLimits:
60 """This class is based on a SpanLimits class in the Tracing module.
61
62 This class represents the limits that should be enforced on recorded data such as events, links, attributes etc.
63
64 This class does not enforce any limits itself. It only provides a way to read limits from env,
65 default values and from user provided arguments.
66
67 All limit arguments must be either a non-negative integer, ``None`` or ``LogLimits.UNSET``.
68
69 - All limit arguments are optional.
70 - If a limit argument is not set, the class will try to read its value from the corresponding
71 environment variable.
72 - If the environment variable is not set, the default value, if any, will be used.
73
74 Limit precedence:
75
76 - If a model specific limit is set, it will be used.
77 - Else if the corresponding global limit is set, it will be used.
78 - Else if the model specific limit has a default value, the default value will be used.
79 - Else if the global limit has a default value, the default value will be used.
80
81 Args:
82 max_attributes: Maximum number of attributes that can be added to a span, event, and link.
83 Environment variable: ``OTEL_ATTRIBUTE_COUNT_LIMIT``
84 Default: {_DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT}
85 max_attribute_length: Maximum length an attribute value can have. Values longer than
86 the specified length will be truncated.
87 """
88
89 UNSET = -1
90
91 def __init__(
92 self,
93 max_attributes: Optional[int] = None,
94 max_attribute_length: Optional[int] = None,
95 ):
96
97 # attribute count
98 global_max_attributes = self._from_env_if_absent(
99 max_attributes, OTEL_ATTRIBUTE_COUNT_LIMIT
100 )
101 self.max_attributes = (
102 global_max_attributes
103 if global_max_attributes is not None
104 else _DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT
105 )
106
107 # attribute length
108 self.max_attribute_length = self._from_env_if_absent(
109 max_attribute_length,
110 OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,
111 )
112
113 def __repr__(self):
114 return f"{type(self).__name__}(max_attributes={self.max_attributes}, max_attribute_length={self.max_attribute_length})"
115
116 @classmethod
117 def _from_env_if_absent(
118 cls, value: Optional[int], env_var: str, default: Optional[int] = None
119 ) -> Optional[int]:
120 if value == cls.UNSET:
121 return None
122
123 err_msg = "{0} must be a non-negative integer but got {}"
124
125 # if no value is provided for the limit, try to load it from env
126 if value is None:
127 # return default value if env var is not set
128 if env_var not in environ:
129 return default
130
131 str_value = environ.get(env_var, "").strip().lower()
132 if str_value == _ENV_VALUE_UNSET:
133 return None
134
135 try:
136 value = int(str_value)
137 except ValueError:
138 raise ValueError(err_msg.format(env_var, str_value))
139
140 if value < 0:
141 raise ValueError(err_msg.format(env_var, value))
142 return value
143
144
145 _UnsetLogLimits = LogLimits(
146 max_attributes=LogLimits.UNSET,
147 max_attribute_length=LogLimits.UNSET,
148 )
149
150
151 class LogRecord(APILogRecord):
152 """A LogRecord instance represents an event being logged.
153
154 LogRecord instances are created and emitted via `Logger`
155 every time something is logged. They contain all the information
156 pertinent to the event being logged.
157 """
158
159 def __init__(
160 self,
161 timestamp: Optional[int] = None,
162 observed_timestamp: Optional[int] = None,
163 trace_id: Optional[int] = None,
164 span_id: Optional[int] = None,
165 trace_flags: Optional[TraceFlags] = None,
166 severity_text: Optional[str] = None,
167 severity_number: Optional[SeverityNumber] = None,
168 body: Optional[Any] = None,
169 resource: Optional[Resource] = None,
170 attributes: Optional[Attributes] = None,
171 limits: Optional[LogLimits] = _UnsetLogLimits,
172 ):
173 super().__init__(
174 **{
175 "timestamp": timestamp,
176 "observed_timestamp": observed_timestamp,
177 "trace_id": trace_id,
178 "span_id": span_id,
179 "trace_flags": trace_flags,
180 "severity_text": severity_text,
181 "severity_number": severity_number,
182 "body": body,
183 "attributes": BoundedAttributes(
184 maxlen=limits.max_attributes,
185 attributes=attributes if bool(attributes) else None,
186 immutable=False,
187 max_value_len=limits.max_attribute_length,
188 ),
189 }
190 )
191 self.resource = resource
192
193 def __eq__(self, other: object) -> bool:
194 if not isinstance(other, LogRecord):
195 return NotImplemented
196 return self.__dict__ == other.__dict__
197
198 def to_json(self, indent=4) -> str:
199 return json.dumps(
200 {
201 "body": self.body,
202 "severity_number": repr(self.severity_number),
203 "severity_text": self.severity_text,
204 "attributes": dict(self.attributes)
205 if bool(self.attributes)
206 else None,
207 "dropped_attributes": self.dropped_attributes,
208 "timestamp": ns_to_iso_str(self.timestamp),
209 "trace_id": f"0x{format_trace_id(self.trace_id)}"
210 if self.trace_id is not None
211 else "",
212 "span_id": f"0x{format_span_id(self.span_id)}"
213 if self.span_id is not None
214 else "",
215 "trace_flags": self.trace_flags,
216 "resource": repr(self.resource.attributes)
217 if self.resource
218 else "",
219 },
220 indent=indent,
221 )
222
223 @property
224 def dropped_attributes(self) -> int:
225 if self.attributes:
226 return self.attributes.dropped
227 return 0
228
229
230 class LogData:
231 """Readable LogRecord data plus associated InstrumentationLibrary."""
232
233 def __init__(
234 self,
235 log_record: LogRecord,
236 instrumentation_scope: InstrumentationScope,
237 ):
238 self.log_record = log_record
239 self.instrumentation_scope = instrumentation_scope
240
241
242 class LogRecordProcessor(abc.ABC):
243 """Interface to hook the log record emitting action.
244
245 Log processors can be registered directly using
246 :func:`LoggerProvider.add_log_record_processor` and they are invoked
247 in the same order as they were registered.
248 """
249
250 @abc.abstractmethod
251 def emit(self, log_data: LogData):
252 """Emits the `LogData`"""
253
254 @abc.abstractmethod
255 def shutdown(self):
256 """Called when a :class:`opentelemetry.sdk._logs.Logger` is shutdown"""
257
258 @abc.abstractmethod
259 def force_flush(self, timeout_millis: int = 30000):
260 """Export all the received logs to the configured Exporter that have not yet
261 been exported.
262
263 Args:
264 timeout_millis: The maximum amount of time to wait for logs to be
265 exported.
266
267 Returns:
268 False if the timeout is exceeded, True otherwise.
269 """
270
271
272 # Temporary fix until https://github.com/PyCQA/pylint/issues/4098 is resolved
273 # pylint:disable=no-member
274 class SynchronousMultiLogRecordProcessor(LogRecordProcessor):
275 """Implementation of class:`LogRecordProcessor` that forwards all received
276 events to a list of log processors sequentially.
277
278 The underlying log processors are called in sequential order as they were
279 added.
280 """
281
282 def __init__(self):
283 # use a tuple to avoid race conditions when adding a new log and
284 # iterating through it on "emit".
285 self._log_record_processors = () # type: Tuple[LogRecordProcessor, ...]
286 self._lock = threading.Lock()
287
288 def add_log_record_processor(
289 self, log_record_processor: LogRecordProcessor
290 ) -> None:
291 """Adds a Logprocessor to the list of log processors handled by this instance"""
292 with self._lock:
293 self._log_record_processors += (log_record_processor,)
294
295 def emit(self, log_data: LogData) -> None:
296 for lp in self._log_record_processors:
297 lp.emit(log_data)
298
299 def shutdown(self) -> None:
300 """Shutdown the log processors one by one"""
301 for lp in self._log_record_processors:
302 lp.shutdown()
303
304 def force_flush(self, timeout_millis: int = 30000) -> bool:
305 """Force flush the log processors one by one
306
307 Args:
308 timeout_millis: The maximum amount of time to wait for logs to be
309 exported. If the first n log processors exceeded the timeout
310 then remaining log processors will not be flushed.
311
312 Returns:
313 True if all the log processors flushes the logs within timeout,
314 False otherwise.
315 """
316 deadline_ns = time_ns() + timeout_millis * 1000000
317 for lp in self._log_record_processors:
318 current_ts = time_ns()
319 if current_ts >= deadline_ns:
320 return False
321
322 if not lp.force_flush((deadline_ns - current_ts) // 1000000):
323 return False
324
325 return True
326
327
328 class ConcurrentMultiLogRecordProcessor(LogRecordProcessor):
329 """Implementation of :class:`LogRecordProcessor` that forwards all received
330 events to a list of log processors in parallel.
331
332 Calls to the underlying log processors are forwarded in parallel by
333 submitting them to a thread pool executor and waiting until each log
334 processor finished its work.
335
336 Args:
337 max_workers: The number of threads managed by the thread pool executor
338 and thus defining how many log processors can work in parallel.
339 """
340
341 def __init__(self, max_workers: int = 2):
342 # use a tuple to avoid race conditions when adding a new log and
343 # iterating through it on "emit".
344 self._log_record_processors = () # type: Tuple[LogRecordProcessor, ...]
345 self._lock = threading.Lock()
346 self._executor = concurrent.futures.ThreadPoolExecutor(
347 max_workers=max_workers
348 )
349
350 def add_log_record_processor(
351 self, log_record_processor: LogRecordProcessor
352 ):
353 with self._lock:
354 self._log_record_processors += (log_record_processor,)
355
356 def _submit_and_wait(
357 self,
358 func: Callable[[LogRecordProcessor], Callable[..., None]],
359 *args: Any,
360 **kwargs: Any,
361 ):
362 futures = []
363 for lp in self._log_record_processors:
364 future = self._executor.submit(func(lp), *args, **kwargs)
365 futures.append(future)
366 for future in futures:
367 future.result()
368
369 def emit(self, log_data: LogData):
370 self._submit_and_wait(lambda lp: lp.emit, log_data)
371
372 def shutdown(self):
373 self._submit_and_wait(lambda lp: lp.shutdown)
374
375 def force_flush(self, timeout_millis: int = 30000) -> bool:
376 """Force flush the log processors in parallel.
377
378 Args:
379 timeout_millis: The maximum amount of time to wait for logs to be
380 exported.
381
382 Returns:
383 True if all the log processors flushes the logs within timeout,
384 False otherwise.
385 """
386 futures = []
387 for lp in self._log_record_processors:
388 future = self._executor.submit(lp.force_flush, timeout_millis)
389 futures.append(future)
390
391 done_futures, not_done_futures = concurrent.futures.wait(
392 futures, timeout_millis / 1e3
393 )
394
395 if not_done_futures:
396 return False
397
398 for future in done_futures:
399 if not future.result():
400 return False
401
402 return True
403
404
405 # skip natural LogRecord attributes
406 # http://docs.python.org/library/logging.html#logrecord-attributes
407 _RESERVED_ATTRS = frozenset(
408 (
409 "asctime",
410 "args",
411 "created",
412 "exc_info",
413 "exc_text",
414 "filename",
415 "funcName",
416 "message",
417 "levelname",
418 "levelno",
419 "lineno",
420 "module",
421 "msecs",
422 "msg",
423 "name",
424 "pathname",
425 "process",
426 "processName",
427 "relativeCreated",
428 "stack_info",
429 "thread",
430 "threadName",
431 )
432 )
433
434
435 class LoggingHandler(logging.Handler):
436 """A handler class which writes logging records, in OTLP format, to
437 a network destination or file. Supports signals from the `logging` module.
438 https://docs.python.org/3/library/logging.html
439 """
440
441 def __init__(
442 self,
443 level=logging.NOTSET,
444 logger_provider=None,
445 ) -> None:
446 super().__init__(level=level)
447 self._logger_provider = logger_provider or get_logger_provider()
448 self._logger = get_logger(
449 __name__, logger_provider=self._logger_provider
450 )
451
452 @staticmethod
453 def _get_attributes(record: logging.LogRecord) -> Attributes:
454 attributes = {
455 k: v for k, v in vars(record).items() if k not in _RESERVED_ATTRS
456 }
457 if record.exc_info:
458 exc_type = ""
459 message = ""
460 stack_trace = ""
461 exctype, value, tb = record.exc_info
462 if exctype is not None:
463 exc_type = exctype.__name__
464 if value is not None and value.args:
465 message = value.args[0]
466 if tb is not None:
467 # https://github.com/open-telemetry/opentelemetry-specification/blob/9fa7c656b26647b27e485a6af7e38dc716eba98a/specification/trace/semantic_conventions/exceptions.md#stacktrace-representation
468 stack_trace = "".join(
469 traceback.format_exception(*record.exc_info)
470 )
471 attributes[SpanAttributes.EXCEPTION_TYPE] = exc_type
472 attributes[SpanAttributes.EXCEPTION_MESSAGE] = message
473 attributes[SpanAttributes.EXCEPTION_STACKTRACE] = stack_trace
474 return attributes
475
476 def _translate(self, record: logging.LogRecord) -> LogRecord:
477 timestamp = int(record.created * 1e9)
478 span_context = get_current_span().get_span_context()
479 attributes = self._get_attributes(record)
480 # This comment is taken from GanyedeNil's PR #3343, I have redacted it
481 # slightly for clarity:
482 # According to the definition of the Body field type in the
483 # OTel 1.22.0 Logs Data Model article, the Body field should be of
484 # type 'any' and should not use the str method to directly translate
485 # the msg. This is because str only converts non-text types into a
486 # human-readable form, rather than a standard format, which leads to
487 # the need for additional operations when collected through a log
488 # collector.
489 # Considering that he Body field should be of type 'any' and should not
490 # use the str method but record.msg is also a string type, then the
491 # difference is just the self.args formatting?
492 # The primary consideration depends on the ultimate purpose of the log.
493 # Converting the default log directly into a string is acceptable as it
494 # will be required to be presented in a more readable format. However,
495 # this approach might not be as "standard" when hoping to aggregate
496 # logs and perform subsequent data analysis. In the context of log
497 # extraction, it would be more appropriate for the msg to be
498 # converted into JSON format or remain unchanged, as it will eventually
499 # be transformed into JSON. If the final output JSON data contains a
500 # structure that appears similar to JSON but is not, it may confuse
501 # users. This is particularly true for operation and maintenance
502 # personnel who need to deal with log data in various languages.
503 # Where is the JSON converting occur? and what about when the msg
504 # represents something else but JSON, the expected behavior change?
505 # For the ConsoleLogExporter, it performs the to_json operation in
506 # opentelemetry.sdk._logs._internal.export.ConsoleLogExporter.__init__,
507 # so it can handle any type of input without problems. As for the
508 # OTLPLogExporter, it also handles any type of input encoding in
509 # _encode_log located in
510 # opentelemetry.exporter.otlp.proto.common._internal._log_encoder.
511 # Therefore, no extra operation is needed to support this change.
512 # The only thing to consider is the users who have already been using
513 # this SDK. If they upgrade the SDK after this change, they will need
514 # to readjust their logging collection rules to adapt to the latest
515 # output format. Therefore, this change is considered a breaking
516 # change and needs to be upgraded at an appropriate time.
517 severity_number = std_to_otel(record.levelno)
518 if isinstance(record.msg, str) and record.args:
519 body = record.msg % record.args
520 else:
521 body = record.msg
522 return LogRecord(
523 timestamp=timestamp,
524 trace_id=span_context.trace_id,
525 span_id=span_context.span_id,
526 trace_flags=span_context.trace_flags,
527 severity_text=record.levelname,
528 severity_number=severity_number,
529 body=body,
530 resource=self._logger.resource,
531 attributes=attributes,
532 )
533
534 def emit(self, record: logging.LogRecord) -> None:
535 """
536 Emit a record. Skip emitting if logger is NoOp.
537
538 The record is translated to OTel format, and then sent across the pipeline.
539 """
540 if not isinstance(self._logger, NoOpLogger):
541 self._logger.emit(self._translate(record))
542
543 def flush(self) -> None:
544 """
545 Flushes the logging output.
546 """
547 self._logger_provider.force_flush()
548
549
550 class Logger(APILogger):
551 def __init__(
552 self,
553 resource: Resource,
554 multi_log_record_processor: Union[
555 SynchronousMultiLogRecordProcessor,
556 ConcurrentMultiLogRecordProcessor,
557 ],
558 instrumentation_scope: InstrumentationScope,
559 ):
560 super().__init__(
561 instrumentation_scope.name,
562 instrumentation_scope.version,
563 instrumentation_scope.schema_url,
564 )
565 self._resource = resource
566 self._multi_log_record_processor = multi_log_record_processor
567 self._instrumentation_scope = instrumentation_scope
568
569 @property
570 def resource(self):
571 return self._resource
572
573 def emit(self, record: LogRecord):
574 """Emits the :class:`LogData` by associating :class:`LogRecord`
575 and instrumentation info.
576 """
577 log_data = LogData(record, self._instrumentation_scope)
578 self._multi_log_record_processor.emit(log_data)
579
580
581 class LoggerProvider(APILoggerProvider):
582 def __init__(
583 self,
584 resource: Resource = None,
585 shutdown_on_exit: bool = True,
586 multi_log_record_processor: Union[
587 SynchronousMultiLogRecordProcessor,
588 ConcurrentMultiLogRecordProcessor,
589 ] = None,
590 ):
591 if resource is None:
592 self._resource = Resource.create({})
593 else:
594 self._resource = resource
595 self._multi_log_record_processor = (
596 multi_log_record_processor or SynchronousMultiLogRecordProcessor()
597 )
598 self._at_exit_handler = None
599 if shutdown_on_exit:
600 self._at_exit_handler = atexit.register(self.shutdown)
601
602 @property
603 def resource(self):
604 return self._resource
605
606 def get_logger(
607 self,
608 name: str,
609 version: Optional[str] = None,
610 schema_url: Optional[str] = None,
611 ) -> Logger:
612 return Logger(
613 self._resource,
614 self._multi_log_record_processor,
615 InstrumentationScope(
616 name,
617 version,
618 schema_url,
619 ),
620 )
621
622 def add_log_record_processor(
623 self, log_record_processor: LogRecordProcessor
624 ):
625 """Registers a new :class:`LogRecordProcessor` for this `LoggerProvider` instance.
626
627 The log processors are invoked in the same order they are registered.
628 """
629 self._multi_log_record_processor.add_log_record_processor(
630 log_record_processor
631 )
632
633 def shutdown(self):
634 """Shuts down the log processors."""
635 self._multi_log_record_processor.shutdown()
636 if self._at_exit_handler is not None:
637 atexit.unregister(self._at_exit_handler)
638 self._at_exit_handler = None
639
640 def force_flush(self, timeout_millis: int = 30000) -> bool:
641 """Force flush the log processors.
642
643 Args:
644 timeout_millis: The maximum amount of time to wait for logs to be
645 exported.
646
647 Returns:
648 True if all the log processors flushes the logs within timeout,
649 False otherwise.
650 """
651 return self._multi_log_record_processor.force_flush(timeout_millis)
652
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py
@@ -428,6 +428,7 @@
"stack_info",
"thread",
"threadName",
+ "taskName",
)
)
| {"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py\n@@ -428,6 +428,7 @@\n \"stack_info\",\n \"thread\",\n \"threadName\",\n+ \"taskName\",\n )\n )\n", "issue": "Test failure in opentelemetry-sdk on Python 3.12\n**Describe your environment**\r\n\r\nRunning in a fresh checkout of `main`, https://github.com/open-telemetry/opentelemetry-python/commit/3f459d3a19fa6c4bbdeb9012c4a34f714d8cca1a, on Fedora Linux 38, x86_64, with\r\n\r\n- `python3.11 -VV` = `Python 3.11.4 (main, Jun 7 2023, 00:00:00) [GCC 13.1.1 20230511 (Red Hat 13.1.1-2)]`\r\n- `python3.12 -VV` = `Python 3.12.0b3 (main, Jun 21 2023, 00:00:00) [GCC 13.1.1 20230614 (Red Hat 13.1.1-4)]`\r\n\r\nThis should be platform-independent.\r\n\r\n**Steps to reproduce**\r\n\r\nThe version pins in `dev-requirements.txt` cause a lot of problems when trying to test with Python 3.12. We will bypass them all and test without `tox`.\r\n\r\n```\r\ngh repo clone open-telemetry/opentelemetry-python\r\ncd opentelemetry-python\r\npython3.12 -m venv _e\r\n. _e/bin/activate\r\ncd opentelemetry-semantic-conventions\r\npip install -e .\r\ncd ../opentelemetry-api\r\npip install -e .\r\ncd ../opentelemetry-sdk\r\npip install -e .\r\ncd ../tests/opentelemetry-test-utils/\r\npip install -e .\r\ncd ../../opentelemetry-sdk\r\npip install pytest pytest-benchmark flaky\r\npython -m pytest\r\n```\r\n\r\n**What is the expected behavior?**\r\n\r\nIf you repeat the above with `python3.11` instead of `python3.12`, or run `tox -e py311-opentelemetry-sdk`:\r\n\r\n(lots of output, `DeprecationWarnings`, so on)\r\n\r\n```\r\n======================= 377 passed, 9 warnings in 16.09s ========================\r\n```\r\n\r\n**What is the actual behavior?**\r\n\r\n```\r\n=================================== FAILURES ====================================\r\n______________ TestLoggingHandler.test_log_record_user_attributes _______________\r\n\r\nself = <tests.logs.test_handler.TestLoggingHandler testMethod=test_log_record_user_attributes>\r\n\r\n def test_log_record_user_attributes(self):\r\n \"\"\"Attributes can be injected into logs by adding them to the LogRecord\"\"\"\r\n emitter_provider_mock = Mock(spec=LoggerProvider)\r\n emitter_mock = APIGetLogger(\r\n __name__, logger_provider=emitter_provider_mock\r\n )\r\n logger = get_logger(logger_provider=emitter_provider_mock)\r\n # Assert emit gets called for warning message\r\n logger.warning(\"Warning message\", extra={\"http.status_code\": 200})\r\n args, _ = emitter_mock.emit.call_args_list[0]\r\n log_record = args[0]\r\n\r\n self.assertIsNotNone(log_record)\r\n> self.assertEqual(log_record.attributes, {\"http.status_code\": 200})\r\nE AssertionError: {'taskName': None, 'http.status_code': 200} != {'http.status_code': 200}\r\nE - {'http.status_code': 200, 'taskName': None}\r\nE + {'http.status_code': 200}\r\n\r\ntests/logs/test_handler.py:93: AssertionError\r\n------------------------------- Captured log call -------------------------------\r\nWARNING tests.logs.test_handler:test_handler.py:88 Warning message\r\n```\r\n\r\n```\r\n================== 1 failed, 376 passed, 17 warnings in 16.26s ==================\r\n```\r\n\r\n**Additional context**\r\n\r\nWe first encountered this in the Python 3.12 mass rebuild in Fedora Linux in preparation for the release of Fedora 
39 this fall. Downstream issue: https://bugzilla.redhat.com/show_bug.cgi?id=2220378\r\n\r\nI plan to skip this test in the Fedora Linux package for now; I don\u2019t expect to spend more time looking for the root cause.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport abc\nimport atexit\nimport concurrent.futures\nimport json\nimport logging\nimport threading\nimport traceback\nfrom os import environ\nfrom time import time_ns\nfrom typing import Any, Callable, Optional, Tuple, Union # noqa\n\nfrom opentelemetry._logs import Logger as APILogger\nfrom opentelemetry._logs import LoggerProvider as APILoggerProvider\nfrom opentelemetry._logs import LogRecord as APILogRecord\nfrom opentelemetry._logs import (\n NoOpLogger,\n SeverityNumber,\n get_logger,\n get_logger_provider,\n std_to_otel,\n)\nfrom opentelemetry.attributes import BoundedAttributes\nfrom opentelemetry.sdk.environment_variables import (\n OTEL_ATTRIBUTE_COUNT_LIMIT,\n OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,\n)\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.util import ns_to_iso_str\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationScope\nfrom opentelemetry.semconv.trace import SpanAttributes\nfrom opentelemetry.trace import (\n format_span_id,\n format_trace_id,\n get_current_span,\n)\nfrom opentelemetry.trace.span import TraceFlags\nfrom opentelemetry.util.types import Attributes\n\n_logger = logging.getLogger(__name__)\n\n_DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT = 128\n_ENV_VALUE_UNSET = \"\"\n\n\nclass LogLimits:\n \"\"\"This class is based on a SpanLimits class in the Tracing module.\n\n This class represents the limits that should be enforced on recorded data such as events, links, attributes etc.\n\n This class does not enforce any limits itself. It only provides a way to read limits from env,\n default values and from user provided arguments.\n\n All limit arguments must be either a non-negative integer, ``None`` or ``LogLimits.UNSET``.\n\n - All limit arguments are optional.\n - If a limit argument is not set, the class will try to read its value from the corresponding\n environment variable.\n - If the environment variable is not set, the default value, if any, will be used.\n\n Limit precedence:\n\n - If a model specific limit is set, it will be used.\n - Else if the corresponding global limit is set, it will be used.\n - Else if the model specific limit has a default value, the default value will be used.\n - Else if the global limit has a default value, the default value will be used.\n\n Args:\n max_attributes: Maximum number of attributes that can be added to a span, event, and link.\n Environment variable: ``OTEL_ATTRIBUTE_COUNT_LIMIT``\n Default: {_DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT}\n max_attribute_length: Maximum length an attribute value can have. 
Values longer than\n the specified length will be truncated.\n \"\"\"\n\n UNSET = -1\n\n def __init__(\n self,\n max_attributes: Optional[int] = None,\n max_attribute_length: Optional[int] = None,\n ):\n\n # attribute count\n global_max_attributes = self._from_env_if_absent(\n max_attributes, OTEL_ATTRIBUTE_COUNT_LIMIT\n )\n self.max_attributes = (\n global_max_attributes\n if global_max_attributes is not None\n else _DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT\n )\n\n # attribute length\n self.max_attribute_length = self._from_env_if_absent(\n max_attribute_length,\n OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,\n )\n\n def __repr__(self):\n return f\"{type(self).__name__}(max_attributes={self.max_attributes}, max_attribute_length={self.max_attribute_length})\"\n\n @classmethod\n def _from_env_if_absent(\n cls, value: Optional[int], env_var: str, default: Optional[int] = None\n ) -> Optional[int]:\n if value == cls.UNSET:\n return None\n\n err_msg = \"{0} must be a non-negative integer but got {}\"\n\n # if no value is provided for the limit, try to load it from env\n if value is None:\n # return default value if env var is not set\n if env_var not in environ:\n return default\n\n str_value = environ.get(env_var, \"\").strip().lower()\n if str_value == _ENV_VALUE_UNSET:\n return None\n\n try:\n value = int(str_value)\n except ValueError:\n raise ValueError(err_msg.format(env_var, str_value))\n\n if value < 0:\n raise ValueError(err_msg.format(env_var, value))\n return value\n\n\n_UnsetLogLimits = LogLimits(\n max_attributes=LogLimits.UNSET,\n max_attribute_length=LogLimits.UNSET,\n)\n\n\nclass LogRecord(APILogRecord):\n \"\"\"A LogRecord instance represents an event being logged.\n\n LogRecord instances are created and emitted via `Logger`\n every time something is logged. 
They contain all the information\n pertinent to the event being logged.\n \"\"\"\n\n def __init__(\n self,\n timestamp: Optional[int] = None,\n observed_timestamp: Optional[int] = None,\n trace_id: Optional[int] = None,\n span_id: Optional[int] = None,\n trace_flags: Optional[TraceFlags] = None,\n severity_text: Optional[str] = None,\n severity_number: Optional[SeverityNumber] = None,\n body: Optional[Any] = None,\n resource: Optional[Resource] = None,\n attributes: Optional[Attributes] = None,\n limits: Optional[LogLimits] = _UnsetLogLimits,\n ):\n super().__init__(\n **{\n \"timestamp\": timestamp,\n \"observed_timestamp\": observed_timestamp,\n \"trace_id\": trace_id,\n \"span_id\": span_id,\n \"trace_flags\": trace_flags,\n \"severity_text\": severity_text,\n \"severity_number\": severity_number,\n \"body\": body,\n \"attributes\": BoundedAttributes(\n maxlen=limits.max_attributes,\n attributes=attributes if bool(attributes) else None,\n immutable=False,\n max_value_len=limits.max_attribute_length,\n ),\n }\n )\n self.resource = resource\n\n def __eq__(self, other: object) -> bool:\n if not isinstance(other, LogRecord):\n return NotImplemented\n return self.__dict__ == other.__dict__\n\n def to_json(self, indent=4) -> str:\n return json.dumps(\n {\n \"body\": self.body,\n \"severity_number\": repr(self.severity_number),\n \"severity_text\": self.severity_text,\n \"attributes\": dict(self.attributes)\n if bool(self.attributes)\n else None,\n \"dropped_attributes\": self.dropped_attributes,\n \"timestamp\": ns_to_iso_str(self.timestamp),\n \"trace_id\": f\"0x{format_trace_id(self.trace_id)}\"\n if self.trace_id is not None\n else \"\",\n \"span_id\": f\"0x{format_span_id(self.span_id)}\"\n if self.span_id is not None\n else \"\",\n \"trace_flags\": self.trace_flags,\n \"resource\": repr(self.resource.attributes)\n if self.resource\n else \"\",\n },\n indent=indent,\n )\n\n @property\n def dropped_attributes(self) -> int:\n if self.attributes:\n return self.attributes.dropped\n return 0\n\n\nclass LogData:\n \"\"\"Readable LogRecord data plus associated InstrumentationLibrary.\"\"\"\n\n def __init__(\n self,\n log_record: LogRecord,\n instrumentation_scope: InstrumentationScope,\n ):\n self.log_record = log_record\n self.instrumentation_scope = instrumentation_scope\n\n\nclass LogRecordProcessor(abc.ABC):\n \"\"\"Interface to hook the log record emitting action.\n\n Log processors can be registered directly using\n :func:`LoggerProvider.add_log_record_processor` and they are invoked\n in the same order as they were registered.\n \"\"\"\n\n @abc.abstractmethod\n def emit(self, log_data: LogData):\n \"\"\"Emits the `LogData`\"\"\"\n\n @abc.abstractmethod\n def shutdown(self):\n \"\"\"Called when a :class:`opentelemetry.sdk._logs.Logger` is shutdown\"\"\"\n\n @abc.abstractmethod\n def force_flush(self, timeout_millis: int = 30000):\n \"\"\"Export all the received logs to the configured Exporter that have not yet\n been exported.\n\n Args:\n timeout_millis: The maximum amount of time to wait for logs to be\n exported.\n\n Returns:\n False if the timeout is exceeded, True otherwise.\n \"\"\"\n\n\n# Temporary fix until https://github.com/PyCQA/pylint/issues/4098 is resolved\n# pylint:disable=no-member\nclass SynchronousMultiLogRecordProcessor(LogRecordProcessor):\n \"\"\"Implementation of class:`LogRecordProcessor` that forwards all received\n events to a list of log processors sequentially.\n\n The underlying log processors are called in sequential order as they were\n added.\n \"\"\"\n\n def 
__init__(self):\n # use a tuple to avoid race conditions when adding a new log and\n # iterating through it on \"emit\".\n self._log_record_processors = () # type: Tuple[LogRecordProcessor, ...]\n self._lock = threading.Lock()\n\n def add_log_record_processor(\n self, log_record_processor: LogRecordProcessor\n ) -> None:\n \"\"\"Adds a Logprocessor to the list of log processors handled by this instance\"\"\"\n with self._lock:\n self._log_record_processors += (log_record_processor,)\n\n def emit(self, log_data: LogData) -> None:\n for lp in self._log_record_processors:\n lp.emit(log_data)\n\n def shutdown(self) -> None:\n \"\"\"Shutdown the log processors one by one\"\"\"\n for lp in self._log_record_processors:\n lp.shutdown()\n\n def force_flush(self, timeout_millis: int = 30000) -> bool:\n \"\"\"Force flush the log processors one by one\n\n Args:\n timeout_millis: The maximum amount of time to wait for logs to be\n exported. If the first n log processors exceeded the timeout\n then remaining log processors will not be flushed.\n\n Returns:\n True if all the log processors flushes the logs within timeout,\n False otherwise.\n \"\"\"\n deadline_ns = time_ns() + timeout_millis * 1000000\n for lp in self._log_record_processors:\n current_ts = time_ns()\n if current_ts >= deadline_ns:\n return False\n\n if not lp.force_flush((deadline_ns - current_ts) // 1000000):\n return False\n\n return True\n\n\nclass ConcurrentMultiLogRecordProcessor(LogRecordProcessor):\n \"\"\"Implementation of :class:`LogRecordProcessor` that forwards all received\n events to a list of log processors in parallel.\n\n Calls to the underlying log processors are forwarded in parallel by\n submitting them to a thread pool executor and waiting until each log\n processor finished its work.\n\n Args:\n max_workers: The number of threads managed by the thread pool executor\n and thus defining how many log processors can work in parallel.\n \"\"\"\n\n def __init__(self, max_workers: int = 2):\n # use a tuple to avoid race conditions when adding a new log and\n # iterating through it on \"emit\".\n self._log_record_processors = () # type: Tuple[LogRecordProcessor, ...]\n self._lock = threading.Lock()\n self._executor = concurrent.futures.ThreadPoolExecutor(\n max_workers=max_workers\n )\n\n def add_log_record_processor(\n self, log_record_processor: LogRecordProcessor\n ):\n with self._lock:\n self._log_record_processors += (log_record_processor,)\n\n def _submit_and_wait(\n self,\n func: Callable[[LogRecordProcessor], Callable[..., None]],\n *args: Any,\n **kwargs: Any,\n ):\n futures = []\n for lp in self._log_record_processors:\n future = self._executor.submit(func(lp), *args, **kwargs)\n futures.append(future)\n for future in futures:\n future.result()\n\n def emit(self, log_data: LogData):\n self._submit_and_wait(lambda lp: lp.emit, log_data)\n\n def shutdown(self):\n self._submit_and_wait(lambda lp: lp.shutdown)\n\n def force_flush(self, timeout_millis: int = 30000) -> bool:\n \"\"\"Force flush the log processors in parallel.\n\n Args:\n timeout_millis: The maximum amount of time to wait for logs to be\n exported.\n\n Returns:\n True if all the log processors flushes the logs within timeout,\n False otherwise.\n \"\"\"\n futures = []\n for lp in self._log_record_processors:\n future = self._executor.submit(lp.force_flush, timeout_millis)\n futures.append(future)\n\n done_futures, not_done_futures = concurrent.futures.wait(\n futures, timeout_millis / 1e3\n )\n\n if not_done_futures:\n return False\n\n for future in 
done_futures:\n if not future.result():\n return False\n\n return True\n\n\n# skip natural LogRecord attributes\n# http://docs.python.org/library/logging.html#logrecord-attributes\n_RESERVED_ATTRS = frozenset(\n (\n \"asctime\",\n \"args\",\n \"created\",\n \"exc_info\",\n \"exc_text\",\n \"filename\",\n \"funcName\",\n \"message\",\n \"levelname\",\n \"levelno\",\n \"lineno\",\n \"module\",\n \"msecs\",\n \"msg\",\n \"name\",\n \"pathname\",\n \"process\",\n \"processName\",\n \"relativeCreated\",\n \"stack_info\",\n \"thread\",\n \"threadName\",\n )\n)\n\n\nclass LoggingHandler(logging.Handler):\n \"\"\"A handler class which writes logging records, in OTLP format, to\n a network destination or file. Supports signals from the `logging` module.\n https://docs.python.org/3/library/logging.html\n \"\"\"\n\n def __init__(\n self,\n level=logging.NOTSET,\n logger_provider=None,\n ) -> None:\n super().__init__(level=level)\n self._logger_provider = logger_provider or get_logger_provider()\n self._logger = get_logger(\n __name__, logger_provider=self._logger_provider\n )\n\n @staticmethod\n def _get_attributes(record: logging.LogRecord) -> Attributes:\n attributes = {\n k: v for k, v in vars(record).items() if k not in _RESERVED_ATTRS\n }\n if record.exc_info:\n exc_type = \"\"\n message = \"\"\n stack_trace = \"\"\n exctype, value, tb = record.exc_info\n if exctype is not None:\n exc_type = exctype.__name__\n if value is not None and value.args:\n message = value.args[0]\n if tb is not None:\n # https://github.com/open-telemetry/opentelemetry-specification/blob/9fa7c656b26647b27e485a6af7e38dc716eba98a/specification/trace/semantic_conventions/exceptions.md#stacktrace-representation\n stack_trace = \"\".join(\n traceback.format_exception(*record.exc_info)\n )\n attributes[SpanAttributes.EXCEPTION_TYPE] = exc_type\n attributes[SpanAttributes.EXCEPTION_MESSAGE] = message\n attributes[SpanAttributes.EXCEPTION_STACKTRACE] = stack_trace\n return attributes\n\n def _translate(self, record: logging.LogRecord) -> LogRecord:\n timestamp = int(record.created * 1e9)\n span_context = get_current_span().get_span_context()\n attributes = self._get_attributes(record)\n # This comment is taken from GanyedeNil's PR #3343, I have redacted it\n # slightly for clarity:\n # According to the definition of the Body field type in the\n # OTel 1.22.0 Logs Data Model article, the Body field should be of\n # type 'any' and should not use the str method to directly translate\n # the msg. This is because str only converts non-text types into a\n # human-readable form, rather than a standard format, which leads to\n # the need for additional operations when collected through a log\n # collector.\n # Considering that he Body field should be of type 'any' and should not\n # use the str method but record.msg is also a string type, then the\n # difference is just the self.args formatting?\n # The primary consideration depends on the ultimate purpose of the log.\n # Converting the default log directly into a string is acceptable as it\n # will be required to be presented in a more readable format. However,\n # this approach might not be as \"standard\" when hoping to aggregate\n # logs and perform subsequent data analysis. In the context of log\n # extraction, it would be more appropriate for the msg to be\n # converted into JSON format or remain unchanged, as it will eventually\n # be transformed into JSON. If the final output JSON data contains a\n # structure that appears similar to JSON but is not, it may confuse\n # users. 
This is particularly true for operation and maintenance\n # personnel who need to deal with log data in various languages.\n # Where is the JSON converting occur? and what about when the msg\n # represents something else but JSON, the expected behavior change?\n # For the ConsoleLogExporter, it performs the to_json operation in\n # opentelemetry.sdk._logs._internal.export.ConsoleLogExporter.__init__,\n # so it can handle any type of input without problems. As for the\n # OTLPLogExporter, it also handles any type of input encoding in\n # _encode_log located in\n # opentelemetry.exporter.otlp.proto.common._internal._log_encoder.\n # Therefore, no extra operation is needed to support this change.\n # The only thing to consider is the users who have already been using\n # this SDK. If they upgrade the SDK after this change, they will need\n # to readjust their logging collection rules to adapt to the latest\n # output format. Therefore, this change is considered a breaking\n # change and needs to be upgraded at an appropriate time.\n severity_number = std_to_otel(record.levelno)\n if isinstance(record.msg, str) and record.args:\n body = record.msg % record.args\n else:\n body = record.msg\n return LogRecord(\n timestamp=timestamp,\n trace_id=span_context.trace_id,\n span_id=span_context.span_id,\n trace_flags=span_context.trace_flags,\n severity_text=record.levelname,\n severity_number=severity_number,\n body=body,\n resource=self._logger.resource,\n attributes=attributes,\n )\n\n def emit(self, record: logging.LogRecord) -> None:\n \"\"\"\n Emit a record. Skip emitting if logger is NoOp.\n\n The record is translated to OTel format, and then sent across the pipeline.\n \"\"\"\n if not isinstance(self._logger, NoOpLogger):\n self._logger.emit(self._translate(record))\n\n def flush(self) -> None:\n \"\"\"\n Flushes the logging output.\n \"\"\"\n self._logger_provider.force_flush()\n\n\nclass Logger(APILogger):\n def __init__(\n self,\n resource: Resource,\n multi_log_record_processor: Union[\n SynchronousMultiLogRecordProcessor,\n ConcurrentMultiLogRecordProcessor,\n ],\n instrumentation_scope: InstrumentationScope,\n ):\n super().__init__(\n instrumentation_scope.name,\n instrumentation_scope.version,\n instrumentation_scope.schema_url,\n )\n self._resource = resource\n self._multi_log_record_processor = multi_log_record_processor\n self._instrumentation_scope = instrumentation_scope\n\n @property\n def resource(self):\n return self._resource\n\n def emit(self, record: LogRecord):\n \"\"\"Emits the :class:`LogData` by associating :class:`LogRecord`\n and instrumentation info.\n \"\"\"\n log_data = LogData(record, self._instrumentation_scope)\n self._multi_log_record_processor.emit(log_data)\n\n\nclass LoggerProvider(APILoggerProvider):\n def __init__(\n self,\n resource: Resource = None,\n shutdown_on_exit: bool = True,\n multi_log_record_processor: Union[\n SynchronousMultiLogRecordProcessor,\n ConcurrentMultiLogRecordProcessor,\n ] = None,\n ):\n if resource is None:\n self._resource = Resource.create({})\n else:\n self._resource = resource\n self._multi_log_record_processor = (\n multi_log_record_processor or SynchronousMultiLogRecordProcessor()\n )\n self._at_exit_handler = None\n if shutdown_on_exit:\n self._at_exit_handler = atexit.register(self.shutdown)\n\n @property\n def resource(self):\n return self._resource\n\n def get_logger(\n self,\n name: str,\n version: Optional[str] = None,\n schema_url: Optional[str] = None,\n ) -> Logger:\n return Logger(\n self._resource,\n 
self._multi_log_record_processor,\n InstrumentationScope(\n name,\n version,\n schema_url,\n ),\n )\n\n def add_log_record_processor(\n self, log_record_processor: LogRecordProcessor\n ):\n \"\"\"Registers a new :class:`LogRecordProcessor` for this `LoggerProvider` instance.\n\n The log processors are invoked in the same order they are registered.\n \"\"\"\n self._multi_log_record_processor.add_log_record_processor(\n log_record_processor\n )\n\n def shutdown(self):\n \"\"\"Shuts down the log processors.\"\"\"\n self._multi_log_record_processor.shutdown()\n if self._at_exit_handler is not None:\n atexit.unregister(self._at_exit_handler)\n self._at_exit_handler = None\n\n def force_flush(self, timeout_millis: int = 30000) -> bool:\n \"\"\"Force flush the log processors.\n\n Args:\n timeout_millis: The maximum amount of time to wait for logs to be\n exported.\n\n Returns:\n True if all the log processors flushes the logs within timeout,\n False otherwise.\n \"\"\"\n return self._multi_log_record_processor.force_flush(timeout_millis)\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport abc\nimport atexit\nimport concurrent.futures\nimport json\nimport logging\nimport threading\nimport traceback\nfrom os import environ\nfrom time import time_ns\nfrom typing import Any, Callable, Optional, Tuple, Union # noqa\n\nfrom opentelemetry._logs import Logger as APILogger\nfrom opentelemetry._logs import LoggerProvider as APILoggerProvider\nfrom opentelemetry._logs import LogRecord as APILogRecord\nfrom opentelemetry._logs import (\n NoOpLogger,\n SeverityNumber,\n get_logger,\n get_logger_provider,\n std_to_otel,\n)\nfrom opentelemetry.attributes import BoundedAttributes\nfrom opentelemetry.sdk.environment_variables import (\n OTEL_ATTRIBUTE_COUNT_LIMIT,\n OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,\n)\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.util import ns_to_iso_str\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationScope\nfrom opentelemetry.semconv.trace import SpanAttributes\nfrom opentelemetry.trace import (\n format_span_id,\n format_trace_id,\n get_current_span,\n)\nfrom opentelemetry.trace.span import TraceFlags\nfrom opentelemetry.util.types import Attributes\n\n_logger = logging.getLogger(__name__)\n\n_DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT = 128\n_ENV_VALUE_UNSET = \"\"\n\n\nclass LogLimits:\n \"\"\"This class is based on a SpanLimits class in the Tracing module.\n\n This class represents the limits that should be enforced on recorded data such as events, links, attributes etc.\n\n This class does not enforce any limits itself. 
It only provides a way to read limits from env,\n default values and from user provided arguments.\n\n All limit arguments must be either a non-negative integer, ``None`` or ``LogLimits.UNSET``.\n\n - All limit arguments are optional.\n - If a limit argument is not set, the class will try to read its value from the corresponding\n environment variable.\n - If the environment variable is not set, the default value, if any, will be used.\n\n Limit precedence:\n\n - If a model specific limit is set, it will be used.\n - Else if the corresponding global limit is set, it will be used.\n - Else if the model specific limit has a default value, the default value will be used.\n - Else if the global limit has a default value, the default value will be used.\n\n Args:\n max_attributes: Maximum number of attributes that can be added to a span, event, and link.\n Environment variable: ``OTEL_ATTRIBUTE_COUNT_LIMIT``\n Default: {_DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT}\n max_attribute_length: Maximum length an attribute value can have. Values longer than\n the specified length will be truncated.\n \"\"\"\n\n UNSET = -1\n\n def __init__(\n self,\n max_attributes: Optional[int] = None,\n max_attribute_length: Optional[int] = None,\n ):\n\n # attribute count\n global_max_attributes = self._from_env_if_absent(\n max_attributes, OTEL_ATTRIBUTE_COUNT_LIMIT\n )\n self.max_attributes = (\n global_max_attributes\n if global_max_attributes is not None\n else _DEFAULT_OTEL_ATTRIBUTE_COUNT_LIMIT\n )\n\n # attribute length\n self.max_attribute_length = self._from_env_if_absent(\n max_attribute_length,\n OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,\n )\n\n def __repr__(self):\n return f\"{type(self).__name__}(max_attributes={self.max_attributes}, max_attribute_length={self.max_attribute_length})\"\n\n @classmethod\n def _from_env_if_absent(\n cls, value: Optional[int], env_var: str, default: Optional[int] = None\n ) -> Optional[int]:\n if value == cls.UNSET:\n return None\n\n err_msg = \"{0} must be a non-negative integer but got {}\"\n\n # if no value is provided for the limit, try to load it from env\n if value is None:\n # return default value if env var is not set\n if env_var not in environ:\n return default\n\n str_value = environ.get(env_var, \"\").strip().lower()\n if str_value == _ENV_VALUE_UNSET:\n return None\n\n try:\n value = int(str_value)\n except ValueError:\n raise ValueError(err_msg.format(env_var, str_value))\n\n if value < 0:\n raise ValueError(err_msg.format(env_var, value))\n return value\n\n\n_UnsetLogLimits = LogLimits(\n max_attributes=LogLimits.UNSET,\n max_attribute_length=LogLimits.UNSET,\n)\n\n\nclass LogRecord(APILogRecord):\n \"\"\"A LogRecord instance represents an event being logged.\n\n LogRecord instances are created and emitted via `Logger`\n every time something is logged. 
They contain all the information\n pertinent to the event being logged.\n \"\"\"\n\n def __init__(\n self,\n timestamp: Optional[int] = None,\n observed_timestamp: Optional[int] = None,\n trace_id: Optional[int] = None,\n span_id: Optional[int] = None,\n trace_flags: Optional[TraceFlags] = None,\n severity_text: Optional[str] = None,\n severity_number: Optional[SeverityNumber] = None,\n body: Optional[Any] = None,\n resource: Optional[Resource] = None,\n attributes: Optional[Attributes] = None,\n limits: Optional[LogLimits] = _UnsetLogLimits,\n ):\n super().__init__(\n **{\n \"timestamp\": timestamp,\n \"observed_timestamp\": observed_timestamp,\n \"trace_id\": trace_id,\n \"span_id\": span_id,\n \"trace_flags\": trace_flags,\n \"severity_text\": severity_text,\n \"severity_number\": severity_number,\n \"body\": body,\n \"attributes\": BoundedAttributes(\n maxlen=limits.max_attributes,\n attributes=attributes if bool(attributes) else None,\n immutable=False,\n max_value_len=limits.max_attribute_length,\n ),\n }\n )\n self.resource = resource\n\n def __eq__(self, other: object) -> bool:\n if not isinstance(other, LogRecord):\n return NotImplemented\n return self.__dict__ == other.__dict__\n\n def to_json(self, indent=4) -> str:\n return json.dumps(\n {\n \"body\": self.body,\n \"severity_number\": repr(self.severity_number),\n \"severity_text\": self.severity_text,\n \"attributes\": dict(self.attributes)\n if bool(self.attributes)\n else None,\n \"dropped_attributes\": self.dropped_attributes,\n \"timestamp\": ns_to_iso_str(self.timestamp),\n \"trace_id\": f\"0x{format_trace_id(self.trace_id)}\"\n if self.trace_id is not None\n else \"\",\n \"span_id\": f\"0x{format_span_id(self.span_id)}\"\n if self.span_id is not None\n else \"\",\n \"trace_flags\": self.trace_flags,\n \"resource\": repr(self.resource.attributes)\n if self.resource\n else \"\",\n },\n indent=indent,\n )\n\n @property\n def dropped_attributes(self) -> int:\n if self.attributes:\n return self.attributes.dropped\n return 0\n\n\nclass LogData:\n \"\"\"Readable LogRecord data plus associated InstrumentationLibrary.\"\"\"\n\n def __init__(\n self,\n log_record: LogRecord,\n instrumentation_scope: InstrumentationScope,\n ):\n self.log_record = log_record\n self.instrumentation_scope = instrumentation_scope\n\n\nclass LogRecordProcessor(abc.ABC):\n \"\"\"Interface to hook the log record emitting action.\n\n Log processors can be registered directly using\n :func:`LoggerProvider.add_log_record_processor` and they are invoked\n in the same order as they were registered.\n \"\"\"\n\n @abc.abstractmethod\n def emit(self, log_data: LogData):\n \"\"\"Emits the `LogData`\"\"\"\n\n @abc.abstractmethod\n def shutdown(self):\n \"\"\"Called when a :class:`opentelemetry.sdk._logs.Logger` is shutdown\"\"\"\n\n @abc.abstractmethod\n def force_flush(self, timeout_millis: int = 30000):\n \"\"\"Export all the received logs to the configured Exporter that have not yet\n been exported.\n\n Args:\n timeout_millis: The maximum amount of time to wait for logs to be\n exported.\n\n Returns:\n False if the timeout is exceeded, True otherwise.\n \"\"\"\n\n\n# Temporary fix until https://github.com/PyCQA/pylint/issues/4098 is resolved\n# pylint:disable=no-member\nclass SynchronousMultiLogRecordProcessor(LogRecordProcessor):\n \"\"\"Implementation of class:`LogRecordProcessor` that forwards all received\n events to a list of log processors sequentially.\n\n The underlying log processors are called in sequential order as they were\n added.\n \"\"\"\n\n def 
__init__(self):\n # use a tuple to avoid race conditions when adding a new log and\n # iterating through it on \"emit\".\n self._log_record_processors = () # type: Tuple[LogRecordProcessor, ...]\n self._lock = threading.Lock()\n\n def add_log_record_processor(\n self, log_record_processor: LogRecordProcessor\n ) -> None:\n \"\"\"Adds a Logprocessor to the list of log processors handled by this instance\"\"\"\n with self._lock:\n self._log_record_processors += (log_record_processor,)\n\n def emit(self, log_data: LogData) -> None:\n for lp in self._log_record_processors:\n lp.emit(log_data)\n\n def shutdown(self) -> None:\n \"\"\"Shutdown the log processors one by one\"\"\"\n for lp in self._log_record_processors:\n lp.shutdown()\n\n def force_flush(self, timeout_millis: int = 30000) -> bool:\n \"\"\"Force flush the log processors one by one\n\n Args:\n timeout_millis: The maximum amount of time to wait for logs to be\n exported. If the first n log processors exceeded the timeout\n then remaining log processors will not be flushed.\n\n Returns:\n True if all the log processors flushes the logs within timeout,\n False otherwise.\n \"\"\"\n deadline_ns = time_ns() + timeout_millis * 1000000\n for lp in self._log_record_processors:\n current_ts = time_ns()\n if current_ts >= deadline_ns:\n return False\n\n if not lp.force_flush((deadline_ns - current_ts) // 1000000):\n return False\n\n return True\n\n\nclass ConcurrentMultiLogRecordProcessor(LogRecordProcessor):\n \"\"\"Implementation of :class:`LogRecordProcessor` that forwards all received\n events to a list of log processors in parallel.\n\n Calls to the underlying log processors are forwarded in parallel by\n submitting them to a thread pool executor and waiting until each log\n processor finished its work.\n\n Args:\n max_workers: The number of threads managed by the thread pool executor\n and thus defining how many log processors can work in parallel.\n \"\"\"\n\n def __init__(self, max_workers: int = 2):\n # use a tuple to avoid race conditions when adding a new log and\n # iterating through it on \"emit\".\n self._log_record_processors = () # type: Tuple[LogRecordProcessor, ...]\n self._lock = threading.Lock()\n self._executor = concurrent.futures.ThreadPoolExecutor(\n max_workers=max_workers\n )\n\n def add_log_record_processor(\n self, log_record_processor: LogRecordProcessor\n ):\n with self._lock:\n self._log_record_processors += (log_record_processor,)\n\n def _submit_and_wait(\n self,\n func: Callable[[LogRecordProcessor], Callable[..., None]],\n *args: Any,\n **kwargs: Any,\n ):\n futures = []\n for lp in self._log_record_processors:\n future = self._executor.submit(func(lp), *args, **kwargs)\n futures.append(future)\n for future in futures:\n future.result()\n\n def emit(self, log_data: LogData):\n self._submit_and_wait(lambda lp: lp.emit, log_data)\n\n def shutdown(self):\n self._submit_and_wait(lambda lp: lp.shutdown)\n\n def force_flush(self, timeout_millis: int = 30000) -> bool:\n \"\"\"Force flush the log processors in parallel.\n\n Args:\n timeout_millis: The maximum amount of time to wait for logs to be\n exported.\n\n Returns:\n True if all the log processors flushes the logs within timeout,\n False otherwise.\n \"\"\"\n futures = []\n for lp in self._log_record_processors:\n future = self._executor.submit(lp.force_flush, timeout_millis)\n futures.append(future)\n\n done_futures, not_done_futures = concurrent.futures.wait(\n futures, timeout_millis / 1e3\n )\n\n if not_done_futures:\n return False\n\n for future in 
done_futures:\n if not future.result():\n return False\n\n return True\n\n\n# skip natural LogRecord attributes\n# http://docs.python.org/library/logging.html#logrecord-attributes\n_RESERVED_ATTRS = frozenset(\n (\n \"asctime\",\n \"args\",\n \"created\",\n \"exc_info\",\n \"exc_text\",\n \"filename\",\n \"funcName\",\n \"message\",\n \"levelname\",\n \"levelno\",\n \"lineno\",\n \"module\",\n \"msecs\",\n \"msg\",\n \"name\",\n \"pathname\",\n \"process\",\n \"processName\",\n \"relativeCreated\",\n \"stack_info\",\n \"thread\",\n \"threadName\",\n \"taskName\",\n )\n)\n\n\nclass LoggingHandler(logging.Handler):\n \"\"\"A handler class which writes logging records, in OTLP format, to\n a network destination or file. Supports signals from the `logging` module.\n https://docs.python.org/3/library/logging.html\n \"\"\"\n\n def __init__(\n self,\n level=logging.NOTSET,\n logger_provider=None,\n ) -> None:\n super().__init__(level=level)\n self._logger_provider = logger_provider or get_logger_provider()\n self._logger = get_logger(\n __name__, logger_provider=self._logger_provider\n )\n\n @staticmethod\n def _get_attributes(record: logging.LogRecord) -> Attributes:\n attributes = {\n k: v for k, v in vars(record).items() if k not in _RESERVED_ATTRS\n }\n if record.exc_info:\n exc_type = \"\"\n message = \"\"\n stack_trace = \"\"\n exctype, value, tb = record.exc_info\n if exctype is not None:\n exc_type = exctype.__name__\n if value is not None and value.args:\n message = value.args[0]\n if tb is not None:\n # https://github.com/open-telemetry/opentelemetry-specification/blob/9fa7c656b26647b27e485a6af7e38dc716eba98a/specification/trace/semantic_conventions/exceptions.md#stacktrace-representation\n stack_trace = \"\".join(\n traceback.format_exception(*record.exc_info)\n )\n attributes[SpanAttributes.EXCEPTION_TYPE] = exc_type\n attributes[SpanAttributes.EXCEPTION_MESSAGE] = message\n attributes[SpanAttributes.EXCEPTION_STACKTRACE] = stack_trace\n return attributes\n\n def _translate(self, record: logging.LogRecord) -> LogRecord:\n timestamp = int(record.created * 1e9)\n span_context = get_current_span().get_span_context()\n attributes = self._get_attributes(record)\n # This comment is taken from GanyedeNil's PR #3343, I have redacted it\n # slightly for clarity:\n # According to the definition of the Body field type in the\n # OTel 1.22.0 Logs Data Model article, the Body field should be of\n # type 'any' and should not use the str method to directly translate\n # the msg. This is because str only converts non-text types into a\n # human-readable form, rather than a standard format, which leads to\n # the need for additional operations when collected through a log\n # collector.\n # Considering that he Body field should be of type 'any' and should not\n # use the str method but record.msg is also a string type, then the\n # difference is just the self.args formatting?\n # The primary consideration depends on the ultimate purpose of the log.\n # Converting the default log directly into a string is acceptable as it\n # will be required to be presented in a more readable format. However,\n # this approach might not be as \"standard\" when hoping to aggregate\n # logs and perform subsequent data analysis. In the context of log\n # extraction, it would be more appropriate for the msg to be\n # converted into JSON format or remain unchanged, as it will eventually\n # be transformed into JSON. 
If the final output JSON data contains a\n # structure that appears similar to JSON but is not, it may confuse\n # users. This is particularly true for operation and maintenance\n # personnel who need to deal with log data in various languages.\n # Where is the JSON converting occur? and what about when the msg\n # represents something else but JSON, the expected behavior change?\n # For the ConsoleLogExporter, it performs the to_json operation in\n # opentelemetry.sdk._logs._internal.export.ConsoleLogExporter.__init__,\n # so it can handle any type of input without problems. As for the\n # OTLPLogExporter, it also handles any type of input encoding in\n # _encode_log located in\n # opentelemetry.exporter.otlp.proto.common._internal._log_encoder.\n # Therefore, no extra operation is needed to support this change.\n # The only thing to consider is the users who have already been using\n # this SDK. If they upgrade the SDK after this change, they will need\n # to readjust their logging collection rules to adapt to the latest\n # output format. Therefore, this change is considered a breaking\n # change and needs to be upgraded at an appropriate time.\n severity_number = std_to_otel(record.levelno)\n if isinstance(record.msg, str) and record.args:\n body = record.msg % record.args\n else:\n body = record.msg\n return LogRecord(\n timestamp=timestamp,\n trace_id=span_context.trace_id,\n span_id=span_context.span_id,\n trace_flags=span_context.trace_flags,\n severity_text=record.levelname,\n severity_number=severity_number,\n body=body,\n resource=self._logger.resource,\n attributes=attributes,\n )\n\n def emit(self, record: logging.LogRecord) -> None:\n \"\"\"\n Emit a record. Skip emitting if logger is NoOp.\n\n The record is translated to OTel format, and then sent across the pipeline.\n \"\"\"\n if not isinstance(self._logger, NoOpLogger):\n self._logger.emit(self._translate(record))\n\n def flush(self) -> None:\n \"\"\"\n Flushes the logging output.\n \"\"\"\n self._logger_provider.force_flush()\n\n\nclass Logger(APILogger):\n def __init__(\n self,\n resource: Resource,\n multi_log_record_processor: Union[\n SynchronousMultiLogRecordProcessor,\n ConcurrentMultiLogRecordProcessor,\n ],\n instrumentation_scope: InstrumentationScope,\n ):\n super().__init__(\n instrumentation_scope.name,\n instrumentation_scope.version,\n instrumentation_scope.schema_url,\n )\n self._resource = resource\n self._multi_log_record_processor = multi_log_record_processor\n self._instrumentation_scope = instrumentation_scope\n\n @property\n def resource(self):\n return self._resource\n\n def emit(self, record: LogRecord):\n \"\"\"Emits the :class:`LogData` by associating :class:`LogRecord`\n and instrumentation info.\n \"\"\"\n log_data = LogData(record, self._instrumentation_scope)\n self._multi_log_record_processor.emit(log_data)\n\n\nclass LoggerProvider(APILoggerProvider):\n def __init__(\n self,\n resource: Resource = None,\n shutdown_on_exit: bool = True,\n multi_log_record_processor: Union[\n SynchronousMultiLogRecordProcessor,\n ConcurrentMultiLogRecordProcessor,\n ] = None,\n ):\n if resource is None:\n self._resource = Resource.create({})\n else:\n self._resource = resource\n self._multi_log_record_processor = (\n multi_log_record_processor or SynchronousMultiLogRecordProcessor()\n )\n self._at_exit_handler = None\n if shutdown_on_exit:\n self._at_exit_handler = atexit.register(self.shutdown)\n\n @property\n def resource(self):\n return self._resource\n\n def get_logger(\n self,\n name: str,\n version: 
Optional[str] = None,\n schema_url: Optional[str] = None,\n ) -> Logger:\n return Logger(\n self._resource,\n self._multi_log_record_processor,\n InstrumentationScope(\n name,\n version,\n schema_url,\n ),\n )\n\n def add_log_record_processor(\n self, log_record_processor: LogRecordProcessor\n ):\n \"\"\"Registers a new :class:`LogRecordProcessor` for this `LoggerProvider` instance.\n\n The log processors are invoked in the same order they are registered.\n \"\"\"\n self._multi_log_record_processor.add_log_record_processor(\n log_record_processor\n )\n\n def shutdown(self):\n \"\"\"Shuts down the log processors.\"\"\"\n self._multi_log_record_processor.shutdown()\n if self._at_exit_handler is not None:\n atexit.unregister(self._at_exit_handler)\n self._at_exit_handler = None\n\n def force_flush(self, timeout_millis: int = 30000) -> bool:\n \"\"\"Force flush the log processors.\n\n Args:\n timeout_millis: The maximum amount of time to wait for logs to be\n exported.\n\n Returns:\n True if all the log processors flushes the logs within timeout,\n False otherwise.\n \"\"\"\n return self._multi_log_record_processor.force_flush(timeout_millis)\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/_logs/_internal/__init__.py"}]} |
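The failure in this record comes down to one new stdlib field: Python 3.12's `logging.LogRecord` gains a `taskName` attribute, so any reserved-attribute list used to pull user-supplied `extra` values out of a record must include it. A minimal sketch of that filtering step, assuming illustrative names rather than the SDK's own:

```
import logging

# Build the reserved set from a blank record so new stdlib fields (such as
# taskName on Python 3.12) are covered, plus the names added only at format time.
_RESERVED = frozenset(vars(logging.LogRecord("", 0, "", 0, "", (), None))) | {
    "message",
    "asctime",
    "taskName",
}

def extra_attributes(record: logging.LogRecord) -> dict:
    # Whatever remains in record.__dict__ was injected via logging's extra=...
    return {k: v for k, v in vars(record).items() if k not in _RESERVED}

rec = logging.LogRecord("demo", logging.WARNING, __file__, 1, "Warning message", (), None)
rec.__dict__["http.status_code"] = 200  # what extra={"http.status_code": 200} does
print(extra_attributes(rec))  # {'http.status_code': 200} on 3.11 and 3.12 alike
```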
gh_patches_debug_55 | rasdani/github-patches | git_diff | hylang__hy-411 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
problem with comment parsing
I was translating some code to Hy from a textbook on Python programming (http://inventwithpython.com/pygame/index.html) and ran into a problem with this direct translation.
```
(import pygame sys)
(import [pygame.locals [*]])
(pygame.init)
(setv *displaysurf* (pygame.display.set_mode (, 400 300)))
(pygame.display.set_caption "Hello World!")
(while True ; main game loop
(do (foreach [event (pygame.event.get)]
(if (= event.type QUIT)
(do (pygame.quit)
(sys.exit))))
(pygame.display.update)))
```
I get a parse error if the end-of-line comment ("main game loop") appears where it does. It works if I remove it.
The following interaction with the prompt also surprised me.
```
=> ; some comment
hy.lex.exceptions.LexException: Could not identify the next token at line -1, column -1
```
Fixing this isn't critical, but it should probably be fixed. I do occasionally type something, realize I need to do something else first, comment it, press Enter, type whatever setup I needed, press Enter, then press Up twice, uncomment the line, and Enter to run it.
problem with comment parsing
I was translating some code to Hy from a textbook on Python programming (http://inventwithpython.com/pygame/index.html) and ran into a problem with this direct translation.
```
(import pygame sys)
(import [pygame.locals [*]])
(pygame.init)
(setv *displaysurf* (pygame.display.set_mode (, 400 300)))
(pygame.display.set_caption "Hello World!")
(while True ; main game loop
(do (foreach [event (pygame.event.get)]
(if (= event.type QUIT)
(do (pygame.quit)
(sys.exit))))
(pygame.display.update)))
```
I get a parse error if the end-of-line comment ("main game loop") appears where it does. It works if I remove it.
The following interaction with the prompt also surprised me.
```
=> ; some comment
hy.lex.exceptions.LexException: Could not identify the next token at line -1, column -1
```
Fixing this isn't critical, but it should probably be fixed. I do occasionally type something, realize I need to do something else first, comment it, press Enter, type whatever setup I needed, press Enter, then press Up twice, uncomment the line, and Enter to run it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hy/lex/lexer.py`
Content:
```
1 # Copyright (c) 2013 Nicolas Dandrimont <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the "Software"),
5 # to deal in the Software without restriction, including without limitation
6 # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 # and/or sell copies of the Software, and to permit persons to whom the
8 # Software is furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
16 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 from rply import LexerGenerator
22
23
24 lg = LexerGenerator()
25
26
27 # A regexp for something that should end a quoting/unquoting operator
28 # i.e. a space or a closing brace/paren/curly
29 end_quote = r'(?![\s\)\]\}])'
30
31
32 lg.add('LPAREN', r'\(')
33 lg.add('RPAREN', r'\)')
34 lg.add('LBRACKET', r'\[')
35 lg.add('RBRACKET', r'\]')
36 lg.add('LCURLY', r'\{')
37 lg.add('RCURLY', r'\}')
38 lg.add('QUOTE', r'\'%s' % end_quote)
39 lg.add('QUASIQUOTE', r'`%s' % end_quote)
40 lg.add('UNQUOTESPLICE', r'~@%s' % end_quote)
41 lg.add('UNQUOTE', r'~%s' % end_quote)
42 lg.add('HASHBANG', r'#!.*[^\r\n]')
43 lg.add('HASHREADER', r'#.')
44
45
46 lg.add('STRING', r'''(?x)
47 (?:u|r|ur|ru)? # prefix
48 " # start string
49 (?:
50 | [^"\\] # non-quote or backslash
51 | \\. # or escaped single character
52 | \\x[0-9a-fA-F]{2} # or escaped raw character
53 | \\u[0-9a-fA-F]{4} # or unicode escape
54 | \\U[0-9a-fA-F]{8} # or long unicode escape
55 )* # one or more times
56 " # end string
57 ''')
58
59
60 lg.add('IDENTIFIER', r'[^()\[\]{}\'"\s;]+')
61
62
63 lg.ignore(r';.*[\r\n]+')
64 lg.ignore(r'\s+')
65
66
67 lexer = lg.build()
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hy/lex/lexer.py b/hy/lex/lexer.py
--- a/hy/lex/lexer.py
+++ b/hy/lex/lexer.py
@@ -60,7 +60,7 @@
lg.add('IDENTIFIER', r'[^()\[\]{}\'"\s;]+')
-lg.ignore(r';.*[\r\n]+')
+lg.ignore(r';.*(?=\r|\n|$)')
lg.ignore(r'\s+')
| {"golden_diff": "diff --git a/hy/lex/lexer.py b/hy/lex/lexer.py\n--- a/hy/lex/lexer.py\n+++ b/hy/lex/lexer.py\n@@ -60,7 +60,7 @@\n lg.add('IDENTIFIER', r'[^()\\[\\]{}\\'\"\\s;]+')\n \n \n-lg.ignore(r';.*[\\r\\n]+')\n+lg.ignore(r';.*(?=\\r|\\n|$)')\n lg.ignore(r'\\s+')\n", "issue": "problem with comment parsing\nI was translating some code to Hy from a textbook on Python programming (http://inventwithpython.com/pygame/index.html) and ran into a problem with this direct translation.\n\n```\n(import pygame sys)\n(import [pygame.locals [*]])\n\n(pygame.init)\n(setv *displaysurf* (pygame.display.set_mode (, 400 300)))\n(pygame.display.set_caption \"Hello World!\")\n(while True ; main game loop\n (do (foreach [event (pygame.event.get)]\n (if (= event.type QUIT)\n (do (pygame.quit)\n (sys.exit))))\n (pygame.display.update)))\n```\n\nI get a parse error if the end-of-line comment (\"main game loop\") appears where it does. It works if I remove it.\n\nThe following interaction with the prompt also surprised me.\n\n```\n=> ; some comment\nhy.lex.exceptions.LexException: Could not identify the next token at line -1, column -1\n```\n\nFixing this isn't critical, but it should probably be fixed. I do occasionally type something, realize I need to do something else first, comment it, press Enter, type whatever setup I needed, press Enter, then press Up twice, uncomment the line, and Enter to run it.\n\nproblem with comment parsing\nI was translating some code to Hy from a textbook on Python programming (http://inventwithpython.com/pygame/index.html) and ran into a problem with this direct translation.\n\n```\n(import pygame sys)\n(import [pygame.locals [*]])\n\n(pygame.init)\n(setv *displaysurf* (pygame.display.set_mode (, 400 300)))\n(pygame.display.set_caption \"Hello World!\")\n(while True ; main game loop\n (do (foreach [event (pygame.event.get)]\n (if (= event.type QUIT)\n (do (pygame.quit)\n (sys.exit))))\n (pygame.display.update)))\n```\n\nI get a parse error if the end-of-line comment (\"main game loop\") appears where it does. It works if I remove it.\n\nThe following interaction with the prompt also surprised me.\n\n```\n=> ; some comment\nhy.lex.exceptions.LexException: Could not identify the next token at line -1, column -1\n```\n\nFixing this isn't critical, but it should probably be fixed. I do occasionally type something, realize I need to do something else first, comment it, press Enter, type whatever setup I needed, press Enter, then press Up twice, uncomment the line, and Enter to run it.\n\n", "before_files": [{"content": "# Copyright (c) 2013 Nicolas Dandrimont <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom rply import LexerGenerator\n\n\nlg = LexerGenerator()\n\n\n# A regexp for something that should end a quoting/unquoting operator\n# i.e. a space or a closing brace/paren/curly\nend_quote = r'(?![\\s\\)\\]\\}])'\n\n\nlg.add('LPAREN', r'\\(')\nlg.add('RPAREN', r'\\)')\nlg.add('LBRACKET', r'\\[')\nlg.add('RBRACKET', r'\\]')\nlg.add('LCURLY', r'\\{')\nlg.add('RCURLY', r'\\}')\nlg.add('QUOTE', r'\\'%s' % end_quote)\nlg.add('QUASIQUOTE', r'`%s' % end_quote)\nlg.add('UNQUOTESPLICE', r'~@%s' % end_quote)\nlg.add('UNQUOTE', r'~%s' % end_quote)\nlg.add('HASHBANG', r'#!.*[^\\r\\n]')\nlg.add('HASHREADER', r'#.')\n\n\nlg.add('STRING', r'''(?x)\n (?:u|r|ur|ru)? # prefix\n \" # start string\n (?:\n | [^\"\\\\] # non-quote or backslash\n | \\\\. # or escaped single character\n | \\\\x[0-9a-fA-F]{2} # or escaped raw character\n | \\\\u[0-9a-fA-F]{4} # or unicode escape\n | \\\\U[0-9a-fA-F]{8} # or long unicode escape\n )* # one or more times\n \" # end string\n''')\n\n\nlg.add('IDENTIFIER', r'[^()\\[\\]{}\\'\"\\s;]+')\n\n\nlg.ignore(r';.*[\\r\\n]+')\nlg.ignore(r'\\s+')\n\n\nlexer = lg.build()\n", "path": "hy/lex/lexer.py"}], "after_files": [{"content": "# Copyright (c) 2013 Nicolas Dandrimont <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom rply import LexerGenerator\n\n\nlg = LexerGenerator()\n\n\n# A regexp for something that should end a quoting/unquoting operator\n# i.e. a space or a closing brace/paren/curly\nend_quote = r'(?![\\s\\)\\]\\}])'\n\n\nlg.add('LPAREN', r'\\(')\nlg.add('RPAREN', r'\\)')\nlg.add('LBRACKET', r'\\[')\nlg.add('RBRACKET', r'\\]')\nlg.add('LCURLY', r'\\{')\nlg.add('RCURLY', r'\\}')\nlg.add('QUOTE', r'\\'%s' % end_quote)\nlg.add('QUASIQUOTE', r'`%s' % end_quote)\nlg.add('UNQUOTESPLICE', r'~@%s' % end_quote)\nlg.add('UNQUOTE', r'~%s' % end_quote)\nlg.add('HASHBANG', r'#!.*[^\\r\\n]')\nlg.add('HASHREADER', r'#.')\n\n\nlg.add('STRING', r'''(?x)\n (?:u|r|ur|ru)? # prefix\n \" # start string\n (?:\n | [^\"\\\\] # non-quote or backslash\n | \\\\. 
# or escaped single character\n | \\\\x[0-9a-fA-F]{2} # or escaped raw character\n | \\\\u[0-9a-fA-F]{4} # or unicode escape\n | \\\\U[0-9a-fA-F]{8} # or long unicode escape\n )* # one or more times\n \" # end string\n''')\n\n\nlg.add('IDENTIFIER', r'[^()\\[\\]{}\\'\"\\s;]+')\n\n\nlg.ignore(r';.*(?=\\r|\\n|$)')\nlg.ignore(r'\\s+')\n\n\nlexer = lg.build()\n", "path": "hy/lex/lexer.py"}]} |
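The patch above is a one-token change to the lexer's ignore rule: the original pattern only matches a comment that is followed by a newline, so a comment that ends the input (or an end-of-line comment on the last REPL line) leaves text the lexer cannot tokenize. A stdlib-only sketch of the difference, reusing the two pattern strings from the diff; everything else is illustrative:

```
import re

old = re.compile(r';.*[\r\n]+')      # comment must be followed by a newline
new = re.compile(r';.*(?=\r|\n|$)')  # lookahead also accepts end of input

source = "(while True ; main game loop"   # no trailing newline
print(old.search(source))  # None -> nothing to ignore, lexer raises LexException
print(new.search(source))  # <re.Match ...> -> comment is skipped as intended
```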
gh_patches_debug_56 | rasdani/github-patches | git_diff | spacetelescope__jwql-678 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Upgrade Django to 3.0
Django 3.0 is out, and since it is a major release, we should consider upgrading to this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import numpy as np
2 from setuptools import setup
3 from setuptools import find_packages
4
5 VERSION = '0.24.0'
6
7 AUTHORS = 'Matthew Bourque, Lauren Chambers, Misty Cracraft, Mike Engesser, Mees Fix, Joe Filippazzo, Bryan Hilbert, '
8 AUTHORS += 'Graham Kanarek, Teagan King, Catherine Martlin, Maria Pena-Guerrero, Johannes Sahlmann, Ben Sunnquist'
9
10 DESCRIPTION = 'The James Webb Space Telescope Quicklook Project'
11
12 DEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst_reffiles#egg=jwst_reffiles']
13
14 REQUIRES = [
15 'asdf>=2.3.3',
16 'astropy>=3.2.1',
17 'astroquery>=0.3.9',
18 'authlib',
19 'bokeh>=1.0,<1.4',
20 'codecov',
21 'crds',
22 'cryptography',
23 'django>=2.0,<3.0',
24 'flake8',
25 'inflection',
26 'ipython',
27 'jinja2',
28 'jsonschema',
29 'jwedb>=0.0.3',
30 'jwst',
31 'matplotlib',
32 'nodejs',
33 'numpy',
34 'numpydoc',
35 'pandas',
36 'psycopg2',
37 'pysiaf',
38 'pytest',
39 'pytest-cov',
40 'scipy',
41 'sphinx',
42 'sqlalchemy',
43 'stsci_rtd_theme',
44 'twine',
45 'wtforms'
46 ]
47
48 setup(
49 name='jwql',
50 version=VERSION,
51 description=DESCRIPTION,
52 url='https://github.com/spacetelescope/jwql.git',
53 author=AUTHORS,
54 author_email='[email protected]',
55 license='BSD',
56 keywords=['astronomy', 'python'],
57 classifiers=['Programming Language :: Python'],
58 packages=find_packages(),
59 install_requires=REQUIRES,
60 dependency_links=DEPENDENCY_LINKS,
61 include_package_data=True,
62 include_dirs=[np.get_include()],
63 )
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -20,7 +20,7 @@
'codecov',
'crds',
'cryptography',
- 'django>=2.0,<3.0',
+ 'django',
'flake8',
'inflection',
'ipython',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -20,7 +20,7 @@\n 'codecov',\n 'crds',\n 'cryptography',\n- 'django>=2.0,<3.0',\n+ 'django',\n 'flake8',\n 'inflection',\n 'ipython',\n", "issue": "Upgrade Django to 3.0\nDjango 3.0 is out, and since it is a major release, we should consider upgrading to this.\n", "before_files": [{"content": "import numpy as np\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nVERSION = '0.24.0'\n\nAUTHORS = 'Matthew Bourque, Lauren Chambers, Misty Cracraft, Mike Engesser, Mees Fix, Joe Filippazzo, Bryan Hilbert, '\nAUTHORS += 'Graham Kanarek, Teagan King, Catherine Martlin, Maria Pena-Guerrero, Johannes Sahlmann, Ben Sunnquist'\n\nDESCRIPTION = 'The James Webb Space Telescope Quicklook Project'\n\nDEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst_reffiles#egg=jwst_reffiles']\n\nREQUIRES = [\n 'asdf>=2.3.3',\n 'astropy>=3.2.1',\n 'astroquery>=0.3.9',\n 'authlib',\n 'bokeh>=1.0,<1.4',\n 'codecov',\n 'crds',\n 'cryptography',\n 'django>=2.0,<3.0',\n 'flake8',\n 'inflection',\n 'ipython',\n 'jinja2',\n 'jsonschema',\n 'jwedb>=0.0.3',\n 'jwst',\n 'matplotlib',\n 'nodejs',\n 'numpy',\n 'numpydoc',\n 'pandas',\n 'psycopg2',\n 'pysiaf',\n 'pytest',\n 'pytest-cov',\n 'scipy',\n 'sphinx',\n 'sqlalchemy',\n 'stsci_rtd_theme',\n 'twine',\n 'wtforms'\n]\n\nsetup(\n name='jwql',\n version=VERSION,\n description=DESCRIPTION,\n url='https://github.com/spacetelescope/jwql.git',\n author=AUTHORS,\n author_email='[email protected]',\n license='BSD',\n keywords=['astronomy', 'python'],\n classifiers=['Programming Language :: Python'],\n packages=find_packages(),\n install_requires=REQUIRES,\n dependency_links=DEPENDENCY_LINKS,\n include_package_data=True,\n include_dirs=[np.get_include()],\n)\n", "path": "setup.py"}], "after_files": [{"content": "import numpy as np\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nVERSION = '0.24.0'\n\nAUTHORS = 'Matthew Bourque, Lauren Chambers, Misty Cracraft, Mike Engesser, Mees Fix, Joe Filippazzo, Bryan Hilbert, '\nAUTHORS += 'Graham Kanarek, Teagan King, Catherine Martlin, Maria Pena-Guerrero, Johannes Sahlmann, Ben Sunnquist'\n\nDESCRIPTION = 'The James Webb Space Telescope Quicklook Project'\n\nDEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst_reffiles#egg=jwst_reffiles']\n\nREQUIRES = [\n 'asdf>=2.3.3',\n 'astropy>=3.2.1',\n 'astroquery>=0.3.9',\n 'authlib',\n 'bokeh>=1.0,<1.4',\n 'codecov',\n 'crds',\n 'cryptography',\n 'django',\n 'flake8',\n 'inflection',\n 'ipython',\n 'jinja2',\n 'jsonschema',\n 'jwedb>=0.0.3',\n 'jwst',\n 'matplotlib',\n 'nodejs',\n 'numpy',\n 'numpydoc',\n 'pandas',\n 'psycopg2',\n 'pysiaf',\n 'pytest',\n 'pytest-cov',\n 'scipy',\n 'sphinx',\n 'sqlalchemy',\n 'stsci_rtd_theme',\n 'twine',\n 'wtforms'\n]\n\nsetup(\n name='jwql',\n version=VERSION,\n description=DESCRIPTION,\n url='https://github.com/spacetelescope/jwql.git',\n author=AUTHORS,\n author_email='[email protected]',\n license='BSD',\n keywords=['astronomy', 'python'],\n classifiers=['Programming Language :: Python'],\n packages=find_packages(),\n install_requires=REQUIRES,\n dependency_links=DEPENDENCY_LINKS,\n include_package_data=True,\n include_dirs=[np.get_include()],\n)\n", "path": "setup.py"}]} |
gh_patches_debug_57 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3206 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
move SQL functions out of schema.sql
Following on from #2360, and in view of the hack at https://github.com/gratipay/gratipay.com/pull/3154#issuecomment-73041912, what if we moved SQL functions to a separate file from schema.sql? If we had one file per function we could automate the process of updating those functions during deployment, and we'd get sensible diffs on PRs because we wouldn't have to use branch.sql as a go-between (branch.sql would remain for table changes).
--- END ISSUE ---
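As a rough sketch of the deployment automation the issue above asks for (this is not code from the gratipay repository; the `sql/` directory layout, the `DATABASE_URL` variable and the helper name are assumptions for illustration only), re-applying one-file-per-function SQL on deploy could look like:
```
import glob
import os

import psycopg2


def recreate_sql_functions(dsn=None):
    """Re-run every sql/*.sql file; assumes each file holds a single
    CREATE OR REPLACE FUNCTION statement, so re-applying is idempotent.
    """
    conn = psycopg2.connect(dsn or os.environ["DATABASE_URL"])
    try:
        with conn, conn.cursor() as cur:
            for path in sorted(glob.glob("sql/*.sql")):
                with open(path) as f:
                    cur.execute(f.read())
    finally:
        conn.close()
```
With one file per function, a deploy can simply re-run the whole directory, and a PR diff shows the complete new body of any changed function instead of routing the change through a branch.sql go-between.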
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gratipay/billing/payday.py`
Content:
```
1 """This is Gratipay's payday algorithm.
2
3 Exchanges (moving money between Gratipay and the outside world) and transfers
4 (moving money amongst Gratipay users) happen within an isolated event called
5 payday. This event has duration (it's not punctiliar).
6
7 Payday is designed to be crash-resistant. Everything that can be rolled back
8 happens inside a single DB transaction. Exchanges cannot be rolled back, so they
9 immediately affect the participant's balance.
10
11 """
12 from __future__ import unicode_literals
13
14 import itertools
15 from multiprocessing.dummy import Pool as ThreadPool
16
17 from balanced import CardHold
18
19 import aspen.utils
20 from aspen import log
21 from gratipay.billing.exchanges import (
22 ach_credit, cancel_card_hold, capture_card_hold, create_card_hold, upcharge
23 )
24 from gratipay.exceptions import NegativeBalance
25 from gratipay.models import check_db
26 from psycopg2 import IntegrityError
27
28
29 with open('fake_payday.sql') as f:
30 FAKE_PAYDAY = f.read()
31
32
33 class ExceptionWrapped(Exception): pass
34
35
36 def threaded_map(func, iterable, threads=5):
37 pool = ThreadPool(threads)
38 def g(*a, **kw):
39 # Without this wrapper we get a traceback from inside multiprocessing.
40 try:
41 return func(*a, **kw)
42 except Exception as e:
43 import traceback
44 raise ExceptionWrapped(e, traceback.format_exc())
45 try:
46 r = pool.map(g, iterable)
47 except ExceptionWrapped as e:
48 print(e.args[1])
49 raise e.args[0]
50 pool.close()
51 pool.join()
52 return r
53
54
55 class NoPayday(Exception):
56 __str__ = lambda self: "No payday found where one was expected."
57
58
59 class Payday(object):
60 """Represent an abstract event during which money is moved.
61
62 On Payday, we want to use a participant's Gratipay balance to settle their
63 tips due (pulling in more money via credit card as needed), but we only
64 want to use their balance at the start of Payday. Balance changes should be
65 atomic globally per-Payday.
66
67 Here's the call structure of the Payday.run method:
68
69 run
70 payin
71 prepare
72 create_card_holds
73 transfer_tips
74 transfer_takes
75 settle_card_holds
76 update_balances
77 take_over_balances
78 payout
79 update_stats
80 update_cached_amounts
81 end
82
83 """
84
85
86 @classmethod
87 def start(cls):
88 """Try to start a new Payday.
89
90 If there is a Payday that hasn't finished yet, then the UNIQUE
91 constraint on ts_end will kick in and notify us of that. In that case
92 we load the existing Payday and work on it some more. We use the start
93 time of the current Payday to synchronize our work.
94
95 """
96 try:
97 d = cls.db.one("""
98 INSERT INTO paydays DEFAULT VALUES
99 RETURNING id, (ts_start AT TIME ZONE 'UTC') AS ts_start, stage
100 """, back_as=dict)
101 log("Starting a new payday.")
102 except IntegrityError: # Collision, we have a Payday already.
103 d = cls.db.one("""
104 SELECT id, (ts_start AT TIME ZONE 'UTC') AS ts_start, stage
105 FROM paydays
106 WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz
107 """, back_as=dict)
108 log("Picking up with an existing payday.")
109
110 d['ts_start'] = d['ts_start'].replace(tzinfo=aspen.utils.utc)
111
112 log("Payday started at %s." % d['ts_start'])
113
114 payday = Payday()
115 payday.__dict__.update(d)
116 return payday
117
118
119 def run(self):
120 """This is the starting point for payday.
121
122 This method runs every Thursday. It is structured such that it can be
123 run again safely (with a newly-instantiated Payday object) if it
124 crashes.
125
126 """
127 self.db.self_check()
128
129 _start = aspen.utils.utcnow()
130 log("Greetings, program! It's PAYDAY!!!!")
131
132 if self.stage < 1:
133 self.payin()
134 self.mark_stage_done()
135 if self.stage < 2:
136 self.payout()
137 self.mark_stage_done()
138 if self.stage < 3:
139 self.update_stats()
140 self.update_cached_amounts()
141 self.mark_stage_done()
142
143 self.end()
144
145 _end = aspen.utils.utcnow()
146 _delta = _end - _start
147 fmt_past = "Script ran for %%(age)s (%s)." % _delta
148 log(aspen.utils.to_age(_start, fmt_past=fmt_past))
149
150
151 def payin(self):
152 """The first stage of payday where we charge credit cards and transfer
153 money internally between participants.
154 """
155 with self.db.get_cursor() as cursor:
156 self.prepare(cursor, self.ts_start)
157 holds = self.create_card_holds(cursor)
158 self.transfer_tips(cursor)
159 self.transfer_takes(cursor, self.ts_start)
160 transfers = cursor.all("""
161 SELECT * FROM transfers WHERE "timestamp" > %s
162 """, (self.ts_start,))
163 try:
164 self.settle_card_holds(cursor, holds)
165 self.update_balances(cursor)
166 check_db(cursor)
167 except:
168 # Dump transfers for debugging
169 import csv
170 from time import time
171 with open('%s_transfers.csv' % time(), 'wb') as f:
172 csv.writer(f).writerows(transfers)
173 raise
174 self.take_over_balances()
175 # Clean up leftover functions
176 self.db.run("""
177 DROP FUNCTION process_take();
178 DROP FUNCTION process_tip();
179 DROP FUNCTION settle_tip_graph();
180 DROP FUNCTION transfer(text, text, numeric, context_type);
181 """)
182
183
184 @staticmethod
185 def prepare(cursor, ts_start):
186 """Prepare the DB: we need temporary tables with indexes and triggers.
187 """
188 cursor.run("""
189
190 -- Create the necessary temporary tables and indexes
191
192 CREATE TEMPORARY TABLE payday_participants ON COMMIT DROP AS
193 SELECT id
194 , username
195 , claimed_time
196 , balance AS old_balance
197 , balance AS new_balance
198 , balanced_customer_href
199 , last_bill_result
200 , is_suspicious
201 , goal
202 , false AS card_hold_ok
203 FROM participants
204 WHERE is_suspicious IS NOT true
205 AND claimed_time < %(ts_start)s
206 ORDER BY claimed_time;
207
208 CREATE UNIQUE INDEX ON payday_participants (id);
209 CREATE UNIQUE INDEX ON payday_participants (username);
210
211 CREATE TEMPORARY TABLE payday_transfers_done ON COMMIT DROP AS
212 SELECT *
213 FROM transfers t
214 WHERE t.timestamp > %(ts_start)s;
215
216 CREATE TEMPORARY TABLE payday_tips ON COMMIT DROP AS
217 SELECT tipper, tippee, amount
218 FROM ( SELECT DISTINCT ON (tipper, tippee) *
219 FROM tips
220 WHERE mtime < %(ts_start)s
221 ORDER BY tipper, tippee, mtime DESC
222 ) t
223 JOIN payday_participants p ON p.username = t.tipper
224 JOIN payday_participants p2 ON p2.username = t.tippee
225 WHERE t.amount > 0
226 AND (p2.goal IS NULL or p2.goal >= 0)
227 AND ( SELECT id
228 FROM payday_transfers_done t2
229 WHERE t.tipper = t2.tipper
230 AND t.tippee = t2.tippee
231 AND context = 'tip'
232 ) IS NULL
233 ORDER BY p.claimed_time ASC, t.ctime ASC;
234
235 CREATE INDEX ON payday_tips (tipper);
236 CREATE INDEX ON payday_tips (tippee);
237 ALTER TABLE payday_tips ADD COLUMN is_funded boolean;
238
239 ALTER TABLE payday_participants ADD COLUMN giving_today numeric(35,2);
240 UPDATE payday_participants
241 SET giving_today = COALESCE((
242 SELECT sum(amount)
243 FROM payday_tips
244 WHERE tipper = username
245 ), 0);
246
247 CREATE TEMPORARY TABLE payday_takes
248 ( team text
249 , member text
250 , amount numeric(35,2)
251 ) ON COMMIT DROP;
252
253 CREATE TEMPORARY TABLE payday_transfers
254 ( timestamp timestamptz DEFAULT now()
255 , tipper text
256 , tippee text
257 , amount numeric(35,2)
258 , context context_type
259 ) ON COMMIT DROP;
260
261
262 -- Prepare a statement that makes and records a transfer
263
264 CREATE OR REPLACE FUNCTION transfer(text, text, numeric, context_type)
265 RETURNS void AS $$
266 BEGIN
267 IF ($3 = 0) THEN RETURN; END IF;
268 UPDATE payday_participants
269 SET new_balance = (new_balance - $3)
270 WHERE username = $1;
271 UPDATE payday_participants
272 SET new_balance = (new_balance + $3)
273 WHERE username = $2;
274 INSERT INTO payday_transfers
275 (tipper, tippee, amount, context)
276 VALUES ( ( SELECT p.username
277 FROM participants p
278 JOIN payday_participants p2 ON p.id = p2.id
279 WHERE p2.username = $1 )
280 , ( SELECT p.username
281 FROM participants p
282 JOIN payday_participants p2 ON p.id = p2.id
283 WHERE p2.username = $2 )
284 , $3
285 , $4
286 );
287 END;
288 $$ LANGUAGE plpgsql;
289
290
291 -- Create a trigger to process tips
292
293 CREATE OR REPLACE FUNCTION process_tip() RETURNS trigger AS $$
294 DECLARE
295 tipper payday_participants;
296 BEGIN
297 tipper := (
298 SELECT p.*::payday_participants
299 FROM payday_participants p
300 WHERE username = NEW.tipper
301 );
302 IF (NEW.amount <= tipper.new_balance OR tipper.card_hold_ok) THEN
303 EXECUTE transfer(NEW.tipper, NEW.tippee, NEW.amount, 'tip');
304 RETURN NEW;
305 END IF;
306 RETURN NULL;
307 END;
308 $$ LANGUAGE plpgsql;
309
310 CREATE TRIGGER process_tip BEFORE UPDATE OF is_funded ON payday_tips
311 FOR EACH ROW
312 WHEN (NEW.is_funded IS true AND OLD.is_funded IS NOT true)
313 EXECUTE PROCEDURE process_tip();
314
315
316 -- Create a trigger to process takes
317
318 CREATE OR REPLACE FUNCTION process_take() RETURNS trigger AS $$
319 DECLARE
320 actual_amount numeric(35,2);
321 team_balance numeric(35,2);
322 BEGIN
323 team_balance := (
324 SELECT new_balance
325 FROM payday_participants
326 WHERE username = NEW.team
327 );
328 IF (team_balance <= 0) THEN RETURN NULL; END IF;
329 actual_amount := NEW.amount;
330 IF (team_balance < NEW.amount) THEN
331 actual_amount := team_balance;
332 END IF;
333 EXECUTE transfer(NEW.team, NEW.member, actual_amount, 'take');
334 RETURN NULL;
335 END;
336 $$ LANGUAGE plpgsql;
337
338 CREATE TRIGGER process_take AFTER INSERT ON payday_takes
339 FOR EACH ROW EXECUTE PROCEDURE process_take();
340
341
342 -- Create a function to settle whole tip graph
343
344 CREATE OR REPLACE FUNCTION settle_tip_graph() RETURNS void AS $$
345 DECLARE
346 count integer NOT NULL DEFAULT 0;
347 i integer := 0;
348 BEGIN
349 LOOP
350 i := i + 1;
351 WITH updated_rows AS (
352 UPDATE payday_tips
353 SET is_funded = true
354 WHERE is_funded IS NOT true
355 RETURNING *
356 )
357 SELECT COUNT(*) FROM updated_rows INTO count;
358 IF (count = 0) THEN
359 EXIT;
360 END IF;
361 IF (i > 50) THEN
362 RAISE 'Reached the maximum number of iterations';
363 END IF;
364 END LOOP;
365 END;
366 $$ LANGUAGE plpgsql;
367
368
369 -- Save the stats we already have
370
371 UPDATE paydays
372 SET nparticipants = (SELECT count(*) FROM payday_participants)
373 , ncc_missing = (
374 SELECT count(*)
375 FROM payday_participants
376 WHERE old_balance < giving_today
377 AND ( balanced_customer_href IS NULL
378 OR
379 last_bill_result IS NULL
380 )
381 )
382 WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz;
383
384 """, dict(ts_start=ts_start))
385 log('Prepared the DB.')
386
387
388 @staticmethod
389 def fetch_card_holds(participant_ids):
390 holds = {}
391 for hold in CardHold.query.filter(CardHold.f.meta.state == 'new'):
392 state = 'new'
393 if hold.status == 'failed' or hold.failure_reason:
394 state = 'failed'
395 elif hold.voided_at:
396 state = 'cancelled'
397 elif getattr(hold, 'debit_href', None):
398 state = 'captured'
399 if state != 'new':
400 hold.meta['state'] = state
401 hold.save()
402 continue
403 p_id = int(hold.meta['participant_id'])
404 if p_id in participant_ids:
405 holds[p_id] = hold
406 else:
407 cancel_card_hold(hold)
408 return holds
409
410
411 def create_card_holds(self, cursor):
412
413 # Get the list of participants to create card holds for
414 participants = cursor.all("""
415 SELECT *
416 FROM payday_participants
417 WHERE old_balance < giving_today
418 AND balanced_customer_href IS NOT NULL
419 AND last_bill_result IS NOT NULL
420 AND is_suspicious IS false
421 """)
422 if not participants:
423 return {}
424
425 # Fetch existing holds
426 participant_ids = set(p.id for p in participants)
427 holds = self.fetch_card_holds(participant_ids)
428
429 # Create new holds and check amounts of existing ones
430 def f(p):
431 amount = p.giving_today
432 if p.old_balance < 0:
433 amount -= p.old_balance
434 if p.id in holds:
435 charge_amount = upcharge(amount)[0]
436 if holds[p.id].amount >= charge_amount * 100:
437 return
438 else:
439 # The amount is too low, cancel the hold and make a new one
440 cancel_card_hold(holds.pop(p.id))
441 hold, error = create_card_hold(self.db, p, amount)
442 if error:
443 return 1
444 else:
445 holds[p.id] = hold
446 n_failures = sum(filter(None, threaded_map(f, participants)))
447
448 # Record the number of failures
449 cursor.one("""
450 UPDATE paydays
451 SET ncc_failing = %s
452 WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz
453 RETURNING id
454 """, (n_failures,), default=NoPayday)
455
456 # Update the values of card_hold_ok in our temporary table
457 if not holds:
458 return {}
459 cursor.run("""
460 UPDATE payday_participants p
461 SET card_hold_ok = true
462 WHERE p.id IN %s
463 """, (tuple(holds.keys()),))
464
465 return holds
466
467
468 @staticmethod
469 def transfer_tips(cursor):
470 cursor.run("""
471
472 UPDATE payday_tips t
473 SET is_funded = true
474 FROM payday_participants p
475 WHERE p.username = t.tipper
476 AND p.card_hold_ok;
477
478 SELECT settle_tip_graph();
479
480 """)
481
482
483 @staticmethod
484 def transfer_takes(cursor, ts_start):
485 cursor.run("""
486
487 INSERT INTO payday_takes
488 SELECT team, member, amount
489 FROM ( SELECT DISTINCT ON (team, member)
490 team, member, amount, ctime
491 FROM takes
492 WHERE mtime < %(ts_start)s
493 ORDER BY team, member, mtime DESC
494 ) t
495 WHERE t.amount > 0
496 AND t.team IN (SELECT username FROM payday_participants)
497 AND t.member IN (SELECT username FROM payday_participants)
498 AND ( SELECT id
499 FROM payday_transfers_done t2
500 WHERE t.team = t2.tipper
501 AND t.member = t2.tippee
502 AND context = 'take'
503 ) IS NULL
504 ORDER BY t.team, t.ctime DESC;
505
506 SELECT settle_tip_graph();
507
508 """, dict(ts_start=ts_start))
509
510
511 def settle_card_holds(self, cursor, holds):
512 participants = cursor.all("""
513 SELECT *
514 FROM payday_participants
515 WHERE new_balance < 0
516 """)
517 participants = [p for p in participants if p.id in holds]
518
519 # Capture holds to bring balances back up to (at least) zero
520 def capture(p):
521 amount = -p.new_balance
522 capture_card_hold(self.db, p, amount, holds.pop(p.id))
523 threaded_map(capture, participants)
524 log("Captured %i card holds." % len(participants))
525
526 # Cancel the remaining holds
527 threaded_map(cancel_card_hold, holds.values())
528 log("Canceled %i card holds." % len(holds))
529
530
531 @staticmethod
532 def update_balances(cursor):
533 participants = cursor.all("""
534
535 UPDATE participants p
536 SET balance = (balance + p2.new_balance - p2.old_balance)
537 FROM payday_participants p2
538 WHERE p.id = p2.id
539 AND p2.new_balance <> p2.old_balance
540 RETURNING p.id
541 , p.username
542 , balance AS new_balance
543 , ( SELECT balance
544 FROM participants p3
545 WHERE p3.id = p.id
546 ) AS cur_balance;
547
548 """)
549 # Check that balances aren't becoming (more) negative
550 for p in participants:
551 if p.new_balance < 0 and p.new_balance < p.cur_balance:
552 log(p)
553 raise NegativeBalance()
554 cursor.run("""
555 INSERT INTO transfers (timestamp, tipper, tippee, amount, context)
556 SELECT * FROM payday_transfers;
557 """)
558 log("Updated the balances of %i participants." % len(participants))
559
560
561 def take_over_balances(self):
562 """If an account that receives money is taken over during payin we need
563 to transfer the balance to the absorbing account.
564 """
565 for i in itertools.count():
566 if i > 10:
567 raise Exception('possible infinite loop')
568 count = self.db.one("""
569
570 DROP TABLE IF EXISTS temp;
571 CREATE TEMPORARY TABLE temp AS
572 SELECT archived_as, absorbed_by, balance AS archived_balance
573 FROM absorptions a
574 JOIN participants p ON a.archived_as = p.username
575 WHERE balance > 0;
576
577 SELECT count(*) FROM temp;
578
579 """)
580 if not count:
581 break
582 self.db.run("""
583
584 INSERT INTO transfers (tipper, tippee, amount, context)
585 SELECT archived_as, absorbed_by, archived_balance, 'take-over'
586 FROM temp;
587
588 UPDATE participants
589 SET balance = (balance - archived_balance)
590 FROM temp
591 WHERE username = archived_as;
592
593 UPDATE participants
594 SET balance = (balance + archived_balance)
595 FROM temp
596 WHERE username = absorbed_by;
597
598 """)
599
600
601 def payout(self):
602 """This is the second stage of payday in which we send money out to the
603 bank accounts of participants.
604 """
605 log("Starting payout loop.")
606 participants = self.db.all("""
607 SELECT p.*::participants
608 FROM participants p
609 WHERE balance > 0
610 AND balanced_customer_href IS NOT NULL
611 AND last_ach_result IS NOT NULL
612 """)
613 def credit(participant):
614 if participant.is_suspicious is None:
615 log("UNREVIEWED: %s" % participant.username)
616 return
617 withhold = participant.giving + participant.pledging
618 error = ach_credit(self.db, participant, withhold)
619 if error:
620 self.mark_ach_failed()
621 threaded_map(credit, participants)
622 log("Did payout for %d participants." % len(participants))
623 self.db.self_check()
624 log("Checked the DB.")
625
626
627 def update_stats(self):
628 self.db.run("""\
629
630 WITH our_transfers AS (
631 SELECT *
632 FROM transfers
633 WHERE "timestamp" >= %(ts_start)s
634 )
635 , our_tips AS (
636 SELECT *
637 FROM our_transfers
638 WHERE context = 'tip'
639 )
640 , our_pachinkos AS (
641 SELECT *
642 FROM our_transfers
643 WHERE context = 'take'
644 )
645 , our_exchanges AS (
646 SELECT *
647 FROM exchanges
648 WHERE "timestamp" >= %(ts_start)s
649 )
650 , our_achs AS (
651 SELECT *
652 FROM our_exchanges
653 WHERE amount < 0
654 )
655 , our_charges AS (
656 SELECT *
657 FROM our_exchanges
658 WHERE amount > 0
659 AND status <> 'failed'
660 )
661 UPDATE paydays
662 SET nactive = (
663 SELECT DISTINCT count(*) FROM (
664 SELECT tipper FROM our_transfers
665 UNION
666 SELECT tippee FROM our_transfers
667 ) AS foo
668 )
669 , ntippers = (SELECT count(DISTINCT tipper) FROM our_transfers)
670 , ntips = (SELECT count(*) FROM our_tips)
671 , npachinko = (SELECT count(*) FROM our_pachinkos)
672 , pachinko_volume = (SELECT COALESCE(sum(amount), 0) FROM our_pachinkos)
673 , ntransfers = (SELECT count(*) FROM our_transfers)
674 , transfer_volume = (SELECT COALESCE(sum(amount), 0) FROM our_transfers)
675 , nachs = (SELECT count(*) FROM our_achs)
676 , ach_volume = (SELECT COALESCE(sum(amount), 0) FROM our_achs)
677 , ach_fees_volume = (SELECT COALESCE(sum(fee), 0) FROM our_achs)
678 , ncharges = (SELECT count(*) FROM our_charges)
679 , charge_volume = (
680 SELECT COALESCE(sum(amount + fee), 0)
681 FROM our_charges
682 )
683 , charge_fees_volume = (SELECT COALESCE(sum(fee), 0) FROM our_charges)
684 WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz
685
686 """, {'ts_start': self.ts_start})
687 log("Updated payday stats.")
688
689
690 def update_cached_amounts(self):
691 with self.db.get_cursor() as cursor:
692 cursor.execute(FAKE_PAYDAY)
693 log("Updated receiving amounts.")
694
695
696 def end(self):
697 self.ts_end = self.db.one("""\
698
699 UPDATE paydays
700 SET ts_end=now()
701 WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz
702 RETURNING ts_end AT TIME ZONE 'UTC'
703
704 """, default=NoPayday).replace(tzinfo=aspen.utils.utc)
705
706
707 # Record-keeping.
708 # ===============
709
710 def mark_ach_failed(self):
711 self.db.one("""\
712
713 UPDATE paydays
714 SET nach_failing = nach_failing + 1
715 WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz
716 RETURNING id
717
718 """, default=NoPayday)
719
720
721 def mark_stage_done(self):
722 self.db.one("""\
723
724 UPDATE paydays
725 SET stage = stage + 1
726 WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz
727 RETURNING id
728
729 """, default=NoPayday)
730
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gratipay/billing/payday.py b/gratipay/billing/payday.py
--- a/gratipay/billing/payday.py
+++ b/gratipay/billing/payday.py
@@ -26,7 +26,7 @@
from psycopg2 import IntegrityError
-with open('fake_payday.sql') as f:
+with open('sql/fake_payday.sql') as f:
FAKE_PAYDAY = f.read()
| {"golden_diff": "diff --git a/gratipay/billing/payday.py b/gratipay/billing/payday.py\n--- a/gratipay/billing/payday.py\n+++ b/gratipay/billing/payday.py\n@@ -26,7 +26,7 @@\n from psycopg2 import IntegrityError\n \n \n-with open('fake_payday.sql') as f:\n+with open('sql/fake_payday.sql') as f:\n FAKE_PAYDAY = f.read()\n", "issue": "move SQL functions out of schema.sql\nFollowing on from #2360, and in view of the hack at https://github.com/gratipay/gratipay.com/pull/3154#issuecomment-73041912, what if we moved SQL functions to a separate file from schema.sql? If we had one file per function we could automate the process of updating those functions during deployment, and we'd get sensible diffs on PRs because we wouldn't have to use branch.sql as a go-between (branch.sql would remain for table changes).\n\n", "before_files": [{"content": "\"\"\"This is Gratipay's payday algorithm.\n\nExchanges (moving money between Gratipay and the outside world) and transfers\n(moving money amongst Gratipay users) happen within an isolated event called\npayday. This event has duration (it's not punctiliar).\n\nPayday is designed to be crash-resistant. Everything that can be rolled back\nhappens inside a single DB transaction. Exchanges cannot be rolled back, so they\nimmediately affect the participant's balance.\n\n\"\"\"\nfrom __future__ import unicode_literals\n\nimport itertools\nfrom multiprocessing.dummy import Pool as ThreadPool\n\nfrom balanced import CardHold\n\nimport aspen.utils\nfrom aspen import log\nfrom gratipay.billing.exchanges import (\n ach_credit, cancel_card_hold, capture_card_hold, create_card_hold, upcharge\n)\nfrom gratipay.exceptions import NegativeBalance\nfrom gratipay.models import check_db\nfrom psycopg2 import IntegrityError\n\n\nwith open('fake_payday.sql') as f:\n FAKE_PAYDAY = f.read()\n\n\nclass ExceptionWrapped(Exception): pass\n\n\ndef threaded_map(func, iterable, threads=5):\n pool = ThreadPool(threads)\n def g(*a, **kw):\n # Without this wrapper we get a traceback from inside multiprocessing.\n try:\n return func(*a, **kw)\n except Exception as e:\n import traceback\n raise ExceptionWrapped(e, traceback.format_exc())\n try:\n r = pool.map(g, iterable)\n except ExceptionWrapped as e:\n print(e.args[1])\n raise e.args[0]\n pool.close()\n pool.join()\n return r\n\n\nclass NoPayday(Exception):\n __str__ = lambda self: \"No payday found where one was expected.\"\n\n\nclass Payday(object):\n \"\"\"Represent an abstract event during which money is moved.\n\n On Payday, we want to use a participant's Gratipay balance to settle their\n tips due (pulling in more money via credit card as needed), but we only\n want to use their balance at the start of Payday. Balance changes should be\n atomic globally per-Payday.\n\n Here's the call structure of the Payday.run method:\n\n run\n payin\n prepare\n create_card_holds\n transfer_tips\n transfer_takes\n settle_card_holds\n update_balances\n take_over_balances\n payout\n update_stats\n update_cached_amounts\n end\n\n \"\"\"\n\n\n @classmethod\n def start(cls):\n \"\"\"Try to start a new Payday.\n\n If there is a Payday that hasn't finished yet, then the UNIQUE\n constraint on ts_end will kick in and notify us of that. In that case\n we load the existing Payday and work on it some more. 
We use the start\n time of the current Payday to synchronize our work.\n\n \"\"\"\n try:\n d = cls.db.one(\"\"\"\n INSERT INTO paydays DEFAULT VALUES\n RETURNING id, (ts_start AT TIME ZONE 'UTC') AS ts_start, stage\n \"\"\", back_as=dict)\n log(\"Starting a new payday.\")\n except IntegrityError: # Collision, we have a Payday already.\n d = cls.db.one(\"\"\"\n SELECT id, (ts_start AT TIME ZONE 'UTC') AS ts_start, stage\n FROM paydays\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n \"\"\", back_as=dict)\n log(\"Picking up with an existing payday.\")\n\n d['ts_start'] = d['ts_start'].replace(tzinfo=aspen.utils.utc)\n\n log(\"Payday started at %s.\" % d['ts_start'])\n\n payday = Payday()\n payday.__dict__.update(d)\n return payday\n\n\n def run(self):\n \"\"\"This is the starting point for payday.\n\n This method runs every Thursday. It is structured such that it can be\n run again safely (with a newly-instantiated Payday object) if it\n crashes.\n\n \"\"\"\n self.db.self_check()\n\n _start = aspen.utils.utcnow()\n log(\"Greetings, program! It's PAYDAY!!!!\")\n\n if self.stage < 1:\n self.payin()\n self.mark_stage_done()\n if self.stage < 2:\n self.payout()\n self.mark_stage_done()\n if self.stage < 3:\n self.update_stats()\n self.update_cached_amounts()\n self.mark_stage_done()\n\n self.end()\n\n _end = aspen.utils.utcnow()\n _delta = _end - _start\n fmt_past = \"Script ran for %%(age)s (%s).\" % _delta\n log(aspen.utils.to_age(_start, fmt_past=fmt_past))\n\n\n def payin(self):\n \"\"\"The first stage of payday where we charge credit cards and transfer\n money internally between participants.\n \"\"\"\n with self.db.get_cursor() as cursor:\n self.prepare(cursor, self.ts_start)\n holds = self.create_card_holds(cursor)\n self.transfer_tips(cursor)\n self.transfer_takes(cursor, self.ts_start)\n transfers = cursor.all(\"\"\"\n SELECT * FROM transfers WHERE \"timestamp\" > %s\n \"\"\", (self.ts_start,))\n try:\n self.settle_card_holds(cursor, holds)\n self.update_balances(cursor)\n check_db(cursor)\n except:\n # Dump transfers for debugging\n import csv\n from time import time\n with open('%s_transfers.csv' % time(), 'wb') as f:\n csv.writer(f).writerows(transfers)\n raise\n self.take_over_balances()\n # Clean up leftover functions\n self.db.run(\"\"\"\n DROP FUNCTION process_take();\n DROP FUNCTION process_tip();\n DROP FUNCTION settle_tip_graph();\n DROP FUNCTION transfer(text, text, numeric, context_type);\n \"\"\")\n\n\n @staticmethod\n def prepare(cursor, ts_start):\n \"\"\"Prepare the DB: we need temporary tables with indexes and triggers.\n \"\"\"\n cursor.run(\"\"\"\n\n -- Create the necessary temporary tables and indexes\n\n CREATE TEMPORARY TABLE payday_participants ON COMMIT DROP AS\n SELECT id\n , username\n , claimed_time\n , balance AS old_balance\n , balance AS new_balance\n , balanced_customer_href\n , last_bill_result\n , is_suspicious\n , goal\n , false AS card_hold_ok\n FROM participants\n WHERE is_suspicious IS NOT true\n AND claimed_time < %(ts_start)s\n ORDER BY claimed_time;\n\n CREATE UNIQUE INDEX ON payday_participants (id);\n CREATE UNIQUE INDEX ON payday_participants (username);\n\n CREATE TEMPORARY TABLE payday_transfers_done ON COMMIT DROP AS\n SELECT *\n FROM transfers t\n WHERE t.timestamp > %(ts_start)s;\n\n CREATE TEMPORARY TABLE payday_tips ON COMMIT DROP AS\n SELECT tipper, tippee, amount\n FROM ( SELECT DISTINCT ON (tipper, tippee) *\n FROM tips\n WHERE mtime < %(ts_start)s\n ORDER BY tipper, tippee, mtime DESC\n ) t\n JOIN payday_participants p ON 
p.username = t.tipper\n JOIN payday_participants p2 ON p2.username = t.tippee\n WHERE t.amount > 0\n AND (p2.goal IS NULL or p2.goal >= 0)\n AND ( SELECT id\n FROM payday_transfers_done t2\n WHERE t.tipper = t2.tipper\n AND t.tippee = t2.tippee\n AND context = 'tip'\n ) IS NULL\n ORDER BY p.claimed_time ASC, t.ctime ASC;\n\n CREATE INDEX ON payday_tips (tipper);\n CREATE INDEX ON payday_tips (tippee);\n ALTER TABLE payday_tips ADD COLUMN is_funded boolean;\n\n ALTER TABLE payday_participants ADD COLUMN giving_today numeric(35,2);\n UPDATE payday_participants\n SET giving_today = COALESCE((\n SELECT sum(amount)\n FROM payday_tips\n WHERE tipper = username\n ), 0);\n\n CREATE TEMPORARY TABLE payday_takes\n ( team text\n , member text\n , amount numeric(35,2)\n ) ON COMMIT DROP;\n\n CREATE TEMPORARY TABLE payday_transfers\n ( timestamp timestamptz DEFAULT now()\n , tipper text\n , tippee text\n , amount numeric(35,2)\n , context context_type\n ) ON COMMIT DROP;\n\n\n -- Prepare a statement that makes and records a transfer\n\n CREATE OR REPLACE FUNCTION transfer(text, text, numeric, context_type)\n RETURNS void AS $$\n BEGIN\n IF ($3 = 0) THEN RETURN; END IF;\n UPDATE payday_participants\n SET new_balance = (new_balance - $3)\n WHERE username = $1;\n UPDATE payday_participants\n SET new_balance = (new_balance + $3)\n WHERE username = $2;\n INSERT INTO payday_transfers\n (tipper, tippee, amount, context)\n VALUES ( ( SELECT p.username\n FROM participants p\n JOIN payday_participants p2 ON p.id = p2.id\n WHERE p2.username = $1 )\n , ( SELECT p.username\n FROM participants p\n JOIN payday_participants p2 ON p.id = p2.id\n WHERE p2.username = $2 )\n , $3\n , $4\n );\n END;\n $$ LANGUAGE plpgsql;\n\n\n -- Create a trigger to process tips\n\n CREATE OR REPLACE FUNCTION process_tip() RETURNS trigger AS $$\n DECLARE\n tipper payday_participants;\n BEGIN\n tipper := (\n SELECT p.*::payday_participants\n FROM payday_participants p\n WHERE username = NEW.tipper\n );\n IF (NEW.amount <= tipper.new_balance OR tipper.card_hold_ok) THEN\n EXECUTE transfer(NEW.tipper, NEW.tippee, NEW.amount, 'tip');\n RETURN NEW;\n END IF;\n RETURN NULL;\n END;\n $$ LANGUAGE plpgsql;\n\n CREATE TRIGGER process_tip BEFORE UPDATE OF is_funded ON payday_tips\n FOR EACH ROW\n WHEN (NEW.is_funded IS true AND OLD.is_funded IS NOT true)\n EXECUTE PROCEDURE process_tip();\n\n\n -- Create a trigger to process takes\n\n CREATE OR REPLACE FUNCTION process_take() RETURNS trigger AS $$\n DECLARE\n actual_amount numeric(35,2);\n team_balance numeric(35,2);\n BEGIN\n team_balance := (\n SELECT new_balance\n FROM payday_participants\n WHERE username = NEW.team\n );\n IF (team_balance <= 0) THEN RETURN NULL; END IF;\n actual_amount := NEW.amount;\n IF (team_balance < NEW.amount) THEN\n actual_amount := team_balance;\n END IF;\n EXECUTE transfer(NEW.team, NEW.member, actual_amount, 'take');\n RETURN NULL;\n END;\n $$ LANGUAGE plpgsql;\n\n CREATE TRIGGER process_take AFTER INSERT ON payday_takes\n FOR EACH ROW EXECUTE PROCEDURE process_take();\n\n\n -- Create a function to settle whole tip graph\n\n CREATE OR REPLACE FUNCTION settle_tip_graph() RETURNS void AS $$\n DECLARE\n count integer NOT NULL DEFAULT 0;\n i integer := 0;\n BEGIN\n LOOP\n i := i + 1;\n WITH updated_rows AS (\n UPDATE payday_tips\n SET is_funded = true\n WHERE is_funded IS NOT true\n RETURNING *\n )\n SELECT COUNT(*) FROM updated_rows INTO count;\n IF (count = 0) THEN\n EXIT;\n END IF;\n IF (i > 50) THEN\n RAISE 'Reached the maximum number of iterations';\n END IF;\n END 
LOOP;\n END;\n $$ LANGUAGE plpgsql;\n\n\n -- Save the stats we already have\n\n UPDATE paydays\n SET nparticipants = (SELECT count(*) FROM payday_participants)\n , ncc_missing = (\n SELECT count(*)\n FROM payday_participants\n WHERE old_balance < giving_today\n AND ( balanced_customer_href IS NULL\n OR\n last_bill_result IS NULL\n )\n )\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz;\n\n \"\"\", dict(ts_start=ts_start))\n log('Prepared the DB.')\n\n\n @staticmethod\n def fetch_card_holds(participant_ids):\n holds = {}\n for hold in CardHold.query.filter(CardHold.f.meta.state == 'new'):\n state = 'new'\n if hold.status == 'failed' or hold.failure_reason:\n state = 'failed'\n elif hold.voided_at:\n state = 'cancelled'\n elif getattr(hold, 'debit_href', None):\n state = 'captured'\n if state != 'new':\n hold.meta['state'] = state\n hold.save()\n continue\n p_id = int(hold.meta['participant_id'])\n if p_id in participant_ids:\n holds[p_id] = hold\n else:\n cancel_card_hold(hold)\n return holds\n\n\n def create_card_holds(self, cursor):\n\n # Get the list of participants to create card holds for\n participants = cursor.all(\"\"\"\n SELECT *\n FROM payday_participants\n WHERE old_balance < giving_today\n AND balanced_customer_href IS NOT NULL\n AND last_bill_result IS NOT NULL\n AND is_suspicious IS false\n \"\"\")\n if not participants:\n return {}\n\n # Fetch existing holds\n participant_ids = set(p.id for p in participants)\n holds = self.fetch_card_holds(participant_ids)\n\n # Create new holds and check amounts of existing ones\n def f(p):\n amount = p.giving_today\n if p.old_balance < 0:\n amount -= p.old_balance\n if p.id in holds:\n charge_amount = upcharge(amount)[0]\n if holds[p.id].amount >= charge_amount * 100:\n return\n else:\n # The amount is too low, cancel the hold and make a new one\n cancel_card_hold(holds.pop(p.id))\n hold, error = create_card_hold(self.db, p, amount)\n if error:\n return 1\n else:\n holds[p.id] = hold\n n_failures = sum(filter(None, threaded_map(f, participants)))\n\n # Record the number of failures\n cursor.one(\"\"\"\n UPDATE paydays\n SET ncc_failing = %s\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n RETURNING id\n \"\"\", (n_failures,), default=NoPayday)\n\n # Update the values of card_hold_ok in our temporary table\n if not holds:\n return {}\n cursor.run(\"\"\"\n UPDATE payday_participants p\n SET card_hold_ok = true\n WHERE p.id IN %s\n \"\"\", (tuple(holds.keys()),))\n\n return holds\n\n\n @staticmethod\n def transfer_tips(cursor):\n cursor.run(\"\"\"\n\n UPDATE payday_tips t\n SET is_funded = true\n FROM payday_participants p\n WHERE p.username = t.tipper\n AND p.card_hold_ok;\n\n SELECT settle_tip_graph();\n\n \"\"\")\n\n\n @staticmethod\n def transfer_takes(cursor, ts_start):\n cursor.run(\"\"\"\n\n INSERT INTO payday_takes\n SELECT team, member, amount\n FROM ( SELECT DISTINCT ON (team, member)\n team, member, amount, ctime\n FROM takes\n WHERE mtime < %(ts_start)s\n ORDER BY team, member, mtime DESC\n ) t\n WHERE t.amount > 0\n AND t.team IN (SELECT username FROM payday_participants)\n AND t.member IN (SELECT username FROM payday_participants)\n AND ( SELECT id\n FROM payday_transfers_done t2\n WHERE t.team = t2.tipper\n AND t.member = t2.tippee\n AND context = 'take'\n ) IS NULL\n ORDER BY t.team, t.ctime DESC;\n\n SELECT settle_tip_graph();\n\n \"\"\", dict(ts_start=ts_start))\n\n\n def settle_card_holds(self, cursor, holds):\n participants = cursor.all(\"\"\"\n SELECT *\n FROM payday_participants\n WHERE new_balance < 0\n 
\"\"\")\n participants = [p for p in participants if p.id in holds]\n\n # Capture holds to bring balances back up to (at least) zero\n def capture(p):\n amount = -p.new_balance\n capture_card_hold(self.db, p, amount, holds.pop(p.id))\n threaded_map(capture, participants)\n log(\"Captured %i card holds.\" % len(participants))\n\n # Cancel the remaining holds\n threaded_map(cancel_card_hold, holds.values())\n log(\"Canceled %i card holds.\" % len(holds))\n\n\n @staticmethod\n def update_balances(cursor):\n participants = cursor.all(\"\"\"\n\n UPDATE participants p\n SET balance = (balance + p2.new_balance - p2.old_balance)\n FROM payday_participants p2\n WHERE p.id = p2.id\n AND p2.new_balance <> p2.old_balance\n RETURNING p.id\n , p.username\n , balance AS new_balance\n , ( SELECT balance\n FROM participants p3\n WHERE p3.id = p.id\n ) AS cur_balance;\n\n \"\"\")\n # Check that balances aren't becoming (more) negative\n for p in participants:\n if p.new_balance < 0 and p.new_balance < p.cur_balance:\n log(p)\n raise NegativeBalance()\n cursor.run(\"\"\"\n INSERT INTO transfers (timestamp, tipper, tippee, amount, context)\n SELECT * FROM payday_transfers;\n \"\"\")\n log(\"Updated the balances of %i participants.\" % len(participants))\n\n\n def take_over_balances(self):\n \"\"\"If an account that receives money is taken over during payin we need\n to transfer the balance to the absorbing account.\n \"\"\"\n for i in itertools.count():\n if i > 10:\n raise Exception('possible infinite loop')\n count = self.db.one(\"\"\"\n\n DROP TABLE IF EXISTS temp;\n CREATE TEMPORARY TABLE temp AS\n SELECT archived_as, absorbed_by, balance AS archived_balance\n FROM absorptions a\n JOIN participants p ON a.archived_as = p.username\n WHERE balance > 0;\n\n SELECT count(*) FROM temp;\n\n \"\"\")\n if not count:\n break\n self.db.run(\"\"\"\n\n INSERT INTO transfers (tipper, tippee, amount, context)\n SELECT archived_as, absorbed_by, archived_balance, 'take-over'\n FROM temp;\n\n UPDATE participants\n SET balance = (balance - archived_balance)\n FROM temp\n WHERE username = archived_as;\n\n UPDATE participants\n SET balance = (balance + archived_balance)\n FROM temp\n WHERE username = absorbed_by;\n\n \"\"\")\n\n\n def payout(self):\n \"\"\"This is the second stage of payday in which we send money out to the\n bank accounts of participants.\n \"\"\"\n log(\"Starting payout loop.\")\n participants = self.db.all(\"\"\"\n SELECT p.*::participants\n FROM participants p\n WHERE balance > 0\n AND balanced_customer_href IS NOT NULL\n AND last_ach_result IS NOT NULL\n \"\"\")\n def credit(participant):\n if participant.is_suspicious is None:\n log(\"UNREVIEWED: %s\" % participant.username)\n return\n withhold = participant.giving + participant.pledging\n error = ach_credit(self.db, participant, withhold)\n if error:\n self.mark_ach_failed()\n threaded_map(credit, participants)\n log(\"Did payout for %d participants.\" % len(participants))\n self.db.self_check()\n log(\"Checked the DB.\")\n\n\n def update_stats(self):\n self.db.run(\"\"\"\\\n\n WITH our_transfers AS (\n SELECT *\n FROM transfers\n WHERE \"timestamp\" >= %(ts_start)s\n )\n , our_tips AS (\n SELECT *\n FROM our_transfers\n WHERE context = 'tip'\n )\n , our_pachinkos AS (\n SELECT *\n FROM our_transfers\n WHERE context = 'take'\n )\n , our_exchanges AS (\n SELECT *\n FROM exchanges\n WHERE \"timestamp\" >= %(ts_start)s\n )\n , our_achs AS (\n SELECT *\n FROM our_exchanges\n WHERE amount < 0\n )\n , our_charges AS (\n SELECT *\n FROM our_exchanges\n WHERE 
amount > 0\n AND status <> 'failed'\n )\n UPDATE paydays\n SET nactive = (\n SELECT DISTINCT count(*) FROM (\n SELECT tipper FROM our_transfers\n UNION\n SELECT tippee FROM our_transfers\n ) AS foo\n )\n , ntippers = (SELECT count(DISTINCT tipper) FROM our_transfers)\n , ntips = (SELECT count(*) FROM our_tips)\n , npachinko = (SELECT count(*) FROM our_pachinkos)\n , pachinko_volume = (SELECT COALESCE(sum(amount), 0) FROM our_pachinkos)\n , ntransfers = (SELECT count(*) FROM our_transfers)\n , transfer_volume = (SELECT COALESCE(sum(amount), 0) FROM our_transfers)\n , nachs = (SELECT count(*) FROM our_achs)\n , ach_volume = (SELECT COALESCE(sum(amount), 0) FROM our_achs)\n , ach_fees_volume = (SELECT COALESCE(sum(fee), 0) FROM our_achs)\n , ncharges = (SELECT count(*) FROM our_charges)\n , charge_volume = (\n SELECT COALESCE(sum(amount + fee), 0)\n FROM our_charges\n )\n , charge_fees_volume = (SELECT COALESCE(sum(fee), 0) FROM our_charges)\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n\n \"\"\", {'ts_start': self.ts_start})\n log(\"Updated payday stats.\")\n\n\n def update_cached_amounts(self):\n with self.db.get_cursor() as cursor:\n cursor.execute(FAKE_PAYDAY)\n log(\"Updated receiving amounts.\")\n\n\n def end(self):\n self.ts_end = self.db.one(\"\"\"\\\n\n UPDATE paydays\n SET ts_end=now()\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n RETURNING ts_end AT TIME ZONE 'UTC'\n\n \"\"\", default=NoPayday).replace(tzinfo=aspen.utils.utc)\n\n\n # Record-keeping.\n # ===============\n\n def mark_ach_failed(self):\n self.db.one(\"\"\"\\\n\n UPDATE paydays\n SET nach_failing = nach_failing + 1\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n RETURNING id\n\n \"\"\", default=NoPayday)\n\n\n def mark_stage_done(self):\n self.db.one(\"\"\"\\\n\n UPDATE paydays\n SET stage = stage + 1\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n RETURNING id\n\n \"\"\", default=NoPayday)\n", "path": "gratipay/billing/payday.py"}], "after_files": [{"content": "\"\"\"This is Gratipay's payday algorithm.\n\nExchanges (moving money between Gratipay and the outside world) and transfers\n(moving money amongst Gratipay users) happen within an isolated event called\npayday. This event has duration (it's not punctiliar).\n\nPayday is designed to be crash-resistant. Everything that can be rolled back\nhappens inside a single DB transaction. 
Exchanges cannot be rolled back, so they\nimmediately affect the participant's balance.\n\n\"\"\"\nfrom __future__ import unicode_literals\n\nimport itertools\nfrom multiprocessing.dummy import Pool as ThreadPool\n\nfrom balanced import CardHold\n\nimport aspen.utils\nfrom aspen import log\nfrom gratipay.billing.exchanges import (\n ach_credit, cancel_card_hold, capture_card_hold, create_card_hold, upcharge\n)\nfrom gratipay.exceptions import NegativeBalance\nfrom gratipay.models import check_db\nfrom psycopg2 import IntegrityError\n\n\nwith open('sql/fake_payday.sql') as f:\n FAKE_PAYDAY = f.read()\n\n\nclass ExceptionWrapped(Exception): pass\n\n\ndef threaded_map(func, iterable, threads=5):\n pool = ThreadPool(threads)\n def g(*a, **kw):\n # Without this wrapper we get a traceback from inside multiprocessing.\n try:\n return func(*a, **kw)\n except Exception as e:\n import traceback\n raise ExceptionWrapped(e, traceback.format_exc())\n try:\n r = pool.map(g, iterable)\n except ExceptionWrapped as e:\n print(e.args[1])\n raise e.args[0]\n pool.close()\n pool.join()\n return r\n\n\nclass NoPayday(Exception):\n __str__ = lambda self: \"No payday found where one was expected.\"\n\n\nclass Payday(object):\n \"\"\"Represent an abstract event during which money is moved.\n\n On Payday, we want to use a participant's Gratipay balance to settle their\n tips due (pulling in more money via credit card as needed), but we only\n want to use their balance at the start of Payday. Balance changes should be\n atomic globally per-Payday.\n\n Here's the call structure of the Payday.run method:\n\n run\n payin\n prepare\n create_card_holds\n transfer_tips\n transfer_takes\n settle_card_holds\n update_balances\n take_over_balances\n payout\n update_stats\n update_cached_amounts\n end\n\n \"\"\"\n\n\n @classmethod\n def start(cls):\n \"\"\"Try to start a new Payday.\n\n If there is a Payday that hasn't finished yet, then the UNIQUE\n constraint on ts_end will kick in and notify us of that. In that case\n we load the existing Payday and work on it some more. We use the start\n time of the current Payday to synchronize our work.\n\n \"\"\"\n try:\n d = cls.db.one(\"\"\"\n INSERT INTO paydays DEFAULT VALUES\n RETURNING id, (ts_start AT TIME ZONE 'UTC') AS ts_start, stage\n \"\"\", back_as=dict)\n log(\"Starting a new payday.\")\n except IntegrityError: # Collision, we have a Payday already.\n d = cls.db.one(\"\"\"\n SELECT id, (ts_start AT TIME ZONE 'UTC') AS ts_start, stage\n FROM paydays\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n \"\"\", back_as=dict)\n log(\"Picking up with an existing payday.\")\n\n d['ts_start'] = d['ts_start'].replace(tzinfo=aspen.utils.utc)\n\n log(\"Payday started at %s.\" % d['ts_start'])\n\n payday = Payday()\n payday.__dict__.update(d)\n return payday\n\n\n def run(self):\n \"\"\"This is the starting point for payday.\n\n This method runs every Thursday. It is structured such that it can be\n run again safely (with a newly-instantiated Payday object) if it\n crashes.\n\n \"\"\"\n self.db.self_check()\n\n _start = aspen.utils.utcnow()\n log(\"Greetings, program! 
It's PAYDAY!!!!\")\n\n if self.stage < 1:\n self.payin()\n self.mark_stage_done()\n if self.stage < 2:\n self.payout()\n self.mark_stage_done()\n if self.stage < 3:\n self.update_stats()\n self.update_cached_amounts()\n self.mark_stage_done()\n\n self.end()\n\n _end = aspen.utils.utcnow()\n _delta = _end - _start\n fmt_past = \"Script ran for %%(age)s (%s).\" % _delta\n log(aspen.utils.to_age(_start, fmt_past=fmt_past))\n\n\n def payin(self):\n \"\"\"The first stage of payday where we charge credit cards and transfer\n money internally between participants.\n \"\"\"\n with self.db.get_cursor() as cursor:\n self.prepare(cursor, self.ts_start)\n holds = self.create_card_holds(cursor)\n self.transfer_tips(cursor)\n self.transfer_takes(cursor, self.ts_start)\n transfers = cursor.all(\"\"\"\n SELECT * FROM transfers WHERE \"timestamp\" > %s\n \"\"\", (self.ts_start,))\n try:\n self.settle_card_holds(cursor, holds)\n self.update_balances(cursor)\n check_db(cursor)\n except:\n # Dump transfers for debugging\n import csv\n from time import time\n with open('%s_transfers.csv' % time(), 'wb') as f:\n csv.writer(f).writerows(transfers)\n raise\n self.take_over_balances()\n # Clean up leftover functions\n self.db.run(\"\"\"\n DROP FUNCTION process_take();\n DROP FUNCTION process_tip();\n DROP FUNCTION settle_tip_graph();\n DROP FUNCTION transfer(text, text, numeric, context_type);\n \"\"\")\n\n\n @staticmethod\n def prepare(cursor, ts_start):\n \"\"\"Prepare the DB: we need temporary tables with indexes and triggers.\n \"\"\"\n cursor.run(\"\"\"\n\n -- Create the necessary temporary tables and indexes\n\n CREATE TEMPORARY TABLE payday_participants ON COMMIT DROP AS\n SELECT id\n , username\n , claimed_time\n , balance AS old_balance\n , balance AS new_balance\n , balanced_customer_href\n , last_bill_result\n , is_suspicious\n , goal\n , false AS card_hold_ok\n FROM participants\n WHERE is_suspicious IS NOT true\n AND claimed_time < %(ts_start)s\n ORDER BY claimed_time;\n\n CREATE UNIQUE INDEX ON payday_participants (id);\n CREATE UNIQUE INDEX ON payday_participants (username);\n\n CREATE TEMPORARY TABLE payday_transfers_done ON COMMIT DROP AS\n SELECT *\n FROM transfers t\n WHERE t.timestamp > %(ts_start)s;\n\n CREATE TEMPORARY TABLE payday_tips ON COMMIT DROP AS\n SELECT tipper, tippee, amount\n FROM ( SELECT DISTINCT ON (tipper, tippee) *\n FROM tips\n WHERE mtime < %(ts_start)s\n ORDER BY tipper, tippee, mtime DESC\n ) t\n JOIN payday_participants p ON p.username = t.tipper\n JOIN payday_participants p2 ON p2.username = t.tippee\n WHERE t.amount > 0\n AND (p2.goal IS NULL or p2.goal >= 0)\n AND ( SELECT id\n FROM payday_transfers_done t2\n WHERE t.tipper = t2.tipper\n AND t.tippee = t2.tippee\n AND context = 'tip'\n ) IS NULL\n ORDER BY p.claimed_time ASC, t.ctime ASC;\n\n CREATE INDEX ON payday_tips (tipper);\n CREATE INDEX ON payday_tips (tippee);\n ALTER TABLE payday_tips ADD COLUMN is_funded boolean;\n\n ALTER TABLE payday_participants ADD COLUMN giving_today numeric(35,2);\n UPDATE payday_participants\n SET giving_today = COALESCE((\n SELECT sum(amount)\n FROM payday_tips\n WHERE tipper = username\n ), 0);\n\n CREATE TEMPORARY TABLE payday_takes\n ( team text\n , member text\n , amount numeric(35,2)\n ) ON COMMIT DROP;\n\n CREATE TEMPORARY TABLE payday_transfers\n ( timestamp timestamptz DEFAULT now()\n , tipper text\n , tippee text\n , amount numeric(35,2)\n , context context_type\n ) ON COMMIT DROP;\n\n\n -- Prepare a statement that makes and records a transfer\n\n CREATE OR REPLACE FUNCTION 
transfer(text, text, numeric, context_type)\n RETURNS void AS $$\n BEGIN\n IF ($3 = 0) THEN RETURN; END IF;\n UPDATE payday_participants\n SET new_balance = (new_balance - $3)\n WHERE username = $1;\n UPDATE payday_participants\n SET new_balance = (new_balance + $3)\n WHERE username = $2;\n INSERT INTO payday_transfers\n (tipper, tippee, amount, context)\n VALUES ( ( SELECT p.username\n FROM participants p\n JOIN payday_participants p2 ON p.id = p2.id\n WHERE p2.username = $1 )\n , ( SELECT p.username\n FROM participants p\n JOIN payday_participants p2 ON p.id = p2.id\n WHERE p2.username = $2 )\n , $3\n , $4\n );\n END;\n $$ LANGUAGE plpgsql;\n\n\n -- Create a trigger to process tips\n\n CREATE OR REPLACE FUNCTION process_tip() RETURNS trigger AS $$\n DECLARE\n tipper payday_participants;\n BEGIN\n tipper := (\n SELECT p.*::payday_participants\n FROM payday_participants p\n WHERE username = NEW.tipper\n );\n IF (NEW.amount <= tipper.new_balance OR tipper.card_hold_ok) THEN\n EXECUTE transfer(NEW.tipper, NEW.tippee, NEW.amount, 'tip');\n RETURN NEW;\n END IF;\n RETURN NULL;\n END;\n $$ LANGUAGE plpgsql;\n\n CREATE TRIGGER process_tip BEFORE UPDATE OF is_funded ON payday_tips\n FOR EACH ROW\n WHEN (NEW.is_funded IS true AND OLD.is_funded IS NOT true)\n EXECUTE PROCEDURE process_tip();\n\n\n -- Create a trigger to process takes\n\n CREATE OR REPLACE FUNCTION process_take() RETURNS trigger AS $$\n DECLARE\n actual_amount numeric(35,2);\n team_balance numeric(35,2);\n BEGIN\n team_balance := (\n SELECT new_balance\n FROM payday_participants\n WHERE username = NEW.team\n );\n IF (team_balance <= 0) THEN RETURN NULL; END IF;\n actual_amount := NEW.amount;\n IF (team_balance < NEW.amount) THEN\n actual_amount := team_balance;\n END IF;\n EXECUTE transfer(NEW.team, NEW.member, actual_amount, 'take');\n RETURN NULL;\n END;\n $$ LANGUAGE plpgsql;\n\n CREATE TRIGGER process_take AFTER INSERT ON payday_takes\n FOR EACH ROW EXECUTE PROCEDURE process_take();\n\n\n -- Create a function to settle whole tip graph\n\n CREATE OR REPLACE FUNCTION settle_tip_graph() RETURNS void AS $$\n DECLARE\n count integer NOT NULL DEFAULT 0;\n i integer := 0;\n BEGIN\n LOOP\n i := i + 1;\n WITH updated_rows AS (\n UPDATE payday_tips\n SET is_funded = true\n WHERE is_funded IS NOT true\n RETURNING *\n )\n SELECT COUNT(*) FROM updated_rows INTO count;\n IF (count = 0) THEN\n EXIT;\n END IF;\n IF (i > 50) THEN\n RAISE 'Reached the maximum number of iterations';\n END IF;\n END LOOP;\n END;\n $$ LANGUAGE plpgsql;\n\n\n -- Save the stats we already have\n\n UPDATE paydays\n SET nparticipants = (SELECT count(*) FROM payday_participants)\n , ncc_missing = (\n SELECT count(*)\n FROM payday_participants\n WHERE old_balance < giving_today\n AND ( balanced_customer_href IS NULL\n OR\n last_bill_result IS NULL\n )\n )\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz;\n\n \"\"\", dict(ts_start=ts_start))\n log('Prepared the DB.')\n\n\n @staticmethod\n def fetch_card_holds(participant_ids):\n holds = {}\n for hold in CardHold.query.filter(CardHold.f.meta.state == 'new'):\n state = 'new'\n if hold.status == 'failed' or hold.failure_reason:\n state = 'failed'\n elif hold.voided_at:\n state = 'cancelled'\n elif getattr(hold, 'debit_href', None):\n state = 'captured'\n if state != 'new':\n hold.meta['state'] = state\n hold.save()\n continue\n p_id = int(hold.meta['participant_id'])\n if p_id in participant_ids:\n holds[p_id] = hold\n else:\n cancel_card_hold(hold)\n return holds\n\n\n def create_card_holds(self, cursor):\n\n # Get the 
list of participants to create card holds for\n participants = cursor.all(\"\"\"\n SELECT *\n FROM payday_participants\n WHERE old_balance < giving_today\n AND balanced_customer_href IS NOT NULL\n AND last_bill_result IS NOT NULL\n AND is_suspicious IS false\n \"\"\")\n if not participants:\n return {}\n\n # Fetch existing holds\n participant_ids = set(p.id for p in participants)\n holds = self.fetch_card_holds(participant_ids)\n\n # Create new holds and check amounts of existing ones\n def f(p):\n amount = p.giving_today\n if p.old_balance < 0:\n amount -= p.old_balance\n if p.id in holds:\n charge_amount = upcharge(amount)[0]\n if holds[p.id].amount >= charge_amount * 100:\n return\n else:\n # The amount is too low, cancel the hold and make a new one\n cancel_card_hold(holds.pop(p.id))\n hold, error = create_card_hold(self.db, p, amount)\n if error:\n return 1\n else:\n holds[p.id] = hold\n n_failures = sum(filter(None, threaded_map(f, participants)))\n\n # Record the number of failures\n cursor.one(\"\"\"\n UPDATE paydays\n SET ncc_failing = %s\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n RETURNING id\n \"\"\", (n_failures,), default=NoPayday)\n\n # Update the values of card_hold_ok in our temporary table\n if not holds:\n return {}\n cursor.run(\"\"\"\n UPDATE payday_participants p\n SET card_hold_ok = true\n WHERE p.id IN %s\n \"\"\", (tuple(holds.keys()),))\n\n return holds\n\n\n @staticmethod\n def transfer_tips(cursor):\n cursor.run(\"\"\"\n\n UPDATE payday_tips t\n SET is_funded = true\n FROM payday_participants p\n WHERE p.username = t.tipper\n AND p.card_hold_ok;\n\n SELECT settle_tip_graph();\n\n \"\"\")\n\n\n @staticmethod\n def transfer_takes(cursor, ts_start):\n cursor.run(\"\"\"\n\n INSERT INTO payday_takes\n SELECT team, member, amount\n FROM ( SELECT DISTINCT ON (team, member)\n team, member, amount, ctime\n FROM takes\n WHERE mtime < %(ts_start)s\n ORDER BY team, member, mtime DESC\n ) t\n WHERE t.amount > 0\n AND t.team IN (SELECT username FROM payday_participants)\n AND t.member IN (SELECT username FROM payday_participants)\n AND ( SELECT id\n FROM payday_transfers_done t2\n WHERE t.team = t2.tipper\n AND t.member = t2.tippee\n AND context = 'take'\n ) IS NULL\n ORDER BY t.team, t.ctime DESC;\n\n SELECT settle_tip_graph();\n\n \"\"\", dict(ts_start=ts_start))\n\n\n def settle_card_holds(self, cursor, holds):\n participants = cursor.all(\"\"\"\n SELECT *\n FROM payday_participants\n WHERE new_balance < 0\n \"\"\")\n participants = [p for p in participants if p.id in holds]\n\n # Capture holds to bring balances back up to (at least) zero\n def capture(p):\n amount = -p.new_balance\n capture_card_hold(self.db, p, amount, holds.pop(p.id))\n threaded_map(capture, participants)\n log(\"Captured %i card holds.\" % len(participants))\n\n # Cancel the remaining holds\n threaded_map(cancel_card_hold, holds.values())\n log(\"Canceled %i card holds.\" % len(holds))\n\n\n @staticmethod\n def update_balances(cursor):\n participants = cursor.all(\"\"\"\n\n UPDATE participants p\n SET balance = (balance + p2.new_balance - p2.old_balance)\n FROM payday_participants p2\n WHERE p.id = p2.id\n AND p2.new_balance <> p2.old_balance\n RETURNING p.id\n , p.username\n , balance AS new_balance\n , ( SELECT balance\n FROM participants p3\n WHERE p3.id = p.id\n ) AS cur_balance;\n\n \"\"\")\n # Check that balances aren't becoming (more) negative\n for p in participants:\n if p.new_balance < 0 and p.new_balance < p.cur_balance:\n log(p)\n raise NegativeBalance()\n cursor.run(\"\"\"\n 
INSERT INTO transfers (timestamp, tipper, tippee, amount, context)\n SELECT * FROM payday_transfers;\n \"\"\")\n log(\"Updated the balances of %i participants.\" % len(participants))\n\n\n def take_over_balances(self):\n \"\"\"If an account that receives money is taken over during payin we need\n to transfer the balance to the absorbing account.\n \"\"\"\n for i in itertools.count():\n if i > 10:\n raise Exception('possible infinite loop')\n count = self.db.one(\"\"\"\n\n DROP TABLE IF EXISTS temp;\n CREATE TEMPORARY TABLE temp AS\n SELECT archived_as, absorbed_by, balance AS archived_balance\n FROM absorptions a\n JOIN participants p ON a.archived_as = p.username\n WHERE balance > 0;\n\n SELECT count(*) FROM temp;\n\n \"\"\")\n if not count:\n break\n self.db.run(\"\"\"\n\n INSERT INTO transfers (tipper, tippee, amount, context)\n SELECT archived_as, absorbed_by, archived_balance, 'take-over'\n FROM temp;\n\n UPDATE participants\n SET balance = (balance - archived_balance)\n FROM temp\n WHERE username = archived_as;\n\n UPDATE participants\n SET balance = (balance + archived_balance)\n FROM temp\n WHERE username = absorbed_by;\n\n \"\"\")\n\n\n def payout(self):\n \"\"\"This is the second stage of payday in which we send money out to the\n bank accounts of participants.\n \"\"\"\n log(\"Starting payout loop.\")\n participants = self.db.all(\"\"\"\n SELECT p.*::participants\n FROM participants p\n WHERE balance > 0\n AND balanced_customer_href IS NOT NULL\n AND last_ach_result IS NOT NULL\n \"\"\")\n def credit(participant):\n if participant.is_suspicious is None:\n log(\"UNREVIEWED: %s\" % participant.username)\n return\n withhold = participant.giving + participant.pledging\n error = ach_credit(self.db, participant, withhold)\n if error:\n self.mark_ach_failed()\n threaded_map(credit, participants)\n log(\"Did payout for %d participants.\" % len(participants))\n self.db.self_check()\n log(\"Checked the DB.\")\n\n\n def update_stats(self):\n self.db.run(\"\"\"\\\n\n WITH our_transfers AS (\n SELECT *\n FROM transfers\n WHERE \"timestamp\" >= %(ts_start)s\n )\n , our_tips AS (\n SELECT *\n FROM our_transfers\n WHERE context = 'tip'\n )\n , our_pachinkos AS (\n SELECT *\n FROM our_transfers\n WHERE context = 'take'\n )\n , our_exchanges AS (\n SELECT *\n FROM exchanges\n WHERE \"timestamp\" >= %(ts_start)s\n )\n , our_achs AS (\n SELECT *\n FROM our_exchanges\n WHERE amount < 0\n )\n , our_charges AS (\n SELECT *\n FROM our_exchanges\n WHERE amount > 0\n AND status <> 'failed'\n )\n UPDATE paydays\n SET nactive = (\n SELECT DISTINCT count(*) FROM (\n SELECT tipper FROM our_transfers\n UNION\n SELECT tippee FROM our_transfers\n ) AS foo\n )\n , ntippers = (SELECT count(DISTINCT tipper) FROM our_transfers)\n , ntips = (SELECT count(*) FROM our_tips)\n , npachinko = (SELECT count(*) FROM our_pachinkos)\n , pachinko_volume = (SELECT COALESCE(sum(amount), 0) FROM our_pachinkos)\n , ntransfers = (SELECT count(*) FROM our_transfers)\n , transfer_volume = (SELECT COALESCE(sum(amount), 0) FROM our_transfers)\n , nachs = (SELECT count(*) FROM our_achs)\n , ach_volume = (SELECT COALESCE(sum(amount), 0) FROM our_achs)\n , ach_fees_volume = (SELECT COALESCE(sum(fee), 0) FROM our_achs)\n , ncharges = (SELECT count(*) FROM our_charges)\n , charge_volume = (\n SELECT COALESCE(sum(amount + fee), 0)\n FROM our_charges\n )\n , charge_fees_volume = (SELECT COALESCE(sum(fee), 0) FROM our_charges)\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n\n \"\"\", {'ts_start': self.ts_start})\n log(\"Updated payday 
stats.\")\n\n\n def update_cached_amounts(self):\n with self.db.get_cursor() as cursor:\n cursor.execute(FAKE_PAYDAY)\n log(\"Updated receiving amounts.\")\n\n\n def end(self):\n self.ts_end = self.db.one(\"\"\"\\\n\n UPDATE paydays\n SET ts_end=now()\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n RETURNING ts_end AT TIME ZONE 'UTC'\n\n \"\"\", default=NoPayday).replace(tzinfo=aspen.utils.utc)\n\n\n # Record-keeping.\n # ===============\n\n def mark_ach_failed(self):\n self.db.one(\"\"\"\\\n\n UPDATE paydays\n SET nach_failing = nach_failing + 1\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n RETURNING id\n\n \"\"\", default=NoPayday)\n\n\n def mark_stage_done(self):\n self.db.one(\"\"\"\\\n\n UPDATE paydays\n SET stage = stage + 1\n WHERE ts_end='1970-01-01T00:00:00+00'::timestamptz\n RETURNING id\n\n \"\"\", default=NoPayday)\n", "path": "gratipay/billing/payday.py"}]} |
gh_patches_debug_58 | rasdani/github-patches | git_diff | PokemonGoF__PokemonGo-Bot-4931 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Error in Telegram: "AttributeError: 'module' object has no attribute 'now'"
### Expected Behavior
<!-- Tell us what you expect to happen -->
Bot running with Telegram enabled
### Actual Behavior
<!-- Tell us what is happening -->
Bot not starting due to error message
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
<!-- Provide your FULL config file, feel free to use services such as pastebin.com to reduce clutter -->
http://pastebin.com/5nQC2ceh
### Output when issue occurred
<!-- Provide a reasonable sample from your output log (not just the error message), feel free to use services such as pastebin.com to reduce clutter -->
Traceback (most recent call last):
File "pokecli.py", line 781, in <module>
main()
File "pokecli.py", line 128, in main
bot = start_bot(bot, config)
File "pokecli.py", line 88, in start_bot
initialize_task(bot, config)
File "pokecli.py", line 79, in initialize_task
tree = TreeConfigBuilder(bot, config.raw_tasks).build()
File "/PokemonGo-Bot/pokemongo_bot/tree_config_builder.py", line 79, in build
instance = worker(self.bot, task_config)
File "/PokemonGo-Bot/pokemongo_bot/base_task.py", line 23, in **init**
self.initialize()
File "/PokemonGo-Bot/pokemongo_bot/cell_workers/telegram_task.py", line 42, in initialize
self.next_job=datetime.now() + timedelta(seconds=self.min_interval)
AttributeError: 'module' object has no attribute 'now'
### Steps to Reproduce
<!-- Tell us the steps you have taken to reproduce the issue -->
Start the bot with the above config.
### Other Information
OS: CentOS
<!-- Tell us what Operating system you're using -->
Branch: dev
<!-- dev or master -->
Git Commit: 9e81c6ed90d79e181599ec7f0a0cfa2ecd4d09f5
<!-- run 'git log -n 1 --pretty=format:"%H"' -->
Python Version: Python 2.7.5
<!-- run 'python -V' and paste it here) -->
Any other relevant files/configs (eg: path files)
<!-- Anything else which may be of relevance -->
<!-- ===============END OF ISSUE SECTION=============== -->
--- END ISSUE ---
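For orientation before reading the code: the traceback above is the usual module-versus-class mix-up with `datetime`. The following standalone sketch (illustration only, not taken from the bot's sources) shows how the error arises and how importing the class and `timedelta` directly avoids it:
```python
# Standalone illustration of the error in the traceback above (not project code).
import datetime                      # binds the name "datetime" to the *module*
from datetime import timedelta

try:
    datetime.now()                   # the module has no now(); raises AttributeError
except AttributeError:
    pass

# Either qualify the call through the class...
next_job = datetime.datetime.now() + timedelta(seconds=120)

# ...or import the class itself so datetime.now() resolves to the class method.
from datetime import datetime
next_job = datetime.now() + timedelta(seconds=120)
print(next_job)
```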
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pokemongo_bot/cell_workers/telegram_task.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import datetime
3 import telegram
4 import os
5 import logging
6 import json
7 from pokemongo_bot.base_task import BaseTask
8 from pokemongo_bot.base_dir import _base_dir
9 from pokemongo_bot.event_handlers import TelegramHandler
10
11 from pprint import pprint
12 import re
13
14 class FileIOException(Exception):
15 pass
16
17 class TelegramTask(BaseTask):
18 SUPPORTED_TASK_API_VERSION = 1
19 update_id = None
20 tbot = None
21 min_interval=None
22 next_job=None
23
24 def initialize(self):
25 if not self.enabled:
26 return
27 api_key = self.bot.config.telegram_token
28 if api_key == None:
29 self.emit_event(
30 'config_error',
31 formatted='api_key not defined.'
32 )
33 return
34 self.tbot = telegram.Bot(api_key)
35 if self.config.get('master',None):
36 self.bot.event_manager.add_handler(TelegramHandler(self.tbot,self.config.get('master',None),self.config.get('alert_catch')))
37 try:
38 self.update_id = self.tbot.getUpdates()[0].update_id
39 except IndexError:
40 self.update_id = None
41 self.min_interval=self.config.get('min_interval',120)
42 self.next_job=datetime.now() + timedelta(seconds=self.min_interval)
43 def work(self):
44 if not self.enabled:
45 return
46 if datetime.now()<self.next_job:
47 return
48 self.next_job=datetime.now() + timedelta(seconds=self.min_interval)
49 for update in self.tbot.getUpdates(offset=self.update_id, timeout=10):
50 self.update_id = update.update_id+1
51 if update.message:
52 self.bot.logger.info("message from {} ({}): {}".format(update.message.from_user.username, update.message.from_user.id, update.message.text))
53 if self.config.get('master',None) and self.config.get('master',None) not in [update.message.from_user.id, "@{}".format(update.message.from_user.username)]:
54 self.emit_event(
55 'debug',
56 formatted="Master wrong: expecting {}, got {}({})".format(self.config.get('master',None), update.message.from_user.username, update.message.from_user.id))
57 continue
58 else:
59 if not re.match(r'^[0-9]+$', "{}".format(self.config['master'])): # master was not numeric...
60 self.config['master'] = update.message.chat_id
61 idx = (i for i,v in enumerate(self.bot.event_manager._handlers) if type(v) is TelegramHandler).next()
62 self.bot.event_manager._handlers[idx] = TelegramHandler(self.tbot,self.config['master'], self.config.get('alert_catch'))
63
64
65
66 if update.message.text == "/info":
67 stats = self._get_player_stats()
68 if stats:
69 with self.bot.database as conn:
70 cur = conn.cursor()
71 cur.execute("SELECT DISTINCT COUNT(encounter_id) FROM catch_log WHERE dated >= datetime('now','-1 day')")
72 catch_day = cur.fetchone()[0]
73 cur.execute("SELECT DISTINCT COUNT(pokestop) FROM pokestop_log WHERE dated >= datetime('now','-1 day')")
74 ps_day = cur.fetchone()[0]
75 res = (
76 "*"+self.bot.config.username+"*",
77 "_Level:_ "+str(stats["level"]),
78 "_XP:_ "+str(stats["experience"])+"/"+str(stats["next_level_xp"]),
79 "_Pokemons Captured:_ "+str(stats["pokemons_captured"])+" ("+str(catch_day)+" _last 24h_)",
80 "_Poke Stop Visits:_ "+str(stats["poke_stop_visits"])+" ("+str(ps_day)+" _last 24h_)",
81 "_KM Walked:_ "+str(stats["km_walked"])
82 )
83 self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text="\n".join(res))
84 self.tbot.send_location(chat_id=update.message.chat_id, latitude=self.bot.api._position_lat, longitude=self.bot.api._position_lng)
85 else:
86 self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text="Stats not loaded yet\n")
87 elif update.message.text == "/start" or update.message.text == "/help":
88 res = (
89 "Commands: ",
90 "/info - info about bot"
91 )
92 self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text="\n".join(res))
93
94 def _get_player_stats(self):
95 """
96 Helper method parsing the bot inventory object and returning the player stats object.
97 :return: The player stats object.
98 :rtype: dict
99 """
100 web_inventory = os.path.join(_base_dir, "web", "inventory-%s.json" % self.bot.config.username)
101
102 try:
103 with open(web_inventory, "r") as infile:
104 json_inventory = json.load(infile)
105 except ValueError as e:
106 # Unable to read json from web inventory
107 # File may be corrupt. Create a new one.
108 self.bot.logger.info('[x] Error while opening inventory file for read: %s' % e)
109 json_inventory = []
110 except:
111 raise FileIOException("Unexpected error reading from {}".web_inventory)
112
113 return next((x["inventory_item_data"]["player_stats"]
114 for x in json_inventory
115 if x.get("inventory_item_data", {}).get("player_stats", {})),
116 None)
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pokemongo_bot/cell_workers/telegram_task.py b/pokemongo_bot/cell_workers/telegram_task.py
--- a/pokemongo_bot/cell_workers/telegram_task.py
+++ b/pokemongo_bot/cell_workers/telegram_task.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
-import datetime
+from datetime import datetime
+from datetime import timedelta
import telegram
import os
import logging
| {"golden_diff": "diff --git a/pokemongo_bot/cell_workers/telegram_task.py b/pokemongo_bot/cell_workers/telegram_task.py\n--- a/pokemongo_bot/cell_workers/telegram_task.py\n+++ b/pokemongo_bot/cell_workers/telegram_task.py\n@@ -1,5 +1,6 @@\n # -*- coding: utf-8 -*-\n-import datetime\n+from datetime import datetime\n+from datetime import timedelta\n import telegram\n import os\n import logging\n", "issue": "[BUG] Error in Telegram: \"AttributeError: 'module' object has no attribute 'now'\"\n### Expected Behavior\n\n<!-- Tell us what you expect to happen -->\n\nBot running with Telegram enabled\n### Actual Behavior\n\n<!-- Tell us what is happening -->\n\nBot not starting due to error message\n### Your FULL config.json (remove your username, password, gmapkey and any other private info)\n\n<!-- Provide your FULL config file, feel free to use services such as pastebin.com to reduce clutter -->\n\nhttp://pastebin.com/5nQC2ceh\n### Output when issue occurred\n\n<!-- Provide a reasonable sample from your output log (not just the error message), feel free to use services such as pastebin.com to reduce clutter -->\n\nTraceback (most recent call last):\n File \"pokecli.py\", line 781, in <module>\n main()\n File \"pokecli.py\", line 128, in main\n bot = start_bot(bot, config)\n File \"pokecli.py\", line 88, in start_bot\n initialize_task(bot, config)\n File \"pokecli.py\", line 79, in initialize_task\n tree = TreeConfigBuilder(bot, config.raw_tasks).build()\n File \"/PokemonGo-Bot/pokemongo_bot/tree_config_builder.py\", line 79, in build\n instance = worker(self.bot, task_config)\n File \"/PokemonGo-Bot/pokemongo_bot/base_task.py\", line 23, in **init**\n self.initialize()\n File \"/PokemonGo-Bot/pokemongo_bot/cell_workers/telegram_task.py\", line 42, in initialize\n self.next_job=datetime.now() + timedelta(seconds=self.min_interval)\nAttributeError: 'module' object has no attribute 'now'\n### Steps to Reproduce\n\n<!-- Tell us the steps you have taken to reproduce the issue -->\n\nStart the bot with the above config.\n### Other Information\n\nOS: CentOS\n\n<!-- Tell us what Operating system you're using --> \n\nBranch: dev\n\n<!-- dev or master --> \n\nGit Commit: 9e81c6ed90d79e181599ec7f0a0cfa2ecd4d09f5\n\n<!-- run 'git log -n 1 --pretty=format:\"%H\"' --> \n\nPython Version: Python 2.7.5\n\n<!-- run 'python -V' and paste it here) --> \n\nAny other relevant files/configs (eg: path files) \n\n<!-- Anything else which may be of relevance -->\n\n<!-- ===============END OF ISSUE SECTION=============== -->\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport datetime\nimport telegram\nimport os\nimport logging\nimport json\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot.base_dir import _base_dir\nfrom pokemongo_bot.event_handlers import TelegramHandler\n\nfrom pprint import pprint\nimport re\n\nclass FileIOException(Exception):\n pass\n\nclass TelegramTask(BaseTask):\n SUPPORTED_TASK_API_VERSION = 1\n update_id = None\n tbot = None\n min_interval=None\n next_job=None\n \n def initialize(self):\n if not self.enabled:\n return\n api_key = self.bot.config.telegram_token\n if api_key == None:\n self.emit_event(\n 'config_error',\n formatted='api_key not defined.'\n )\n return\n self.tbot = telegram.Bot(api_key)\n if self.config.get('master',None):\n self.bot.event_manager.add_handler(TelegramHandler(self.tbot,self.config.get('master',None),self.config.get('alert_catch')))\n try:\n self.update_id = self.tbot.getUpdates()[0].update_id\n except IndexError:\n self.update_id = 
None\n self.min_interval=self.config.get('min_interval',120)\n self.next_job=datetime.now() + timedelta(seconds=self.min_interval)\n def work(self):\n if not self.enabled:\n return\n if datetime.now()<self.next_job:\n return\n self.next_job=datetime.now() + timedelta(seconds=self.min_interval)\n for update in self.tbot.getUpdates(offset=self.update_id, timeout=10):\n self.update_id = update.update_id+1\n if update.message:\n self.bot.logger.info(\"message from {} ({}): {}\".format(update.message.from_user.username, update.message.from_user.id, update.message.text))\n if self.config.get('master',None) and self.config.get('master',None) not in [update.message.from_user.id, \"@{}\".format(update.message.from_user.username)]:\n self.emit_event( \n 'debug', \n formatted=\"Master wrong: expecting {}, got {}({})\".format(self.config.get('master',None), update.message.from_user.username, update.message.from_user.id))\n continue\n else:\n if not re.match(r'^[0-9]+$', \"{}\".format(self.config['master'])): # master was not numeric...\n self.config['master'] = update.message.chat_id\n idx = (i for i,v in enumerate(self.bot.event_manager._handlers) if type(v) is TelegramHandler).next()\n self.bot.event_manager._handlers[idx] = TelegramHandler(self.tbot,self.config['master'], self.config.get('alert_catch'))\n \n\n\n if update.message.text == \"/info\":\n stats = self._get_player_stats()\n if stats:\n with self.bot.database as conn:\n cur = conn.cursor()\n cur.execute(\"SELECT DISTINCT COUNT(encounter_id) FROM catch_log WHERE dated >= datetime('now','-1 day')\")\n catch_day = cur.fetchone()[0]\n cur.execute(\"SELECT DISTINCT COUNT(pokestop) FROM pokestop_log WHERE dated >= datetime('now','-1 day')\")\n ps_day = cur.fetchone()[0]\n res = (\n \"*\"+self.bot.config.username+\"*\",\n \"_Level:_ \"+str(stats[\"level\"]),\n \"_XP:_ \"+str(stats[\"experience\"])+\"/\"+str(stats[\"next_level_xp\"]),\n \"_Pokemons Captured:_ \"+str(stats[\"pokemons_captured\"])+\" (\"+str(catch_day)+\" _last 24h_)\",\n \"_Poke Stop Visits:_ \"+str(stats[\"poke_stop_visits\"])+\" (\"+str(ps_day)+\" _last 24h_)\",\n \"_KM Walked:_ \"+str(stats[\"km_walked\"])\n )\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"\\n\".join(res))\n self.tbot.send_location(chat_id=update.message.chat_id, latitude=self.bot.api._position_lat, longitude=self.bot.api._position_lng)\n else:\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"Stats not loaded yet\\n\")\n elif update.message.text == \"/start\" or update.message.text == \"/help\":\n res = (\n \"Commands: \",\n \"/info - info about bot\"\n )\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"\\n\".join(res))\n\n def _get_player_stats(self):\n \"\"\"\n Helper method parsing the bot inventory object and returning the player stats object.\n :return: The player stats object.\n :rtype: dict\n \"\"\"\n web_inventory = os.path.join(_base_dir, \"web\", \"inventory-%s.json\" % self.bot.config.username)\n \n try:\n with open(web_inventory, \"r\") as infile:\n json_inventory = json.load(infile)\n except ValueError as e:\n # Unable to read json from web inventory\n # File may be corrupt. Create a new one. 
\n self.bot.logger.info('[x] Error while opening inventory file for read: %s' % e)\n json_inventory = []\n except:\n raise FileIOException(\"Unexpected error reading from {}\".web_inventory)\n \n return next((x[\"inventory_item_data\"][\"player_stats\"]\n for x in json_inventory\n if x.get(\"inventory_item_data\", {}).get(\"player_stats\", {})),\n None)\n", "path": "pokemongo_bot/cell_workers/telegram_task.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom datetime import datetime\nfrom datetime import timedelta\nimport telegram\nimport os\nimport logging\nimport json\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot.base_dir import _base_dir\nfrom pokemongo_bot.event_handlers import TelegramHandler\n\nfrom pprint import pprint\nimport re\n\nclass FileIOException(Exception):\n pass\n\nclass TelegramTask(BaseTask):\n SUPPORTED_TASK_API_VERSION = 1\n update_id = None\n tbot = None\n min_interval=None\n next_job=None\n \n def initialize(self):\n if not self.enabled:\n return\n api_key = self.bot.config.telegram_token\n if api_key == None:\n self.emit_event(\n 'config_error',\n formatted='api_key not defined.'\n )\n return\n self.tbot = telegram.Bot(api_key)\n if self.config.get('master',None):\n self.bot.event_manager.add_handler(TelegramHandler(self.tbot,self.config.get('master',None),self.config.get('alert_catch')))\n try:\n self.update_id = self.tbot.getUpdates()[0].update_id\n except IndexError:\n self.update_id = None\n self.min_interval=self.config.get('min_interval',120)\n self.next_job=datetime.now() + timedelta(seconds=self.min_interval)\n def work(self):\n if not self.enabled:\n return\n if datetime.now()<self.next_job:\n return\n self.next_job=datetime.now() + timedelta(seconds=self.min_interval)\n for update in self.tbot.getUpdates(offset=self.update_id, timeout=10):\n self.update_id = update.update_id+1\n if update.message:\n self.bot.logger.info(\"message from {} ({}): {}\".format(update.message.from_user.username, update.message.from_user.id, update.message.text))\n if self.config.get('master',None) and self.config.get('master',None) not in [update.message.from_user.id, \"@{}\".format(update.message.from_user.username)]:\n self.emit_event( \n 'debug', \n formatted=\"Master wrong: expecting {}, got {}({})\".format(self.config.get('master',None), update.message.from_user.username, update.message.from_user.id))\n continue\n else:\n if not re.match(r'^[0-9]+$', \"{}\".format(self.config['master'])): # master was not numeric...\n self.config['master'] = update.message.chat_id\n idx = (i for i,v in enumerate(self.bot.event_manager._handlers) if type(v) is TelegramHandler).next()\n self.bot.event_manager._handlers[idx] = TelegramHandler(self.tbot,self.config['master'], self.config.get('alert_catch'))\n \n\n\n if update.message.text == \"/info\":\n stats = self._get_player_stats()\n if stats:\n with self.bot.database as conn:\n cur = conn.cursor()\n cur.execute(\"SELECT DISTINCT COUNT(encounter_id) FROM catch_log WHERE dated >= datetime('now','-1 day')\")\n catch_day = cur.fetchone()[0]\n cur.execute(\"SELECT DISTINCT COUNT(pokestop) FROM pokestop_log WHERE dated >= datetime('now','-1 day')\")\n ps_day = cur.fetchone()[0]\n res = (\n \"*\"+self.bot.config.username+\"*\",\n \"_Level:_ \"+str(stats[\"level\"]),\n \"_XP:_ \"+str(stats[\"experience\"])+\"/\"+str(stats[\"next_level_xp\"]),\n \"_Pokemons Captured:_ \"+str(stats[\"pokemons_captured\"])+\" (\"+str(catch_day)+\" _last 24h_)\",\n \"_Poke Stop Visits:_ \"+str(stats[\"poke_stop_visits\"])+\" 
(\"+str(ps_day)+\" _last 24h_)\",\n \"_KM Walked:_ \"+str(stats[\"km_walked\"])\n )\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"\\n\".join(res))\n self.tbot.send_location(chat_id=update.message.chat_id, latitude=self.bot.api._position_lat, longitude=self.bot.api._position_lng)\n else:\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"Stats not loaded yet\\n\")\n elif update.message.text == \"/start\" or update.message.text == \"/help\":\n res = (\n \"Commands: \",\n \"/info - info about bot\"\n )\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"\\n\".join(res))\n\n def _get_player_stats(self):\n \"\"\"\n Helper method parsing the bot inventory object and returning the player stats object.\n :return: The player stats object.\n :rtype: dict\n \"\"\"\n web_inventory = os.path.join(_base_dir, \"web\", \"inventory-%s.json\" % self.bot.config.username)\n \n try:\n with open(web_inventory, \"r\") as infile:\n json_inventory = json.load(infile)\n except ValueError as e:\n # Unable to read json from web inventory\n # File may be corrupt. Create a new one. \n self.bot.logger.info('[x] Error while opening inventory file for read: %s' % e)\n json_inventory = []\n except:\n raise FileIOException(\"Unexpected error reading from {}\".web_inventory)\n \n return next((x[\"inventory_item_data\"][\"player_stats\"]\n for x in json_inventory\n if x.get(\"inventory_item_data\", {}).get(\"player_stats\", {})),\n None)\n", "path": "pokemongo_bot/cell_workers/telegram_task.py"}]} |
gh_patches_debug_59 | rasdani/github-patches | git_diff | gratipay__gratipay.com-4454 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add a check in deploy.sh for environment variables
When deploying #4438, I forgot to add the `CHECK_NPM_SYNC_EVERY` env var, and gratipay.com was down for around 3 minutes until I figured out what was wrong and fixed it.
We should be able to detect this before deploying by adding a check to `deploy.sh`.
--- END ISSUE ---
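One possible shape for such a check, sketched here as an assumption rather than the repository's actual `deploy.sh` logic: run the environment validator that already exists in `gratipay/wireup.py` (the `env()` function listed below raises `BadEnvironment`, a `SystemExit` subclass, when variables are missing or malformed) and abort the deploy on a non-zero exit.
```python
# Hypothetical pre-deploy validator -- a sketch, not the project's deploy.sh.
# Assumes gratipay.wireup.env() raises BadEnvironment (a SystemExit subclass)
# for missing or malformed variables, as in the file listing below.
import sys

def check_production_env():
    from gratipay import wireup
    try:
        wireup.env()
    except SystemExit as exc:
        sys.stderr.write("Refusing to deploy, environment check failed: {}\n".format(exc))
        sys.exit(1)

if __name__ == '__main__':
    check_production_env()
    print("Environment looks complete; safe to deploy.")
```
A deploy script could export the production variables and run this before pushing; the patch later in this entry makes `wireup.py` itself runnable that way by replacing the never-invoked `def __main__():` with the standard `if __name__ == '__main__':` guard.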
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gratipay/wireup.py`
Content:
```
1 """Wireup
2 """
3 from __future__ import absolute_import, division, print_function, unicode_literals
4
5 import atexit
6 import os
7 import sys
8 import urlparse
9 from tempfile import mkstemp
10
11 import aspen
12 from aspen.testing.client import Client
13 from babel.core import Locale
14 from babel.messages.pofile import read_po
15 from babel.numbers import parse_pattern
16 import balanced
17 import braintree
18 import gratipay
19 import gratipay.billing.payday
20 import raven
21 from environment import Environment, is_yesish
22 from gratipay.application import Application
23 from gratipay.elsewhere import PlatformRegistry
24 from gratipay.elsewhere.bitbucket import Bitbucket
25 from gratipay.elsewhere.bountysource import Bountysource
26 from gratipay.elsewhere.github import GitHub
27 from gratipay.elsewhere.facebook import Facebook
28 from gratipay.elsewhere.google import Google
29 from gratipay.elsewhere.openstreetmap import OpenStreetMap
30 from gratipay.elsewhere.twitter import Twitter
31 from gratipay.elsewhere.venmo import Venmo
32 from gratipay.models.account_elsewhere import AccountElsewhere
33 from gratipay.models.participant import Participant, Identity
34 from gratipay.security.crypto import EncryptingPacker
35 from gratipay.utils import find_files
36 from gratipay.utils.http_caching import asset_etag
37 from gratipay.utils.i18n import (
38 ALIASES, ALIASES_R, COUNTRIES, LANGUAGES_2, LOCALES,
39 get_function_from_rule, make_sorted_dict
40 )
41
42 def base_url(website, env):
43 gratipay.base_url = website.base_url = env.base_url
44
45 def secure_cookies(env):
46 gratipay.use_secure_cookies = env.base_url.startswith('https')
47
48 def db(env):
49
50 # Instantiating Application calls the rest of these wireup functions, and
51 # is side-effecty (e.g., writing to stdout, which interferes with some of
52 # our scripts). Eventually scripts that use this function should be
53 # rewritten to instantiate Application directly.
54
55 sys.stdout = sys.stderr
56 app = Application()
57 sys.stdout = sys.__stdout__
58 return app.db
59
60 def crypto(env):
61 keys = [k.encode('ASCII') for k in env.crypto_keys.split()]
62 out = Identity.encrypting_packer = EncryptingPacker(*keys)
63 return out
64
65 def billing(env):
66 balanced.configure(env.balanced_api_secret)
67
68 if env.braintree_sandbox_mode:
69 braintree_env = braintree.Environment.Sandbox
70 else:
71 braintree_env = braintree.Environment.Production
72
73 braintree.Configuration.configure(
74 braintree_env,
75 env.braintree_merchant_id,
76 env.braintree_public_key,
77 env.braintree_private_key
78 )
79
80
81 def username_restrictions(website):
82 gratipay.RESTRICTED_USERNAMES = os.listdir(website.www_root)
83
84
85 def make_sentry_teller(env, _noop=None):
86 if not env.sentry_dsn:
87 aspen.log_dammit("Won't log to Sentry (SENTRY_DSN is empty).")
88 noop = _noop or (lambda *a, **kw: None)
89 Participant._tell_sentry = noop
90 return noop
91
92 sentry = raven.Client(env.sentry_dsn)
93
94 def tell_sentry(exception, state):
95
96 # Decide if we care.
97 # ==================
98
99 if isinstance(exception, aspen.Response):
100
101 if exception.code < 500:
102
103 # Only log server errors to Sentry. For responses < 500 we use
104 # stream-/line-based access logging. See discussion on:
105
106 # https://github.com/gratipay/gratipay.com/pull/1560.
107
108 return
109
110
111 # Find a user.
112 # ============
113 # | is disallowed in usernames, so we can use it here to indicate
114 # situations in which we can't get a username.
115
116 user = state.get('user')
117 user_id = 'n/a'
118 if user is None:
119 username = '| no user'
120 else:
121 is_anon = getattr(user, 'ANON', None)
122 if is_anon is None:
123 username = '| no ANON'
124 elif is_anon:
125 username = '| anonymous'
126 else:
127 participant = getattr(user, 'participant', None)
128 if participant is None:
129 username = '| no participant'
130 else:
131 username = getattr(user.participant, 'username', None)
132 if username is None:
133 username = '| no username'
134 else:
135 user_id = user.participant.id
136 username = username.encode('utf8')
137 user = { 'id': user_id
138 , 'is_admin': user.participant.is_admin
139 , 'is_suspicious': user.participant.is_suspicious
140 , 'claimed_time': user.participant.claimed_time.isoformat()
141 , 'url': 'https://gratipay.com/{}/'.format(username)
142 }
143
144
145 # Fire off a Sentry call.
146 # =======================
147
148 dispatch_result = state.get('dispatch_result')
149 request = state.get('request')
150 tags = { 'username': username
151 , 'user_id': user_id
152 }
153 extra = { 'filepath': getattr(dispatch_result, 'match', None)
154 , 'request': str(request).splitlines()
155 , 'user': user
156 }
157 result = sentry.captureException(tags=tags, extra=extra)
158
159
160 # Emit a reference string to stdout.
161 # ==================================
162
163 ident = sentry.get_ident(result)
164 aspen.log_dammit('Exception reference: ' + ident)
165
166 Participant._tell_sentry = tell_sentry
167 return tell_sentry
168
169
170 class BadEnvironment(SystemExit):
171 pass
172
173
174 def accounts_elsewhere(website, env):
175
176 twitter = Twitter(
177 env.twitter_consumer_key,
178 env.twitter_consumer_secret,
179 env.twitter_callback,
180 )
181 facebook = Facebook(
182 env.facebook_app_id,
183 env.facebook_app_secret,
184 env.facebook_callback,
185 )
186 github = GitHub(
187 env.github_client_id,
188 env.github_client_secret,
189 env.github_callback,
190 )
191 google = Google(
192 env.google_client_id,
193 env.google_client_secret,
194 env.google_callback,
195 )
196 bitbucket = Bitbucket(
197 env.bitbucket_consumer_key,
198 env.bitbucket_consumer_secret,
199 env.bitbucket_callback,
200 )
201 openstreetmap = OpenStreetMap(
202 env.openstreetmap_consumer_key,
203 env.openstreetmap_consumer_secret,
204 env.openstreetmap_callback,
205 env.openstreetmap_api_url,
206 env.openstreetmap_auth_url,
207 )
208 bountysource = Bountysource(
209 None,
210 env.bountysource_api_secret,
211 env.bountysource_callback,
212 env.bountysource_api_host,
213 env.bountysource_www_host,
214 )
215 venmo = Venmo(
216 env.venmo_client_id,
217 env.venmo_client_secret,
218 env.venmo_callback,
219 )
220
221 signin_platforms = [twitter, github, facebook, google, bitbucket, openstreetmap]
222 website.signin_platforms = PlatformRegistry(signin_platforms)
223 AccountElsewhere.signin_platforms_names = tuple(p.name for p in signin_platforms)
224
225 # For displaying "Connected Accounts"
226 website.social_profiles = [twitter, github, facebook, google, bitbucket, openstreetmap, bountysource]
227
228 all_platforms = signin_platforms + [bountysource, venmo]
229 website.platforms = AccountElsewhere.platforms = PlatformRegistry(all_platforms)
230
231 friends_platforms = [p for p in website.platforms if getattr(p, 'api_friends_path', None)]
232 website.friends_platforms = PlatformRegistry(friends_platforms)
233
234 for platform in all_platforms:
235 platform.icon = website.asset('platforms/%s.16.png' % platform.name)
236 platform.logo = website.asset('platforms/%s.png' % platform.name)
237
238
239 def compile_assets(website):
240 client = Client(website.www_root, website.project_root)
241 client._website = website
242 for spt in find_files(website.www_root+'/assets/', '*.spt'):
243 filepath = spt[:-4] # /path/to/www/assets/foo.css
244 urlpath = spt[spt.rfind('/assets/'):-4] # /assets/foo.css
245 try:
246 # Remove any existing compiled asset, so we can access the dynamic
247 # one instead (Aspen prefers foo.css over foo.css.spt).
248 os.unlink(filepath)
249 except:
250 pass
251 headers = {}
252 if website.base_url:
253 url = urlparse.urlparse(website.base_url)
254 headers[b'HTTP_X_FORWARDED_PROTO'] = str(url.scheme)
255 headers[b'HTTP_HOST'] = str(url.netloc)
256 content = client.GET(urlpath, **headers).body
257 tmpfd, tmpfpath = mkstemp(dir='.')
258 os.write(tmpfd, content)
259 os.close(tmpfd)
260 os.rename(tmpfpath, filepath)
261 atexit.register(lambda: clean_assets(website.www_root))
262
263
264 def clean_assets(www_root):
265 for spt in find_files(www_root+'/assets/', '*.spt'):
266 try:
267 os.unlink(spt[:-4])
268 except:
269 pass
270
271
272 def load_i18n(project_root, tell_sentry):
273 # Load the locales
274 localeDir = os.path.join(project_root, 'i18n', 'core')
275 locales = LOCALES
276 for file in os.listdir(localeDir):
277 try:
278 parts = file.split(".")
279 if not (len(parts) == 2 and parts[1] == "po"):
280 continue
281 lang = parts[0]
282 with open(os.path.join(localeDir, file)) as f:
283 l = locales[lang.lower()] = Locale(lang)
284 c = l.catalog = read_po(f)
285 c.plural_func = get_function_from_rule(c.plural_expr)
286 try:
287 l.countries = make_sorted_dict(COUNTRIES, l.territories)
288 except KeyError:
289 l.countries = COUNTRIES
290 try:
291 l.languages_2 = make_sorted_dict(LANGUAGES_2, l.languages)
292 except KeyError:
293 l.languages_2 = LANGUAGES_2
294 except Exception as e:
295 tell_sentry(e, {})
296
297 # Add aliases
298 for k, v in list(locales.items()):
299 locales.setdefault(ALIASES.get(k, k), v)
300 locales.setdefault(ALIASES_R.get(k, k), v)
301 for k, v in list(locales.items()):
302 locales.setdefault(k.split('_', 1)[0], v)
303
304 # Patch the locales to look less formal
305 locales['fr'].currency_formats[None] = parse_pattern('#,##0.00\u202f\xa4')
306 locales['fr'].currency_symbols['USD'] = '$'
307
308
309 def other_stuff(website, env):
310 website.cache_static = env.gratipay_cache_static
311 website.compress_assets = env.gratipay_compress_assets
312
313 if website.cache_static:
314 def asset(path):
315 fspath = website.www_root+'/assets/'+path
316 etag = ''
317 try:
318 etag = asset_etag(fspath)
319 except Exception as e:
320 website.tell_sentry(e, {})
321 return env.gratipay_asset_url+path+(etag and '?etag='+etag)
322 website.asset = asset
323 compile_assets(website)
324 else:
325 website.asset = lambda path: env.gratipay_asset_url+path
326 clean_assets(website.www_root)
327
328 website.optimizely_id = env.optimizely_id
329 website.include_piwik = env.include_piwik
330
331 website.log_metrics = env.log_metrics
332
333
334 def env():
335 env = Environment(
336 AWS_SES_ACCESS_KEY_ID = unicode,
337 AWS_SES_SECRET_ACCESS_KEY = unicode,
338 AWS_SES_DEFAULT_REGION = unicode,
339 BASE_URL = unicode,
340 DATABASE_URL = unicode,
341 DATABASE_MAXCONN = int,
342 CRYPTO_KEYS = unicode,
343 GRATIPAY_ASSET_URL = unicode,
344 GRATIPAY_CACHE_STATIC = is_yesish,
345 GRATIPAY_COMPRESS_ASSETS = is_yesish,
346 BALANCED_API_SECRET = unicode,
347 BRAINTREE_SANDBOX_MODE = is_yesish,
348 BRAINTREE_MERCHANT_ID = unicode,
349 BRAINTREE_PUBLIC_KEY = unicode,
350 BRAINTREE_PRIVATE_KEY = unicode,
351 GITHUB_CLIENT_ID = unicode,
352 GITHUB_CLIENT_SECRET = unicode,
353 GITHUB_CALLBACK = unicode,
354 BITBUCKET_CONSUMER_KEY = unicode,
355 BITBUCKET_CONSUMER_SECRET = unicode,
356 BITBUCKET_CALLBACK = unicode,
357 TWITTER_CONSUMER_KEY = unicode,
358 TWITTER_CONSUMER_SECRET = unicode,
359 TWITTER_CALLBACK = unicode,
360 FACEBOOK_APP_ID = unicode,
361 FACEBOOK_APP_SECRET = unicode,
362 FACEBOOK_CALLBACK = unicode,
363 GOOGLE_CLIENT_ID = unicode,
364 GOOGLE_CLIENT_SECRET = unicode,
365 GOOGLE_CALLBACK = unicode,
366 BOUNTYSOURCE_API_SECRET = unicode,
367 BOUNTYSOURCE_CALLBACK = unicode,
368 BOUNTYSOURCE_API_HOST = unicode,
369 BOUNTYSOURCE_WWW_HOST = unicode,
370 VENMO_CLIENT_ID = unicode,
371 VENMO_CLIENT_SECRET = unicode,
372 VENMO_CALLBACK = unicode,
373 OPENSTREETMAP_CONSUMER_KEY = unicode,
374 OPENSTREETMAP_CONSUMER_SECRET = unicode,
375 OPENSTREETMAP_CALLBACK = unicode,
376 OPENSTREETMAP_API_URL = unicode,
377 OPENSTREETMAP_AUTH_URL = unicode,
378 UPDATE_CTA_EVERY = int,
379 CHECK_DB_EVERY = int,
380 CHECK_NPM_SYNC_EVERY = int,
381 EMAIL_QUEUE_FLUSH_EVERY = int,
382 EMAIL_QUEUE_SLEEP_FOR = int,
383 EMAIL_QUEUE_ALLOW_UP_TO = int,
384 OPTIMIZELY_ID = unicode,
385 SENTRY_DSN = unicode,
386 LOG_METRICS = is_yesish,
387 INCLUDE_PIWIK = is_yesish,
388 PROJECT_REVIEW_REPO = unicode,
389 PROJECT_REVIEW_USERNAME = unicode,
390 PROJECT_REVIEW_TOKEN = unicode,
391 RAISE_SIGNIN_NOTIFICATIONS = is_yesish,
392 REQUIRE_YAJL = is_yesish,
393 GUNICORN_OPTS = unicode,
394 )
395
396
397 # Error Checking
398 # ==============
399
400 if env.malformed:
401 these = len(env.malformed) != 1 and 'these' or 'this'
402 plural = len(env.malformed) != 1 and 's' or ''
403 aspen.log_dammit("=" * 42)
404 aspen.log_dammit( "Oh no! Gratipay.com couldn't understand %s " % these
405 , "environment variable%s:" % plural
406 )
407 aspen.log_dammit(" ")
408 for key, err in env.malformed:
409 aspen.log_dammit(" {} ({})".format(key, err))
410 aspen.log_dammit(" ")
411 aspen.log_dammit("See ./default_local.env for hints.")
412
413 aspen.log_dammit("=" * 42)
414 keys = ', '.join([key for key, value in env.malformed])
415 raise BadEnvironment("Malformed envvar{}: {}.".format(plural, keys))
416
417 if env.missing:
418 these = len(env.missing) != 1 and 'these' or 'this'
419 plural = len(env.missing) != 1 and 's' or ''
420 aspen.log_dammit("=" * 42)
421 aspen.log_dammit( "Oh no! Gratipay.com needs %s missing " % these
422 , "environment variable%s:" % plural
423 )
424 aspen.log_dammit(" ")
425 for key in env.missing:
426 aspen.log_dammit(" " + key)
427 aspen.log_dammit(" ")
428 aspen.log_dammit( "(Sorry, we must've started looking for "
429 , "%s since you last updated Gratipay!)" % these
430 )
431 aspen.log_dammit(" ")
432 aspen.log_dammit("Running Gratipay locally? Edit ./local.env.")
433 aspen.log_dammit("Running the test suite? Edit ./tests/env.")
434 aspen.log_dammit(" ")
435 aspen.log_dammit("See ./default_local.env for hints.")
436
437 aspen.log_dammit("=" * 42)
438 keys = ', '.join([key for key in env.missing])
439 raise BadEnvironment("Missing envvar{}: {}.".format(plural, keys))
440
441 return env
442
443
444 def __main__():
445 # deploy.sh uses this to validate production env config
446 env()
447
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gratipay/wireup.py b/gratipay/wireup.py
--- a/gratipay/wireup.py
+++ b/gratipay/wireup.py
@@ -441,6 +441,6 @@
return env
-def __main__():
+if __name__ == '__main__':
# deploy.sh uses this to validate production env config
env()
| {"golden_diff": "diff --git a/gratipay/wireup.py b/gratipay/wireup.py\n--- a/gratipay/wireup.py\n+++ b/gratipay/wireup.py\n@@ -441,6 +441,6 @@\n return env\n \n \n-def __main__():\n+if __name__ == '__main__':\n # deploy.sh uses this to validate production env config\n env()\n", "issue": "Add a check in deploy.sh for environment variables\nWhen deploying #4438, I forgot to add the `CHECK_NPM_SYNC_EVERY` env var, and gratipay.com was down for around 3 minutes until I figured out what was wrong and fix it.\r\n\r\nWe should be able to detect this before deploying by adding a check to `deploy.sh`\n", "before_files": [{"content": "\"\"\"Wireup\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport atexit\nimport os\nimport sys\nimport urlparse\nfrom tempfile import mkstemp\n\nimport aspen\nfrom aspen.testing.client import Client\nfrom babel.core import Locale\nfrom babel.messages.pofile import read_po\nfrom babel.numbers import parse_pattern\nimport balanced\nimport braintree\nimport gratipay\nimport gratipay.billing.payday\nimport raven\nfrom environment import Environment, is_yesish\nfrom gratipay.application import Application\nfrom gratipay.elsewhere import PlatformRegistry\nfrom gratipay.elsewhere.bitbucket import Bitbucket\nfrom gratipay.elsewhere.bountysource import Bountysource\nfrom gratipay.elsewhere.github import GitHub\nfrom gratipay.elsewhere.facebook import Facebook\nfrom gratipay.elsewhere.google import Google\nfrom gratipay.elsewhere.openstreetmap import OpenStreetMap\nfrom gratipay.elsewhere.twitter import Twitter\nfrom gratipay.elsewhere.venmo import Venmo\nfrom gratipay.models.account_elsewhere import AccountElsewhere\nfrom gratipay.models.participant import Participant, Identity\nfrom gratipay.security.crypto import EncryptingPacker\nfrom gratipay.utils import find_files\nfrom gratipay.utils.http_caching import asset_etag\nfrom gratipay.utils.i18n import (\n ALIASES, ALIASES_R, COUNTRIES, LANGUAGES_2, LOCALES,\n get_function_from_rule, make_sorted_dict\n)\n\ndef base_url(website, env):\n gratipay.base_url = website.base_url = env.base_url\n\ndef secure_cookies(env):\n gratipay.use_secure_cookies = env.base_url.startswith('https')\n\ndef db(env):\n\n # Instantiating Application calls the rest of these wireup functions, and\n # is side-effecty (e.g., writing to stdout, which interferes with some of\n # our scripts). 
Eventually scripts that use this function should be\n # rewritten to instantiate Application directly.\n\n sys.stdout = sys.stderr\n app = Application()\n sys.stdout = sys.__stdout__\n return app.db\n\ndef crypto(env):\n keys = [k.encode('ASCII') for k in env.crypto_keys.split()]\n out = Identity.encrypting_packer = EncryptingPacker(*keys)\n return out\n\ndef billing(env):\n balanced.configure(env.balanced_api_secret)\n\n if env.braintree_sandbox_mode:\n braintree_env = braintree.Environment.Sandbox\n else:\n braintree_env = braintree.Environment.Production\n\n braintree.Configuration.configure(\n braintree_env,\n env.braintree_merchant_id,\n env.braintree_public_key,\n env.braintree_private_key\n )\n\n\ndef username_restrictions(website):\n gratipay.RESTRICTED_USERNAMES = os.listdir(website.www_root)\n\n\ndef make_sentry_teller(env, _noop=None):\n if not env.sentry_dsn:\n aspen.log_dammit(\"Won't log to Sentry (SENTRY_DSN is empty).\")\n noop = _noop or (lambda *a, **kw: None)\n Participant._tell_sentry = noop\n return noop\n\n sentry = raven.Client(env.sentry_dsn)\n\n def tell_sentry(exception, state):\n\n # Decide if we care.\n # ==================\n\n if isinstance(exception, aspen.Response):\n\n if exception.code < 500:\n\n # Only log server errors to Sentry. For responses < 500 we use\n # stream-/line-based access logging. See discussion on:\n\n # https://github.com/gratipay/gratipay.com/pull/1560.\n\n return\n\n\n # Find a user.\n # ============\n # | is disallowed in usernames, so we can use it here to indicate\n # situations in which we can't get a username.\n\n user = state.get('user')\n user_id = 'n/a'\n if user is None:\n username = '| no user'\n else:\n is_anon = getattr(user, 'ANON', None)\n if is_anon is None:\n username = '| no ANON'\n elif is_anon:\n username = '| anonymous'\n else:\n participant = getattr(user, 'participant', None)\n if participant is None:\n username = '| no participant'\n else:\n username = getattr(user.participant, 'username', None)\n if username is None:\n username = '| no username'\n else:\n user_id = user.participant.id\n username = username.encode('utf8')\n user = { 'id': user_id\n , 'is_admin': user.participant.is_admin\n , 'is_suspicious': user.participant.is_suspicious\n , 'claimed_time': user.participant.claimed_time.isoformat()\n , 'url': 'https://gratipay.com/{}/'.format(username)\n }\n\n\n # Fire off a Sentry call.\n # =======================\n\n dispatch_result = state.get('dispatch_result')\n request = state.get('request')\n tags = { 'username': username\n , 'user_id': user_id\n }\n extra = { 'filepath': getattr(dispatch_result, 'match', None)\n , 'request': str(request).splitlines()\n , 'user': user\n }\n result = sentry.captureException(tags=tags, extra=extra)\n\n\n # Emit a reference string to stdout.\n # ==================================\n\n ident = sentry.get_ident(result)\n aspen.log_dammit('Exception reference: ' + ident)\n\n Participant._tell_sentry = tell_sentry\n return tell_sentry\n\n\nclass BadEnvironment(SystemExit):\n pass\n\n\ndef accounts_elsewhere(website, env):\n\n twitter = Twitter(\n env.twitter_consumer_key,\n env.twitter_consumer_secret,\n env.twitter_callback,\n )\n facebook = Facebook(\n env.facebook_app_id,\n env.facebook_app_secret,\n env.facebook_callback,\n )\n github = GitHub(\n env.github_client_id,\n env.github_client_secret,\n env.github_callback,\n )\n google = Google(\n env.google_client_id,\n env.google_client_secret,\n env.google_callback,\n )\n bitbucket = Bitbucket(\n env.bitbucket_consumer_key,\n 
env.bitbucket_consumer_secret,\n env.bitbucket_callback,\n )\n openstreetmap = OpenStreetMap(\n env.openstreetmap_consumer_key,\n env.openstreetmap_consumer_secret,\n env.openstreetmap_callback,\n env.openstreetmap_api_url,\n env.openstreetmap_auth_url,\n )\n bountysource = Bountysource(\n None,\n env.bountysource_api_secret,\n env.bountysource_callback,\n env.bountysource_api_host,\n env.bountysource_www_host,\n )\n venmo = Venmo(\n env.venmo_client_id,\n env.venmo_client_secret,\n env.venmo_callback,\n )\n\n signin_platforms = [twitter, github, facebook, google, bitbucket, openstreetmap]\n website.signin_platforms = PlatformRegistry(signin_platforms)\n AccountElsewhere.signin_platforms_names = tuple(p.name for p in signin_platforms)\n\n # For displaying \"Connected Accounts\"\n website.social_profiles = [twitter, github, facebook, google, bitbucket, openstreetmap, bountysource]\n\n all_platforms = signin_platforms + [bountysource, venmo]\n website.platforms = AccountElsewhere.platforms = PlatformRegistry(all_platforms)\n\n friends_platforms = [p for p in website.platforms if getattr(p, 'api_friends_path', None)]\n website.friends_platforms = PlatformRegistry(friends_platforms)\n\n for platform in all_platforms:\n platform.icon = website.asset('platforms/%s.16.png' % platform.name)\n platform.logo = website.asset('platforms/%s.png' % platform.name)\n\n\ndef compile_assets(website):\n client = Client(website.www_root, website.project_root)\n client._website = website\n for spt in find_files(website.www_root+'/assets/', '*.spt'):\n filepath = spt[:-4] # /path/to/www/assets/foo.css\n urlpath = spt[spt.rfind('/assets/'):-4] # /assets/foo.css\n try:\n # Remove any existing compiled asset, so we can access the dynamic\n # one instead (Aspen prefers foo.css over foo.css.spt).\n os.unlink(filepath)\n except:\n pass\n headers = {}\n if website.base_url:\n url = urlparse.urlparse(website.base_url)\n headers[b'HTTP_X_FORWARDED_PROTO'] = str(url.scheme)\n headers[b'HTTP_HOST'] = str(url.netloc)\n content = client.GET(urlpath, **headers).body\n tmpfd, tmpfpath = mkstemp(dir='.')\n os.write(tmpfd, content)\n os.close(tmpfd)\n os.rename(tmpfpath, filepath)\n atexit.register(lambda: clean_assets(website.www_root))\n\n\ndef clean_assets(www_root):\n for spt in find_files(www_root+'/assets/', '*.spt'):\n try:\n os.unlink(spt[:-4])\n except:\n pass\n\n\ndef load_i18n(project_root, tell_sentry):\n # Load the locales\n localeDir = os.path.join(project_root, 'i18n', 'core')\n locales = LOCALES\n for file in os.listdir(localeDir):\n try:\n parts = file.split(\".\")\n if not (len(parts) == 2 and parts[1] == \"po\"):\n continue\n lang = parts[0]\n with open(os.path.join(localeDir, file)) as f:\n l = locales[lang.lower()] = Locale(lang)\n c = l.catalog = read_po(f)\n c.plural_func = get_function_from_rule(c.plural_expr)\n try:\n l.countries = make_sorted_dict(COUNTRIES, l.territories)\n except KeyError:\n l.countries = COUNTRIES\n try:\n l.languages_2 = make_sorted_dict(LANGUAGES_2, l.languages)\n except KeyError:\n l.languages_2 = LANGUAGES_2\n except Exception as e:\n tell_sentry(e, {})\n\n # Add aliases\n for k, v in list(locales.items()):\n locales.setdefault(ALIASES.get(k, k), v)\n locales.setdefault(ALIASES_R.get(k, k), v)\n for k, v in list(locales.items()):\n locales.setdefault(k.split('_', 1)[0], v)\n\n # Patch the locales to look less formal\n locales['fr'].currency_formats[None] = parse_pattern('#,##0.00\\u202f\\xa4')\n locales['fr'].currency_symbols['USD'] = '$'\n\n\ndef other_stuff(website, env):\n 
website.cache_static = env.gratipay_cache_static\n website.compress_assets = env.gratipay_compress_assets\n\n if website.cache_static:\n def asset(path):\n fspath = website.www_root+'/assets/'+path\n etag = ''\n try:\n etag = asset_etag(fspath)\n except Exception as e:\n website.tell_sentry(e, {})\n return env.gratipay_asset_url+path+(etag and '?etag='+etag)\n website.asset = asset\n compile_assets(website)\n else:\n website.asset = lambda path: env.gratipay_asset_url+path\n clean_assets(website.www_root)\n\n website.optimizely_id = env.optimizely_id\n website.include_piwik = env.include_piwik\n\n website.log_metrics = env.log_metrics\n\n\ndef env():\n env = Environment(\n AWS_SES_ACCESS_KEY_ID = unicode,\n AWS_SES_SECRET_ACCESS_KEY = unicode,\n AWS_SES_DEFAULT_REGION = unicode,\n BASE_URL = unicode,\n DATABASE_URL = unicode,\n DATABASE_MAXCONN = int,\n CRYPTO_KEYS = unicode,\n GRATIPAY_ASSET_URL = unicode,\n GRATIPAY_CACHE_STATIC = is_yesish,\n GRATIPAY_COMPRESS_ASSETS = is_yesish,\n BALANCED_API_SECRET = unicode,\n BRAINTREE_SANDBOX_MODE = is_yesish,\n BRAINTREE_MERCHANT_ID = unicode,\n BRAINTREE_PUBLIC_KEY = unicode,\n BRAINTREE_PRIVATE_KEY = unicode,\n GITHUB_CLIENT_ID = unicode,\n GITHUB_CLIENT_SECRET = unicode,\n GITHUB_CALLBACK = unicode,\n BITBUCKET_CONSUMER_KEY = unicode,\n BITBUCKET_CONSUMER_SECRET = unicode,\n BITBUCKET_CALLBACK = unicode,\n TWITTER_CONSUMER_KEY = unicode,\n TWITTER_CONSUMER_SECRET = unicode,\n TWITTER_CALLBACK = unicode,\n FACEBOOK_APP_ID = unicode,\n FACEBOOK_APP_SECRET = unicode,\n FACEBOOK_CALLBACK = unicode,\n GOOGLE_CLIENT_ID = unicode,\n GOOGLE_CLIENT_SECRET = unicode,\n GOOGLE_CALLBACK = unicode,\n BOUNTYSOURCE_API_SECRET = unicode,\n BOUNTYSOURCE_CALLBACK = unicode,\n BOUNTYSOURCE_API_HOST = unicode,\n BOUNTYSOURCE_WWW_HOST = unicode,\n VENMO_CLIENT_ID = unicode,\n VENMO_CLIENT_SECRET = unicode,\n VENMO_CALLBACK = unicode,\n OPENSTREETMAP_CONSUMER_KEY = unicode,\n OPENSTREETMAP_CONSUMER_SECRET = unicode,\n OPENSTREETMAP_CALLBACK = unicode,\n OPENSTREETMAP_API_URL = unicode,\n OPENSTREETMAP_AUTH_URL = unicode,\n UPDATE_CTA_EVERY = int,\n CHECK_DB_EVERY = int,\n CHECK_NPM_SYNC_EVERY = int,\n EMAIL_QUEUE_FLUSH_EVERY = int,\n EMAIL_QUEUE_SLEEP_FOR = int,\n EMAIL_QUEUE_ALLOW_UP_TO = int,\n OPTIMIZELY_ID = unicode,\n SENTRY_DSN = unicode,\n LOG_METRICS = is_yesish,\n INCLUDE_PIWIK = is_yesish,\n PROJECT_REVIEW_REPO = unicode,\n PROJECT_REVIEW_USERNAME = unicode,\n PROJECT_REVIEW_TOKEN = unicode,\n RAISE_SIGNIN_NOTIFICATIONS = is_yesish,\n REQUIRE_YAJL = is_yesish,\n GUNICORN_OPTS = unicode,\n )\n\n\n # Error Checking\n # ==============\n\n if env.malformed:\n these = len(env.malformed) != 1 and 'these' or 'this'\n plural = len(env.malformed) != 1 and 's' or ''\n aspen.log_dammit(\"=\" * 42)\n aspen.log_dammit( \"Oh no! Gratipay.com couldn't understand %s \" % these\n , \"environment variable%s:\" % plural\n )\n aspen.log_dammit(\" \")\n for key, err in env.malformed:\n aspen.log_dammit(\" {} ({})\".format(key, err))\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"See ./default_local.env for hints.\")\n\n aspen.log_dammit(\"=\" * 42)\n keys = ', '.join([key for key, value in env.malformed])\n raise BadEnvironment(\"Malformed envvar{}: {}.\".format(plural, keys))\n\n if env.missing:\n these = len(env.missing) != 1 and 'these' or 'this'\n plural = len(env.missing) != 1 and 's' or ''\n aspen.log_dammit(\"=\" * 42)\n aspen.log_dammit( \"Oh no! 
Gratipay.com needs %s missing \" % these\n , \"environment variable%s:\" % plural\n )\n aspen.log_dammit(\" \")\n for key in env.missing:\n aspen.log_dammit(\" \" + key)\n aspen.log_dammit(\" \")\n aspen.log_dammit( \"(Sorry, we must've started looking for \"\n , \"%s since you last updated Gratipay!)\" % these\n )\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"Running Gratipay locally? Edit ./local.env.\")\n aspen.log_dammit(\"Running the test suite? Edit ./tests/env.\")\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"See ./default_local.env for hints.\")\n\n aspen.log_dammit(\"=\" * 42)\n keys = ', '.join([key for key in env.missing])\n raise BadEnvironment(\"Missing envvar{}: {}.\".format(plural, keys))\n\n return env\n\n\ndef __main__():\n # deploy.sh uses this to validate production env config\n env()\n", "path": "gratipay/wireup.py"}], "after_files": [{"content": "\"\"\"Wireup\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport atexit\nimport os\nimport sys\nimport urlparse\nfrom tempfile import mkstemp\n\nimport aspen\nfrom aspen.testing.client import Client\nfrom babel.core import Locale\nfrom babel.messages.pofile import read_po\nfrom babel.numbers import parse_pattern\nimport balanced\nimport braintree\nimport gratipay\nimport gratipay.billing.payday\nimport raven\nfrom environment import Environment, is_yesish\nfrom gratipay.application import Application\nfrom gratipay.elsewhere import PlatformRegistry\nfrom gratipay.elsewhere.bitbucket import Bitbucket\nfrom gratipay.elsewhere.bountysource import Bountysource\nfrom gratipay.elsewhere.github import GitHub\nfrom gratipay.elsewhere.facebook import Facebook\nfrom gratipay.elsewhere.google import Google\nfrom gratipay.elsewhere.openstreetmap import OpenStreetMap\nfrom gratipay.elsewhere.twitter import Twitter\nfrom gratipay.elsewhere.venmo import Venmo\nfrom gratipay.models.account_elsewhere import AccountElsewhere\nfrom gratipay.models.participant import Participant, Identity\nfrom gratipay.security.crypto import EncryptingPacker\nfrom gratipay.utils import find_files\nfrom gratipay.utils.http_caching import asset_etag\nfrom gratipay.utils.i18n import (\n ALIASES, ALIASES_R, COUNTRIES, LANGUAGES_2, LOCALES,\n get_function_from_rule, make_sorted_dict\n)\n\ndef base_url(website, env):\n gratipay.base_url = website.base_url = env.base_url\n\ndef secure_cookies(env):\n gratipay.use_secure_cookies = env.base_url.startswith('https')\n\ndef db(env):\n\n # Instantiating Application calls the rest of these wireup functions, and\n # is side-effecty (e.g., writing to stdout, which interferes with some of\n # our scripts). 
Eventually scripts that use this function should be\n # rewritten to instantiate Application directly.\n\n sys.stdout = sys.stderr\n app = Application()\n sys.stdout = sys.__stdout__\n return app.db\n\ndef crypto(env):\n keys = [k.encode('ASCII') for k in env.crypto_keys.split()]\n out = Identity.encrypting_packer = EncryptingPacker(*keys)\n return out\n\ndef billing(env):\n balanced.configure(env.balanced_api_secret)\n\n if env.braintree_sandbox_mode:\n braintree_env = braintree.Environment.Sandbox\n else:\n braintree_env = braintree.Environment.Production\n\n braintree.Configuration.configure(\n braintree_env,\n env.braintree_merchant_id,\n env.braintree_public_key,\n env.braintree_private_key\n )\n\n\ndef username_restrictions(website):\n gratipay.RESTRICTED_USERNAMES = os.listdir(website.www_root)\n\n\ndef make_sentry_teller(env, _noop=None):\n if not env.sentry_dsn:\n aspen.log_dammit(\"Won't log to Sentry (SENTRY_DSN is empty).\")\n noop = _noop or (lambda *a, **kw: None)\n Participant._tell_sentry = noop\n return noop\n\n sentry = raven.Client(env.sentry_dsn)\n\n def tell_sentry(exception, state):\n\n # Decide if we care.\n # ==================\n\n if isinstance(exception, aspen.Response):\n\n if exception.code < 500:\n\n # Only log server errors to Sentry. For responses < 500 we use\n # stream-/line-based access logging. See discussion on:\n\n # https://github.com/gratipay/gratipay.com/pull/1560.\n\n return\n\n\n # Find a user.\n # ============\n # | is disallowed in usernames, so we can use it here to indicate\n # situations in which we can't get a username.\n\n user = state.get('user')\n user_id = 'n/a'\n if user is None:\n username = '| no user'\n else:\n is_anon = getattr(user, 'ANON', None)\n if is_anon is None:\n username = '| no ANON'\n elif is_anon:\n username = '| anonymous'\n else:\n participant = getattr(user, 'participant', None)\n if participant is None:\n username = '| no participant'\n else:\n username = getattr(user.participant, 'username', None)\n if username is None:\n username = '| no username'\n else:\n user_id = user.participant.id\n username = username.encode('utf8')\n user = { 'id': user_id\n , 'is_admin': user.participant.is_admin\n , 'is_suspicious': user.participant.is_suspicious\n , 'claimed_time': user.participant.claimed_time.isoformat()\n , 'url': 'https://gratipay.com/{}/'.format(username)\n }\n\n\n # Fire off a Sentry call.\n # =======================\n\n dispatch_result = state.get('dispatch_result')\n request = state.get('request')\n tags = { 'username': username\n , 'user_id': user_id\n }\n extra = { 'filepath': getattr(dispatch_result, 'match', None)\n , 'request': str(request).splitlines()\n , 'user': user\n }\n result = sentry.captureException(tags=tags, extra=extra)\n\n\n # Emit a reference string to stdout.\n # ==================================\n\n ident = sentry.get_ident(result)\n aspen.log_dammit('Exception reference: ' + ident)\n\n Participant._tell_sentry = tell_sentry\n return tell_sentry\n\n\nclass BadEnvironment(SystemExit):\n pass\n\n\ndef accounts_elsewhere(website, env):\n\n twitter = Twitter(\n env.twitter_consumer_key,\n env.twitter_consumer_secret,\n env.twitter_callback,\n )\n facebook = Facebook(\n env.facebook_app_id,\n env.facebook_app_secret,\n env.facebook_callback,\n )\n github = GitHub(\n env.github_client_id,\n env.github_client_secret,\n env.github_callback,\n )\n google = Google(\n env.google_client_id,\n env.google_client_secret,\n env.google_callback,\n )\n bitbucket = Bitbucket(\n env.bitbucket_consumer_key,\n 
env.bitbucket_consumer_secret,\n env.bitbucket_callback,\n )\n openstreetmap = OpenStreetMap(\n env.openstreetmap_consumer_key,\n env.openstreetmap_consumer_secret,\n env.openstreetmap_callback,\n env.openstreetmap_api_url,\n env.openstreetmap_auth_url,\n )\n bountysource = Bountysource(\n None,\n env.bountysource_api_secret,\n env.bountysource_callback,\n env.bountysource_api_host,\n env.bountysource_www_host,\n )\n venmo = Venmo(\n env.venmo_client_id,\n env.venmo_client_secret,\n env.venmo_callback,\n )\n\n signin_platforms = [twitter, github, facebook, google, bitbucket, openstreetmap]\n website.signin_platforms = PlatformRegistry(signin_platforms)\n AccountElsewhere.signin_platforms_names = tuple(p.name for p in signin_platforms)\n\n # For displaying \"Connected Accounts\"\n website.social_profiles = [twitter, github, facebook, google, bitbucket, openstreetmap, bountysource]\n\n all_platforms = signin_platforms + [bountysource, venmo]\n website.platforms = AccountElsewhere.platforms = PlatformRegistry(all_platforms)\n\n friends_platforms = [p for p in website.platforms if getattr(p, 'api_friends_path', None)]\n website.friends_platforms = PlatformRegistry(friends_platforms)\n\n for platform in all_platforms:\n platform.icon = website.asset('platforms/%s.16.png' % platform.name)\n platform.logo = website.asset('platforms/%s.png' % platform.name)\n\n\ndef compile_assets(website):\n client = Client(website.www_root, website.project_root)\n client._website = website\n for spt in find_files(website.www_root+'/assets/', '*.spt'):\n filepath = spt[:-4] # /path/to/www/assets/foo.css\n urlpath = spt[spt.rfind('/assets/'):-4] # /assets/foo.css\n try:\n # Remove any existing compiled asset, so we can access the dynamic\n # one instead (Aspen prefers foo.css over foo.css.spt).\n os.unlink(filepath)\n except:\n pass\n headers = {}\n if website.base_url:\n url = urlparse.urlparse(website.base_url)\n headers[b'HTTP_X_FORWARDED_PROTO'] = str(url.scheme)\n headers[b'HTTP_HOST'] = str(url.netloc)\n content = client.GET(urlpath, **headers).body\n tmpfd, tmpfpath = mkstemp(dir='.')\n os.write(tmpfd, content)\n os.close(tmpfd)\n os.rename(tmpfpath, filepath)\n atexit.register(lambda: clean_assets(website.www_root))\n\n\ndef clean_assets(www_root):\n for spt in find_files(www_root+'/assets/', '*.spt'):\n try:\n os.unlink(spt[:-4])\n except:\n pass\n\n\ndef load_i18n(project_root, tell_sentry):\n # Load the locales\n localeDir = os.path.join(project_root, 'i18n', 'core')\n locales = LOCALES\n for file in os.listdir(localeDir):\n try:\n parts = file.split(\".\")\n if not (len(parts) == 2 and parts[1] == \"po\"):\n continue\n lang = parts[0]\n with open(os.path.join(localeDir, file)) as f:\n l = locales[lang.lower()] = Locale(lang)\n c = l.catalog = read_po(f)\n c.plural_func = get_function_from_rule(c.plural_expr)\n try:\n l.countries = make_sorted_dict(COUNTRIES, l.territories)\n except KeyError:\n l.countries = COUNTRIES\n try:\n l.languages_2 = make_sorted_dict(LANGUAGES_2, l.languages)\n except KeyError:\n l.languages_2 = LANGUAGES_2\n except Exception as e:\n tell_sentry(e, {})\n\n # Add aliases\n for k, v in list(locales.items()):\n locales.setdefault(ALIASES.get(k, k), v)\n locales.setdefault(ALIASES_R.get(k, k), v)\n for k, v in list(locales.items()):\n locales.setdefault(k.split('_', 1)[0], v)\n\n # Patch the locales to look less formal\n locales['fr'].currency_formats[None] = parse_pattern('#,##0.00\\u202f\\xa4')\n locales['fr'].currency_symbols['USD'] = '$'\n\n\ndef other_stuff(website, env):\n 
website.cache_static = env.gratipay_cache_static\n website.compress_assets = env.gratipay_compress_assets\n\n if website.cache_static:\n def asset(path):\n fspath = website.www_root+'/assets/'+path\n etag = ''\n try:\n etag = asset_etag(fspath)\n except Exception as e:\n website.tell_sentry(e, {})\n return env.gratipay_asset_url+path+(etag and '?etag='+etag)\n website.asset = asset\n compile_assets(website)\n else:\n website.asset = lambda path: env.gratipay_asset_url+path\n clean_assets(website.www_root)\n\n website.optimizely_id = env.optimizely_id\n website.include_piwik = env.include_piwik\n\n website.log_metrics = env.log_metrics\n\n\ndef env():\n env = Environment(\n AWS_SES_ACCESS_KEY_ID = unicode,\n AWS_SES_SECRET_ACCESS_KEY = unicode,\n AWS_SES_DEFAULT_REGION = unicode,\n BASE_URL = unicode,\n DATABASE_URL = unicode,\n DATABASE_MAXCONN = int,\n CRYPTO_KEYS = unicode,\n GRATIPAY_ASSET_URL = unicode,\n GRATIPAY_CACHE_STATIC = is_yesish,\n GRATIPAY_COMPRESS_ASSETS = is_yesish,\n BALANCED_API_SECRET = unicode,\n BRAINTREE_SANDBOX_MODE = is_yesish,\n BRAINTREE_MERCHANT_ID = unicode,\n BRAINTREE_PUBLIC_KEY = unicode,\n BRAINTREE_PRIVATE_KEY = unicode,\n GITHUB_CLIENT_ID = unicode,\n GITHUB_CLIENT_SECRET = unicode,\n GITHUB_CALLBACK = unicode,\n BITBUCKET_CONSUMER_KEY = unicode,\n BITBUCKET_CONSUMER_SECRET = unicode,\n BITBUCKET_CALLBACK = unicode,\n TWITTER_CONSUMER_KEY = unicode,\n TWITTER_CONSUMER_SECRET = unicode,\n TWITTER_CALLBACK = unicode,\n FACEBOOK_APP_ID = unicode,\n FACEBOOK_APP_SECRET = unicode,\n FACEBOOK_CALLBACK = unicode,\n GOOGLE_CLIENT_ID = unicode,\n GOOGLE_CLIENT_SECRET = unicode,\n GOOGLE_CALLBACK = unicode,\n BOUNTYSOURCE_API_SECRET = unicode,\n BOUNTYSOURCE_CALLBACK = unicode,\n BOUNTYSOURCE_API_HOST = unicode,\n BOUNTYSOURCE_WWW_HOST = unicode,\n VENMO_CLIENT_ID = unicode,\n VENMO_CLIENT_SECRET = unicode,\n VENMO_CALLBACK = unicode,\n OPENSTREETMAP_CONSUMER_KEY = unicode,\n OPENSTREETMAP_CONSUMER_SECRET = unicode,\n OPENSTREETMAP_CALLBACK = unicode,\n OPENSTREETMAP_API_URL = unicode,\n OPENSTREETMAP_AUTH_URL = unicode,\n UPDATE_CTA_EVERY = int,\n CHECK_DB_EVERY = int,\n CHECK_NPM_SYNC_EVERY = int,\n EMAIL_QUEUE_FLUSH_EVERY = int,\n EMAIL_QUEUE_SLEEP_FOR = int,\n EMAIL_QUEUE_ALLOW_UP_TO = int,\n OPTIMIZELY_ID = unicode,\n SENTRY_DSN = unicode,\n LOG_METRICS = is_yesish,\n INCLUDE_PIWIK = is_yesish,\n PROJECT_REVIEW_REPO = unicode,\n PROJECT_REVIEW_USERNAME = unicode,\n PROJECT_REVIEW_TOKEN = unicode,\n RAISE_SIGNIN_NOTIFICATIONS = is_yesish,\n REQUIRE_YAJL = is_yesish,\n GUNICORN_OPTS = unicode,\n )\n\n\n # Error Checking\n # ==============\n\n if env.malformed:\n these = len(env.malformed) != 1 and 'these' or 'this'\n plural = len(env.malformed) != 1 and 's' or ''\n aspen.log_dammit(\"=\" * 42)\n aspen.log_dammit( \"Oh no! Gratipay.com couldn't understand %s \" % these\n , \"environment variable%s:\" % plural\n )\n aspen.log_dammit(\" \")\n for key, err in env.malformed:\n aspen.log_dammit(\" {} ({})\".format(key, err))\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"See ./default_local.env for hints.\")\n\n aspen.log_dammit(\"=\" * 42)\n keys = ', '.join([key for key, value in env.malformed])\n raise BadEnvironment(\"Malformed envvar{}: {}.\".format(plural, keys))\n\n if env.missing:\n these = len(env.missing) != 1 and 'these' or 'this'\n plural = len(env.missing) != 1 and 's' or ''\n aspen.log_dammit(\"=\" * 42)\n aspen.log_dammit( \"Oh no! 
Gratipay.com needs %s missing \" % these\n , \"environment variable%s:\" % plural\n )\n aspen.log_dammit(\" \")\n for key in env.missing:\n aspen.log_dammit(\" \" + key)\n aspen.log_dammit(\" \")\n aspen.log_dammit( \"(Sorry, we must've started looking for \"\n , \"%s since you last updated Gratipay!)\" % these\n )\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"Running Gratipay locally? Edit ./local.env.\")\n aspen.log_dammit(\"Running the test suite? Edit ./tests/env.\")\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"See ./default_local.env for hints.\")\n\n aspen.log_dammit(\"=\" * 42)\n keys = ', '.join([key for key in env.missing])\n raise BadEnvironment(\"Missing envvar{}: {}.\".format(plural, keys))\n\n return env\n\n\nif __name__ == '__main__':\n # deploy.sh uses this to validate production env config\n env()\n", "path": "gratipay/wireup.py"}]} |
gh_patches_debug_60 | rasdani/github-patches | git_diff | goauthentik__authentik-8594 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Proxy provider incorrect redirect behaviour
**Describe the bug**
This bug manifests as a seemingly random-ish redirect after a page refresh when the proxy token has expired and the user is redirected back to the app from the proxy outpost that has just generated a new token.
To make it more clear - let's say we have a forward auth single application set up correctly and working - let's call the application The Echo Server: `https://echo.domain.tld`. We visit the Echo Server, idle for some time, and manage to wait enough so that the proxy token expires. When the Echo Server sends a request to some kind of Echo Server's **resource (`https://echo.domain.tld/static/resource.json`)** AFTER the token expires (which fails because of the expired token) and THEN we refresh the current page (`https://echo.domain.tld/home`), we get redirected to the authentik proxy outpost which in turn generates a new token and redirects the user back to the Echo Server - but when authentik eventually redirects the user back to the app the URL is **`https://echo.domain.tld/static/resource.json`** and not the original path from which the flow started, `https://echo.domain.tld/home`.
**To Reproduce**
Steps to reproduce the behavior:
1. Have a working setup of a protected application with a proxy provider in Forward auth (single application) mode ("The Echo Server" described above)
2. In the app's proxy provider update the Token validity setting to a short duration, e.g. `seconds=30` for demonstration purposes
3. Go to the app (e.g. `https://echo.domain.tld/home`) and successfully authenticate/authorize.
4. Wait 30 seconds until the token expires
5. Open a developer console and simulate an artificial resource request of some kind: `fetch("https://echo.domain.tld/static/resource.json")` - the fetch fails because the token has expired. You can verify in the network tab that the fetch request gets redirected to the outpost and fails: `https://echo.domain.tld/outpost.goauthentik.io/start?rd=https%3A%2F%2Fecho.domain.tld%2Fstatic%2Fresource.json`.
6. Now refresh the page while still being at the same URL: `https://echo.domain.tld/home` - You can verify in the network tab that the refresh request gets redirected to the outpost _with correct redirect argument set_: `https://echo.domain.tld/outpost.goauthentik.io/start?rd=https%3A%2F%2Fecho.domain.tld%2Fhome`
7. You eventually get redirected back to the app's resource requested in step 5: `https://echo.domain.tld/static/resource.json`
**Expected behavior**
I would expect to be eventually redirected back to the `https://echo.domain.tld/home` page.
**Logs**
<details>
<summary>Logs</summary>
```
{"event":"/outpost.goauthentik.io/auth/nginx","host":"echo.domain.tld","level":"info","logger":"authentik.outpost.proxyv2.application","method":"GET","name":"Echo server proxy","remote":"xxx.xxx.xxx.xxx","runtime":"0.343","scheme":"http","size":0,"status":200,"timestamp":"2023-05-14T22:28:10Z","user":"UserName","user_agent":"USER_AGENT"}
{"event":"/outpost.goauthentik.io/auth/nginx","host":"echo.domain.tld","level":"info","logger":"authentik.outpost.proxyv2.application","method":"GET","name":"Echo server proxy","remote":"xxx.xxx.xxx.xxx","runtime":"0.930","scheme":"http","size":21,"status":401,"timestamp":"2023-05-14T22:29:07Z","user_agent":"USER_AGENT"}
{"event":"/outpost.goauthentik.io/start?rd=https%3A%2F%2Fecho.domain.tld%2Fstatic%2Fresource.json","host":"echo.domain.tld","level":"info","logger":"authentik.outpost.proxyv2.application","method":"GET","name":"Echo server proxy","remote":"xxx.xxx.xxx.xxx","runtime":"1.123","scheme":"http","size":359,"status":302,"timestamp":"2023-05-14T22:29:07Z","user_agent":"USER_AGENT"}
{"event":"/outpost.goauthentik.io/auth/nginx","host":"echo.domain.tld","level":"info","logger":"authentik.outpost.proxyv2.application","method":"GET","name":"Echo server proxy","remote":"xxx.xxx.xxx.xxx","runtime":"0.933","scheme":"http","size":21,"status":401,"timestamp":"2023-05-14T22:29:11Z","user_agent":"USER_AGENT"}
{"event":"/outpost.goauthentik.io/start?rd=https%3A%2F%2Fecho.domain.tld%2Fhome","host":"echo.domain.tld","level":"info","logger":"authentik.outpost.proxyv2.application","method":"GET","name":"Echo server proxy","remote":"xxx.xxx.xxx.xxx","runtime":"1.319","scheme":"http","size":359,"status":302,"timestamp":"2023-05-14T22:29:11Z","user_agent":"USER_AGENT"}
{"auth_via": "session", "event": "/application/o/authorize/?client_id=ffffffffffffffffffffffffffffffff&redirect_uri=https%3A%2F%2Fecho.domain.tld%2Foutpost.goauthentik.io%2Fcallback%3FX-authentik-auth-callback%3Dtrue&response_type=code&scope=openid+profile+email+ak_proxy&state=NdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo", "host": "auth.domain.tld", "level": "info", "logger": "authentik.asgi", "method": "GET", "pid": 24, "remote": "xxx.xxx.xxx.xxx", "request_id": "5b1ce3f63ab44b67ae482cd4eef3548d", "runtime": 74, "scheme": "https", "status": 302, "timestamp": "2023-05-14T22:29:11.311489", "user": "UserName", "user_agent": "USER_AGENT"}
{"auth_via": "session", "event": "/if/flow/default-provider-authorization-explicit-consent/?client_id=ffffffffffffffffffffffffffffffff&redirect_uri=https%3A%2F%2Fecho.domain.tld%2Foutpost.goauthentik.io%2Fcallback%3FX-authentik-auth-callback%3Dtrue&response_type=code&scope=openid+profile+email+ak_proxy&state=NdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo", "host": "auth.domain.tld", "level": "info", "logger": "authentik.asgi", "method": "GET", "pid": 24, "remote": "xxx.xxx.xxx.xxx", "request_id": "9dabf88c7f7f40cb909a317c47132181", "runtime": 33, "scheme": "https", "status": 200, "timestamp": "2023-05-14T22:29:11.362915", "user": "UserName", "user_agent": "USER_AGENT"}
{"event":"/static/dist/poly.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"4.872","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:11Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/standalone/loading/vendor-320681c9.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"1.094","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/standalone/loading/api-f65fd993.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"0.742","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/standalone/loading/locale-en-f660cb3b.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"0.523","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/standalone/loading/index.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"0.438","scheme":"http","size":53898,"status":200,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/standalone/loading/index.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"0.196","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/flow/FlowInterface-d33d9ac4.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"2.285","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/flow/FlowInterface.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"0.127","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/flow/locale-en-f660cb3b.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"1.856","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/flow/vendor-cm-00a4719e.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"7.299","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/flow/api-befd9628.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"13.889","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/assets/icons/icon_left_brand.svg","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"0.511","scheme":"http","size":4861,"status":200,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/if/flow/default-provider-authorization-explicit-consent/assets/fonts/RedHatText/RedHatText-Regular.woff2","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"0.523","scheme":"http","size":3768,"status":200,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/if/flow/default-provider-authorization-explicit-consent/assets/fonts/RedHatDisplay/RedHatDisplay-Medium.woff2","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"0.521","scheme":"http","size":28661,"status":200,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/flow/vendor-25865c6e.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"45.016","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"event":"/static/dist/flow/FlowInterface-d33d9ac4.js.map","host":"auth.domain.tld","level":"info","logger":"authentik.router","method":"GET","remote":"xxx.xxx.xxx.xxx","runtime":"1.530","scheme":"http","size":0,"status":304,"timestamp":"2023-05-14T22:29:12Z","user_agent":"USER_AGENT"}
{"auth_via": "session", "event": "/api/v3/flows/executor/default-provider-authorization-explicit-consent/?query=client_id%3Dffffffffffffffffffffffffffffffff%26redirect_uri%3Dhttps%253A%252F%252Fecho.domain.tld%252Foutpost.goauthentik.io%252Fcallback%253FX-authentik-auth-callback%253Dtrue%26response_type%3Dcode%26scope%3Dopenid%2Bprofile%2Bemail%2Bak_proxy%26state%3DNdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo", "host": "auth.domain.tld", "level": "info", "logger": "authentik.asgi", "method": "GET", "pid": 24, "remote": "xxx.xxx.xxx.xxx", "request_id": "bccf832ab85840c7899bef18fb76899e", "runtime": 233, "scheme": "https", "status": 302, "timestamp": "2023-05-14T22:29:12.466727", "user": "UserName", "user_agent": "USER_AGENT"}
{"action": "authorize_application", "auth_via": "session", "client_ip": "xxx.xxx.xxx.xxx", "context": {"authorized_application": {"app": "authentik_core", "model_name": "application", "name": "Echo server", "pk": "d208963c731d4cb282ae64397f731688"}, "flow": "a8c59e9e6fbc4e1d9a53365db1bf8704", "http_request": {"args": {"client_id": "ffffffffffffffffffffffffffffffff", "redirect_uri": "https://echo.domain.tld/outpost.goauthentik.io/callback?X-authentik-auth-callback=true", "response_type": "code", "scope": "openid profile email ak_proxy", "state": "NdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo"}, "method": "GET", "path": "/api/v3/flows/executor/default-provider-authorization-explicit-consent/"}, "scopes": "openid profile email ak_proxy"}, "event": "Created Event", "host": "auth.domain.tld", "level": "info", "logger": "authentik.events.models", "pid": 24, "request_id": "15975e5a84894e668b1127b804d7b3d8", "timestamp": "2023-05-14T22:29:12.654030", "user": {"email": "[email protected]", "pk": 12, "username": "UserName"}}
{"auth_via": "session", "event": "Task published", "host": "auth.domain.tld", "level": "info", "logger": "authentik.root.celery", "pid": 24, "request_id": "15975e5a84894e668b1127b804d7b3d8", "task_id": "94594c44-1672-4710-b659-96c22b3580f6", "task_name": "authentik.events.tasks.event_notification_handler", "timestamp": "2023-05-14T22:29:12.678197"}
{"auth_via": "session", "event": "/api/v3/flows/executor/default-provider-authorization-explicit-consent/?query=client_id%3Dffffffffffffffffffffffffffffffff%26redirect_uri%3Dhttps%253A%252F%252Fecho.domain.tld%252Foutpost.goauthentik.io%252Fcallback%253FX-authentik-auth-callback%253Dtrue%26response_type%3Dcode%26scope%3Dopenid%2Bprofile%2Bemail%2Bak_proxy%26state%3DNdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo", "host": "auth.domain.tld", "level": "info", "logger": "authentik.asgi", "method": "GET", "pid": 24, "remote": "xxx.xxx.xxx.xxx", "request_id": "15975e5a84894e668b1127b804d7b3d8", "runtime": 113, "scheme": "https", "status": 200, "timestamp": "2023-05-14T22:29:12.709587", "user": "UserName", "user_agent": "USER_AGENT"}
{"auth_via": "unauthenticated", "event": "/-/health/ready/", "host": "localhost:9000", "level": "info", "logger": "authentik.asgi", "method": "HEAD", "pid": 24, "remote": "127.0.0.1", "request_id": "5cc814939c734f85ab612559d77ee914", "runtime": 18, "scheme": "http", "status": 204, "timestamp": "2023-05-14T22:29:12.845074", "user": "", "user_agent": "goauthentik.io lifecycle Healthcheck"}
{"auth_via": "unauthenticated", "event": "/application/o/token/", "host": "auth.domain.tld", "level": "info", "logger": "authentik.asgi", "method": "POST", "pid": 10514, "remote": "127.0.0.1", "request_id": "dbc6b792cbc247dd8a879fb0dd8ec8f4", "runtime": 54, "scheme": "https", "status": 200, "timestamp": "2023-05-14T22:29:13.024719", "user": "", "user_agent": "goauthentik.io/outpost/2023.4.1 (provider=Echo server proxy)"}
{"event":"/outpost.goauthentik.io/callback?X-authentik-auth-callback=true&code=942d95ae2232466aa67a89e8bc8f826f&state=NdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo","host":"echo.domain.tld","level":"info","logger":"authentik.outpost.proxyv2.application","method":"GET","name":"Echo server proxy","remote":"xxx.xxx.xxx.xxx","runtime":"57.903","scheme":"http","size":68,"status":302,"timestamp":"2023-05-14T22:29:13Z","user_agent":"USER_AGENT"}
{"event":"/outpost.goauthentik.io/auth/nginx","host":"echo.domain.tld","level":"info","logger":"authentik.outpost.proxyv2.application","method":"GET","name":"Echo server proxy","remote":"xxx.xxx.xxx.xxx","runtime":"0.308","scheme":"http","size":0,"status":200,"timestamp":"2023-05-14T22:29:13Z","user":"UserName","user_agent":"USER_AGENT"}
{"event":"/outpost.goauthentik.io/auth/nginx","host":"echo.domain.tld","level":"info","logger":"authentik.outpost.proxyv2.application","method":"GET","name":"Echo server proxy","remote":"xxx.xxx.xxx.xxx","runtime":"0.486","scheme":"http","size":0,"status":200,"timestamp":"2023-05-14T22:29:13Z","user":"UserName","user_agent":"USER_AGENT"}
```
</details>
**Version and Deployment (please complete the following information):**
- authentik version: 2023.4.1
- Deployment: docker-compose
**Additional context**
Using nginx reverse proxy.
There are also users on Discord experiencing the same behaviour: https://discord.com/channels/809154715984199690/809154716507963434/1101389383300567060
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/events/api/tasks.py`
Content:
```
1 """Tasks API"""
2
3 from importlib import import_module
4
5 from django.contrib import messages
6 from django.utils.translation import gettext_lazy as _
7 from drf_spectacular.types import OpenApiTypes
8 from drf_spectacular.utils import OpenApiResponse, extend_schema
9 from rest_framework.decorators import action
10 from rest_framework.fields import (
11 CharField,
12 ChoiceField,
13 DateTimeField,
14 FloatField,
15 SerializerMethodField,
16 )
17 from rest_framework.request import Request
18 from rest_framework.response import Response
19 from rest_framework.serializers import ModelSerializer
20 from rest_framework.viewsets import ReadOnlyModelViewSet
21 from structlog.stdlib import get_logger
22
23 from authentik.events.logs import LogEventSerializer
24 from authentik.events.models import SystemTask, TaskStatus
25 from authentik.rbac.decorators import permission_required
26
27 LOGGER = get_logger()
28
29
30 class SystemTaskSerializer(ModelSerializer):
31 """Serialize TaskInfo and TaskResult"""
32
33 name = CharField()
34 full_name = SerializerMethodField()
35 uid = CharField(required=False)
36 description = CharField()
37 start_timestamp = DateTimeField(read_only=True)
38 finish_timestamp = DateTimeField(read_only=True)
39 duration = FloatField(read_only=True)
40
41 status = ChoiceField(choices=[(x.value, x.name) for x in TaskStatus])
42 messages = LogEventSerializer(many=True)
43
44 def get_full_name(self, instance: SystemTask) -> str:
45 """Get full name with UID"""
46 if instance.uid:
47 return f"{instance.name}:{instance.uid}"
48 return instance.name
49
50 class Meta:
51 model = SystemTask
52 fields = [
53 "uuid",
54 "name",
55 "full_name",
56 "uid",
57 "description",
58 "start_timestamp",
59 "finish_timestamp",
60 "duration",
61 "status",
62 "messages",
63 ]
64
65
66 class SystemTaskViewSet(ReadOnlyModelViewSet):
67 """Read-only view set that returns all background tasks"""
68
69 queryset = SystemTask.objects.all()
70 serializer_class = SystemTaskSerializer
71 filterset_fields = ["name", "uid", "status"]
72 ordering = ["name", "uid", "status"]
73 search_fields = ["name", "description", "uid", "status"]
74
75 @permission_required(None, ["authentik_events.run_task"])
76 @extend_schema(
77 request=OpenApiTypes.NONE,
78 responses={
79 204: OpenApiResponse(description="Task retried successfully"),
80 404: OpenApiResponse(description="Task not found"),
81 500: OpenApiResponse(description="Failed to retry task"),
82 },
83 )
84 @action(detail=True, methods=["POST"], permission_classes=[])
85 def run(self, request: Request, pk=None) -> Response:
86 """Run task"""
87 task: SystemTask = self.get_object()
88 try:
89 task_module = import_module(task.task_call_module)
90 task_func = getattr(task_module, task.task_call_func)
91 LOGGER.info("Running task", task=task_func)
92 task_func.delay(*task.task_call_args, **task.task_call_kwargs)
93 messages.success(
94 self.request,
95 _("Successfully started task {name}.".format_map({"name": task.name})),
96 )
97 return Response(status=204)
98 except (ImportError, AttributeError) as exc: # pragma: no cover
99 LOGGER.warning("Failed to run task, remove state", task=task.name, exc=exc)
100 # if we get an import error, the module path has probably changed
101 task.delete()
102 return Response(status=500)
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/events/api/tasks.py b/authentik/events/api/tasks.py
--- a/authentik/events/api/tasks.py
+++ b/authentik/events/api/tasks.py
@@ -60,6 +60,8 @@
"duration",
"status",
"messages",
+ "expires",
+ "expiring",
]
| {"golden_diff": "diff --git a/authentik/events/api/tasks.py b/authentik/events/api/tasks.py\n--- a/authentik/events/api/tasks.py\n+++ b/authentik/events/api/tasks.py\n@@ -60,6 +60,8 @@\n \"duration\",\n \"status\",\n \"messages\",\n+ \"expires\",\n+ \"expiring\",\n ]\n", "issue": "Proxy provider incorrect redirect behaviour\n**Describe the bug**\r\nThis bug manifests as a seemingly random-ish redirect after a page refresh when the proxy token has expired and the user is redirected back to the app from the proxy outpost that has just generated a new token.\r\n\r\nTo make it more clear - let's say we have a forward auth single application set up correctly and working - let's call the application The Echo Server: `https://echo.domain.tld`. We visit the Echo Server, idle for some time, and manage to wait enough so that the proxy token expires. When the Echo Servers sends a request to some kind of Echo Server's **resource (`https://echo.domain.tld/static/resource.json`)** AFTER the token expires (which fails because of the expired token) and THEN we refresh the current page (`https://echo.domain.tld/home`), we get redirected to authentik proxy outpost which in turns generates a new token and redirects the user back to the Echo Server - but when authentik eventually redirects the user back to the app the URL is **`https://echo.domain.tld/static/resource.json`** and not the original path from which the flow started `https://echo.domain.tld/home`.\r\n\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. Have a working setup of a protected application with a proxy provider in Forward auth (single application) mode (\"The Echo Server\" described above)\r\n2. In the app's proxy provider update the Token validity setting to a short duration, e.g. `seconds=30` for demonstration purposes\r\n3. Go to the app (e.g. `https://echo.domain.tld/home`) and successfully authenticate/authorize.\r\n4. Wait 30 seconds until the token expires\r\n5. Open a developer console and simulate an artificial resource request of some kind: `fetch(\"https://echo.domain.tld/static/resource.json\")` - the fetch fails because the token has expired. You can verify in the network tab that the fetch request gets redirected to the outpost and fails: `https://echo.domain.tld/outpost.goauthentik.io/start?rd=https%3A%2F%2Fecho.domain.tld%2Fstatic%2Fresource.json`.\r\n6. Now refresh the page while still being at the same URL: `https://echo.domain.tld/home` - You can verify in the network tab that the refresh request gets redirected to the outpost _with correct redirect argument set_: `https://echo.domain.tld/outpost.goauthentik.io/start?rd=https%3A%2F%2Fecho.domain.tld%2Fhome`\r\n7. 
You eventually get redirected back to the app's resource requested in step 5: `https://echo.domain.tld/static/resource.json`\r\n\r\n**Expected behavior**\r\nI would expect to be eventually redirected back to the `https://echo.domain.tld/home` page.\r\n\r\n**Logs**\r\n\r\n<details>\r\n<summary>Logs</summary>\r\n\r\n```\r\n{\"event\":\"/outpost.goauthentik.io/auth/nginx\",\"host\":\"echo.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.outpost.proxyv2.application\",\"method\":\"GET\",\"name\":\"Echo server proxy\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.343\",\"scheme\":\"http\",\"size\":0,\"status\":200,\"timestamp\":\"2023-05-14T22:28:10Z\",\"user\":\"UserName\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/outpost.goauthentik.io/auth/nginx\",\"host\":\"echo.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.outpost.proxyv2.application\",\"method\":\"GET\",\"name\":\"Echo server proxy\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.930\",\"scheme\":\"http\",\"size\":21,\"status\":401,\"timestamp\":\"2023-05-14T22:29:07Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/outpost.goauthentik.io/start?rd=https%3A%2F%2Fecho.domain.tld%2Fstatic%2Fresource.json\",\"host\":\"echo.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.outpost.proxyv2.application\",\"method\":\"GET\",\"name\":\"Echo server proxy\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"1.123\",\"scheme\":\"http\",\"size\":359,\"status\":302,\"timestamp\":\"2023-05-14T22:29:07Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/outpost.goauthentik.io/auth/nginx\",\"host\":\"echo.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.outpost.proxyv2.application\",\"method\":\"GET\",\"name\":\"Echo server proxy\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.933\",\"scheme\":\"http\",\"size\":21,\"status\":401,\"timestamp\":\"2023-05-14T22:29:11Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/outpost.goauthentik.io/start?rd=https%3A%2F%2Fecho.domain.tld%2Fhome\",\"host\":\"echo.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.outpost.proxyv2.application\",\"method\":\"GET\",\"name\":\"Echo server proxy\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"1.319\",\"scheme\":\"http\",\"size\":359,\"status\":302,\"timestamp\":\"2023-05-14T22:29:11Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"auth_via\": \"session\", \"event\": \"/application/o/authorize/?client_id=ffffffffffffffffffffffffffffffff&redirect_uri=https%3A%2F%2Fecho.domain.tld%2Foutpost.goauthentik.io%2Fcallback%3FX-authentik-auth-callback%3Dtrue&response_type=code&scope=openid+profile+email+ak_proxy&state=NdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo\", \"host\": \"auth.domain.tld\", \"level\": \"info\", \"logger\": \"authentik.asgi\", \"method\": \"GET\", \"pid\": 24, \"remote\": \"xxx.xxx.xxx.xxx\", \"request_id\": \"5b1ce3f63ab44b67ae482cd4eef3548d\", \"runtime\": 74, \"scheme\": \"https\", \"status\": 302, \"timestamp\": \"2023-05-14T22:29:11.311489\", \"user\": \"UserName\", \"user_agent\": \"USER_AGENT\"}\r\n{\"auth_via\": \"session\", \"event\": \"/if/flow/default-provider-authorization-explicit-consent/?client_id=ffffffffffffffffffffffffffffffff&redirect_uri=https%3A%2F%2Fecho.domain.tld%2Foutpost.goauthentik.io%2Fcallback%3FX-authentik-auth-callback%3Dtrue&response_type=code&scope=openid+profile+email+ak_proxy&state=NdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo\", \"host\": \"auth.domain.tld\", \"level\": \"info\", \"logger\": \"authentik.asgi\", \"method\": \"GET\", \"pid\": 24, \"remote\": \"xxx.xxx.xxx.xxx\", \"request_id\": 
\"9dabf88c7f7f40cb909a317c47132181\", \"runtime\": 33, \"scheme\": \"https\", \"status\": 200, \"timestamp\": \"2023-05-14T22:29:11.362915\", \"user\": \"UserName\", \"user_agent\": \"USER_AGENT\"}\r\n{\"event\":\"/static/dist/poly.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"4.872\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:11Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/standalone/loading/vendor-320681c9.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"1.094\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/standalone/loading/api-f65fd993.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.742\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/standalone/loading/locale-en-f660cb3b.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.523\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/standalone/loading/index.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.438\",\"scheme\":\"http\",\"size\":53898,\"status\":200,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/standalone/loading/index.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.196\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/flow/FlowInterface-d33d9ac4.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"2.285\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/flow/FlowInterface.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.127\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/flow/locale-en-f660cb3b.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"1.856\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/flow/vendor-cm-00a4719e.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"7.299\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/flow/api-
befd9628.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"13.889\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/assets/icons/icon_left_brand.svg\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.511\",\"scheme\":\"http\",\"size\":4861,\"status\":200,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/if/flow/default-provider-authorization-explicit-consent/assets/fonts/RedHatText/RedHatText-Regular.woff2\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.523\",\"scheme\":\"http\",\"size\":3768,\"status\":200,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/if/flow/default-provider-authorization-explicit-consent/assets/fonts/RedHatDisplay/RedHatDisplay-Medium.woff2\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.521\",\"scheme\":\"http\",\"size\":28661,\"status\":200,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/flow/vendor-25865c6e.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"45.016\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/static/dist/flow/FlowInterface-d33d9ac4.js.map\",\"host\":\"auth.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.router\",\"method\":\"GET\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"1.530\",\"scheme\":\"http\",\"size\":0,\"status\":304,\"timestamp\":\"2023-05-14T22:29:12Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"auth_via\": \"session\", \"event\": \"/api/v3/flows/executor/default-provider-authorization-explicit-consent/?query=client_id%3Dffffffffffffffffffffffffffffffff%26redirect_uri%3Dhttps%253A%252F%252Fecho.domain.tld%252Foutpost.goauthentik.io%252Fcallback%253FX-authentik-auth-callback%253Dtrue%26response_type%3Dcode%26scope%3Dopenid%2Bprofile%2Bemail%2Bak_proxy%26state%3DNdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo\", \"host\": \"auth.domain.tld\", \"level\": \"info\", \"logger\": \"authentik.asgi\", \"method\": \"GET\", \"pid\": 24, \"remote\": \"xxx.xxx.xxx.xxx\", \"request_id\": \"bccf832ab85840c7899bef18fb76899e\", \"runtime\": 233, \"scheme\": \"https\", \"status\": 302, \"timestamp\": \"2023-05-14T22:29:12.466727\", \"user\": \"UserName\", \"user_agent\": \"USER_AGENT\"}\r\n{\"action\": \"authorize_application\", \"auth_via\": \"session\", \"client_ip\": \"xxx.xxx.xxx.xxx\", \"context\": {\"authorized_application\": {\"app\": \"authentik_core\", \"model_name\": \"application\", \"name\": \"Echo server\", \"pk\": \"d208963c731d4cb282ae64397f731688\"}, \"flow\": \"a8c59e9e6fbc4e1d9a53365db1bf8704\", \"http_request\": {\"args\": {\"client_id\": \"ffffffffffffffffffffffffffffffff\", \"redirect_uri\": \"https://echo.domain.tld/outpost.goauthentik.io/callback?X-authentik-auth-callback=true\", \"response_type\": \"code\", \"scope\": \"openid profile email ak_proxy\", \"state\": \"NdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo\"}, \"method\": \"GET\", 
\"path\": \"/api/v3/flows/executor/default-provider-authorization-explicit-consent/\"}, \"scopes\": \"openid profile email ak_proxy\"}, \"event\": \"Created Event\", \"host\": \"auth.domain.tld\", \"level\": \"info\", \"logger\": \"authentik.events.models\", \"pid\": 24, \"request_id\": \"15975e5a84894e668b1127b804d7b3d8\", \"timestamp\": \"2023-05-14T22:29:12.654030\", \"user\": {\"email\": \"[email protected]\", \"pk\": 12, \"username\": \"UserName\"}}\r\n{\"auth_via\": \"session\", \"event\": \"Task published\", \"host\": \"auth.domain.tld\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 24, \"request_id\": \"15975e5a84894e668b1127b804d7b3d8\", \"task_id\": \"94594c44-1672-4710-b659-96c22b3580f6\", \"task_name\": \"authentik.events.tasks.event_notification_handler\", \"timestamp\": \"2023-05-14T22:29:12.678197\"}\r\n{\"auth_via\": \"session\", \"event\": \"/api/v3/flows/executor/default-provider-authorization-explicit-consent/?query=client_id%3Dffffffffffffffffffffffffffffffff%26redirect_uri%3Dhttps%253A%252F%252Fecho.domain.tld%252Foutpost.goauthentik.io%252Fcallback%253FX-authentik-auth-callback%253Dtrue%26response_type%3Dcode%26scope%3Dopenid%2Bprofile%2Bemail%2Bak_proxy%26state%3DNdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo\", \"host\": \"auth.domain.tld\", \"level\": \"info\", \"logger\": \"authentik.asgi\", \"method\": \"GET\", \"pid\": 24, \"remote\": \"xxx.xxx.xxx.xxx\", \"request_id\": \"15975e5a84894e668b1127b804d7b3d8\", \"runtime\": 113, \"scheme\": \"https\", \"status\": 200, \"timestamp\": \"2023-05-14T22:29:12.709587\", \"user\": \"UserName\", \"user_agent\": \"USER_AGENT\"}\r\n{\"auth_via\": \"unauthenticated\", \"event\": \"/-/health/ready/\", \"host\": \"localhost:9000\", \"level\": \"info\", \"logger\": \"authentik.asgi\", \"method\": \"HEAD\", \"pid\": 24, \"remote\": \"127.0.0.1\", \"request_id\": \"5cc814939c734f85ab612559d77ee914\", \"runtime\": 18, \"scheme\": \"http\", \"status\": 204, \"timestamp\": \"2023-05-14T22:29:12.845074\", \"user\": \"\", \"user_agent\": \"goauthentik.io lifecycle Healthcheck\"}\r\n{\"auth_via\": \"unauthenticated\", \"event\": \"/application/o/token/\", \"host\": \"auth.domain.tld\", \"level\": \"info\", \"logger\": \"authentik.asgi\", \"method\": \"POST\", \"pid\": 10514, \"remote\": \"127.0.0.1\", \"request_id\": \"dbc6b792cbc247dd8a879fb0dd8ec8f4\", \"runtime\": 54, \"scheme\": \"https\", \"status\": 200, \"timestamp\": \"2023-05-14T22:29:13.024719\", \"user\": \"\", \"user_agent\": \"goauthentik.io/outpost/2023.4.1 (provider=Echo server proxy)\"}\r\n{\"event\":\"/outpost.goauthentik.io/callback?X-authentik-auth-callback=true&code=942d95ae2232466aa67a89e8bc8f826f&state=NdC1Ol7jMCT8oS_P9oKDQ3J6gvnETS4eWUbMbo4DbRo\",\"host\":\"echo.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.outpost.proxyv2.application\",\"method\":\"GET\",\"name\":\"Echo server proxy\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"57.903\",\"scheme\":\"http\",\"size\":68,\"status\":302,\"timestamp\":\"2023-05-14T22:29:13Z\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/outpost.goauthentik.io/auth/nginx\",\"host\":\"echo.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.outpost.proxyv2.application\",\"method\":\"GET\",\"name\":\"Echo server 
proxy\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.308\",\"scheme\":\"http\",\"size\":0,\"status\":200,\"timestamp\":\"2023-05-14T22:29:13Z\",\"user\":\"UserName\",\"user_agent\":\"USER_AGENT\"}\r\n{\"event\":\"/outpost.goauthentik.io/auth/nginx\",\"host\":\"echo.domain.tld\",\"level\":\"info\",\"logger\":\"authentik.outpost.proxyv2.application\",\"method\":\"GET\",\"name\":\"Echo server proxy\",\"remote\":\"xxx.xxx.xxx.xxx\",\"runtime\":\"0.486\",\"scheme\":\"http\",\"size\":0,\"status\":200,\"timestamp\":\"2023-05-14T22:29:13Z\",\"user\":\"UserName\",\"user_agent\":\"USER_AGENT\"}\r\n```\r\n\r\n</details>\r\n\r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version: 2023.4.1\r\n- Deployment: docker-compose\r\n\r\n**Additional context**\r\nUsing nginx reverse proxy.\r\n\r\nThere are also users on Discord experiencing the same behaviour: https://discord.com/channels/809154715984199690/809154716507963434/1101389383300567060\n", "before_files": [{"content": "\"\"\"Tasks API\"\"\"\n\nfrom importlib import import_module\n\nfrom django.contrib import messages\nfrom django.utils.translation import gettext_lazy as _\nfrom drf_spectacular.types import OpenApiTypes\nfrom drf_spectacular.utils import OpenApiResponse, extend_schema\nfrom rest_framework.decorators import action\nfrom rest_framework.fields import (\n CharField,\n ChoiceField,\n DateTimeField,\n FloatField,\n SerializerMethodField,\n)\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.serializers import ModelSerializer\nfrom rest_framework.viewsets import ReadOnlyModelViewSet\nfrom structlog.stdlib import get_logger\n\nfrom authentik.events.logs import LogEventSerializer\nfrom authentik.events.models import SystemTask, TaskStatus\nfrom authentik.rbac.decorators import permission_required\n\nLOGGER = get_logger()\n\n\nclass SystemTaskSerializer(ModelSerializer):\n \"\"\"Serialize TaskInfo and TaskResult\"\"\"\n\n name = CharField()\n full_name = SerializerMethodField()\n uid = CharField(required=False)\n description = CharField()\n start_timestamp = DateTimeField(read_only=True)\n finish_timestamp = DateTimeField(read_only=True)\n duration = FloatField(read_only=True)\n\n status = ChoiceField(choices=[(x.value, x.name) for x in TaskStatus])\n messages = LogEventSerializer(many=True)\n\n def get_full_name(self, instance: SystemTask) -> str:\n \"\"\"Get full name with UID\"\"\"\n if instance.uid:\n return f\"{instance.name}:{instance.uid}\"\n return instance.name\n\n class Meta:\n model = SystemTask\n fields = [\n \"uuid\",\n \"name\",\n \"full_name\",\n \"uid\",\n \"description\",\n \"start_timestamp\",\n \"finish_timestamp\",\n \"duration\",\n \"status\",\n \"messages\",\n ]\n\n\nclass SystemTaskViewSet(ReadOnlyModelViewSet):\n \"\"\"Read-only view set that returns all background tasks\"\"\"\n\n queryset = SystemTask.objects.all()\n serializer_class = SystemTaskSerializer\n filterset_fields = [\"name\", \"uid\", \"status\"]\n ordering = [\"name\", \"uid\", \"status\"]\n search_fields = [\"name\", \"description\", \"uid\", \"status\"]\n\n @permission_required(None, [\"authentik_events.run_task\"])\n @extend_schema(\n request=OpenApiTypes.NONE,\n responses={\n 204: OpenApiResponse(description=\"Task retried successfully\"),\n 404: OpenApiResponse(description=\"Task not found\"),\n 500: OpenApiResponse(description=\"Failed to retry task\"),\n },\n )\n @action(detail=True, methods=[\"POST\"], permission_classes=[])\n def run(self, 
request: Request, pk=None) -> Response:\n \"\"\"Run task\"\"\"\n task: SystemTask = self.get_object()\n try:\n task_module = import_module(task.task_call_module)\n task_func = getattr(task_module, task.task_call_func)\n LOGGER.info(\"Running task\", task=task_func)\n task_func.delay(*task.task_call_args, **task.task_call_kwargs)\n messages.success(\n self.request,\n _(\"Successfully started task {name}.\".format_map({\"name\": task.name})),\n )\n return Response(status=204)\n except (ImportError, AttributeError) as exc: # pragma: no cover\n LOGGER.warning(\"Failed to run task, remove state\", task=task.name, exc=exc)\n # if we get an import error, the module path has probably changed\n task.delete()\n return Response(status=500)\n", "path": "authentik/events/api/tasks.py"}], "after_files": [{"content": "\"\"\"Tasks API\"\"\"\n\nfrom importlib import import_module\n\nfrom django.contrib import messages\nfrom django.utils.translation import gettext_lazy as _\nfrom drf_spectacular.types import OpenApiTypes\nfrom drf_spectacular.utils import OpenApiResponse, extend_schema\nfrom rest_framework.decorators import action\nfrom rest_framework.fields import (\n CharField,\n ChoiceField,\n DateTimeField,\n FloatField,\n SerializerMethodField,\n)\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.serializers import ModelSerializer\nfrom rest_framework.viewsets import ReadOnlyModelViewSet\nfrom structlog.stdlib import get_logger\n\nfrom authentik.events.logs import LogEventSerializer\nfrom authentik.events.models import SystemTask, TaskStatus\nfrom authentik.rbac.decorators import permission_required\n\nLOGGER = get_logger()\n\n\nclass SystemTaskSerializer(ModelSerializer):\n \"\"\"Serialize TaskInfo and TaskResult\"\"\"\n\n name = CharField()\n full_name = SerializerMethodField()\n uid = CharField(required=False)\n description = CharField()\n start_timestamp = DateTimeField(read_only=True)\n finish_timestamp = DateTimeField(read_only=True)\n duration = FloatField(read_only=True)\n\n status = ChoiceField(choices=[(x.value, x.name) for x in TaskStatus])\n messages = LogEventSerializer(many=True)\n\n def get_full_name(self, instance: SystemTask) -> str:\n \"\"\"Get full name with UID\"\"\"\n if instance.uid:\n return f\"{instance.name}:{instance.uid}\"\n return instance.name\n\n class Meta:\n model = SystemTask\n fields = [\n \"uuid\",\n \"name\",\n \"full_name\",\n \"uid\",\n \"description\",\n \"start_timestamp\",\n \"finish_timestamp\",\n \"duration\",\n \"status\",\n \"messages\",\n \"expires\",\n \"expiring\",\n ]\n\n\nclass SystemTaskViewSet(ReadOnlyModelViewSet):\n \"\"\"Read-only view set that returns all background tasks\"\"\"\n\n queryset = SystemTask.objects.all()\n serializer_class = SystemTaskSerializer\n filterset_fields = [\"name\", \"uid\", \"status\"]\n ordering = [\"name\", \"uid\", \"status\"]\n search_fields = [\"name\", \"description\", \"uid\", \"status\"]\n\n @permission_required(None, [\"authentik_events.run_task\"])\n @extend_schema(\n request=OpenApiTypes.NONE,\n responses={\n 204: OpenApiResponse(description=\"Task retried successfully\"),\n 404: OpenApiResponse(description=\"Task not found\"),\n 500: OpenApiResponse(description=\"Failed to retry task\"),\n },\n )\n @action(detail=True, methods=[\"POST\"], permission_classes=[])\n def run(self, request: Request, pk=None) -> Response:\n \"\"\"Run task\"\"\"\n task: SystemTask = self.get_object()\n try:\n task_module = import_module(task.task_call_module)\n task_func = 
getattr(task_module, task.task_call_func)\n LOGGER.info(\"Running task\", task=task_func)\n task_func.delay(*task.task_call_args, **task.task_call_kwargs)\n messages.success(\n self.request,\n _(\"Successfully started task {name}.\".format_map({\"name\": task.name})),\n )\n return Response(status=204)\n except (ImportError, AttributeError) as exc: # pragma: no cover\n LOGGER.warning(\"Failed to run task, remove state\", task=task.name, exc=exc)\n # if we get an import error, the module path has probably changed\n task.delete()\n return Response(status=500)\n", "path": "authentik/events/api/tasks.py"}]} |
gh_patches_debug_61 | rasdani/github-patches | git_diff | bokeh__bokeh-9310 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Documentation panels empty
Hello,
I was looking for documentation on Tabs and I went to the page :
https://docs.bokeh.org/en/latest/docs/reference/models/widgets.panels.html
However it display a blank page :

The last time the page was ot empty was on:
https://docs.bokeh.org/en/1.0.4/docs/reference/models/widgets.panels.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/models/layouts.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 ''' Various kinds of layout components.
8
9 '''
10
11 #-----------------------------------------------------------------------------
12 # Boilerplate
13 #-----------------------------------------------------------------------------
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 import logging
17 log = logging.getLogger(__name__)
18
19 #-----------------------------------------------------------------------------
20 # Imports
21 #-----------------------------------------------------------------------------
22
23 # Standard library imports
24
25 # External imports
26
27 # Bokeh imports
28 from ..core.enums import Align, SizingMode, SizingPolicy, Location
29 from ..core.has_props import abstract
30 from ..core.properties import (Bool, Auto, Enum, Int, NonNegativeInt, Float,
31 Instance, List, Seq, Tuple, Dict, String, Either, Struct, Color)
32 from ..core.validation import warning, error
33 from ..core.validation.warnings import (BOTH_CHILD_AND_ROOT, EMPTY_LAYOUT,
34 FIXED_SIZING_MODE, FIXED_WIDTH_POLICY, FIXED_HEIGHT_POLICY)
35 from ..core.validation.errors import MIN_PREFERRED_MAX_WIDTH, MIN_PREFERRED_MAX_HEIGHT
36 from ..model import Model
37 from .callbacks import Callback
38
39 #-----------------------------------------------------------------------------
40 # Globals and constants
41 #-----------------------------------------------------------------------------
42
43 __all__ = (
44 'Box',
45 'Column',
46 'GridBox',
47 'HTMLBox',
48 'LayoutDOM',
49 'Row',
50 'Spacer',
51 'WidgetBox',
52 )
53
54 #-----------------------------------------------------------------------------
55 # General API
56 #-----------------------------------------------------------------------------
57
58 @abstract
59 class LayoutDOM(Model):
60 """ The base class for layoutable components.
61
62 """
63
64 disabled = Bool(False, help="""
65 Whether the widget will be disabled when rendered.
66
67 If ``True``, the widget will be greyed-out and not responsive to UI events.
68 """)
69
70 visible = Bool(True, help="""
71 Whether the component will be visible and a part of a layout.
72 """)
73
74 width = NonNegativeInt(default=None, help="""
75 The width of the component (in pixels).
76
77 This can be either fixed or preferred width, depending on width sizing policy.
78 """)
79
80 height = NonNegativeInt(default=None, help="""
81 The height of the component (in pixels).
82
83 This can be either fixed or preferred height, depending on height sizing policy.
84 """)
85
86 min_width = NonNegativeInt(default=None, help="""
87 Minimal width of the component (in pixels) if width is adjustable.
88 """)
89
90 min_height = NonNegativeInt(default=None, help="""
91 Minimal height of the component (in pixels) if height is adjustable.
92 """)
93
94 max_width = NonNegativeInt(default=None, help="""
95 Minimal width of the component (in pixels) if width is adjustable.
96 """)
97
98 max_height = NonNegativeInt(default=None, help="""
99 Minimal height of the component (in pixels) if height is adjustable.
100 """)
101
102 margin = Tuple(Int, Int, Int, Int, default=(0, 0, 0, 0), help="""
103 Allows to create additional space around the component.
104 """).accepts(Tuple(Int, Int), lambda v_h: (v_h[0], v_h[1], v_h[0], v_h[1])) \
105 .accepts(Int, lambda m: (m, m, m, m))
106
107 width_policy = Either(Auto, Enum(SizingPolicy), default="auto", help="""
108 Describes how the component should maintain its width.
109
110 ``"auto"``
111 Use component's preferred sizing policy.
112
113 ``"fixed"``
114 Use exactly ``width`` pixels. Component will overflow if it can't fit in the
115 available horizontal space.
116
117 ``"fit"``
118 Use component's preferred width (if set) and allow it to fit into the available
119 horizontal space within the minimum and maximum width bounds (if set). Component's
120 width neither will be aggressively minimized nor maximized.
121
122 ``"min"``
123 Use as little horizontal space as possible, not less than the minimum width (if set).
124 The starting point is the preferred width (if set). The width of the component may
125 shrink or grow depending on the parent layout, aspect management and other factors.
126
127 ``"max"``
128 Use as much horizontal space as possible, not more than the maximum width (if set).
129 The starting point is the preferred width (if set). The width of the component may
130 shrink or grow depending on the parent layout, aspect management and other factors.
131
132 .. note::
133 This is an experimental feature and may change in future. Use it at your
134 own discretion. Prefer using ``sizing_mode`` if this level of control isn't
135 strictly necessary.
136
137 """)
138
139 height_policy = Either(Auto, Enum(SizingPolicy), default="auto", help="""
140 Describes how the component should maintain its height.
141
142 ``"auto"``
143 Use component's preferred sizing policy.
144
145 ``"fixed"``
146 Use exactly ``height`` pixels. Component will overflow if it can't fit in the
147 available vertical space.
148
149 ``"fit"``
150 Use component's preferred height (if set) and allow to fit into the available
151 vertical space within the minimum and maximum height bounds (if set). Component's
152 height neither will be aggressively minimized nor maximized.
153
154 ``"min"``
155 Use as little vertical space as possible, not less than the minimum height (if set).
156 The starting point is the preferred height (if set). The height of the component may
157 shrink or grow depending on the parent layout, aspect management and other factors.
158
159 ``"max"``
160 Use as much vertical space as possible, not more than the maximum height (if set).
161 The starting point is the preferred height (if set). The height of the component may
162 shrink or grow depending on the parent layout, aspect management and other factors.
163
164 .. note::
165 This is an experimental feature and may change in future. Use it at your
166 own discretion. Prefer using ``sizing_mode`` if this level of control isn't
167 strictly necessary.
168
169 """)
170
171 aspect_ratio = Either(Enum("auto"), Float, default=None, help="""
172 Describes the proportional relationship between component's width and height.
173
174 This works if any of component's dimensions are flexible in size. If set to
175 a number, ``width / height = aspect_ratio`` relationship will be maintained.
176 Otherwise, if set to ``"auto"``, component's preferred width and height will
177 be used to determine the aspect (if not set, no aspect will be preserved).
178
179 """)
180
181 sizing_mode = Enum(SizingMode, default=None, help="""
182 How the component should size itself.
183
184 This is a high-level setting for maintaining width and height of the component. To
185 gain more fine grained control over sizing, use ``width_policy``, ``height_policy``
186 and ``aspect_ratio`` instead (those take precedence over ``sizing_mode``).
187
188 Possible scenarios:
189
190 ``"fixed"``
191 Component is not responsive. It will retain its original width and height
192 regardless of any subsequent browser window resize events.
193
194 ``"stretch_width"``
195 Component will responsively resize to stretch to the available width, without
196 maintaining any aspect ratio. The height of the component depends on the type
197 of the component and may be fixed or fit to component's contents.
198
199 ``"stretch_height"``
200 Component will responsively resize to stretch to the available height, without
201 maintaining any aspect ratio. The width of the component depends on the type
202 of the component and may be fixed or fit to component's contents.
203
204 ``"stretch_both"``
205 Component is completely responsive, independently in width and height, and
206 will occupy all the available horizontal and vertical space, even if this
207 changes the aspect ratio of the component.
208
209 ``"scale_width"``
210 Component will responsively resize to stretch to the available width, while
211 maintaining the original or provided aspect ratio.
212
213 ``"scale_height"``
214 Component will responsively resize to stretch to the available height, while
215 maintaining the original or provided aspect ratio.
216
217 ``"scale_both"``
218 Component will responsively resize to both the available width and height, while
219 maintaining the original or provided aspect ratio.
220
221 """)
222
223 align = Either(Enum(Align), Tuple(Enum(Align), Enum(Align)), default="start", help="""
224 The alignment point within the parent container.
225
226 This property is useful only if this component is a child element of a layout
227 (e.g. a grid). Self alignment can be overridden by the parent container (e.g.
228 grid track align).
229 """)
230
231 background = Color(default=None, help="""
232 Background color of the component.
233 """)
234
235 # List in order for in-place changes to trigger changes, ref: https://github.com/bokeh/bokeh/issues/6841
236 css_classes = List(String, help="""
237 A list of CSS class names to add to this DOM element. Note: the class names are
238 simply added as-is, no other guarantees are provided.
239
240 It is also permissible to assign from tuples, however these are adapted -- the
241 property will always contain a list.
242 """).accepts(Seq(String), lambda x: list(x))
243
244 @warning(FIXED_SIZING_MODE)
245 def _check_fixed_sizing_mode(self):
246 if self.sizing_mode == "fixed" and (self.width is None or self.height is None):
247 return str(self)
248
249 @warning(FIXED_WIDTH_POLICY)
250 def _check_fixed_width_policy(self):
251 if self.width_policy == "fixed" and self.width is None:
252 return str(self)
253
254 @warning(FIXED_HEIGHT_POLICY)
255 def _check_fixed_height_policy(self):
256 if self.height_policy == "fixed" and self.height is None:
257 return str(self)
258
259 @error(MIN_PREFERRED_MAX_WIDTH)
260 def _min_preferred_max_width(self):
261 min_width = self.min_width if self.min_width is not None else 0
262 width = self.width if self.width is not None else min_width
263 max_width = self.max_width if self.max_width is not None else width
264
265 if not (min_width <= width <= max_width):
266 return str(self)
267
268 @error(MIN_PREFERRED_MAX_HEIGHT)
269 def _min_preferred_max_height(self):
270 min_height = self.min_height if self.min_height is not None else 0
271 height = self.height if self.height is not None else min_height
272 max_height = self.max_height if self.max_height is not None else height
273
274 if not (min_height <= height <= max_height):
275 return str(self)
276
277 @abstract
278 class HTMLBox(LayoutDOM):
279 ''' A component which size is determined by its HTML content.
280
281 '''
282
283 class Spacer(LayoutDOM):
284 ''' A container for space used to fill an empty spot in a row or column.
285
286 '''
287
288 QuickTrackSizing = Either(Enum("auto", "min", "fit", "max"), Int)
289
290 TrackAlign = Either(Auto, Enum(Align))
291
292 RowSizing = Either(
293 QuickTrackSizing,
294 Struct(policy=Enum("auto", "min"), align=TrackAlign),
295 Struct(policy=Enum("fixed"), height=Int, align=TrackAlign),
296 Struct(policy=Enum("fit", "max"), flex=Float, align=TrackAlign))
297
298 ColSizing = Either(
299 QuickTrackSizing,
300 Struct(policy=Enum("auto", "min"), align=TrackAlign),
301 Struct(policy=Enum("fixed"), width=Int, align=TrackAlign),
302 Struct(policy=Enum("fit", "max"), flex=Float, align=TrackAlign))
303
304 IntOrString = Either(Int, String) # XXX: work around issue #8166
305
306 class GridBox(LayoutDOM):
307
308 children = List(Either(
309 Tuple(Instance(LayoutDOM), Int, Int),
310 Tuple(Instance(LayoutDOM), Int, Int, Int, Int)), default=[], help="""
311 A list of children with their associated position in the grid (row, column).
312 """)
313
314 rows = Either(QuickTrackSizing, Dict(IntOrString, RowSizing), default="auto", help="""
315 Describes how the grid should maintain its rows' heights.
316
317 .. note::
318 This is an experimental feature and may change in future. Use it at your
319 own discretion.
320
321 """)
322
323 cols = Either(QuickTrackSizing, Dict(IntOrString, ColSizing), default="auto", help="""
324 Describes how the grid should maintain its columns' widths.
325
326 .. note::
327 This is an experimental feature and may change in future. Use it at your
328 own discretion.
329
330 """)
331
332 spacing = Either(Int, Tuple(Int, Int), default=0, help="""
333 The gap between children (in pixels).
334
335 Either a number, if spacing is the same for both dimensions, or a pair
336 of numbers indicating spacing in the vertical and horizontal dimensions
337 respectively.
338 """)
339
340 @abstract
341 class Box(LayoutDOM):
342 ''' Abstract base class for Row and Column. Do not use directly.
343
344 '''
345
346 def __init__(self, *args, **kwargs):
347
348 if len(args) > 0 and "children" in kwargs:
349 raise ValueError("'children' keyword cannot be used with positional arguments")
350 elif len(args) > 0:
351 kwargs["children"] = list(args)
352
353 super(Box, self).__init__(**kwargs)
354
355 @warning(EMPTY_LAYOUT)
356 def _check_empty_layout(self):
357 from itertools import chain
358 if not list(chain(self.children)):
359 return str(self)
360
361 @warning(BOTH_CHILD_AND_ROOT)
362 def _check_child_is_also_root(self):
363 problems = []
364 for c in self.children:
365 if c.document is not None and c in c.document.roots:
366 problems.append(str(c))
367 if problems:
368 return ", ".join(problems)
369 else:
370 return None
371
372 children = List(Instance(LayoutDOM), help="""
373 The list of children, which can be other components including plots, rows, columns, and widgets.
374 """)
375
376 spacing = Int(default=0, help="""
377 The gap between children (in pixels).
378 """)
379
380
381 class Row(Box):
382 ''' Lay out child components in a single horizontal row.
383
384 Children can be specified as positional arguments, as a single argument
385 that is a sequence, or using the ``children`` keyword argument.
386 '''
387
388 cols = Either(QuickTrackSizing, Dict(IntOrString, ColSizing), default="auto", help="""
389 Describes how the component should maintain its columns' widths.
390
391 .. note::
392 This is an experimental feature and may change in future. Use it at your
393 own discretion.
394
395 """)
396
397 class Column(Box):
398 ''' Lay out child components in a single vertical row.
399
400 Children can be specified as positional arguments, as a single argument
401 that is a sequence, or using the ``children`` keyword argument.
402 '''
403
404 rows = Either(QuickTrackSizing, Dict(IntOrString, RowSizing), default="auto", help="""
405 Describes how the component should maintain its rows' heights.
406
407 .. note::
408 This is an experimental feature and may change in future. Use it at your
409 own discretion.
410
411 """)
412
413 class WidgetBox(Column):
414 ''' Create a column of bokeh widgets with predefined styling.
415
416 '''
417
418 class Panel(Model):
419 ''' A single-widget container with title bar and controls.
420
421 '''
422
423 title = String(default="", help="""
424 The text title of the panel.
425 """)
426
427 child = Instance(LayoutDOM, help="""
428 The child widget. If you need more children, use a layout widget, e.g. a ``Column``.
429 """)
430
431 closable = Bool(False, help="""
432 Whether this panel is closable or not. If True, an "x" button will appear.
433
434 Closing a panel is equivalent to removing it from its parent container (e.g. tabs).
435 """)
436
437 class Tabs(LayoutDOM):
438 ''' A panel widget with navigation tabs.
439
440 '''
441
442 __example__ = "sphinx/source/docs/user_guide/examples/interaction_tab_panes.py"
443
444 tabs = List(Instance(Panel), help="""
445 The list of child panel widgets.
446 """).accepts(List(Tuple(String, Instance(LayoutDOM))),
447 lambda items: [ Panel(title=title, child=child) for (title, child) in items ])
448
449 tabs_location = Enum(Location, default="above", help="""
450 The location of the buttons that activate tabs.
451 """)
452
453 active = Int(0, help="""
454 The index of the active tab.
455 """)
456
457 callback = Instance(Callback, help="""
458 A callback to run in the browser whenever the button is activated.
459 """)
460
461 #-----------------------------------------------------------------------------
462 # Dev API
463 #-----------------------------------------------------------------------------
464
465 #-----------------------------------------------------------------------------
466 # Private API
467 #-----------------------------------------------------------------------------
468
469 #-----------------------------------------------------------------------------
470 # Code
471 #-----------------------------------------------------------------------------
472
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bokeh/models/layouts.py b/bokeh/models/layouts.py
--- a/bokeh/models/layouts.py
+++ b/bokeh/models/layouts.py
@@ -46,8 +46,10 @@
'GridBox',
'HTMLBox',
'LayoutDOM',
+ 'Panel',
'Row',
'Spacer',
+ 'Tabs',
'WidgetBox',
)
| {"golden_diff": "diff --git a/bokeh/models/layouts.py b/bokeh/models/layouts.py\n--- a/bokeh/models/layouts.py\n+++ b/bokeh/models/layouts.py\n@@ -46,8 +46,10 @@\n 'GridBox',\n 'HTMLBox',\n 'LayoutDOM',\n+ 'Panel',\n 'Row',\n 'Spacer',\n+ 'Tabs',\n 'WidgetBox',\n )\n", "issue": "[BUG] Documentation panels empty\nHello,\r\n\r\nI was looking for documentation on Tabs and I went to the page :\r\n\r\nhttps://docs.bokeh.org/en/latest/docs/reference/models/widgets.panels.html\r\nHowever it display a blank page :\r\n\r\n\r\nThe last time the page was ot empty was on:\r\n\r\nhttps://docs.bokeh.org/en/1.0.4/docs/reference/models/widgets.panels.html\r\n\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Various kinds of layout components.\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\n\n# External imports\n\n# Bokeh imports\nfrom ..core.enums import Align, SizingMode, SizingPolicy, Location\nfrom ..core.has_props import abstract\nfrom ..core.properties import (Bool, Auto, Enum, Int, NonNegativeInt, Float,\n Instance, List, Seq, Tuple, Dict, String, Either, Struct, Color)\nfrom ..core.validation import warning, error\nfrom ..core.validation.warnings import (BOTH_CHILD_AND_ROOT, EMPTY_LAYOUT,\n FIXED_SIZING_MODE, FIXED_WIDTH_POLICY, FIXED_HEIGHT_POLICY)\nfrom ..core.validation.errors import MIN_PREFERRED_MAX_WIDTH, MIN_PREFERRED_MAX_HEIGHT\nfrom ..model import Model\nfrom .callbacks import Callback\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'Box',\n 'Column',\n 'GridBox',\n 'HTMLBox',\n 'LayoutDOM',\n 'Row',\n 'Spacer',\n 'WidgetBox',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n@abstract\nclass LayoutDOM(Model):\n \"\"\" The base class for layoutable components.\n\n \"\"\"\n\n disabled = Bool(False, help=\"\"\"\n Whether the widget will be disabled when rendered.\n\n If ``True``, the widget will be greyed-out and not responsive to UI events.\n \"\"\")\n\n visible = Bool(True, help=\"\"\"\n Whether the component will be visible and a part of a layout.\n \"\"\")\n\n width = NonNegativeInt(default=None, help=\"\"\"\n The width of the component (in pixels).\n\n This can be either fixed or preferred width, depending on width sizing policy.\n \"\"\")\n\n height = NonNegativeInt(default=None, help=\"\"\"\n The height of the component (in pixels).\n\n This can be either fixed or preferred height, depending on height sizing policy.\n \"\"\")\n\n min_width = NonNegativeInt(default=None, help=\"\"\"\n Minimal width of the 
component (in pixels) if width is adjustable.\n \"\"\")\n\n min_height = NonNegativeInt(default=None, help=\"\"\"\n Minimal height of the component (in pixels) if height is adjustable.\n \"\"\")\n\n max_width = NonNegativeInt(default=None, help=\"\"\"\n Minimal width of the component (in pixels) if width is adjustable.\n \"\"\")\n\n max_height = NonNegativeInt(default=None, help=\"\"\"\n Minimal height of the component (in pixels) if height is adjustable.\n \"\"\")\n\n margin = Tuple(Int, Int, Int, Int, default=(0, 0, 0, 0), help=\"\"\"\n Allows to create additional space around the component.\n \"\"\").accepts(Tuple(Int, Int), lambda v_h: (v_h[0], v_h[1], v_h[0], v_h[1])) \\\n .accepts(Int, lambda m: (m, m, m, m))\n\n width_policy = Either(Auto, Enum(SizingPolicy), default=\"auto\", help=\"\"\"\n Describes how the component should maintain its width.\n\n ``\"auto\"``\n Use component's preferred sizing policy.\n\n ``\"fixed\"``\n Use exactly ``width`` pixels. Component will overflow if it can't fit in the\n available horizontal space.\n\n ``\"fit\"``\n Use component's preferred width (if set) and allow it to fit into the available\n horizontal space within the minimum and maximum width bounds (if set). Component's\n width neither will be aggressively minimized nor maximized.\n\n ``\"min\"``\n Use as little horizontal space as possible, not less than the minimum width (if set).\n The starting point is the preferred width (if set). The width of the component may\n shrink or grow depending on the parent layout, aspect management and other factors.\n\n ``\"max\"``\n Use as much horizontal space as possible, not more than the maximum width (if set).\n The starting point is the preferred width (if set). The width of the component may\n shrink or grow depending on the parent layout, aspect management and other factors.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion. Prefer using ``sizing_mode`` if this level of control isn't\n strictly necessary.\n\n \"\"\")\n\n height_policy = Either(Auto, Enum(SizingPolicy), default=\"auto\", help=\"\"\"\n Describes how the component should maintain its height.\n\n ``\"auto\"``\n Use component's preferred sizing policy.\n\n ``\"fixed\"``\n Use exactly ``height`` pixels. Component will overflow if it can't fit in the\n available vertical space.\n\n ``\"fit\"``\n Use component's preferred height (if set) and allow to fit into the available\n vertical space within the minimum and maximum height bounds (if set). Component's\n height neither will be aggressively minimized nor maximized.\n\n ``\"min\"``\n Use as little vertical space as possible, not less than the minimum height (if set).\n The starting point is the preferred height (if set). The height of the component may\n shrink or grow depending on the parent layout, aspect management and other factors.\n\n ``\"max\"``\n Use as much vertical space as possible, not more than the maximum height (if set).\n The starting point is the preferred height (if set). The height of the component may\n shrink or grow depending on the parent layout, aspect management and other factors.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion. 
Prefer using ``sizing_mode`` if this level of control isn't\n strictly necessary.\n\n \"\"\")\n\n aspect_ratio = Either(Enum(\"auto\"), Float, default=None, help=\"\"\"\n Describes the proportional relationship between component's width and height.\n\n This works if any of component's dimensions are flexible in size. If set to\n a number, ``width / height = aspect_ratio`` relationship will be maintained.\n Otherwise, if set to ``\"auto\"``, component's preferred width and height will\n be used to determine the aspect (if not set, no aspect will be preserved).\n\n \"\"\")\n\n sizing_mode = Enum(SizingMode, default=None, help=\"\"\"\n How the component should size itself.\n\n This is a high-level setting for maintaining width and height of the component. To\n gain more fine grained control over sizing, use ``width_policy``, ``height_policy``\n and ``aspect_ratio`` instead (those take precedence over ``sizing_mode``).\n\n Possible scenarios:\n\n ``\"fixed\"``\n Component is not responsive. It will retain its original width and height\n regardless of any subsequent browser window resize events.\n\n ``\"stretch_width\"``\n Component will responsively resize to stretch to the available width, without\n maintaining any aspect ratio. The height of the component depends on the type\n of the component and may be fixed or fit to component's contents.\n\n ``\"stretch_height\"``\n Component will responsively resize to stretch to the available height, without\n maintaining any aspect ratio. The width of the component depends on the type\n of the component and may be fixed or fit to component's contents.\n\n ``\"stretch_both\"``\n Component is completely responsive, independently in width and height, and\n will occupy all the available horizontal and vertical space, even if this\n changes the aspect ratio of the component.\n\n ``\"scale_width\"``\n Component will responsively resize to stretch to the available width, while\n maintaining the original or provided aspect ratio.\n\n ``\"scale_height\"``\n Component will responsively resize to stretch to the available height, while\n maintaining the original or provided aspect ratio.\n\n ``\"scale_both\"``\n Component will responsively resize to both the available width and height, while\n maintaining the original or provided aspect ratio.\n\n \"\"\")\n\n align = Either(Enum(Align), Tuple(Enum(Align), Enum(Align)), default=\"start\", help=\"\"\"\n The alignment point within the parent container.\n\n This property is useful only if this component is a child element of a layout\n (e.g. a grid). Self alignment can be overridden by the parent container (e.g.\n grid track align).\n \"\"\")\n\n background = Color(default=None, help=\"\"\"\n Background color of the component.\n \"\"\")\n\n # List in order for in-place changes to trigger changes, ref: https://github.com/bokeh/bokeh/issues/6841\n css_classes = List(String, help=\"\"\"\n A list of CSS class names to add to this DOM element. 
Note: the class names are\n simply added as-is, no other guarantees are provided.\n\n It is also permissible to assign from tuples, however these are adapted -- the\n property will always contain a list.\n \"\"\").accepts(Seq(String), lambda x: list(x))\n\n @warning(FIXED_SIZING_MODE)\n def _check_fixed_sizing_mode(self):\n if self.sizing_mode == \"fixed\" and (self.width is None or self.height is None):\n return str(self)\n\n @warning(FIXED_WIDTH_POLICY)\n def _check_fixed_width_policy(self):\n if self.width_policy == \"fixed\" and self.width is None:\n return str(self)\n\n @warning(FIXED_HEIGHT_POLICY)\n def _check_fixed_height_policy(self):\n if self.height_policy == \"fixed\" and self.height is None:\n return str(self)\n\n @error(MIN_PREFERRED_MAX_WIDTH)\n def _min_preferred_max_width(self):\n min_width = self.min_width if self.min_width is not None else 0\n width = self.width if self.width is not None else min_width\n max_width = self.max_width if self.max_width is not None else width\n\n if not (min_width <= width <= max_width):\n return str(self)\n\n @error(MIN_PREFERRED_MAX_HEIGHT)\n def _min_preferred_max_height(self):\n min_height = self.min_height if self.min_height is not None else 0\n height = self.height if self.height is not None else min_height\n max_height = self.max_height if self.max_height is not None else height\n\n if not (min_height <= height <= max_height):\n return str(self)\n\n@abstract\nclass HTMLBox(LayoutDOM):\n ''' A component which size is determined by its HTML content.\n\n '''\n\nclass Spacer(LayoutDOM):\n ''' A container for space used to fill an empty spot in a row or column.\n\n '''\n\nQuickTrackSizing = Either(Enum(\"auto\", \"min\", \"fit\", \"max\"), Int)\n\nTrackAlign = Either(Auto, Enum(Align))\n\nRowSizing = Either(\n QuickTrackSizing,\n Struct(policy=Enum(\"auto\", \"min\"), align=TrackAlign),\n Struct(policy=Enum(\"fixed\"), height=Int, align=TrackAlign),\n Struct(policy=Enum(\"fit\", \"max\"), flex=Float, align=TrackAlign))\n\nColSizing = Either(\n QuickTrackSizing,\n Struct(policy=Enum(\"auto\", \"min\"), align=TrackAlign),\n Struct(policy=Enum(\"fixed\"), width=Int, align=TrackAlign),\n Struct(policy=Enum(\"fit\", \"max\"), flex=Float, align=TrackAlign))\n\nIntOrString = Either(Int, String) # XXX: work around issue #8166\n\nclass GridBox(LayoutDOM):\n\n children = List(Either(\n Tuple(Instance(LayoutDOM), Int, Int),\n Tuple(Instance(LayoutDOM), Int, Int, Int, Int)), default=[], help=\"\"\"\n A list of children with their associated position in the grid (row, column).\n \"\"\")\n\n rows = Either(QuickTrackSizing, Dict(IntOrString, RowSizing), default=\"auto\", help=\"\"\"\n Describes how the grid should maintain its rows' heights.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion.\n\n \"\"\")\n\n cols = Either(QuickTrackSizing, Dict(IntOrString, ColSizing), default=\"auto\", help=\"\"\"\n Describes how the grid should maintain its columns' widths.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion.\n\n \"\"\")\n\n spacing = Either(Int, Tuple(Int, Int), default=0, help=\"\"\"\n The gap between children (in pixels).\n\n Either a number, if spacing is the same for both dimensions, or a pair\n of numbers indicating spacing in the vertical and horizontal dimensions\n respectively.\n \"\"\")\n\n@abstract\nclass Box(LayoutDOM):\n ''' Abstract base class for Row and Column. 
Do not use directly.\n\n '''\n\n def __init__(self, *args, **kwargs):\n\n if len(args) > 0 and \"children\" in kwargs:\n raise ValueError(\"'children' keyword cannot be used with positional arguments\")\n elif len(args) > 0:\n kwargs[\"children\"] = list(args)\n\n super(Box, self).__init__(**kwargs)\n\n @warning(EMPTY_LAYOUT)\n def _check_empty_layout(self):\n from itertools import chain\n if not list(chain(self.children)):\n return str(self)\n\n @warning(BOTH_CHILD_AND_ROOT)\n def _check_child_is_also_root(self):\n problems = []\n for c in self.children:\n if c.document is not None and c in c.document.roots:\n problems.append(str(c))\n if problems:\n return \", \".join(problems)\n else:\n return None\n\n children = List(Instance(LayoutDOM), help=\"\"\"\n The list of children, which can be other components including plots, rows, columns, and widgets.\n \"\"\")\n\n spacing = Int(default=0, help=\"\"\"\n The gap between children (in pixels).\n \"\"\")\n\n\nclass Row(Box):\n ''' Lay out child components in a single horizontal row.\n\n Children can be specified as positional arguments, as a single argument\n that is a sequence, or using the ``children`` keyword argument.\n '''\n\n cols = Either(QuickTrackSizing, Dict(IntOrString, ColSizing), default=\"auto\", help=\"\"\"\n Describes how the component should maintain its columns' widths.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion.\n\n \"\"\")\n\nclass Column(Box):\n ''' Lay out child components in a single vertical row.\n\n Children can be specified as positional arguments, as a single argument\n that is a sequence, or using the ``children`` keyword argument.\n '''\n\n rows = Either(QuickTrackSizing, Dict(IntOrString, RowSizing), default=\"auto\", help=\"\"\"\n Describes how the component should maintain its rows' heights.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion.\n\n \"\"\")\n\nclass WidgetBox(Column):\n ''' Create a column of bokeh widgets with predefined styling.\n\n '''\n\nclass Panel(Model):\n ''' A single-widget container with title bar and controls.\n\n '''\n\n title = String(default=\"\", help=\"\"\"\n The text title of the panel.\n \"\"\")\n\n child = Instance(LayoutDOM, help=\"\"\"\n The child widget. If you need more children, use a layout widget, e.g. a ``Column``.\n \"\"\")\n\n closable = Bool(False, help=\"\"\"\n Whether this panel is closable or not. If True, an \"x\" button will appear.\n\n Closing a panel is equivalent to removing it from its parent container (e.g. 
tabs).\n \"\"\")\n\nclass Tabs(LayoutDOM):\n ''' A panel widget with navigation tabs.\n\n '''\n\n __example__ = \"sphinx/source/docs/user_guide/examples/interaction_tab_panes.py\"\n\n tabs = List(Instance(Panel), help=\"\"\"\n The list of child panel widgets.\n \"\"\").accepts(List(Tuple(String, Instance(LayoutDOM))),\n lambda items: [ Panel(title=title, child=child) for (title, child) in items ])\n\n tabs_location = Enum(Location, default=\"above\", help=\"\"\"\n The location of the buttons that activate tabs.\n \"\"\")\n\n active = Int(0, help=\"\"\"\n The index of the active tab.\n \"\"\")\n\n callback = Instance(Callback, help=\"\"\"\n A callback to run in the browser whenever the button is activated.\n \"\"\")\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/models/layouts.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Various kinds of layout components.\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Standard library imports\n\n# External imports\n\n# Bokeh imports\nfrom ..core.enums import Align, SizingMode, SizingPolicy, Location\nfrom ..core.has_props import abstract\nfrom ..core.properties import (Bool, Auto, Enum, Int, NonNegativeInt, Float,\n Instance, List, Seq, Tuple, Dict, String, Either, Struct, Color)\nfrom ..core.validation import warning, error\nfrom ..core.validation.warnings import (BOTH_CHILD_AND_ROOT, EMPTY_LAYOUT,\n FIXED_SIZING_MODE, FIXED_WIDTH_POLICY, FIXED_HEIGHT_POLICY)\nfrom ..core.validation.errors import MIN_PREFERRED_MAX_WIDTH, MIN_PREFERRED_MAX_HEIGHT\nfrom ..model import Model\nfrom .callbacks import Callback\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'Box',\n 'Column',\n 'GridBox',\n 'HTMLBox',\n 'LayoutDOM',\n 'Panel',\n 'Row',\n 'Spacer',\n 'Tabs',\n 'WidgetBox',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n@abstract\nclass LayoutDOM(Model):\n \"\"\" The base class for layoutable components.\n\n \"\"\"\n\n disabled = Bool(False, help=\"\"\"\n Whether the widget will be disabled when rendered.\n\n If ``True``, the 
widget will be greyed-out and not responsive to UI events.\n \"\"\")\n\n visible = Bool(True, help=\"\"\"\n Whether the component will be visible and a part of a layout.\n \"\"\")\n\n width = NonNegativeInt(default=None, help=\"\"\"\n The width of the component (in pixels).\n\n This can be either fixed or preferred width, depending on width sizing policy.\n \"\"\")\n\n height = NonNegativeInt(default=None, help=\"\"\"\n The height of the component (in pixels).\n\n This can be either fixed or preferred height, depending on height sizing policy.\n \"\"\")\n\n min_width = NonNegativeInt(default=None, help=\"\"\"\n Minimal width of the component (in pixels) if width is adjustable.\n \"\"\")\n\n min_height = NonNegativeInt(default=None, help=\"\"\"\n Minimal height of the component (in pixels) if height is adjustable.\n \"\"\")\n\n max_width = NonNegativeInt(default=None, help=\"\"\"\n Minimal width of the component (in pixels) if width is adjustable.\n \"\"\")\n\n max_height = NonNegativeInt(default=None, help=\"\"\"\n Minimal height of the component (in pixels) if height is adjustable.\n \"\"\")\n\n margin = Tuple(Int, Int, Int, Int, default=(0, 0, 0, 0), help=\"\"\"\n Allows to create additional space around the component.\n \"\"\").accepts(Tuple(Int, Int), lambda v_h: (v_h[0], v_h[1], v_h[0], v_h[1])) \\\n .accepts(Int, lambda m: (m, m, m, m))\n\n width_policy = Either(Auto, Enum(SizingPolicy), default=\"auto\", help=\"\"\"\n Describes how the component should maintain its width.\n\n ``\"auto\"``\n Use component's preferred sizing policy.\n\n ``\"fixed\"``\n Use exactly ``width`` pixels. Component will overflow if it can't fit in the\n available horizontal space.\n\n ``\"fit\"``\n Use component's preferred width (if set) and allow it to fit into the available\n horizontal space within the minimum and maximum width bounds (if set). Component's\n width neither will be aggressively minimized nor maximized.\n\n ``\"min\"``\n Use as little horizontal space as possible, not less than the minimum width (if set).\n The starting point is the preferred width (if set). The width of the component may\n shrink or grow depending on the parent layout, aspect management and other factors.\n\n ``\"max\"``\n Use as much horizontal space as possible, not more than the maximum width (if set).\n The starting point is the preferred width (if set). The width of the component may\n shrink or grow depending on the parent layout, aspect management and other factors.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion. Prefer using ``sizing_mode`` if this level of control isn't\n strictly necessary.\n\n \"\"\")\n\n height_policy = Either(Auto, Enum(SizingPolicy), default=\"auto\", help=\"\"\"\n Describes how the component should maintain its height.\n\n ``\"auto\"``\n Use component's preferred sizing policy.\n\n ``\"fixed\"``\n Use exactly ``height`` pixels. Component will overflow if it can't fit in the\n available vertical space.\n\n ``\"fit\"``\n Use component's preferred height (if set) and allow to fit into the available\n vertical space within the minimum and maximum height bounds (if set). Component's\n height neither will be aggressively minimized nor maximized.\n\n ``\"min\"``\n Use as little vertical space as possible, not less than the minimum height (if set).\n The starting point is the preferred height (if set). 
The height of the component may\n shrink or grow depending on the parent layout, aspect management and other factors.\n\n ``\"max\"``\n Use as much vertical space as possible, not more than the maximum height (if set).\n The starting point is the preferred height (if set). The height of the component may\n shrink or grow depending on the parent layout, aspect management and other factors.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion. Prefer using ``sizing_mode`` if this level of control isn't\n strictly necessary.\n\n \"\"\")\n\n aspect_ratio = Either(Enum(\"auto\"), Float, default=None, help=\"\"\"\n Describes the proportional relationship between component's width and height.\n\n This works if any of component's dimensions are flexible in size. If set to\n a number, ``width / height = aspect_ratio`` relationship will be maintained.\n Otherwise, if set to ``\"auto\"``, component's preferred width and height will\n be used to determine the aspect (if not set, no aspect will be preserved).\n\n \"\"\")\n\n sizing_mode = Enum(SizingMode, default=None, help=\"\"\"\n How the component should size itself.\n\n This is a high-level setting for maintaining width and height of the component. To\n gain more fine grained control over sizing, use ``width_policy``, ``height_policy``\n and ``aspect_ratio`` instead (those take precedence over ``sizing_mode``).\n\n Possible scenarios:\n\n ``\"fixed\"``\n Component is not responsive. It will retain its original width and height\n regardless of any subsequent browser window resize events.\n\n ``\"stretch_width\"``\n Component will responsively resize to stretch to the available width, without\n maintaining any aspect ratio. The height of the component depends on the type\n of the component and may be fixed or fit to component's contents.\n\n ``\"stretch_height\"``\n Component will responsively resize to stretch to the available height, without\n maintaining any aspect ratio. The width of the component depends on the type\n of the component and may be fixed or fit to component's contents.\n\n ``\"stretch_both\"``\n Component is completely responsive, independently in width and height, and\n will occupy all the available horizontal and vertical space, even if this\n changes the aspect ratio of the component.\n\n ``\"scale_width\"``\n Component will responsively resize to stretch to the available width, while\n maintaining the original or provided aspect ratio.\n\n ``\"scale_height\"``\n Component will responsively resize to stretch to the available height, while\n maintaining the original or provided aspect ratio.\n\n ``\"scale_both\"``\n Component will responsively resize to both the available width and height, while\n maintaining the original or provided aspect ratio.\n\n \"\"\")\n\n align = Either(Enum(Align), Tuple(Enum(Align), Enum(Align)), default=\"start\", help=\"\"\"\n The alignment point within the parent container.\n\n This property is useful only if this component is a child element of a layout\n (e.g. a grid). Self alignment can be overridden by the parent container (e.g.\n grid track align).\n \"\"\")\n\n background = Color(default=None, help=\"\"\"\n Background color of the component.\n \"\"\")\n\n # List in order for in-place changes to trigger changes, ref: https://github.com/bokeh/bokeh/issues/6841\n css_classes = List(String, help=\"\"\"\n A list of CSS class names to add to this DOM element. 
Note: the class names are\n simply added as-is, no other guarantees are provided.\n\n It is also permissible to assign from tuples, however these are adapted -- the\n property will always contain a list.\n \"\"\").accepts(Seq(String), lambda x: list(x))\n\n @warning(FIXED_SIZING_MODE)\n def _check_fixed_sizing_mode(self):\n if self.sizing_mode == \"fixed\" and (self.width is None or self.height is None):\n return str(self)\n\n @warning(FIXED_WIDTH_POLICY)\n def _check_fixed_width_policy(self):\n if self.width_policy == \"fixed\" and self.width is None:\n return str(self)\n\n @warning(FIXED_HEIGHT_POLICY)\n def _check_fixed_height_policy(self):\n if self.height_policy == \"fixed\" and self.height is None:\n return str(self)\n\n @error(MIN_PREFERRED_MAX_WIDTH)\n def _min_preferred_max_width(self):\n min_width = self.min_width if self.min_width is not None else 0\n width = self.width if self.width is not None else min_width\n max_width = self.max_width if self.max_width is not None else width\n\n if not (min_width <= width <= max_width):\n return str(self)\n\n @error(MIN_PREFERRED_MAX_HEIGHT)\n def _min_preferred_max_height(self):\n min_height = self.min_height if self.min_height is not None else 0\n height = self.height if self.height is not None else min_height\n max_height = self.max_height if self.max_height is not None else height\n\n if not (min_height <= height <= max_height):\n return str(self)\n\n@abstract\nclass HTMLBox(LayoutDOM):\n ''' A component which size is determined by its HTML content.\n\n '''\n\nclass Spacer(LayoutDOM):\n ''' A container for space used to fill an empty spot in a row or column.\n\n '''\n\nQuickTrackSizing = Either(Enum(\"auto\", \"min\", \"fit\", \"max\"), Int)\n\nTrackAlign = Either(Auto, Enum(Align))\n\nRowSizing = Either(\n QuickTrackSizing,\n Struct(policy=Enum(\"auto\", \"min\"), align=TrackAlign),\n Struct(policy=Enum(\"fixed\"), height=Int, align=TrackAlign),\n Struct(policy=Enum(\"fit\", \"max\"), flex=Float, align=TrackAlign))\n\nColSizing = Either(\n QuickTrackSizing,\n Struct(policy=Enum(\"auto\", \"min\"), align=TrackAlign),\n Struct(policy=Enum(\"fixed\"), width=Int, align=TrackAlign),\n Struct(policy=Enum(\"fit\", \"max\"), flex=Float, align=TrackAlign))\n\nIntOrString = Either(Int, String) # XXX: work around issue #8166\n\nclass GridBox(LayoutDOM):\n\n children = List(Either(\n Tuple(Instance(LayoutDOM), Int, Int),\n Tuple(Instance(LayoutDOM), Int, Int, Int, Int)), default=[], help=\"\"\"\n A list of children with their associated position in the grid (row, column).\n \"\"\")\n\n rows = Either(QuickTrackSizing, Dict(IntOrString, RowSizing), default=\"auto\", help=\"\"\"\n Describes how the grid should maintain its rows' heights.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion.\n\n \"\"\")\n\n cols = Either(QuickTrackSizing, Dict(IntOrString, ColSizing), default=\"auto\", help=\"\"\"\n Describes how the grid should maintain its columns' widths.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion.\n\n \"\"\")\n\n spacing = Either(Int, Tuple(Int, Int), default=0, help=\"\"\"\n The gap between children (in pixels).\n\n Either a number, if spacing is the same for both dimensions, or a pair\n of numbers indicating spacing in the vertical and horizontal dimensions\n respectively.\n \"\"\")\n\n@abstract\nclass Box(LayoutDOM):\n ''' Abstract base class for Row and Column. 
Do not use directly.\n\n '''\n\n def __init__(self, *args, **kwargs):\n\n if len(args) > 0 and \"children\" in kwargs:\n raise ValueError(\"'children' keyword cannot be used with positional arguments\")\n elif len(args) > 0:\n kwargs[\"children\"] = list(args)\n\n super(Box, self).__init__(**kwargs)\n\n @warning(EMPTY_LAYOUT)\n def _check_empty_layout(self):\n from itertools import chain\n if not list(chain(self.children)):\n return str(self)\n\n @warning(BOTH_CHILD_AND_ROOT)\n def _check_child_is_also_root(self):\n problems = []\n for c in self.children:\n if c.document is not None and c in c.document.roots:\n problems.append(str(c))\n if problems:\n return \", \".join(problems)\n else:\n return None\n\n children = List(Instance(LayoutDOM), help=\"\"\"\n The list of children, which can be other components including plots, rows, columns, and widgets.\n \"\"\")\n\n spacing = Int(default=0, help=\"\"\"\n The gap between children (in pixels).\n \"\"\")\n\n\nclass Row(Box):\n ''' Lay out child components in a single horizontal row.\n\n Children can be specified as positional arguments, as a single argument\n that is a sequence, or using the ``children`` keyword argument.\n '''\n\n cols = Either(QuickTrackSizing, Dict(IntOrString, ColSizing), default=\"auto\", help=\"\"\"\n Describes how the component should maintain its columns' widths.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion.\n\n \"\"\")\n\nclass Column(Box):\n ''' Lay out child components in a single vertical row.\n\n Children can be specified as positional arguments, as a single argument\n that is a sequence, or using the ``children`` keyword argument.\n '''\n\n rows = Either(QuickTrackSizing, Dict(IntOrString, RowSizing), default=\"auto\", help=\"\"\"\n Describes how the component should maintain its rows' heights.\n\n .. note::\n This is an experimental feature and may change in future. Use it at your\n own discretion.\n\n \"\"\")\n\nclass WidgetBox(Column):\n ''' Create a column of bokeh widgets with predefined styling.\n\n '''\n\nclass Panel(Model):\n ''' A single-widget container with title bar and controls.\n\n '''\n\n title = String(default=\"\", help=\"\"\"\n The text title of the panel.\n \"\"\")\n\n child = Instance(LayoutDOM, help=\"\"\"\n The child widget. If you need more children, use a layout widget, e.g. a ``Column``.\n \"\"\")\n\n closable = Bool(False, help=\"\"\"\n Whether this panel is closable or not. If True, an \"x\" button will appear.\n\n Closing a panel is equivalent to removing it from its parent container (e.g. 
tabs).\n \"\"\")\n\nclass Tabs(LayoutDOM):\n ''' A panel widget with navigation tabs.\n\n '''\n\n __example__ = \"sphinx/source/docs/user_guide/examples/interaction_tab_panes.py\"\n\n tabs = List(Instance(Panel), help=\"\"\"\n The list of child panel widgets.\n \"\"\").accepts(List(Tuple(String, Instance(LayoutDOM))),\n lambda items: [ Panel(title=title, child=child) for (title, child) in items ])\n\n tabs_location = Enum(Location, default=\"above\", help=\"\"\"\n The location of the buttons that activate tabs.\n \"\"\")\n\n active = Int(0, help=\"\"\"\n The index of the active tab.\n \"\"\")\n\n callback = Instance(Callback, help=\"\"\"\n A callback to run in the browser whenever the button is activated.\n \"\"\")\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/models/layouts.py"}]} |
gh_patches_debug_62 | rasdani/github-patches | git_diff | chainer__chainer-2992 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Install bug: Mock required for gradient_check
#2972 Install bug
Chainer installed with `pip install chainer`
`from chainer import gradient_check` fails due to unable to find mock to import
Fixed by `conda install mock`
`gradient_check` is included in the block declarations in the tutorial, so it should either be removed from there or mock should be added to default install so that people doing the tutorial do not get an error during the import commands.
```
from chainer import gradient_check
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-0ba4708b632d> in <module>()
1 import numpy as np
2 import chainer
----> 3 from chainer import gradient_check
4 from chainer import datasets, iterators, optimizers, serializers
5 from chainer import Link, Chain, ChainList
/home/crissman/.pyenv/versions/anaconda3-4.2.0/lib/python3.5/site-packages/chainer/gradient_check.py in <module>()
7 from chainer import cuda
8 from chainer.functions.math import identity
----> 9 from chainer import testing
10 from chainer import variable
11
/home/crissman/.pyenv/versions/anaconda3-4.2.0/lib/python3.5/site-packages/chainer/testing/__init__.py in <module>()
5 from chainer.testing import parameterized # NOQA
6 from chainer.testing import serializer # NOQA
----> 7 from chainer.testing import training # NOQA
8 from chainer.testing import unary_math_function_test # NOQA
9
/home/crissman/.pyenv/versions/anaconda3-4.2.0/lib/python3.5/site-packages/chainer/testing/training.py in <module>()
1 from __future__ import division
2
----> 3 import mock
4
5 from chainer import training
ImportError: No module named 'mock'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 import os
4 import pkg_resources
5 import sys
6
7 from setuptools import setup
8
9
10 if sys.version_info[:3] == (3, 5, 0):
11 if not int(os.getenv('CHAINER_PYTHON_350_FORCE', '0')):
12 msg = """
13 Chainer does not work with Python 3.5.0.
14
15 We strongly recommend to use another version of Python.
16 If you want to use Chainer with Python 3.5.0 at your own risk,
17 set CHAINER_PYTHON_350_FORCE environment variable to 1."""
18 print(msg)
19 sys.exit(1)
20
21
22 setup_requires = []
23 install_requires = [
24 'filelock',
25 'nose',
26 'numpy>=1.9.0',
27 'protobuf>=2.6.0',
28 'six>=1.9.0',
29 ]
30 cupy_require = 'cupy==2.0.0a1'
31
32 cupy_pkg = None
33 try:
34 cupy_pkg = pkg_resources.get_distribution('cupy')
35 except pkg_resources.DistributionNotFound:
36 pass
37
38 if cupy_pkg is not None:
39 install_requires.append(cupy_require)
40 print('Use %s' % cupy_require)
41
42 setup(
43 name='chainer',
44 version='3.0.0a1',
45 description='A flexible framework of neural networks',
46 author='Seiya Tokui',
47 author_email='[email protected]',
48 url='https://chainer.org/',
49 license='MIT License',
50 packages=['chainer',
51 'chainer.dataset',
52 'chainer.datasets',
53 'chainer.functions',
54 'chainer.functions.activation',
55 'chainer.functions.array',
56 'chainer.functions.connection',
57 'chainer.functions.evaluation',
58 'chainer.functions.loss',
59 'chainer.functions.math',
60 'chainer.functions.noise',
61 'chainer.functions.normalization',
62 'chainer.functions.pooling',
63 'chainer.functions.theano',
64 'chainer.functions.util',
65 'chainer.function_hooks',
66 'chainer.iterators',
67 'chainer.initializers',
68 'chainer.links',
69 'chainer.links.activation',
70 'chainer.links.caffe',
71 'chainer.links.caffe.protobuf2',
72 'chainer.links.caffe.protobuf3',
73 'chainer.links.connection',
74 'chainer.links.loss',
75 'chainer.links.model',
76 'chainer.links.model.vision',
77 'chainer.links.normalization',
78 'chainer.links.theano',
79 'chainer.optimizers',
80 'chainer.serializers',
81 'chainer.testing',
82 'chainer.training',
83 'chainer.training.extensions',
84 'chainer.training.triggers',
85 'chainer.training.updaters',
86 'chainer.utils'],
87 zip_safe=False,
88 setup_requires=setup_requires,
89 install_requires=install_requires,
90 tests_require=['mock',
91 'nose'],
92 )
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -22,6 +22,7 @@
setup_requires = []
install_requires = [
'filelock',
+ 'mock',
'nose',
'numpy>=1.9.0',
'protobuf>=2.6.0',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -22,6 +22,7 @@\n setup_requires = []\n install_requires = [\n 'filelock',\n+ 'mock',\n 'nose',\n 'numpy>=1.9.0',\n 'protobuf>=2.6.0',\n", "issue": "Install bug: Mock required for gradient_check\n#2972 Install bug\r\n\r\nChainer installed with `pip install chainer`\r\n`from chainer import gradient_check` fails due to unable to find mock to import\r\nFixed by `conda install mock`\r\n\r\n`gradient_check` is included in the block declarations in the tutorial, so it should either be removed from there or mock should be added to default install so that people doing the tutorial do not get an error during the import commands.\r\n\r\n```\r\nfrom chainer import gradient_check\r\n\r\n---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\n<ipython-input-1-0ba4708b632d> in <module>()\r\n 1 import numpy as np\r\n 2 import chainer\r\n----> 3 from chainer import gradient_check\r\n 4 from chainer import datasets, iterators, optimizers, serializers\r\n 5 from chainer import Link, Chain, ChainList\r\n\r\n/home/crissman/.pyenv/versions/anaconda3-4.2.0/lib/python3.5/site-packages/chainer/gradient_check.py in <module>()\r\n 7 from chainer import cuda\r\n 8 from chainer.functions.math import identity\r\n----> 9 from chainer import testing\r\n 10 from chainer import variable\r\n 11 \r\n\r\n/home/crissman/.pyenv/versions/anaconda3-4.2.0/lib/python3.5/site-packages/chainer/testing/__init__.py in <module>()\r\n 5 from chainer.testing import parameterized # NOQA\r\n 6 from chainer.testing import serializer # NOQA\r\n----> 7 from chainer.testing import training # NOQA\r\n 8 from chainer.testing import unary_math_function_test # NOQA\r\n 9 \r\n\r\n/home/crissman/.pyenv/versions/anaconda3-4.2.0/lib/python3.5/site-packages/chainer/testing/training.py in <module>()\r\n 1 from __future__ import division\r\n 2 \r\n----> 3 import mock\r\n 4 \r\n 5 from chainer import training\r\n\r\nImportError: No module named 'mock'\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport pkg_resources\nimport sys\n\nfrom setuptools import setup\n\n\nif sys.version_info[:3] == (3, 5, 0):\n if not int(os.getenv('CHAINER_PYTHON_350_FORCE', '0')):\n msg = \"\"\"\nChainer does not work with Python 3.5.0.\n\nWe strongly recommend to use another version of Python.\nIf you want to use Chainer with Python 3.5.0 at your own risk,\nset CHAINER_PYTHON_350_FORCE environment variable to 1.\"\"\"\n print(msg)\n sys.exit(1)\n\n\nsetup_requires = []\ninstall_requires = [\n 'filelock',\n 'nose',\n 'numpy>=1.9.0',\n 'protobuf>=2.6.0',\n 'six>=1.9.0',\n]\ncupy_require = 'cupy==2.0.0a1'\n\ncupy_pkg = None\ntry:\n cupy_pkg = pkg_resources.get_distribution('cupy')\nexcept pkg_resources.DistributionNotFound:\n pass\n\nif cupy_pkg is not None:\n install_requires.append(cupy_require)\n print('Use %s' % cupy_require)\n\nsetup(\n name='chainer',\n version='3.0.0a1',\n description='A flexible framework of neural networks',\n author='Seiya Tokui',\n author_email='[email protected]',\n url='https://chainer.org/',\n license='MIT License',\n packages=['chainer',\n 'chainer.dataset',\n 'chainer.datasets',\n 'chainer.functions',\n 'chainer.functions.activation',\n 'chainer.functions.array',\n 'chainer.functions.connection',\n 'chainer.functions.evaluation',\n 'chainer.functions.loss',\n 'chainer.functions.math',\n 'chainer.functions.noise',\n 'chainer.functions.normalization',\n 
'chainer.functions.pooling',\n 'chainer.functions.theano',\n 'chainer.functions.util',\n 'chainer.function_hooks',\n 'chainer.iterators',\n 'chainer.initializers',\n 'chainer.links',\n 'chainer.links.activation',\n 'chainer.links.caffe',\n 'chainer.links.caffe.protobuf2',\n 'chainer.links.caffe.protobuf3',\n 'chainer.links.connection',\n 'chainer.links.loss',\n 'chainer.links.model',\n 'chainer.links.model.vision',\n 'chainer.links.normalization',\n 'chainer.links.theano',\n 'chainer.optimizers',\n 'chainer.serializers',\n 'chainer.testing',\n 'chainer.training',\n 'chainer.training.extensions',\n 'chainer.training.triggers',\n 'chainer.training.updaters',\n 'chainer.utils'],\n zip_safe=False,\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=['mock',\n 'nose'],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport pkg_resources\nimport sys\n\nfrom setuptools import setup\n\n\nif sys.version_info[:3] == (3, 5, 0):\n if not int(os.getenv('CHAINER_PYTHON_350_FORCE', '0')):\n msg = \"\"\"\nChainer does not work with Python 3.5.0.\n\nWe strongly recommend to use another version of Python.\nIf you want to use Chainer with Python 3.5.0 at your own risk,\nset CHAINER_PYTHON_350_FORCE environment variable to 1.\"\"\"\n print(msg)\n sys.exit(1)\n\n\nsetup_requires = []\ninstall_requires = [\n 'filelock',\n 'mock',\n 'nose',\n 'numpy>=1.9.0',\n 'protobuf>=2.6.0',\n 'six>=1.9.0',\n]\ncupy_require = 'cupy==2.0.0a1'\n\ncupy_pkg = None\ntry:\n cupy_pkg = pkg_resources.get_distribution('cupy')\nexcept pkg_resources.DistributionNotFound:\n pass\n\nif cupy_pkg is not None:\n install_requires.append(cupy_require)\n print('Use %s' % cupy_require)\n\nsetup(\n name='chainer',\n version='3.0.0a1',\n description='A flexible framework of neural networks',\n author='Seiya Tokui',\n author_email='[email protected]',\n url='https://chainer.org/',\n license='MIT License',\n packages=['chainer',\n 'chainer.dataset',\n 'chainer.datasets',\n 'chainer.functions',\n 'chainer.functions.activation',\n 'chainer.functions.array',\n 'chainer.functions.connection',\n 'chainer.functions.evaluation',\n 'chainer.functions.loss',\n 'chainer.functions.math',\n 'chainer.functions.noise',\n 'chainer.functions.normalization',\n 'chainer.functions.pooling',\n 'chainer.functions.theano',\n 'chainer.functions.util',\n 'chainer.function_hooks',\n 'chainer.iterators',\n 'chainer.initializers',\n 'chainer.links',\n 'chainer.links.activation',\n 'chainer.links.caffe',\n 'chainer.links.caffe.protobuf2',\n 'chainer.links.caffe.protobuf3',\n 'chainer.links.connection',\n 'chainer.links.loss',\n 'chainer.links.model',\n 'chainer.links.model.vision',\n 'chainer.links.normalization',\n 'chainer.links.theano',\n 'chainer.optimizers',\n 'chainer.serializers',\n 'chainer.testing',\n 'chainer.training',\n 'chainer.training.extensions',\n 'chainer.training.triggers',\n 'chainer.training.updaters',\n 'chainer.utils'],\n zip_safe=False,\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=['mock',\n 'nose'],\n)\n", "path": "setup.py"}]} |
gh_patches_debug_63 | rasdani/github-patches | git_diff | microsoft__torchgeo-1646 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inconsistency in Sentinel 2 band names in transforms tutorial and eurosat dataset
### Issue
In the [tutorial](https://torchgeo.readthedocs.io/en/stable/tutorials/transforms.html) they are `B1`, `B2`, etc. but in [the dataset](https://github.com/microsoft/torchgeo/blob/main/torchgeo/datasets/eurosat.py) `B01`, `B02`, etc. To avoid confusion it would be good to stick to one format. I've noticed this whilst adapting the tutorial.
### Fix
Stick to `B01`, `B02`, etc.
--- END ISSUE ---
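To make the naming convention at stake concrete, here is a minimal sketch; `normalize_band` is a hypothetical helper invented for illustration, not part of torchgeo or of the patch below.

```python
# Hypothetical illustration only: zero-pad purely numeric Sentinel-2 band labels
# (e.g. "B1" -> "B01") while leaving lettered labels such as "B8A" untouched.
def normalize_band(name: str) -> str:
    prefix, rest = name[0], name[1:]
    if rest.isdigit():
        return f"{prefix}{int(rest):02d}"
    return name

assert normalize_band("B1") == "B01"
assert normalize_band("B8A") == "B8A"
```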
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchgeo/datasets/eurosat.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 """EuroSAT dataset."""
5
6 import os
7 from collections.abc import Sequence
8 from typing import Callable, Optional, cast
9
10 import matplotlib.pyplot as plt
11 import numpy as np
12 import torch
13 from matplotlib.figure import Figure
14 from torch import Tensor
15
16 from .geo import NonGeoClassificationDataset
17 from .utils import check_integrity, download_url, extract_archive, rasterio_loader
18
19
20 class EuroSAT(NonGeoClassificationDataset):
21 """EuroSAT dataset.
22
23 The `EuroSAT <https://github.com/phelber/EuroSAT>`__ dataset is based on Sentinel-2
24 satellite images covering 13 spectral bands and consists of 10 target classes with
25 a total of 27,000 labeled and geo-referenced images.
26
27 Dataset format:
28
29 * rasters are 13-channel GeoTiffs
30 * labels are values in the range [0,9]
31
32 Dataset classes:
33
34 * Industrial Buildings
35 * Residential Buildings
36 * Annual Crop
37 * Permanent Crop
38 * River
39 * Sea and Lake
40 * Herbaceous Vegetation
41 * Highway
42 * Pasture
43 * Forest
44
45 This dataset uses the train/val/test splits defined in the "In-domain representation
46 learning for remote sensing" paper:
47
48 * https://arxiv.org/abs/1911.06721
49
50 If you use this dataset in your research, please cite the following papers:
51
52 * https://ieeexplore.ieee.org/document/8736785
53 * https://ieeexplore.ieee.org/document/8519248
54 """
55
56 url = "https://huggingface.co/datasets/torchgeo/eurosat/resolve/main/EuroSATallBands.zip" # noqa: E501
57 filename = "EuroSATallBands.zip"
58 md5 = "5ac12b3b2557aa56e1826e981e8e200e"
59
60 # For some reason the class directories are actually nested in this directory
61 base_dir = os.path.join(
62 "ds", "images", "remote_sensing", "otherDatasets", "sentinel_2", "tif"
63 )
64
65 splits = ["train", "val", "test"]
66 split_urls = {
67 "train": "https://storage.googleapis.com/remote_sensing_representations/eurosat-train.txt", # noqa: E501
68 "val": "https://storage.googleapis.com/remote_sensing_representations/eurosat-val.txt", # noqa: E501
69 "test": "https://storage.googleapis.com/remote_sensing_representations/eurosat-test.txt", # noqa: E501
70 }
71 split_md5s = {
72 "train": "908f142e73d6acdf3f482c5e80d851b1",
73 "val": "95de90f2aa998f70a3b2416bfe0687b4",
74 "test": "7ae5ab94471417b6e315763121e67c5f",
75 }
76 classes = [
77 "Industrial Buildings",
78 "Residential Buildings",
79 "Annual Crop",
80 "Permanent Crop",
81 "River",
82 "Sea and Lake",
83 "Herbaceous Vegetation",
84 "Highway",
85 "Pasture",
86 "Forest",
87 ]
88
89 all_band_names = (
90 "B01",
91 "B02",
92 "B03",
93 "B04",
94 "B05",
95 "B06",
96 "B07",
97 "B08",
98 "B08A",
99 "B09",
100 "B10",
101 "B11",
102 "B12",
103 )
104
105 rgb_bands = ("B04", "B03", "B02")
106
107 BAND_SETS = {"all": all_band_names, "rgb": rgb_bands}
108
109 def __init__(
110 self,
111 root: str = "data",
112 split: str = "train",
113 bands: Sequence[str] = BAND_SETS["all"],
114 transforms: Optional[Callable[[dict[str, Tensor]], dict[str, Tensor]]] = None,
115 download: bool = False,
116 checksum: bool = False,
117 ) -> None:
118 """Initialize a new EuroSAT dataset instance.
119
120 Args:
121 root: root directory where dataset can be found
122 split: one of "train", "val", or "test"
123 bands: a sequence of band names to load
124 transforms: a function/transform that takes input sample and its target as
125 entry and returns a transformed version
126 download: if True, download dataset and store it in the root directory
127 checksum: if True, check the MD5 of the downloaded files (may be slow)
128
129 Raises:
130 AssertionError: if ``split`` argument is invalid
131 RuntimeError: if ``download=False`` and data is not found, or checksums
132 don't match
133
134 .. versionadded:: 0.3
135 The *bands* parameter.
136 """
137 self.root = root
138 self.transforms = transforms
139 self.download = download
140 self.checksum = checksum
141
142 assert split in ["train", "val", "test"]
143
144 self._validate_bands(bands)
145 self.bands = bands
146 self.band_indices = Tensor(
147 [self.all_band_names.index(b) for b in bands if b in self.all_band_names]
148 ).long()
149
150 self._verify()
151
152 valid_fns = set()
153 with open(os.path.join(self.root, f"eurosat-{split}.txt")) as f:
154 for fn in f:
155 valid_fns.add(fn.strip().replace(".jpg", ".tif"))
156 is_in_split: Callable[[str], bool] = lambda x: os.path.basename(x) in valid_fns
157
158 super().__init__(
159 root=os.path.join(root, self.base_dir),
160 transforms=transforms,
161 loader=rasterio_loader,
162 is_valid_file=is_in_split,
163 )
164
165 def __getitem__(self, index: int) -> dict[str, Tensor]:
166 """Return an index within the dataset.
167
168 Args:
169 index: index to return
170 Returns:
171 data and label at that index
172 """
173 image, label = self._load_image(index)
174
175 image = torch.index_select(image, dim=0, index=self.band_indices).float()
176 sample = {"image": image, "label": label}
177
178 if self.transforms is not None:
179 sample = self.transforms(sample)
180
181 return sample
182
183 def _check_integrity(self) -> bool:
184 """Check integrity of dataset.
185
186 Returns:
187 True if dataset files are found and/or MD5s match, else False
188 """
189 integrity: bool = check_integrity(
190 os.path.join(self.root, self.filename), self.md5 if self.checksum else None
191 )
192 return integrity
193
194 def _verify(self) -> None:
195 """Verify the integrity of the dataset.
196
197 Raises:
198 RuntimeError: if ``download=False`` but dataset is missing or checksum fails
199 """
200 # Check if the files already exist
201 filepath = os.path.join(self.root, self.base_dir)
202 if os.path.exists(filepath):
203 return
204
205 # Check if zip file already exists (if so then extract)
206 if self._check_integrity():
207 self._extract()
208 return
209
210 # Check if the user requested to download the dataset
211 if not self.download:
212 raise RuntimeError(
213 "Dataset not found in `root` directory and `download=False`, "
214 "either specify a different `root` directory or use `download=True` "
215 "to automatically download the dataset."
216 )
217
218 # Download and extract the dataset
219 self._download()
220 self._extract()
221
222 def _download(self) -> None:
223 """Download the dataset."""
224 download_url(
225 self.url,
226 self.root,
227 filename=self.filename,
228 md5=self.md5 if self.checksum else None,
229 )
230 for split in self.splits:
231 download_url(
232 self.split_urls[split],
233 self.root,
234 filename=f"eurosat-{split}.txt",
235 md5=self.split_md5s[split] if self.checksum else None,
236 )
237
238 def _extract(self) -> None:
239 """Extract the dataset."""
240 filepath = os.path.join(self.root, self.filename)
241 extract_archive(filepath)
242
243 def _validate_bands(self, bands: Sequence[str]) -> None:
244 """Validate list of bands.
245
246 Args:
247 bands: user-provided sequence of bands to load
248
249 Raises:
250 AssertionError: if ``bands`` is not a sequence
251 ValueError: if an invalid band name is provided
252
253 .. versionadded:: 0.3
254 """
255 assert isinstance(bands, Sequence), "'bands' must be a sequence"
256 for band in bands:
257 if band not in self.all_band_names:
258 raise ValueError(f"'{band}' is an invalid band name.")
259
260 def plot(
261 self,
262 sample: dict[str, Tensor],
263 show_titles: bool = True,
264 suptitle: Optional[str] = None,
265 ) -> Figure:
266 """Plot a sample from the dataset.
267
268 Args:
269 sample: a sample returned by :meth:`NonGeoClassificationDataset.__getitem__`
270 show_titles: flag indicating whether to show titles above each panel
271 suptitle: optional string to use as a suptitle
272
273 Returns:
274 a matplotlib Figure with the rendered sample
275
276 Raises:
277 ValueError: if RGB bands are not found in dataset
278
279 .. versionadded:: 0.2
280 """
281 rgb_indices = []
282 for band in self.rgb_bands:
283 if band in self.bands:
284 rgb_indices.append(self.bands.index(band))
285 else:
286 raise ValueError("Dataset doesn't contain some of the RGB bands")
287
288 image = np.take(sample["image"].numpy(), indices=rgb_indices, axis=0)
289 image = np.rollaxis(image, 0, 3)
290 image = np.clip(image / 3000, 0, 1)
291
292 label = cast(int, sample["label"].item())
293 label_class = self.classes[label]
294
295 showing_predictions = "prediction" in sample
296 if showing_predictions:
297 prediction = cast(int, sample["prediction"].item())
298 prediction_class = self.classes[prediction]
299
300 fig, ax = plt.subplots(figsize=(4, 4))
301 ax.imshow(image)
302 ax.axis("off")
303 if show_titles:
304 title = f"Label: {label_class}"
305 if showing_predictions:
306 title += f"\nPrediction: {prediction_class}"
307 ax.set_title(title)
308
309 if suptitle is not None:
310 plt.suptitle(suptitle)
311 return fig
312
313
314 class EuroSAT100(EuroSAT):
315 """Subset of EuroSAT containing only 100 images.
316
317 Intended for tutorials and demonstrations, not for benchmarking.
318
319 Maintains the same file structure, classes, and train-val-test split. Each class has
320 10 images (6 train, 2 val, 2 test), for a total of 100 images.
321
322 .. versionadded:: 0.5
323 """
324
325 url = "https://huggingface.co/datasets/torchgeo/eurosat/resolve/main/EuroSAT100.zip"
326 filename = "EuroSAT100.zip"
327 md5 = "c21c649ba747e86eda813407ef17d596"
328
329 split_urls = {
330 "train": "https://huggingface.co/datasets/torchgeo/eurosat/raw/main/eurosat-train.txt", # noqa: E501
331 "val": "https://huggingface.co/datasets/torchgeo/eurosat/raw/main/eurosat-val.txt", # noqa: E501
332 "test": "https://huggingface.co/datasets/torchgeo/eurosat/raw/main/eurosat-test.txt", # noqa: E501
333 }
334 split_md5s = {
335 "train": "033d0c23e3a75e3fa79618b0e35fe1c7",
336 "val": "3e3f8b3c344182b8d126c4cc88f3f215",
337 "test": "f908f151b950f270ad18e61153579794",
338 }
339
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchgeo/datasets/eurosat.py b/torchgeo/datasets/eurosat.py
--- a/torchgeo/datasets/eurosat.py
+++ b/torchgeo/datasets/eurosat.py
@@ -95,7 +95,7 @@
"B06",
"B07",
"B08",
- "B08A",
+ "B8A",
"B09",
"B10",
"B11",
| {"golden_diff": "diff --git a/torchgeo/datasets/eurosat.py b/torchgeo/datasets/eurosat.py\n--- a/torchgeo/datasets/eurosat.py\n+++ b/torchgeo/datasets/eurosat.py\n@@ -95,7 +95,7 @@\n \"B06\",\n \"B07\",\n \"B08\",\n- \"B08A\",\n+ \"B8A\",\n \"B09\",\n \"B10\",\n \"B11\",\n", "issue": "Inconsistency in Sentinel 2 band names in transforms tutorial and eurosat dataset\n### Issue\r\n\r\nIn the [tutorial](https://torchgeo.readthedocs.io/en/stable/tutorials/transforms.html) they are `B1`, `B2` etc but in [the dataset](https://github.com/microsoft/torchgeo/blob/main/torchgeo/datasets/eurosat.py) `B01`, `B02` etc. To avoid confusion it would be good to stick to one format. I've noticed this whilst adapting the tutorial\r\n\r\n### Fix\r\n\r\nSitck to `B01`, `B02` etc\nInconsistency in Sentinel 2 band names in transforms tutorial and eurosat dataset\n### Issue\r\n\r\nIn the [tutorial](https://torchgeo.readthedocs.io/en/stable/tutorials/transforms.html) they are `B1`, `B2` etc but in [the dataset](https://github.com/microsoft/torchgeo/blob/main/torchgeo/datasets/eurosat.py) `B01`, `B02` etc. To avoid confusion it would be good to stick to one format. I've noticed this whilst adapting the tutorial\r\n\r\n### Fix\r\n\r\nSitck to `B01`, `B02` etc\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"EuroSAT dataset.\"\"\"\n\nimport os\nfrom collections.abc import Sequence\nfrom typing import Callable, Optional, cast\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nfrom matplotlib.figure import Figure\nfrom torch import Tensor\n\nfrom .geo import NonGeoClassificationDataset\nfrom .utils import check_integrity, download_url, extract_archive, rasterio_loader\n\n\nclass EuroSAT(NonGeoClassificationDataset):\n \"\"\"EuroSAT dataset.\n\n The `EuroSAT <https://github.com/phelber/EuroSAT>`__ dataset is based on Sentinel-2\n satellite images covering 13 spectral bands and consists of 10 target classes with\n a total of 27,000 labeled and geo-referenced images.\n\n Dataset format:\n\n * rasters are 13-channel GeoTiffs\n * labels are values in the range [0,9]\n\n Dataset classes:\n\n * Industrial Buildings\n * Residential Buildings\n * Annual Crop\n * Permanent Crop\n * River\n * Sea and Lake\n * Herbaceous Vegetation\n * Highway\n * Pasture\n * Forest\n\n This dataset uses the train/val/test splits defined in the \"In-domain representation\n learning for remote sensing\" paper:\n\n * https://arxiv.org/abs/1911.06721\n\n If you use this dataset in your research, please cite the following papers:\n\n * https://ieeexplore.ieee.org/document/8736785\n * https://ieeexplore.ieee.org/document/8519248\n \"\"\"\n\n url = \"https://huggingface.co/datasets/torchgeo/eurosat/resolve/main/EuroSATallBands.zip\" # noqa: E501\n filename = \"EuroSATallBands.zip\"\n md5 = \"5ac12b3b2557aa56e1826e981e8e200e\"\n\n # For some reason the class directories are actually nested in this directory\n base_dir = os.path.join(\n \"ds\", \"images\", \"remote_sensing\", \"otherDatasets\", \"sentinel_2\", \"tif\"\n )\n\n splits = [\"train\", \"val\", \"test\"]\n split_urls = {\n \"train\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-train.txt\", # noqa: E501\n \"val\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-val.txt\", # noqa: E501\n \"test\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-test.txt\", # noqa: E501\n }\n split_md5s = {\n \"train\": 
\"908f142e73d6acdf3f482c5e80d851b1\",\n \"val\": \"95de90f2aa998f70a3b2416bfe0687b4\",\n \"test\": \"7ae5ab94471417b6e315763121e67c5f\",\n }\n classes = [\n \"Industrial Buildings\",\n \"Residential Buildings\",\n \"Annual Crop\",\n \"Permanent Crop\",\n \"River\",\n \"Sea and Lake\",\n \"Herbaceous Vegetation\",\n \"Highway\",\n \"Pasture\",\n \"Forest\",\n ]\n\n all_band_names = (\n \"B01\",\n \"B02\",\n \"B03\",\n \"B04\",\n \"B05\",\n \"B06\",\n \"B07\",\n \"B08\",\n \"B08A\",\n \"B09\",\n \"B10\",\n \"B11\",\n \"B12\",\n )\n\n rgb_bands = (\"B04\", \"B03\", \"B02\")\n\n BAND_SETS = {\"all\": all_band_names, \"rgb\": rgb_bands}\n\n def __init__(\n self,\n root: str = \"data\",\n split: str = \"train\",\n bands: Sequence[str] = BAND_SETS[\"all\"],\n transforms: Optional[Callable[[dict[str, Tensor]], dict[str, Tensor]]] = None,\n download: bool = False,\n checksum: bool = False,\n ) -> None:\n \"\"\"Initialize a new EuroSAT dataset instance.\n\n Args:\n root: root directory where dataset can be found\n split: one of \"train\", \"val\", or \"test\"\n bands: a sequence of band names to load\n transforms: a function/transform that takes input sample and its target as\n entry and returns a transformed version\n download: if True, download dataset and store it in the root directory\n checksum: if True, check the MD5 of the downloaded files (may be slow)\n\n Raises:\n AssertionError: if ``split`` argument is invalid\n RuntimeError: if ``download=False`` and data is not found, or checksums\n don't match\n\n .. versionadded:: 0.3\n The *bands* parameter.\n \"\"\"\n self.root = root\n self.transforms = transforms\n self.download = download\n self.checksum = checksum\n\n assert split in [\"train\", \"val\", \"test\"]\n\n self._validate_bands(bands)\n self.bands = bands\n self.band_indices = Tensor(\n [self.all_band_names.index(b) for b in bands if b in self.all_band_names]\n ).long()\n\n self._verify()\n\n valid_fns = set()\n with open(os.path.join(self.root, f\"eurosat-{split}.txt\")) as f:\n for fn in f:\n valid_fns.add(fn.strip().replace(\".jpg\", \".tif\"))\n is_in_split: Callable[[str], bool] = lambda x: os.path.basename(x) in valid_fns\n\n super().__init__(\n root=os.path.join(root, self.base_dir),\n transforms=transforms,\n loader=rasterio_loader,\n is_valid_file=is_in_split,\n )\n\n def __getitem__(self, index: int) -> dict[str, Tensor]:\n \"\"\"Return an index within the dataset.\n\n Args:\n index: index to return\n Returns:\n data and label at that index\n \"\"\"\n image, label = self._load_image(index)\n\n image = torch.index_select(image, dim=0, index=self.band_indices).float()\n sample = {\"image\": image, \"label\": label}\n\n if self.transforms is not None:\n sample = self.transforms(sample)\n\n return sample\n\n def _check_integrity(self) -> bool:\n \"\"\"Check integrity of dataset.\n\n Returns:\n True if dataset files are found and/or MD5s match, else False\n \"\"\"\n integrity: bool = check_integrity(\n os.path.join(self.root, self.filename), self.md5 if self.checksum else None\n )\n return integrity\n\n def _verify(self) -> None:\n \"\"\"Verify the integrity of the dataset.\n\n Raises:\n RuntimeError: if ``download=False`` but dataset is missing or checksum fails\n \"\"\"\n # Check if the files already exist\n filepath = os.path.join(self.root, self.base_dir)\n if os.path.exists(filepath):\n return\n\n # Check if zip file already exists (if so then extract)\n if self._check_integrity():\n self._extract()\n return\n\n # Check if the user requested to download the dataset\n if not 
self.download:\n raise RuntimeError(\n \"Dataset not found in `root` directory and `download=False`, \"\n \"either specify a different `root` directory or use `download=True` \"\n \"to automatically download the dataset.\"\n )\n\n # Download and extract the dataset\n self._download()\n self._extract()\n\n def _download(self) -> None:\n \"\"\"Download the dataset.\"\"\"\n download_url(\n self.url,\n self.root,\n filename=self.filename,\n md5=self.md5 if self.checksum else None,\n )\n for split in self.splits:\n download_url(\n self.split_urls[split],\n self.root,\n filename=f\"eurosat-{split}.txt\",\n md5=self.split_md5s[split] if self.checksum else None,\n )\n\n def _extract(self) -> None:\n \"\"\"Extract the dataset.\"\"\"\n filepath = os.path.join(self.root, self.filename)\n extract_archive(filepath)\n\n def _validate_bands(self, bands: Sequence[str]) -> None:\n \"\"\"Validate list of bands.\n\n Args:\n bands: user-provided sequence of bands to load\n\n Raises:\n AssertionError: if ``bands`` is not a sequence\n ValueError: if an invalid band name is provided\n\n .. versionadded:: 0.3\n \"\"\"\n assert isinstance(bands, Sequence), \"'bands' must be a sequence\"\n for band in bands:\n if band not in self.all_band_names:\n raise ValueError(f\"'{band}' is an invalid band name.\")\n\n def plot(\n self,\n sample: dict[str, Tensor],\n show_titles: bool = True,\n suptitle: Optional[str] = None,\n ) -> Figure:\n \"\"\"Plot a sample from the dataset.\n\n Args:\n sample: a sample returned by :meth:`NonGeoClassificationDataset.__getitem__`\n show_titles: flag indicating whether to show titles above each panel\n suptitle: optional string to use as a suptitle\n\n Returns:\n a matplotlib Figure with the rendered sample\n\n Raises:\n ValueError: if RGB bands are not found in dataset\n\n .. versionadded:: 0.2\n \"\"\"\n rgb_indices = []\n for band in self.rgb_bands:\n if band in self.bands:\n rgb_indices.append(self.bands.index(band))\n else:\n raise ValueError(\"Dataset doesn't contain some of the RGB bands\")\n\n image = np.take(sample[\"image\"].numpy(), indices=rgb_indices, axis=0)\n image = np.rollaxis(image, 0, 3)\n image = np.clip(image / 3000, 0, 1)\n\n label = cast(int, sample[\"label\"].item())\n label_class = self.classes[label]\n\n showing_predictions = \"prediction\" in sample\n if showing_predictions:\n prediction = cast(int, sample[\"prediction\"].item())\n prediction_class = self.classes[prediction]\n\n fig, ax = plt.subplots(figsize=(4, 4))\n ax.imshow(image)\n ax.axis(\"off\")\n if show_titles:\n title = f\"Label: {label_class}\"\n if showing_predictions:\n title += f\"\\nPrediction: {prediction_class}\"\n ax.set_title(title)\n\n if suptitle is not None:\n plt.suptitle(suptitle)\n return fig\n\n\nclass EuroSAT100(EuroSAT):\n \"\"\"Subset of EuroSAT containing only 100 images.\n\n Intended for tutorials and demonstrations, not for benchmarking.\n\n Maintains the same file structure, classes, and train-val-test split. Each class has\n 10 images (6 train, 2 val, 2 test), for a total of 100 images.\n\n .. 
versionadded:: 0.5\n \"\"\"\n\n url = \"https://huggingface.co/datasets/torchgeo/eurosat/resolve/main/EuroSAT100.zip\"\n filename = \"EuroSAT100.zip\"\n md5 = \"c21c649ba747e86eda813407ef17d596\"\n\n split_urls = {\n \"train\": \"https://huggingface.co/datasets/torchgeo/eurosat/raw/main/eurosat-train.txt\", # noqa: E501\n \"val\": \"https://huggingface.co/datasets/torchgeo/eurosat/raw/main/eurosat-val.txt\", # noqa: E501\n \"test\": \"https://huggingface.co/datasets/torchgeo/eurosat/raw/main/eurosat-test.txt\", # noqa: E501\n }\n split_md5s = {\n \"train\": \"033d0c23e3a75e3fa79618b0e35fe1c7\",\n \"val\": \"3e3f8b3c344182b8d126c4cc88f3f215\",\n \"test\": \"f908f151b950f270ad18e61153579794\",\n }\n", "path": "torchgeo/datasets/eurosat.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"EuroSAT dataset.\"\"\"\n\nimport os\nfrom collections.abc import Sequence\nfrom typing import Callable, Optional, cast\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nfrom matplotlib.figure import Figure\nfrom torch import Tensor\n\nfrom .geo import NonGeoClassificationDataset\nfrom .utils import check_integrity, download_url, extract_archive, rasterio_loader\n\n\nclass EuroSAT(NonGeoClassificationDataset):\n \"\"\"EuroSAT dataset.\n\n The `EuroSAT <https://github.com/phelber/EuroSAT>`__ dataset is based on Sentinel-2\n satellite images covering 13 spectral bands and consists of 10 target classes with\n a total of 27,000 labeled and geo-referenced images.\n\n Dataset format:\n\n * rasters are 13-channel GeoTiffs\n * labels are values in the range [0,9]\n\n Dataset classes:\n\n * Industrial Buildings\n * Residential Buildings\n * Annual Crop\n * Permanent Crop\n * River\n * Sea and Lake\n * Herbaceous Vegetation\n * Highway\n * Pasture\n * Forest\n\n This dataset uses the train/val/test splits defined in the \"In-domain representation\n learning for remote sensing\" paper:\n\n * https://arxiv.org/abs/1911.06721\n\n If you use this dataset in your research, please cite the following papers:\n\n * https://ieeexplore.ieee.org/document/8736785\n * https://ieeexplore.ieee.org/document/8519248\n \"\"\"\n\n url = \"https://huggingface.co/datasets/torchgeo/eurosat/resolve/main/EuroSATallBands.zip\" # noqa: E501\n filename = \"EuroSATallBands.zip\"\n md5 = \"5ac12b3b2557aa56e1826e981e8e200e\"\n\n # For some reason the class directories are actually nested in this directory\n base_dir = os.path.join(\n \"ds\", \"images\", \"remote_sensing\", \"otherDatasets\", \"sentinel_2\", \"tif\"\n )\n\n splits = [\"train\", \"val\", \"test\"]\n split_urls = {\n \"train\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-train.txt\", # noqa: E501\n \"val\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-val.txt\", # noqa: E501\n \"test\": \"https://storage.googleapis.com/remote_sensing_representations/eurosat-test.txt\", # noqa: E501\n }\n split_md5s = {\n \"train\": \"908f142e73d6acdf3f482c5e80d851b1\",\n \"val\": \"95de90f2aa998f70a3b2416bfe0687b4\",\n \"test\": \"7ae5ab94471417b6e315763121e67c5f\",\n }\n classes = [\n \"Industrial Buildings\",\n \"Residential Buildings\",\n \"Annual Crop\",\n \"Permanent Crop\",\n \"River\",\n \"Sea and Lake\",\n \"Herbaceous Vegetation\",\n \"Highway\",\n \"Pasture\",\n \"Forest\",\n ]\n\n all_band_names = (\n \"B01\",\n \"B02\",\n \"B03\",\n \"B04\",\n \"B05\",\n \"B06\",\n \"B07\",\n \"B08\",\n \"B8A\",\n \"B09\",\n \"B10\",\n \"B11\",\n 
\"B12\",\n )\n\n rgb_bands = (\"B04\", \"B03\", \"B02\")\n\n BAND_SETS = {\"all\": all_band_names, \"rgb\": rgb_bands}\n\n def __init__(\n self,\n root: str = \"data\",\n split: str = \"train\",\n bands: Sequence[str] = BAND_SETS[\"all\"],\n transforms: Optional[Callable[[dict[str, Tensor]], dict[str, Tensor]]] = None,\n download: bool = False,\n checksum: bool = False,\n ) -> None:\n \"\"\"Initialize a new EuroSAT dataset instance.\n\n Args:\n root: root directory where dataset can be found\n split: one of \"train\", \"val\", or \"test\"\n bands: a sequence of band names to load\n transforms: a function/transform that takes input sample and its target as\n entry and returns a transformed version\n download: if True, download dataset and store it in the root directory\n checksum: if True, check the MD5 of the downloaded files (may be slow)\n\n Raises:\n AssertionError: if ``split`` argument is invalid\n RuntimeError: if ``download=False`` and data is not found, or checksums\n don't match\n\n .. versionadded:: 0.3\n The *bands* parameter.\n \"\"\"\n self.root = root\n self.transforms = transforms\n self.download = download\n self.checksum = checksum\n\n assert split in [\"train\", \"val\", \"test\"]\n\n self._validate_bands(bands)\n self.bands = bands\n self.band_indices = Tensor(\n [self.all_band_names.index(b) for b in bands if b in self.all_band_names]\n ).long()\n\n self._verify()\n\n valid_fns = set()\n with open(os.path.join(self.root, f\"eurosat-{split}.txt\")) as f:\n for fn in f:\n valid_fns.add(fn.strip().replace(\".jpg\", \".tif\"))\n is_in_split: Callable[[str], bool] = lambda x: os.path.basename(x) in valid_fns\n\n super().__init__(\n root=os.path.join(root, self.base_dir),\n transforms=transforms,\n loader=rasterio_loader,\n is_valid_file=is_in_split,\n )\n\n def __getitem__(self, index: int) -> dict[str, Tensor]:\n \"\"\"Return an index within the dataset.\n\n Args:\n index: index to return\n Returns:\n data and label at that index\n \"\"\"\n image, label = self._load_image(index)\n\n image = torch.index_select(image, dim=0, index=self.band_indices).float()\n sample = {\"image\": image, \"label\": label}\n\n if self.transforms is not None:\n sample = self.transforms(sample)\n\n return sample\n\n def _check_integrity(self) -> bool:\n \"\"\"Check integrity of dataset.\n\n Returns:\n True if dataset files are found and/or MD5s match, else False\n \"\"\"\n integrity: bool = check_integrity(\n os.path.join(self.root, self.filename), self.md5 if self.checksum else None\n )\n return integrity\n\n def _verify(self) -> None:\n \"\"\"Verify the integrity of the dataset.\n\n Raises:\n RuntimeError: if ``download=False`` but dataset is missing or checksum fails\n \"\"\"\n # Check if the files already exist\n filepath = os.path.join(self.root, self.base_dir)\n if os.path.exists(filepath):\n return\n\n # Check if zip file already exists (if so then extract)\n if self._check_integrity():\n self._extract()\n return\n\n # Check if the user requested to download the dataset\n if not self.download:\n raise RuntimeError(\n \"Dataset not found in `root` directory and `download=False`, \"\n \"either specify a different `root` directory or use `download=True` \"\n \"to automatically download the dataset.\"\n )\n\n # Download and extract the dataset\n self._download()\n self._extract()\n\n def _download(self) -> None:\n \"\"\"Download the dataset.\"\"\"\n download_url(\n self.url,\n self.root,\n filename=self.filename,\n md5=self.md5 if self.checksum else None,\n )\n for split in self.splits:\n 
download_url(\n self.split_urls[split],\n self.root,\n filename=f\"eurosat-{split}.txt\",\n md5=self.split_md5s[split] if self.checksum else None,\n )\n\n def _extract(self) -> None:\n \"\"\"Extract the dataset.\"\"\"\n filepath = os.path.join(self.root, self.filename)\n extract_archive(filepath)\n\n def _validate_bands(self, bands: Sequence[str]) -> None:\n \"\"\"Validate list of bands.\n\n Args:\n bands: user-provided sequence of bands to load\n\n Raises:\n AssertionError: if ``bands`` is not a sequence\n ValueError: if an invalid band name is provided\n\n .. versionadded:: 0.3\n \"\"\"\n assert isinstance(bands, Sequence), \"'bands' must be a sequence\"\n for band in bands:\n if band not in self.all_band_names:\n raise ValueError(f\"'{band}' is an invalid band name.\")\n\n def plot(\n self,\n sample: dict[str, Tensor],\n show_titles: bool = True,\n suptitle: Optional[str] = None,\n ) -> Figure:\n \"\"\"Plot a sample from the dataset.\n\n Args:\n sample: a sample returned by :meth:`NonGeoClassificationDataset.__getitem__`\n show_titles: flag indicating whether to show titles above each panel\n suptitle: optional string to use as a suptitle\n\n Returns:\n a matplotlib Figure with the rendered sample\n\n Raises:\n ValueError: if RGB bands are not found in dataset\n\n .. versionadded:: 0.2\n \"\"\"\n rgb_indices = []\n for band in self.rgb_bands:\n if band in self.bands:\n rgb_indices.append(self.bands.index(band))\n else:\n raise ValueError(\"Dataset doesn't contain some of the RGB bands\")\n\n image = np.take(sample[\"image\"].numpy(), indices=rgb_indices, axis=0)\n image = np.rollaxis(image, 0, 3)\n image = np.clip(image / 3000, 0, 1)\n\n label = cast(int, sample[\"label\"].item())\n label_class = self.classes[label]\n\n showing_predictions = \"prediction\" in sample\n if showing_predictions:\n prediction = cast(int, sample[\"prediction\"].item())\n prediction_class = self.classes[prediction]\n\n fig, ax = plt.subplots(figsize=(4, 4))\n ax.imshow(image)\n ax.axis(\"off\")\n if show_titles:\n title = f\"Label: {label_class}\"\n if showing_predictions:\n title += f\"\\nPrediction: {prediction_class}\"\n ax.set_title(title)\n\n if suptitle is not None:\n plt.suptitle(suptitle)\n return fig\n\n\nclass EuroSAT100(EuroSAT):\n \"\"\"Subset of EuroSAT containing only 100 images.\n\n Intended for tutorials and demonstrations, not for benchmarking.\n\n Maintains the same file structure, classes, and train-val-test split. Each class has\n 10 images (6 train, 2 val, 2 test), for a total of 100 images.\n\n .. versionadded:: 0.5\n \"\"\"\n\n url = \"https://huggingface.co/datasets/torchgeo/eurosat/resolve/main/EuroSAT100.zip\"\n filename = \"EuroSAT100.zip\"\n md5 = \"c21c649ba747e86eda813407ef17d596\"\n\n split_urls = {\n \"train\": \"https://huggingface.co/datasets/torchgeo/eurosat/raw/main/eurosat-train.txt\", # noqa: E501\n \"val\": \"https://huggingface.co/datasets/torchgeo/eurosat/raw/main/eurosat-val.txt\", # noqa: E501\n \"test\": \"https://huggingface.co/datasets/torchgeo/eurosat/raw/main/eurosat-test.txt\", # noqa: E501\n }\n split_md5s = {\n \"train\": \"033d0c23e3a75e3fa79618b0e35fe1c7\",\n \"val\": \"3e3f8b3c344182b8d126c4cc88f3f215\",\n \"test\": \"f908f151b950f270ad18e61153579794\",\n }\n", "path": "torchgeo/datasets/eurosat.py"}]} |
gh_patches_debug_64 | rasdani/github-patches | git_diff | streamlit__streamlit-2811 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sliders should show current value [regression] [Baseweb]
# Summary
(via @tvst: )
Our sliders right now require you to hover in order to see the selected value. This makes it really hard to understand what the user selected. I reported this before, but I just spent some time debugging my app thinking it was broken because I was reading the wrong slider value. Frustrating.
I understand this is the new behavior of sliders in Base Web, but we have alternatives:
**1. Roll back Base Web to a previous version**
This is the preferable solution in order to get this fix out ASAP. Even if we decide it's only a temporary solution.
2. Try to find a solution using the latest Base Web
3. Copy/paste the old Baseweb slider into our own repo and modify it there. Their slider is based on another library, btw (I forget which), so maybe we should just use that library directly instead?
## Is this a regression?
yes
# Debug info
- Streamlit version: 0.75-special
Allow hiding tracebacks
Currently, when a Streamlit app throws an exception, we print the traceback to the browser. This isn't necessarily the right thing to do for all apps; we should allow this to be configurable.
Maybe a `[client] showTracebacks = false` option? And presumably, if tracebacks are disabled, we should filter them at the server level, so that the client never even receives the string, in case the user is worried about leaking internal app details.
(Related discussion here: https://discuss.streamlit.io/t/dont-show-users-tracebacks/1746)
--- END ISSUE ---
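As a stop-gap for app authors hit by the slider regression described above, one common workaround is to echo the widget's return value explicitly; this is only a sketch using the public `st.slider`/`st.write` API, not the fix discussed in the issue.

```python
import streamlit as st

# Workaround sketch: print the slider's current value so it can be read
# without hovering over the handle.
threshold = st.slider("Threshold", 0.0, 1.0, 0.5)
st.write("Selected threshold:", threshold)
```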
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `e2e/scripts/st_columns.py`
Content:
```
1 # Copyright 2018-2021 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 CAT_IMAGE = "https://images.unsplash.com/photo-1552933529-e359b2477252?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=950&q=80"
18
19 if st.button("Layout should not shift when this is pressed"):
20 st.write("Pressed!")
21
22 # Same-width columns
23 c1, c2, c3 = st.beta_columns(3)
24 c1.image(CAT_IMAGE, use_column_width=True)
25 c2.image(CAT_IMAGE, use_column_width=True)
26 c3.image(CAT_IMAGE, use_column_width=True)
27
28
29 # Variable-width columns
30 for c in st.beta_columns((1, 2, 4, 8)):
31 c.image(CAT_IMAGE, use_column_width=True)
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/e2e/scripts/st_columns.py b/e2e/scripts/st_columns.py
--- a/e2e/scripts/st_columns.py
+++ b/e2e/scripts/st_columns.py
@@ -27,5 +27,5 @@
# Variable-width columns
-for c in st.beta_columns((1, 2, 4, 8)):
+for c in st.beta_columns((1, 2, 3, 4)):
c.image(CAT_IMAGE, use_column_width=True)
| {"golden_diff": "diff --git a/e2e/scripts/st_columns.py b/e2e/scripts/st_columns.py\n--- a/e2e/scripts/st_columns.py\n+++ b/e2e/scripts/st_columns.py\n@@ -27,5 +27,5 @@\n \n \n # Variable-width columns\n-for c in st.beta_columns((1, 2, 4, 8)):\n+for c in st.beta_columns((1, 2, 3, 4)):\n c.image(CAT_IMAGE, use_column_width=True)\n", "issue": "Sliders should show current value [regression] [Baseweb]\n# Summary\r\n\r\n(via @tvst: )\r\n\r\nOur sliders right now require you to hover in order to see the selected value. This makes it really hard to understand what the user selected. I reported this before, but I just spent some time debugging my app thinking it was broken because I was reading the wrong slider value. Frustrating.\r\n\r\nI understand this is the new behavior of sliders in Base Web, but we have alternatives:\r\n\r\n**1. Roll back Base Web to a previous version**\r\n\r\n This is the preferable solution in order to get this fix out ASAP. Even if we decide it's only a temporary solution.\r\n\r\n2. Try to find a solution using the latest Base Web\r\n\r\n3. Copy/paste the old Baseweb slider into our own repo and modify it there. Their slider is based on another library, btw (I forget which), so maybe we should just use that library directly instead?\r\n\r\n\r\n## Is this a regression?\r\n\r\nyes \r\n\r\n# Debug info\r\n\r\n- Streamlit version: 0.75-special\nAllow hiding tracebacks\nCurrently, when a Streamlit app throws an exception, we print the traceback to the browser. This isn't necessarily the right thing to do for all apps; we should allow this to be configurable.\r\n\r\nMaybe a `[client] showTracebacks = false` option? And presumably, if tracebacks are disabled, we should filter them at the server level, so that the client never even receives the string, in case the user is worried about leaking internal app details.\r\n\r\n(Related discussion here: https://discuss.streamlit.io/t/dont-show-users-tracebacks/1746)\n", "before_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nCAT_IMAGE = \"https://images.unsplash.com/photo-1552933529-e359b2477252?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=950&q=80\"\n\nif st.button(\"Layout should not shift when this is pressed\"):\n st.write(\"Pressed!\")\n\n# Same-width columns\nc1, c2, c3 = st.beta_columns(3)\nc1.image(CAT_IMAGE, use_column_width=True)\nc2.image(CAT_IMAGE, use_column_width=True)\nc3.image(CAT_IMAGE, use_column_width=True)\n\n\n# Variable-width columns\nfor c in st.beta_columns((1, 2, 4, 8)):\n c.image(CAT_IMAGE, use_column_width=True)\n", "path": "e2e/scripts/st_columns.py"}], "after_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to 
in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nCAT_IMAGE = \"https://images.unsplash.com/photo-1552933529-e359b2477252?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=950&q=80\"\n\nif st.button(\"Layout should not shift when this is pressed\"):\n st.write(\"Pressed!\")\n\n# Same-width columns\nc1, c2, c3 = st.beta_columns(3)\nc1.image(CAT_IMAGE, use_column_width=True)\nc2.image(CAT_IMAGE, use_column_width=True)\nc3.image(CAT_IMAGE, use_column_width=True)\n\n\n# Variable-width columns\nfor c in st.beta_columns((1, 2, 3, 4)):\n c.image(CAT_IMAGE, use_column_width=True)\n", "path": "e2e/scripts/st_columns.py"}]} |
gh_patches_debug_65 | rasdani/github-patches | git_diff | fonttools__fonttools-2472 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[feaLib] "fonttools feaLib" should error out, not continue
If there's a parse/build error when using the feaLib command line tool, we currently do this:
https://github.com/fonttools/fonttools/blob/445108f735b22a5ca37f669808d47906d024fe24/Lib/fontTools/feaLib/__main__.py#L69-L73
i.e. we save the font anyway and exit with status code 0.
My Makefiles and I think this is a terrible idea, and I would like to change it. Any objections / thoughts?
--- END ISSUE ---
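The proposed behaviour can be sketched as a small standalone wrapper; the function name `compile_features` is illustrative only, and the sketch assumes the same fontTools APIs already imported in `__main__.py`.

```python
import logging
import sys

from fontTools.feaLib.builder import addOpenTypeFeatures
from fontTools.feaLib.error import FeatureLibError
from fontTools.ttLib import TTFont

log = logging.getLogger("fontTools.feaLib")


def compile_features(fea_path: str, font_path: str, out_path: str) -> None:
    font = TTFont(font_path)
    try:
        addOpenTypeFeatures(font, fea_path)
    except FeatureLibError as e:
        log.error(e)
        sys.exit(1)  # abort with a non-zero status instead of saving anyway
    font.save(out_path)
```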
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Lib/fontTools/feaLib/__main__.py`
Content:
```
1 from fontTools.ttLib import TTFont
2 from fontTools.feaLib.builder import addOpenTypeFeatures, Builder
3 from fontTools.feaLib.error import FeatureLibError
4 from fontTools import configLogger
5 from fontTools.misc.cliTools import makeOutputFileName
6 import sys
7 import argparse
8 import logging
9
10
11 log = logging.getLogger("fontTools.feaLib")
12
13
14 def main(args=None):
15 """Add features from a feature file (.fea) into a OTF font"""
16 parser = argparse.ArgumentParser(
17 description="Use fontTools to compile OpenType feature files (*.fea)."
18 )
19 parser.add_argument(
20 "input_fea", metavar="FEATURES", help="Path to the feature file"
21 )
22 parser.add_argument(
23 "input_font", metavar="INPUT_FONT", help="Path to the input font"
24 )
25 parser.add_argument(
26 "-o",
27 "--output",
28 dest="output_font",
29 metavar="OUTPUT_FONT",
30 help="Path to the output font.",
31 )
32 parser.add_argument(
33 "-t",
34 "--tables",
35 metavar="TABLE_TAG",
36 choices=Builder.supportedTables,
37 nargs="+",
38 help="Specify the table(s) to be built.",
39 )
40 parser.add_argument(
41 "-d",
42 "--debug",
43 action="store_true",
44 help="Add source-level debugging information to font.",
45 )
46 parser.add_argument(
47 "-v",
48 "--verbose",
49 help="increase the logger verbosity. Multiple -v " "options are allowed.",
50 action="count",
51 default=0,
52 )
53 parser.add_argument(
54 "--traceback", help="show traceback for exceptions.", action="store_true"
55 )
56 options = parser.parse_args(args)
57
58 levels = ["WARNING", "INFO", "DEBUG"]
59 configLogger(level=levels[min(len(levels) - 1, options.verbose)])
60
61 output_font = options.output_font or makeOutputFileName(options.input_font)
62 log.info("Compiling features to '%s'" % (output_font))
63
64 font = TTFont(options.input_font)
65 try:
66 addOpenTypeFeatures(
67 font, options.input_fea, tables=options.tables, debug=options.debug
68 )
69 except FeatureLibError as e:
70 if options.traceback:
71 raise
72 log.error(e)
73 font.save(output_font)
74
75
76 if __name__ == "__main__":
77 sys.exit(main())
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Lib/fontTools/feaLib/__main__.py b/Lib/fontTools/feaLib/__main__.py
--- a/Lib/fontTools/feaLib/__main__.py
+++ b/Lib/fontTools/feaLib/__main__.py
@@ -70,6 +70,7 @@
if options.traceback:
raise
log.error(e)
+ sys.exit(1)
font.save(output_font)
| {"golden_diff": "diff --git a/Lib/fontTools/feaLib/__main__.py b/Lib/fontTools/feaLib/__main__.py\n--- a/Lib/fontTools/feaLib/__main__.py\n+++ b/Lib/fontTools/feaLib/__main__.py\n@@ -70,6 +70,7 @@\n if options.traceback:\n raise\n log.error(e)\n+ sys.exit(1)\n font.save(output_font)\n", "issue": "[feaLib] \"fonttools feaLib\" should error out, not continue\nIf there's a parse/build error when using the feaLib command line tool, we currently do this:\r\n\r\nhttps://github.com/fonttools/fonttools/blob/445108f735b22a5ca37f669808d47906d024fe24/Lib/fontTools/feaLib/__main__.py#L69-L73\r\n\r\ni.e. we save the font anyway and exit with status code 0.\r\n\r\nMy Makefiles and I think this is a terrible idea, and I would like to change it. Any objections / thoughts?\r\n\r\n\n", "before_files": [{"content": "from fontTools.ttLib import TTFont\nfrom fontTools.feaLib.builder import addOpenTypeFeatures, Builder\nfrom fontTools.feaLib.error import FeatureLibError\nfrom fontTools import configLogger\nfrom fontTools.misc.cliTools import makeOutputFileName\nimport sys\nimport argparse\nimport logging\n\n\nlog = logging.getLogger(\"fontTools.feaLib\")\n\n\ndef main(args=None):\n \"\"\"Add features from a feature file (.fea) into a OTF font\"\"\"\n parser = argparse.ArgumentParser(\n description=\"Use fontTools to compile OpenType feature files (*.fea).\"\n )\n parser.add_argument(\n \"input_fea\", metavar=\"FEATURES\", help=\"Path to the feature file\"\n )\n parser.add_argument(\n \"input_font\", metavar=\"INPUT_FONT\", help=\"Path to the input font\"\n )\n parser.add_argument(\n \"-o\",\n \"--output\",\n dest=\"output_font\",\n metavar=\"OUTPUT_FONT\",\n help=\"Path to the output font.\",\n )\n parser.add_argument(\n \"-t\",\n \"--tables\",\n metavar=\"TABLE_TAG\",\n choices=Builder.supportedTables,\n nargs=\"+\",\n help=\"Specify the table(s) to be built.\",\n )\n parser.add_argument(\n \"-d\",\n \"--debug\",\n action=\"store_true\",\n help=\"Add source-level debugging information to font.\",\n )\n parser.add_argument(\n \"-v\",\n \"--verbose\",\n help=\"increase the logger verbosity. 
Multiple -v \" \"options are allowed.\",\n action=\"count\",\n default=0,\n )\n parser.add_argument(\n \"--traceback\", help=\"show traceback for exceptions.\", action=\"store_true\"\n )\n options = parser.parse_args(args)\n\n levels = [\"WARNING\", \"INFO\", \"DEBUG\"]\n configLogger(level=levels[min(len(levels) - 1, options.verbose)])\n\n output_font = options.output_font or makeOutputFileName(options.input_font)\n log.info(\"Compiling features to '%s'\" % (output_font))\n\n font = TTFont(options.input_font)\n try:\n addOpenTypeFeatures(\n font, options.input_fea, tables=options.tables, debug=options.debug\n )\n except FeatureLibError as e:\n if options.traceback:\n raise\n log.error(e)\n font.save(output_font)\n\n\nif __name__ == \"__main__\":\n sys.exit(main())\n", "path": "Lib/fontTools/feaLib/__main__.py"}], "after_files": [{"content": "from fontTools.ttLib import TTFont\nfrom fontTools.feaLib.builder import addOpenTypeFeatures, Builder\nfrom fontTools.feaLib.error import FeatureLibError\nfrom fontTools import configLogger\nfrom fontTools.misc.cliTools import makeOutputFileName\nimport sys\nimport argparse\nimport logging\n\n\nlog = logging.getLogger(\"fontTools.feaLib\")\n\n\ndef main(args=None):\n \"\"\"Add features from a feature file (.fea) into a OTF font\"\"\"\n parser = argparse.ArgumentParser(\n description=\"Use fontTools to compile OpenType feature files (*.fea).\"\n )\n parser.add_argument(\n \"input_fea\", metavar=\"FEATURES\", help=\"Path to the feature file\"\n )\n parser.add_argument(\n \"input_font\", metavar=\"INPUT_FONT\", help=\"Path to the input font\"\n )\n parser.add_argument(\n \"-o\",\n \"--output\",\n dest=\"output_font\",\n metavar=\"OUTPUT_FONT\",\n help=\"Path to the output font.\",\n )\n parser.add_argument(\n \"-t\",\n \"--tables\",\n metavar=\"TABLE_TAG\",\n choices=Builder.supportedTables,\n nargs=\"+\",\n help=\"Specify the table(s) to be built.\",\n )\n parser.add_argument(\n \"-d\",\n \"--debug\",\n action=\"store_true\",\n help=\"Add source-level debugging information to font.\",\n )\n parser.add_argument(\n \"-v\",\n \"--verbose\",\n help=\"increase the logger verbosity. Multiple -v \" \"options are allowed.\",\n action=\"count\",\n default=0,\n )\n parser.add_argument(\n \"--traceback\", help=\"show traceback for exceptions.\", action=\"store_true\"\n )\n options = parser.parse_args(args)\n\n levels = [\"WARNING\", \"INFO\", \"DEBUG\"]\n configLogger(level=levels[min(len(levels) - 1, options.verbose)])\n\n output_font = options.output_font or makeOutputFileName(options.input_font)\n log.info(\"Compiling features to '%s'\" % (output_font))\n\n font = TTFont(options.input_font)\n try:\n addOpenTypeFeatures(\n font, options.input_fea, tables=options.tables, debug=options.debug\n )\n except FeatureLibError as e:\n if options.traceback:\n raise\n log.error(e)\n sys.exit(1)\n font.save(output_font)\n\n\nif __name__ == \"__main__\":\n sys.exit(main())\n", "path": "Lib/fontTools/feaLib/__main__.py"}]} |
gh_patches_debug_66 | rasdani/github-patches | git_diff | projectmesa__mesa-561 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update tests to use pytest, not nose
Update tests to use pytest, not nose. nose is not maintained anymore.
--- END ISSUE ---
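The change amounts to swapping the test runner listed in the development extras of `setup.py`; a sketch of the resulting block (mirroring the patch below, which also adds `pytest-cov`):

```python
# Development extras after moving from nose to pytest.
extras_require = {
    "dev": [
        "coverage",
        "flake8",
        "pytest",
        "pytest-cov",
        "sphinx",
    ],
    "docs": [
        "sphinx",
    ],
}
```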
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 import re
4
5 from setuptools import setup, find_packages
6 from codecs import open
7
8 requires = [
9 'click',
10 'cookiecutter',
11 'jupyter',
12 'networkx',
13 'numpy',
14 'pandas',
15 'tornado >= 4.2, < 5.0.0',
16 'tqdm',
17 ]
18
19 extras_require = {
20 'dev': [
21 'coverage',
22 'flake8',
23 'nose',
24 'sphinx',
25 ],
26 'docs': [
27 'sphinx',
28 ]
29 }
30
31 version = ''
32 with open('mesa/__init__.py', 'r') as fd:
33 version = re.search(r'^__version__\s*=\s*[\'"]([^\'"]*)[\'"]',
34 fd.read(), re.MULTILINE).group(1)
35
36 with open('README.rst', 'rb', encoding='utf-8') as f:
37 readme = f.read()
38
39 setup(
40 name='Mesa',
41 version=version,
42 description="Agent-based modeling (ABM) in Python 3+",
43 long_description=readme,
44 author='Project Mesa Team',
45 author_email='[email protected]',
46 url='https://github.com/projectmesa/mesa',
47 packages=find_packages(),
48 package_data={'mesa': ['visualization/templates/*.html', 'visualization/templates/css/*',
49 'visualization/templates/fonts/*', 'visualization/templates/js/*'],
50 'cookiecutter-mesa': ['cookiecutter-mesa/*']},
51 include_package_data=True,
52 install_requires=requires,
53 extras_require=extras_require,
54 keywords='agent based modeling model ABM simulation multi-agent',
55 license='Apache 2.0',
56 zip_safe=False,
57 classifiers=[
58 'Topic :: Scientific/Engineering',
59 'Topic :: Scientific/Engineering :: Artificial Life',
60 'Topic :: Scientific/Engineering :: Artificial Intelligence',
61 'Intended Audience :: Science/Research',
62 'Programming Language :: Python :: 3 :: Only',
63 'License :: OSI Approved :: Apache Software License',
64 'Operating System :: OS Independent',
65 'Development Status :: 3 - Alpha',
66 'Natural Language :: English',
67 ],
68 entry_points='''
69 [console_scripts]
70 mesa=mesa.main:cli
71 ''',
72 )
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -20,7 +20,8 @@
'dev': [
'coverage',
'flake8',
- 'nose',
+ 'pytest',
+ 'pytest-cov',
'sphinx',
],
'docs': [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -20,7 +20,8 @@\n 'dev': [\n 'coverage',\n 'flake8',\n- 'nose',\n+ 'pytest',\n+ 'pytest-cov',\n 'sphinx',\n ],\n 'docs': [\n", "issue": "Update tests to use pytest, not nose\nUpdate tests to use pytest, not nose. nose is not maintained anymore. \n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nimport re\n\nfrom setuptools import setup, find_packages\nfrom codecs import open\n\nrequires = [\n 'click',\n 'cookiecutter',\n 'jupyter',\n 'networkx',\n 'numpy',\n 'pandas',\n 'tornado >= 4.2, < 5.0.0',\n 'tqdm',\n]\n\nextras_require = {\n 'dev': [\n 'coverage',\n 'flake8',\n 'nose',\n 'sphinx',\n ],\n 'docs': [\n 'sphinx',\n ]\n}\n\nversion = ''\nwith open('mesa/__init__.py', 'r') as fd:\n version = re.search(r'^__version__\\s*=\\s*[\\'\"]([^\\'\"]*)[\\'\"]',\n fd.read(), re.MULTILINE).group(1)\n\nwith open('README.rst', 'rb', encoding='utf-8') as f:\n readme = f.read()\n\nsetup(\n name='Mesa',\n version=version,\n description=\"Agent-based modeling (ABM) in Python 3+\",\n long_description=readme,\n author='Project Mesa Team',\n author_email='[email protected]',\n url='https://github.com/projectmesa/mesa',\n packages=find_packages(),\n package_data={'mesa': ['visualization/templates/*.html', 'visualization/templates/css/*',\n 'visualization/templates/fonts/*', 'visualization/templates/js/*'],\n 'cookiecutter-mesa': ['cookiecutter-mesa/*']},\n include_package_data=True,\n install_requires=requires,\n extras_require=extras_require,\n keywords='agent based modeling model ABM simulation multi-agent',\n license='Apache 2.0',\n zip_safe=False,\n classifiers=[\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Life',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3 :: Only',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Development Status :: 3 - Alpha',\n 'Natural Language :: English',\n ],\n entry_points='''\n [console_scripts]\n mesa=mesa.main:cli\n ''',\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nimport re\n\nfrom setuptools import setup, find_packages\nfrom codecs import open\n\nrequires = [\n 'click',\n 'cookiecutter',\n 'jupyter',\n 'networkx',\n 'numpy',\n 'pandas',\n 'tornado >= 4.2, < 5.0.0',\n 'tqdm',\n]\n\nextras_require = {\n 'dev': [\n 'coverage',\n 'flake8',\n 'pytest',\n 'pytest-cov',\n 'sphinx',\n ],\n 'docs': [\n 'sphinx',\n ]\n}\n\nversion = ''\nwith open('mesa/__init__.py', 'r') as fd:\n version = re.search(r'^__version__\\s*=\\s*[\\'\"]([^\\'\"]*)[\\'\"]',\n fd.read(), re.MULTILINE).group(1)\n\nwith open('README.rst', 'rb', encoding='utf-8') as f:\n readme = f.read()\n\nsetup(\n name='Mesa',\n version=version,\n description=\"Agent-based modeling (ABM) in Python 3+\",\n long_description=readme,\n author='Project Mesa Team',\n author_email='[email protected]',\n url='https://github.com/projectmesa/mesa',\n packages=find_packages(),\n package_data={'mesa': ['visualization/templates/*.html', 'visualization/templates/css/*',\n 'visualization/templates/fonts/*', 'visualization/templates/js/*'],\n 'cookiecutter-mesa': ['cookiecutter-mesa/*']},\n include_package_data=True,\n install_requires=requires,\n extras_require=extras_require,\n keywords='agent based modeling model ABM simulation multi-agent',\n license='Apache 2.0',\n 
zip_safe=False,\n classifiers=[\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Life',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3 :: Only',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Development Status :: 3 - Alpha',\n 'Natural Language :: English',\n ],\n entry_points='''\n [console_scripts]\n mesa=mesa.main:cli\n ''',\n)\n", "path": "setup.py"}]} |
gh_patches_debug_67 | rasdani/github-patches | git_diff | sbi-dev__sbi-398 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SNPE with NSF fails when sampling with MCMC
This occurs in a very particular setting: `SNPE` inference with `NSF` density estimator and `sample_with_mcmc=True` (no matter which type of MCMC).
- it works with `sample_with_mcmc=False`,
- and it works with `SNLE`!
I tried to chase it down, but no success so far. You can reproduce it locally by running
```
pytest -s tests/linearGaussian_snpe_test.py::test_c2st_snpe_external_data_on_linearGaussian
```
and setting
https://github.com/mackelab/sbi/blob/6b5ed7be1d7522546b06c39aec1f206a354cc2ef/tests/linearGaussian_snpe_test.py#L286
to `True`.
This is the error trace:
```python
> samples = posterior.sample((num_samples,))
tests/linearGaussian_snpe_test.py:289:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
sbi/inference/posteriors/direct_posterior.py:336: in sample
samples = self._sample_posterior_mcmc(
sbi/inference/posteriors/base_posterior.py:333: in _sample_posterior_mcmc
samples = self._slice_np_mcmc(
sbi/inference/posteriors/base_posterior.py:397: in _slice_np_mcmc
posterior_sampler.gen(int(warmup_steps))
sbi/mcmc/slice_numpy.py:93: in gen
self._tune_bracket_width(rng)
sbi/mcmc/slice_numpy.py:145: in _tune_bracket_width
x[i], wi = self._sample_from_conditional(i, x[i], rng)
sbi/mcmc/slice_numpy.py:173: in _sample_from_conditional
while Li(lx) >= logu and cxi - lx < self.max_width:
sbi/mcmc/slice_numpy.py:162: in <lambda>
Li = lambda t: self.lp_f(np.concatenate([self.x[:i], [t], self.x[i + 1 :]]))
sbi/inference/posteriors/direct_posterior.py:477: in np_potential
target_log_prob = self.posterior_nn.log_prob(
.sbi_env/lib/python3.8/site-packages/nflows/distributions/base.py:40: in log_prob
return self._log_prob(inputs, context)
.sbi_env/lib/python3.8/site-packages/nflows/flows/base.py:39: in _log_prob
noise, logabsdet = self._transform(inputs, context=embedded_context)
.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl
result = self.forward(*input, **kwargs)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward
return self._cascade(inputs, funcs, context)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade
outputs, logabsdet = func(outputs, context)
.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl
result = self.forward(*input, **kwargs)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward
return self._cascade(inputs, funcs, context)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade
outputs, logabsdet = func(outputs, context)
.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl
result = self.forward(*input, **kwargs)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward
return self._cascade(inputs, funcs, context)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade
outputs, logabsdet = func(outputs, context)
.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl
result = self.forward(*input, **kwargs)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:84: in forward
transform_split, logabsdet = self._coupling_transform_forward(
.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:194: in _coupling_transform_forward
return self._coupling_transform(inputs, transform_params, inverse=False)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:211: in _coupling_transform
outputs, logabsdet = self._piecewise_cdf(inputs, transform_params, inverse)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:492: in _piecewise_cdf
return spline_fn(
.sbi_env/lib/python3.8/site-packages/nflows/transforms/splines/rational_quadratic.py:45: in unconstrained_rational_quadratic_spline
) = rational_quadratic_spline(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
inputs = tensor([]), unnormalized_widths = tensor([], size=(0, 10)), unnormalized_heights = tensor([], size=(0, 10)), unnormalized_derivatives = tensor([], size=(0, 11))
inverse = False, left = -3.0, right = 3.0, bottom = -3.0, top = 3.0, min_bin_width = 0.001, min_bin_height = 0.001, min_derivative = 0.001
def rational_quadratic_spline(
inputs,
unnormalized_widths,
unnormalized_heights,
unnormalized_derivatives,
inverse=False,
left=0.0,
right=1.0,
bottom=0.0,
top=1.0,
min_bin_width=DEFAULT_MIN_BIN_WIDTH,
min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
min_derivative=DEFAULT_MIN_DERIVATIVE,
):
> if torch.min(inputs) < left or torch.max(inputs) > right:
E RuntimeError: operation does not have an identity.
.sbi_env/lib/python3.8/site-packages/nflows/transforms/splines/rational_quadratic.py:77: RuntimeError
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed
5 # under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.
6 #
7 # Note: To use the 'upload' functionality of this file, you must:
8 # $ pipenv install twine --dev
9
10 import io
11 import os
12 import sys
13 from shutil import rmtree
14
15 from setuptools import find_packages, setup, Command
16
17 # Package meta-data.
18 NAME = "sbi"
19 DESCRIPTION = "Simulation-based inference."
20 KEYWORDS = "bayesian parameter inference system_identification simulator PyTorch"
21 URL = "https://github.com/mackelab/sbi"
22 EMAIL = "[email protected]"
23 AUTHOR = "Álvaro Tejero-Cantero, Jakob H. Macke, Jan-Matthis Lückmann, Conor M. Durkan, Michael Deistler, Jan Bölts"
24 REQUIRES_PYTHON = ">=3.6.0"
25
26 REQUIRED = [
27 "joblib",
28 "matplotlib",
29 "numpy",
30 "pillow",
31 "pyknos>=0.12",
32 "pyro-ppl>=1.3.1",
33 "scipy",
34 "tensorboard",
35 "torch>=1.5.1",
36 "tqdm",
37 ]
38
39 EXTRAS = {
40 "dev": [
41 "autoflake",
42 "black",
43 "deepdiff",
44 "flake8",
45 "isort",
46 "jupyter",
47 "mkdocs",
48 "mkdocs-material",
49 "markdown-include",
50 "mkdocs-redirects",
51 "mkdocstrings",
52 "nbconvert",
53 "pep517",
54 "pytest",
55 "pyyaml",
56 "scikit-learn",
57 "torchtestcase",
58 "twine",
59 ],
60 }
61
62 here = os.path.abspath(os.path.dirname(__file__))
63
64 # Import the README and use it as the long-description.
65 try:
66 with io.open(os.path.join(here, "README.md"), encoding="utf-8") as f:
67 long_description = "\n" + f.read()
68 except FileNotFoundError:
69 long_description = DESCRIPTION
70
71 # Load the package's __version__.py module as a dictionary.
72 about = {}
73 project_slug = NAME.lower().replace("-", "_").replace(" ", "_")
74 with open(os.path.join(here, project_slug, "__version__.py")) as f:
75 exec(f.read(), about)
76
77
78 class UploadCommand(Command):
79 """Support setup.py upload."""
80
81 description = "Build and publish the package."
82 user_options = []
83
84 @staticmethod
85 def status(s):
86 """Prints things in bold."""
87 print("\033[1m{0}\033[0m".format(s))
88
89 def initialize_options(self):
90 pass
91
92 def finalize_options(self):
93 pass
94
95 def run(self):
96 try:
97             self.status("Removing previous builds…")
98 rmtree(os.path.join(here, "dist"))
99 except OSError:
100 pass
101
102         self.status("Building Source and Wheel (universal) distribution…")
103 os.system("{0} setup.py sdist bdist_wheel --universal".format(sys.executable))
104
105         self.status("Uploading the package to PyPI via Twine…")
106 os.system("twine upload dist/*")
107
108         self.status("Pushing git tags…")
109 os.system("git tag v{0}".format(about["__version__"]))
110 os.system("git push --tags")
111
112 sys.exit()
113
114
115 setup(
116 name=NAME,
117 version=about["__version__"],
118 description=DESCRIPTION,
119 keywords=KEYWORDS,
120 long_description=long_description,
121 long_description_content_type="text/markdown",
122 author=AUTHOR,
123 author_email=EMAIL,
124 python_requires=REQUIRES_PYTHON,
125 url=URL,
126 packages=find_packages(exclude=["tests", "*.tests", "*.tests.*", "tests.*"]),
127 install_requires=REQUIRED,
128 extras_require=EXTRAS,
129 include_package_data=True,
130 license="AGPLv3",
131 classifiers=[
132 # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
133 "Development Status :: 3 - Alpha",
134 "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
135 "Intended Audience :: Developers",
136 "Intended Audience :: Education",
137 "Intended Audience :: Science/Research",
138 "Topic :: Adaptive Technologies",
139 "Topic :: Scientific/Engineering",
140 "Topic :: Scientific/Engineering :: Artificial Intelligence",
141 "Topic :: Scientific/Engineering :: Mathematics",
142 "Programming Language :: Python",
143 "Programming Language :: Python :: 3",
144 "Programming Language :: Python :: 3.6",
145 "Programming Language :: Python :: 3.7",
146 "Programming Language :: Python :: 3.8",
147 ],
148 # $ setup.py publish support.
149 cmdclass=dict(upload=UploadCommand),
150 )
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -28,7 +28,7 @@
"matplotlib",
"numpy",
"pillow",
- "pyknos>=0.12",
+ "pyknos>=0.14",
"pyro-ppl>=1.3.1",
"scipy",
"tensorboard",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -28,7 +28,7 @@\n \"matplotlib\",\n \"numpy\",\n \"pillow\",\n- \"pyknos>=0.12\",\n+ \"pyknos>=0.14\",\n \"pyro-ppl>=1.3.1\",\n \"scipy\",\n \"tensorboard\",\n", "issue": "SNPE with NSF fails when sampling with MCMC\nThis occurs in a very particular setting: `SNPE` inference with `NSF` density estimator and `sample_with_mcmc=True` (no matter which type of MCMC. \r\n\r\n- it works with `sample_with_mcmc=False`, \r\n- and it works with `SNLE`! \r\n\r\nI tried to chase it down, but no success so far. You can reproduce it locally by running\r\n\r\n```\r\npytest -s tests/linearGaussian_snpe_test.py::test_c2st_snpe_external_data_on_linearGaussian\r\n```\r\n\r\nand setting \r\nhttps://github.com/mackelab/sbi/blob/6b5ed7be1d7522546b06c39aec1f206a354cc2ef/tests/linearGaussian_snpe_test.py#L286\r\n\r\nto `True`. \r\n\r\nThis is the error trace:\r\n```python\r\n\r\n> samples = posterior.sample((num_samples,))\r\n\r\ntests/linearGaussian_snpe_test.py:289:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\nsbi/inference/posteriors/direct_posterior.py:336: in sample\r\n samples = self._sample_posterior_mcmc(\r\nsbi/inference/posteriors/base_posterior.py:333: in _sample_posterior_mcmc\r\n samples = self._slice_np_mcmc(\r\nsbi/inference/posteriors/base_posterior.py:397: in _slice_np_mcmc\r\n posterior_sampler.gen(int(warmup_steps))\r\nsbi/mcmc/slice_numpy.py:93: in gen\r\n self._tune_bracket_width(rng)\r\nsbi/mcmc/slice_numpy.py:145: in _tune_bracket_width\r\n x[i], wi = self._sample_from_conditional(i, x[i], rng)\r\nsbi/mcmc/slice_numpy.py:173: in _sample_from_conditional\r\n while Li(lx) >= logu and cxi - lx < self.max_width:\r\nsbi/mcmc/slice_numpy.py:162: in <lambda>\r\n Li = lambda t: self.lp_f(np.concatenate([self.x[:i], [t], self.x[i + 1 :]]))\r\nsbi/inference/posteriors/direct_posterior.py:477: in np_potential\r\n target_log_prob = self.posterior_nn.log_prob(\r\n.sbi_env/lib/python3.8/site-packages/nflows/distributions/base.py:40: in log_prob\r\n return self._log_prob(inputs, context)\r\n.sbi_env/lib/python3.8/site-packages/nflows/flows/base.py:39: in _log_prob\r\n noise, logabsdet = self._transform(inputs, context=embedded_context)\r\n.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward\r\n return self._cascade(inputs, funcs, context)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade\r\n outputs, logabsdet = func(outputs, context)\r\n.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward\r\n return self._cascade(inputs, funcs, context)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade\r\n outputs, logabsdet = func(outputs, context)\r\n.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward\r\n return self._cascade(inputs, funcs, context)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade\r\n outputs, logabsdet = func(outputs, 
context)\r\n.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:84: in forward\r\n transform_split, logabsdet = self._coupling_transform_forward(\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:194: in _coupling_transform_forward\r\n return self._coupling_transform(inputs, transform_params, inverse=False)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:211: in _coupling_transform\r\n outputs, logabsdet = self._piecewise_cdf(inputs, transform_params, inverse)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:492: in _piecewise_cdf\r\n return spline_fn(\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/splines/rational_quadratic.py:45: in unconstrained_rational_quadratic_spline\r\n ) = rational_quadratic_spline(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\ninputs = tensor([]), unnormalized_widths = tensor([], size=(0, 10)), unnormalized_heights = tensor([], size=(0, 10)), unnormalized_derivatives = tensor([], size=(0, 11))\r\ninverse = False, left = -3.0, right = 3.0, bottom = -3.0, top = 3.0, min_bin_width = 0.001, min_bin_height = 0.001, min_derivative = 0.001\r\n\r\n def rational_quadratic_spline(\r\n inputs,\r\n unnormalized_widths,\r\n unnormalized_heights,\r\n unnormalized_derivatives,\r\n inverse=False,\r\n left=0.0,\r\n right=1.0,\r\n bottom=0.0,\r\n top=1.0,\r\n min_bin_width=DEFAULT_MIN_BIN_WIDTH,\r\n min_bin_height=DEFAULT_MIN_BIN_HEIGHT,\r\n min_derivative=DEFAULT_MIN_DERIVATIVE,\r\n ):\r\n> if torch.min(inputs) < left or torch.max(inputs) > right:\r\nE RuntimeError: operation does not have an identity.\r\n\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/splines/rational_quadratic.py:77: RuntimeError\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed\n# under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.\n#\n# Note: To use the 'upload' functionality of this file, you must:\n# $ pipenv install twine --dev\n\nimport io\nimport os\nimport sys\nfrom shutil import rmtree\n\nfrom setuptools import find_packages, setup, Command\n\n# Package meta-data.\nNAME = \"sbi\"\nDESCRIPTION = \"Simulation-based inference.\"\nKEYWORDS = \"bayesian parameter inference system_identification simulator PyTorch\"\nURL = \"https://github.com/mackelab/sbi\"\nEMAIL = \"[email protected]\"\nAUTHOR = \"\u00c1lvaro Tejero-Cantero, Jakob H. Macke, Jan-Matthis L\u00fcckmann, Conor M. 
Durkan, Michael Deistler, Jan B\u00f6lts\"\nREQUIRES_PYTHON = \">=3.6.0\"\n\nREQUIRED = [\n \"joblib\",\n \"matplotlib\",\n \"numpy\",\n \"pillow\",\n \"pyknos>=0.12\",\n \"pyro-ppl>=1.3.1\",\n \"scipy\",\n \"tensorboard\",\n \"torch>=1.5.1\",\n \"tqdm\",\n]\n\nEXTRAS = {\n \"dev\": [\n \"autoflake\",\n \"black\",\n \"deepdiff\",\n \"flake8\",\n \"isort\",\n \"jupyter\",\n \"mkdocs\",\n \"mkdocs-material\",\n \"markdown-include\",\n \"mkdocs-redirects\",\n \"mkdocstrings\",\n \"nbconvert\",\n \"pep517\",\n \"pytest\",\n \"pyyaml\",\n \"scikit-learn\",\n \"torchtestcase\",\n \"twine\",\n ],\n}\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# Import the README and use it as the long-description.\ntry:\n with io.open(os.path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = \"\\n\" + f.read()\nexcept FileNotFoundError:\n long_description = DESCRIPTION\n\n# Load the package's __version__.py module as a dictionary.\nabout = {}\nproject_slug = NAME.lower().replace(\"-\", \"_\").replace(\" \", \"_\")\nwith open(os.path.join(here, project_slug, \"__version__.py\")) as f:\n exec(f.read(), about)\n\n\nclass UploadCommand(Command):\n \"\"\"Support setup.py upload.\"\"\"\n\n description = \"Build and publish the package.\"\n user_options = []\n\n @staticmethod\n def status(s):\n \"\"\"Prints things in bold.\"\"\"\n print(\"\\033[1m{0}\\033[0m\".format(s))\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n try:\n self.status(\"Removing previous builds\u2026\")\n rmtree(os.path.join(here, \"dist\"))\n except OSError:\n pass\n\n self.status(\"Building Source and Wheel (universal) distribution\u2026\")\n os.system(\"{0} setup.py sdist bdist_wheel --universal\".format(sys.executable))\n\n self.status(\"Uploading the package to PyPI via Twine\u2026\")\n os.system(\"twine upload dist/*\")\n\n self.status(\"Pushing git tags\u2026\")\n os.system(\"git tag v{0}\".format(about[\"__version__\"]))\n os.system(\"git push --tags\")\n\n sys.exit()\n\n\nsetup(\n name=NAME,\n version=about[\"__version__\"],\n description=DESCRIPTION,\n keywords=KEYWORDS,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=AUTHOR,\n author_email=EMAIL,\n python_requires=REQUIRES_PYTHON,\n url=URL,\n packages=find_packages(exclude=[\"tests\", \"*.tests\", \"*.tests.*\", \"tests.*\"]),\n install_requires=REQUIRED,\n extras_require=EXTRAS,\n include_package_data=True,\n license=\"AGPLv3\",\n classifiers=[\n # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"Development Status :: 3 - Alpha\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Adaptive Technologies\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n # $ setup.py publish support.\n cmdclass=dict(upload=UploadCommand),\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# This file is part of sbi, a toolkit for simulation-based inference. 
sbi is licensed\n# under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.\n#\n# Note: To use the 'upload' functionality of this file, you must:\n# $ pipenv install twine --dev\n\nimport io\nimport os\nimport sys\nfrom shutil import rmtree\n\nfrom setuptools import find_packages, setup, Command\n\n# Package meta-data.\nNAME = \"sbi\"\nDESCRIPTION = \"Simulation-based inference.\"\nKEYWORDS = \"bayesian parameter inference system_identification simulator PyTorch\"\nURL = \"https://github.com/mackelab/sbi\"\nEMAIL = \"[email protected]\"\nAUTHOR = \"\u00c1lvaro Tejero-Cantero, Jakob H. Macke, Jan-Matthis L\u00fcckmann, Conor M. Durkan, Michael Deistler, Jan B\u00f6lts\"\nREQUIRES_PYTHON = \">=3.6.0\"\n\nREQUIRED = [\n \"joblib\",\n \"matplotlib\",\n \"numpy\",\n \"pillow\",\n \"pyknos>=0.14\",\n \"pyro-ppl>=1.3.1\",\n \"scipy\",\n \"tensorboard\",\n \"torch>=1.5.1\",\n \"tqdm\",\n]\n\nEXTRAS = {\n \"dev\": [\n \"autoflake\",\n \"black\",\n \"deepdiff\",\n \"flake8\",\n \"isort\",\n \"jupyter\",\n \"mkdocs\",\n \"mkdocs-material\",\n \"markdown-include\",\n \"mkdocs-redirects\",\n \"mkdocstrings\",\n \"nbconvert\",\n \"pep517\",\n \"pytest\",\n \"pyyaml\",\n \"scikit-learn\",\n \"torchtestcase\",\n \"twine\",\n ],\n}\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# Import the README and use it as the long-description.\ntry:\n with io.open(os.path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = \"\\n\" + f.read()\nexcept FileNotFoundError:\n long_description = DESCRIPTION\n\n# Load the package's __version__.py module as a dictionary.\nabout = {}\nproject_slug = NAME.lower().replace(\"-\", \"_\").replace(\" \", \"_\")\nwith open(os.path.join(here, project_slug, \"__version__.py\")) as f:\n exec(f.read(), about)\n\n\nclass UploadCommand(Command):\n \"\"\"Support setup.py upload.\"\"\"\n\n description = \"Build and publish the package.\"\n user_options = []\n\n @staticmethod\n def status(s):\n \"\"\"Prints things in bold.\"\"\"\n print(\"\\033[1m{0}\\033[0m\".format(s))\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n try:\n self.status(\"Removing previous builds\u2026\")\n rmtree(os.path.join(here, \"dist\"))\n except OSError:\n pass\n\n self.status(\"Building Source and Wheel (universal) distribution\u2026\")\n os.system(\"{0} setup.py sdist bdist_wheel --universal\".format(sys.executable))\n\n self.status(\"Uploading the package to PyPI via Twine\u2026\")\n os.system(\"twine upload dist/*\")\n\n self.status(\"Pushing git tags\u2026\")\n os.system(\"git tag v{0}\".format(about[\"__version__\"]))\n os.system(\"git push --tags\")\n\n sys.exit()\n\n\nsetup(\n name=NAME,\n version=about[\"__version__\"],\n description=DESCRIPTION,\n keywords=KEYWORDS,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=AUTHOR,\n author_email=EMAIL,\n python_requires=REQUIRES_PYTHON,\n url=URL,\n packages=find_packages(exclude=[\"tests\", \"*.tests\", \"*.tests.*\", \"tests.*\"]),\n install_requires=REQUIRED,\n extras_require=EXTRAS,\n include_package_data=True,\n license=\"AGPLv3\",\n classifiers=[\n # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"Development Status :: 3 - Alpha\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Adaptive Technologies\",\n \"Topic 
:: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n # $ setup.py publish support.\n cmdclass=dict(upload=UploadCommand),\n)\n", "path": "setup.py"}]} |
gh_patches_debug_68 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-2475 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[0.17.0rc1] Broken Docker image entrypoint
### Describe the bug
The entrypoint for the image is invalid
### Steps To Reproduce
1. Pull the image: `docker pull fishtownanalytics/dbt:0.17.0rc1`
2. Run the image:
```
docker run -it fishtownanalytics/dbt:0.17.0rc1
/bin/sh: 1: [dbt,: not found
```
### Expected behavior
The DBT help command is displayed
### Additional context
I plan on integrating DBT with our Airflow infrastructure as a container (we extend Airflow exclusively through containerized components)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/build-dbt.py`
Content:
```
1 import json
2 import os
3 import re
4 import shutil
5 import subprocess
6 import sys
7 import tempfile
8 import textwrap
9 import time
10 import venv # type: ignore
11 import zipfile
12
13 from typing import Dict
14
15 from argparse import ArgumentParser
16 from dataclasses import dataclass
17 from pathlib import Path
18 from urllib.request import urlopen
19
20 from typing import Optional, Iterator, Tuple, List
21
22
23 HOMEBREW_PYTHON = (3, 8)
24
25
26 # This should match the pattern in .bumpversion.cfg
27 VERSION_PATTERN = re.compile(
28 r'(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)'
29 r'((?P<prerelease>[a-z]+)(?P<num>\d+))?'
30 )
31
32
33 class Version:
34 def __init__(self, raw: str) -> None:
35 self.raw = raw
36 match = VERSION_PATTERN.match(self.raw)
37 assert match is not None, f'Invalid version: {self.raw}'
38 groups = match.groupdict()
39
40 self.major: int = int(groups['major'])
41 self.minor: int = int(groups['minor'])
42 self.patch: int = int(groups['patch'])
43 self.prerelease: Optional[str] = None
44 self.num: Optional[int] = None
45
46 if groups['num'] is not None:
47 self.prerelease = groups['prerelease']
48 self.num = int(groups['num'])
49
50 def __str__(self):
51 return self.raw
52
53 def homebrew_class_name(self) -> str:
54 name = f'DbtAT{self.major}{self.minor}{self.patch}'
55 if self.prerelease is not None and self.num is not None:
56 name = f'{name}{self.prerelease.title()}{self.num}'
57 return name
58
59 def homebrew_filename(self):
60 version_str = f'{self.major}.{self.minor}.{self.patch}'
61 if self.prerelease is not None and self.num is not None:
62 version_str = f'{version_str}-{self.prerelease}{self.num}'
63 return f'dbt@{version_str}.rb'
64
65
66 @dataclass
67 class Arguments:
68 version: Version
69 part: str
70 path: Path
71 homebrew_path: Path
72 homebrew_set_default: bool
73 set_version: bool
74 build_pypi: bool
75 upload_pypi: bool
76 test_upload: bool
77 build_homebrew: bool
78 build_docker: bool
79 upload_docker: bool
80 write_requirements: bool
81 write_dockerfile: bool
82
83 @classmethod
84 def parse(cls) -> 'Arguments':
85 parser = ArgumentParser(
86 prog="Bump dbt's version, build packages"
87 )
88 parser.add_argument(
89 'version',
90 type=Version,
91 help="The version to set",
92 )
93 parser.add_argument(
94 'part',
95 type=str,
96 help="The part of the version to update",
97 )
98 parser.add_argument(
99 '--path',
100 type=Path,
101 help='The path to the dbt repository',
102 default=Path.cwd(),
103 )
104 parser.add_argument(
105 '--homebrew-path',
106 type=Path,
107 help='The path to the dbt homebrew install',
108 default=(Path.cwd() / '../homebrew-dbt'),
109 )
110 parser.add_argument(
111 '--homebrew-set-default',
112 action='store_true',
113 help='If set, make this homebrew version the default',
114 )
115 parser.add_argument(
116 '--no-set-version',
117 dest='set_version',
118 action='store_false',
119 help='Skip bumping the version',
120 )
121 parser.add_argument(
122 '--no-build-pypi',
123 dest='build_pypi',
124 action='store_false',
125 help='skip building pypi',
126 )
127 parser.add_argument(
128 '--no-build-docker',
129 dest='build_docker',
130 action='store_false',
131 help='skip building docker images',
132 )
133 parser.add_argument(
134 '--no-upload-docker',
135 dest='upload_docker',
136 action='store_false',
137 help='skip uploading docker images',
138 )
139
140 uploading = parser.add_mutually_exclusive_group()
141
142 uploading.add_argument(
143 '--upload-pypi',
144 dest='force_upload_pypi',
145 action='store_true',
146 help='upload to pypi even if building is disabled'
147 )
148
149 uploading.add_argument(
150 '--no-upload-pypi',
151 dest='no_upload_pypi',
152 action='store_true',
153 help='skip uploading to pypi',
154 )
155
156 parser.add_argument(
157 '--no-upload',
158 dest='test_upload',
159 action='store_false',
160 help='Skip uploading to pypitest',
161 )
162
163 parser.add_argument(
164 '--no-build-homebrew',
165 dest='build_homebrew',
166 action='store_false',
167 help='Skip building homebrew packages',
168 )
169 parser.add_argument(
170 '--no-write-requirements',
171 dest='write_requirements',
172 action='store_false',
173 help='Skip writing the requirements file. It must exist.'
174 )
175 parser.add_argument(
176 '--no-write-dockerfile',
177 dest='write_dockerfile',
178 action='store_false',
179 help='Skip writing the dockerfile. It must exist.'
180 )
181 parsed = parser.parse_args()
182
183 upload_pypi = parsed.build_pypi
184 if parsed.force_upload_pypi:
185 upload_pypi = True
186 elif parsed.no_upload_pypi:
187 upload_pypi = False
188
189 return cls(
190 version=parsed.version,
191 part=parsed.part,
192 path=parsed.path,
193 homebrew_path=parsed.homebrew_path,
194 homebrew_set_default=parsed.homebrew_set_default,
195 set_version=parsed.set_version,
196 build_pypi=parsed.build_pypi,
197 upload_pypi=upload_pypi,
198 test_upload=parsed.test_upload,
199 build_homebrew=parsed.build_homebrew,
200 build_docker=parsed.build_docker,
201 upload_docker=parsed.upload_docker,
202 write_requirements=parsed.write_requirements,
203 write_dockerfile=parsed.write_dockerfile,
204 )
205
206
207 def collect_output(cmd, cwd=None, stderr=subprocess.PIPE) -> str:
208 try:
209 result = subprocess.run(
210 cmd, cwd=cwd, check=True, stdout=subprocess.PIPE, stderr=stderr
211 )
212 except subprocess.CalledProcessError as exc:
213 print(f'Command {exc.cmd} failed')
214 if exc.output:
215 print(exc.output.decode('utf-8'))
216 if exc.stderr:
217 print(exc.stderr.decode('utf-8'), file=sys.stderr)
218 raise
219 return result.stdout.decode('utf-8')
220
221
222 def run_command(cmd, cwd=None) -> None:
223 result = collect_output(cmd, stderr=subprocess.STDOUT, cwd=cwd)
224 print(result)
225
226
227 def set_version(path: Path, version: Version, part: str):
228 # bumpversion --commit --no-tag --new-version "${version}" "${port}"
229 cmd = [
230 'bumpversion', '--commit', '--no-tag', '--new-version',
231 str(version), part
232 ]
233 print(f'bumping version to {version}')
234 run_command(cmd, cwd=path)
235 print(f'bumped version to {version}')
236
237
238 class PypiBuilder:
239 _SUBPACKAGES = (
240 'core',
241 'plugins/postgres',
242 'plugins/redshift',
243 'plugins/bigquery',
244 'plugins/snowflake',
245 )
246
247 def __init__(self, dbt_path: Path):
248 self.dbt_path = dbt_path
249
250 @staticmethod
251 def _dist_for(path: Path, make=False) -> Path:
252 dist_path = path / 'dist'
253 if dist_path.exists():
254 shutil.rmtree(dist_path)
255 if make:
256 os.makedirs(dist_path)
257 build_path = path / 'build'
258 if build_path.exists():
259 shutil.rmtree(build_path)
260 return dist_path
261
262 @staticmethod
263 def _build_pypi_package(path: Path):
264 print(f'building package in {path}')
265 cmd = ['python', 'setup.py', 'sdist', 'bdist_wheel']
266 run_command(cmd, cwd=path)
267 print(f'finished building package in {path}')
268
269 @staticmethod
270 def _all_packages_in(path: Path) -> Iterator[Path]:
271 path = path / 'dist'
272 for pattern in ('*.tar.gz', '*.whl'):
273 yield from path.glob(pattern)
274
275 def _build_subpackage(self, name: str) -> Iterator[Path]:
276 subpath = self.dbt_path / name
277 self._dist_for(subpath)
278 self._build_pypi_package(subpath)
279 return self._all_packages_in(subpath)
280
281 def build(self):
282 print('building pypi packages')
283 dist_path = self._dist_for(self.dbt_path)
284 sub_pkgs: List[Path] = []
285 for path in self._SUBPACKAGES:
286 sub_pkgs.extend(self._build_subpackage(path))
287
288 # now build the main package
289 self._build_pypi_package(self.dbt_path)
290 # now copy everything from the subpackages in
291 for package in sub_pkgs:
292 shutil.copy(str(package), dist_path)
293
294 print('built pypi packages')
295
296 def upload(self, *, test=True):
297 cmd = ['twine', 'check']
298 cmd.extend(str(p) for p in self._all_packages_in(self.dbt_path))
299 run_command(cmd)
300 cmd = ['twine', 'upload']
301 if test:
302 cmd.extend(['--repository', 'pypitest'])
303 cmd.extend(str(p) for p in self._all_packages_in(self.dbt_path))
304 print('uploading packages: {}'.format(' '.join(cmd)))
305 run_command(cmd)
306 print('uploaded packages')
307
308
309 class PipInstaller(venv.EnvBuilder):
310 def __init__(self, packages: List[str]) -> None:
311 super().__init__(with_pip=True)
312 self.packages = packages
313
314 def post_setup(self, context):
315 # we can't run from the dbt directory or this gets all weird, so
316 # install from an empty temp directory and then remove it.
317 tmp = tempfile.mkdtemp()
318 cmd = [context.env_exe, '-m', 'pip', 'install', '--upgrade']
319 cmd.extend(self.packages)
320 print(f'installing {self.packages}')
321 try:
322 run_command(cmd, cwd=tmp)
323 finally:
324 os.rmdir(tmp)
325 print(f'finished installing {self.packages}')
326
327 def create(self, venv_path):
328 os.makedirs(venv_path.parent, exist_ok=True)
329 if venv_path.exists():
330 shutil.rmtree(venv_path)
331 return super().create(venv_path)
332
333
334 def _require_wheels(dbt_path: Path) -> List[Path]:
335 dist_path = dbt_path / 'dist'
336 wheels = list(dist_path.glob('*.whl'))
337 if not wheels:
338 raise ValueError(
339 f'No wheels found in {dist_path} - run scripts/build-wheels.sh'
340 )
341 return wheels
342
343
344 class DistFolderEnv(PipInstaller):
345 def __init__(self, dbt_path: Path) -> None:
346 self.wheels = _require_wheels(dbt_path)
347 super().__init__(packages=self.wheels)
348
349
350 class PoetVirtualenv(PipInstaller):
351 def __init__(self, dbt_version: Version) -> None:
352 super().__init__([f'dbt=={dbt_version}', 'homebrew-pypi-poet'])
353
354
355 @dataclass
356 class HomebrewTemplate:
357 url_data: str
358 hash_data: str
359 dependencies: str
360
361
362 def _make_venv_at(root: Path, name: str, builder: venv.EnvBuilder):
363 venv_path = root / name
364 os.makedirs(root, exist_ok=True)
365 if venv_path.exists():
366 shutil.rmtree(venv_path)
367
368 builder.create(venv_path)
369 return venv_path
370
371
372 class HomebrewBuilder:
373 def __init__(
374 self,
375 dbt_path: Path,
376 version: Version,
377 homebrew_path: Path,
378 set_default: bool,
379 ) -> None:
380 self.dbt_path = dbt_path
381 self.version = version
382 self.homebrew_path = homebrew_path
383 self.set_default = set_default
384 self._template: Optional[HomebrewTemplate] = None
385
386 def make_venv(self) -> PoetVirtualenv:
387 env = PoetVirtualenv(self.version)
388 max_attempts = 10
389 for attempt in range(1, max_attempts+1):
390 # after uploading to pypi, it can take a few minutes for installing
391 # to work. Retry a few times...
392 try:
393 env.create(self.homebrew_venv_path)
394 return
395 except subprocess.CalledProcessError:
396 if attempt == max_attempts:
397 raise
398 else:
399 print(
400 f'installation failed - waiting 60s for pypi to see '
401 f'the new version (attempt {attempt}/{max_attempts})'
402 )
403 time.sleep(60)
404
405 return env
406
407 @property
408 def versioned_formula_path(self) -> Path:
409 return (
410 self.homebrew_path / 'Formula' / self.version.homebrew_filename()
411 )
412
413 @property
414 def default_formula_path(self) -> Path:
415 return (
416 self.homebrew_path / 'Formula/dbt.rb'
417 )
418
419 @property
420 def homebrew_venv_path(self) -> Path:
421 return self.dbt_path / 'build' / 'homebrew-venv'
422
423 @staticmethod
424 def _dbt_homebrew_formula_fmt() -> str:
425 return textwrap.dedent('''\
426 class {formula_name} < Formula
427 include Language::Python::Virtualenv
428
429 desc "Data build tool"
430 homepage "https://github.com/fishtown-analytics/dbt"
431 url "{url_data}"
432 sha256 "{hash_data}"
433 revision 1
434
435 bottle do
436 root_url "http://bottles.getdbt.com"
437 # bottle hashes + versions go here
438 end
439
440 depends_on "[email protected]"
441 depends_on "postgresql"
442 depends_on "python"
443
444 {dependencies}
445 {trailer}
446 end
447 ''')
448
449 @staticmethod
450 def _dbt_homebrew_trailer() -> str:
451 dedented = textwrap.dedent('''\
452 def install
453 venv = virtualenv_create(libexec, "python3")
454
455 res = resources.map(&:name).to_set
456
457 res.each do |r|
458 venv.pip_install resource(r)
459 end
460
461 venv.pip_install_and_link buildpath
462
463 bin.install_symlink "#{libexec}/bin/dbt" => "dbt"
464 end
465
466 test do
467 (testpath/"dbt_project.yml").write(
468 "{name: 'test', version: '0.0.1', profile: 'default'}",
469 )
470 (testpath/".dbt/profiles.yml").write(
471 "{default: {outputs: {default: {type: 'postgres', threads: 1,
472 host: 'localhost', port: 5432, user: 'root', pass: 'password',
473 dbname: 'test', schema: 'test'}}, target: 'default'}}",
474 )
475 (testpath/"models/test.sql").write("select * from test")
476 system "#{bin}/dbt", "test"
477 end''')
478 return textwrap.indent(dedented, ' ')
479
480 def get_formula_data(
481 self, versioned: bool = True
482 ) -> str:
483 fmt = self._dbt_homebrew_formula_fmt()
484 trailer = self._dbt_homebrew_trailer()
485 if versioned:
486 formula_name = self.version.homebrew_class_name()
487 else:
488 formula_name = 'Dbt'
489
490 return fmt.format(
491 formula_name=formula_name,
492 version=self.version,
493 url_data=self.template.url_data,
494 hash_data=self.template.hash_data,
495 dependencies=self.template.dependencies,
496 trailer=trailer,
497 )
498
499 @property
500 def template(self) -> HomebrewTemplate:
501 if self._template is None:
502 self.make_venv()
503 print('done setting up virtualenv')
504 poet = self.homebrew_venv_path / 'bin/poet'
505
506 # get the dbt package info
507 url_data, hash_data = self._get_pypi_dbt_info()
508
509 dependencies = self._get_recursive_dependencies(poet)
510 template = HomebrewTemplate(
511 url_data=url_data,
512 hash_data=hash_data,
513 dependencies=dependencies,
514 )
515 self._template = template
516 else:
517 template = self._template
518 return template
519
520 def _get_pypi_dbt_info(self) -> Tuple[str, str]:
521 fp = urlopen(f'https://pypi.org/pypi/dbt/{self.version}/json')
522 try:
523 data = json.load(fp)
524 finally:
525 fp.close()
526 assert 'urls' in data
527 for pkginfo in data['urls']:
528 assert 'packagetype' in pkginfo
529 if pkginfo['packagetype'] == 'sdist':
530 assert 'url' in pkginfo
531 assert 'digests' in pkginfo
532 assert 'sha256' in pkginfo['digests']
533 url = pkginfo['url']
534 digest = pkginfo['digests']['sha256']
535 return url, digest
536 raise ValueError(f'Never got a valid sdist for dbt=={self.version}')
537
538 def _get_recursive_dependencies(self, poet_exe: Path) -> str:
539 cmd = [str(poet_exe), '--resources', 'dbt']
540 raw = collect_output(cmd).split('\n')
541 return '\n'.join(self._remove_dbt_resource(raw))
542
543 def _remove_dbt_resource(self, lines: List[str]) -> Iterator[str]:
544 # TODO: fork poet or extract the good bits to avoid this
545 line_iter = iter(lines)
546 # don't do a double-newline or "brew audit" gets mad
547 for line in line_iter:
548 # skip the contents of the "dbt" resource block.
549 if line.strip() == 'resource "dbt" do':
550 for skip in line_iter:
551 if skip.strip() == 'end':
552 # skip the newline after 'end'
553 next(line_iter)
554 break
555 else:
556 yield line
557
558 def create_versioned_formula_file(self):
559 formula_contents = self.get_formula_data(versioned=True)
560 if self.versioned_formula_path.exists():
561 print('Homebrew formula path already exists, overwriting')
562 self.versioned_formula_path.write_text(formula_contents)
563
564 def commit_versioned_formula(self):
565 # add a commit for the new formula
566 run_command(
567 ['git', 'add', self.versioned_formula_path],
568 cwd=self.homebrew_path
569 )
570 run_command(
571 ['git', 'commit', '-m', f'add dbt@{self.version}'],
572 cwd=self.homebrew_path
573 )
574
575 def commit_default_formula(self):
576 run_command(
577 ['git', 'add', self.default_formula_path],
578 cwd=self.homebrew_path
579 )
580 run_command(
581 ['git', 'commit', '-m', f'upgrade dbt to {self.version}'],
582 cwd=self.homebrew_path
583 )
584
585 @staticmethod
586 def run_tests(formula_path: Path, audit: bool = True):
587 path = os.path.normpath(formula_path)
588 run_command(['brew', 'uninstall', '--force', path])
589 versions = [
590 l.strip() for l in
591 collect_output(['brew', 'list']).split('\n')
592 if l.strip().startswith('dbt@') or l.strip() == 'dbt'
593 ]
594 if versions:
595 run_command(['brew', 'unlink'] + versions)
596 run_command(['brew', 'install', path])
597 run_command(['brew', 'test', path])
598 if audit:
599 run_command(['brew', 'audit', '--strict', path])
600
601 def create_default_package(self):
602 os.remove(self.default_formula_path)
603 formula_contents = self.get_formula_data(versioned=False)
604 self.default_formula_path.write_text(formula_contents)
605
606 def build(self):
607 self.create_versioned_formula_file()
608 # self.run_tests(formula_path=self.versioned_formula_path)
609 self.commit_versioned_formula()
610
611 if self.set_default:
612 self.create_default_package()
613 # self.run_tests(formula_path=self.default_formula_path, audit=False)
614 self.commit_default_formula()
615
616
617 class WheelInfo:
618 def __init__(self, path):
619 self.path = path
620
621 @staticmethod
622 def _extract_distinfo_path(wfile: zipfile.ZipFile) -> zipfile.Path:
623 zpath = zipfile.Path(root=wfile)
624 for path in zpath.iterdir():
625 if path.name.endswith('.dist-info'):
626 return path
627 raise ValueError('Wheel with no dist-info?')
628
629 def get_metadata(self) -> Dict[str, str]:
630 with zipfile.ZipFile(self.path) as wf:
631 distinfo = self._extract_distinfo_path(wf)
632 metadata = distinfo / 'METADATA'
633 metadata_dict: Dict[str, str] = {}
634 for line in metadata.read_text().split('\n'):
635 parts = line.split(': ', 1)
636 if len(parts) == 2:
637 metadata_dict[parts[0]] = parts[1]
638 return metadata_dict
639
640 def package_name(self) -> str:
641 metadata = self.get_metadata()
642 if 'Name' not in metadata:
643 raise ValueError('Wheel with no name?')
644 return metadata['Name']
645
646
647 class DockerBuilder:
648 """The docker builder requires the existence of a dbt package"""
649 def __init__(self, dbt_path: Path, version: Version) -> None:
650 self.dbt_path = dbt_path
651 self.version = version
652
653 @property
654 def docker_path(self) -> Path:
655 return self.dbt_path / 'docker'
656
657 @property
658 def dockerfile_name(self) -> str:
659 return f'Dockerfile.{self.version}'
660
661 @property
662 def dockerfile_path(self) -> Path:
663 return self.docker_path / self.dockerfile_name
664
665 @property
666 def requirements_path(self) -> Path:
667 return self.docker_path / 'requirements'
668
669 @property
670 def requirements_file_name(self) -> str:
671 return f'requirements.{self.version}.txt'
672
673 @property
674 def dockerfile_venv_path(self) -> Path:
675 return self.dbt_path / 'build' / 'docker-venv'
676
677 @property
678 def requirements_txt_path(self) -> Path:
679 return self.requirements_path / self.requirements_file_name
680
681 def make_venv(self) -> DistFolderEnv:
682 env = DistFolderEnv(self.dbt_path)
683
684 env.create(self.dockerfile_venv_path)
685 return env
686
687 def get_frozen(self) -> str:
688 env = self.make_venv()
689 pip_path = self.dockerfile_venv_path / 'bin/pip'
690 cmd = [pip_path, 'freeze']
691 wheel_names = {
692 WheelInfo(wheel_path).package_name() for wheel_path in env.wheels
693 }
694 # remove the dependencies in dbt itself
695 return '\n'.join([
696 dep for dep in collect_output(cmd).split('\n')
697 if dep.split('==')[0] not in wheel_names
698 ])
699
700 def write_lockfile(self):
701 freeze = self.get_frozen()
702 path = self.requirements_txt_path
703 if path.exists():
704 raise ValueError(f'Found existing requirements file at {path}!')
705 os.makedirs(path.parent, exist_ok=True)
706 path.write_text(freeze)
707
708 def get_dockerfile_contents(self):
709 dist_path = (self.dbt_path / 'dist').relative_to(Path.cwd())
710 wheel_paths = ' '.join(
711 os.path.join('.', 'dist', p.name)
712 for p in _require_wheels(self.dbt_path)
713 )
714
715 requirements_path = self.requirements_txt_path.relative_to(Path.cwd())
716
717 return textwrap.dedent(
718 f'''\
719 FROM python:3.8.1-slim-buster
720
721 RUN apt-get update && \
722 apt-get dist-upgrade -y && \
723 apt-get install -y --no-install-recommends \
724 git software-properties-common make build-essential \
725 ca-certificates libpq-dev && \
726 apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
727
728 COPY {requirements_path} ./{self.requirements_file_name}
729 COPY {dist_path} ./dist
730 RUN pip install --upgrade pip setuptools
731 RUN pip install --requirement ./{self.requirements_file_name}
732 RUN pip install {wheel_paths}
733
734 RUN useradd -mU dbt_user
735
736 ENV PYTHONIOENCODING=utf-8
737 ENV LANG C.UTF-8
738
739 WORKDIR /usr/app
740 VOLUME /usr/app
741
742 USER dbt_user
743 CMD ['dbt', 'run']
744 '''
745 )
746
747 def write_dockerfile(self):
748 dockerfile = self.get_dockerfile_contents()
749 path = self.dockerfile_path
750 if path.exists():
751 raise ValueError(f'Found existing docker file at {path}!')
752 os.makedirs(path.parent, exist_ok=True)
753 path.write_text(dockerfile)
754
755 @property
756 def image_tag(self):
757 return f'dbt:{self.version}'
758
759 @property
760 def remote_tag(self):
761 return f'fishtownanalytics/{self.image_tag}'
762
763 def create_docker_image(self):
764 run_command(
765 [
766 'docker', 'build',
767 '-f', self.dockerfile_path,
768 '--tag', self.image_tag,
769 # '--no-cache',
770 self.dbt_path,
771 ],
772 cwd=self.dbt_path
773 )
774
775 def set_remote_tag(self):
776 # tag it
777 run_command(
778 ['docker', 'tag', self.image_tag, self.remote_tag],
779 cwd=self.dbt_path,
780 )
781
782 def commit_docker_folder(self):
783 # commit the contents of docker/
784 run_command(
785 ['git', 'add', 'docker'],
786 cwd=self.dbt_path
787 )
788 commit_msg = f'Add {self.image_tag} dockerfiles and requirements'
789 run_command(['git', 'commit', '-m', commit_msg], cwd=self.dbt_path)
790
791 def build(
792 self,
793 write_requirements: bool = True,
794 write_dockerfile: bool = True
795 ):
796 if write_requirements:
797 self.write_lockfile()
798 if write_dockerfile:
799 self.write_dockerfile()
800 self.commit_docker_folder()
801 self.create_docker_image()
802 self.set_remote_tag()
803
804 def push(self):
805 run_command(
806 ['docker', 'push', self.remote_tag]
807 )
808
809
810 def sanity_check():
811 if sys.version_info[:len(HOMEBREW_PYTHON)] != HOMEBREW_PYTHON:
812 python_version_str = '.'.join(str(i) for i in HOMEBREW_PYTHON)
813 print(f'This script must be run with python {python_version_str}')
814 sys.exit(1)
815
816 # avoid "what's a bdist_wheel" errors
817 try:
818 import wheel # type: ignore # noqa
819 except ImportError:
820 print(
821 'The wheel package is required to build. Please run:\n'
822 'pip install -r dev_requirements.txt'
823 )
824 sys.exit(1)
825
826
827 def upgrade_to(args: Arguments):
828 if args.set_version:
829 set_version(args.path, args.version, args.part)
830
831 builder = PypiBuilder(args.path)
832 if args.build_pypi:
833 builder.build()
834
835 if args.upload_pypi:
836 if args.test_upload:
837 builder.upload()
838 input(
839 f'Ensure https://test.pypi.org/project/dbt/{args.version}/ '
840 'exists and looks reasonable'
841 )
842 builder.upload(test=False)
843
844 if args.build_homebrew:
845 if args.upload_pypi:
846 print('waiting a minute for pypi before trying to pip install')
847 # if we uploaded to pypi, wait a minute before we bother trying to
848 # pip install
849 time.sleep(60)
850 HomebrewBuilder(
851 dbt_path=args.path,
852 version=args.version,
853 homebrew_path=args.homebrew_path,
854 set_default=args.homebrew_set_default,
855 ).build()
856
857 if args.build_docker:
858 builder = DockerBuilder(
859 dbt_path=args.path,
860 version=args.version,
861 )
862 builder.build(
863 write_requirements=args.write_requirements,
864 write_dockerfile=args.write_dockerfile,
865 )
866 if args.upload_docker:
867 builder.push()
868
869
870 def main():
871 sanity_check()
872 args = Arguments.parse()
873 upgrade_to(args)
874
875
876 if __name__ == '__main__':
877 main()
878
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/build-dbt.py b/scripts/build-dbt.py
--- a/scripts/build-dbt.py
+++ b/scripts/build-dbt.py
@@ -740,7 +740,7 @@
VOLUME /usr/app
USER dbt_user
- CMD ['dbt', 'run']
+ ENTRYPOINT dbt
'''
)
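Why the original line breaks, for context: `CMD ['dbt', 'run']` uses single quotes, which is not a valid JSON array, so Docker falls back to shell form and runs `/bin/sh -c "['dbt', 'run']"`; after quote removal the shell looks for a command named `[dbt,`, which is exactly the `/bin/sh: 1: [dbt,: not found` error in the report. An illustrative Dockerfile fragment (not part of the repository) showing the two working spellings:

```dockerfile
# Exec form: a double-quoted JSON array, executed without a shell.
CMD ["dbt", "run"]

# Shell form, as in the fix above; note that a shell-form ENTRYPOINT ignores
# CMD and any extra arguments passed to `docker run`.
ENTRYPOINT dbt
```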
| {"golden_diff": "diff --git a/scripts/build-dbt.py b/scripts/build-dbt.py\n--- a/scripts/build-dbt.py\n+++ b/scripts/build-dbt.py\n@@ -740,7 +740,7 @@\n VOLUME /usr/app\n \n USER dbt_user\n- CMD ['dbt', 'run']\n+ ENTRYPOINT dbt\n '''\n )\n", "issue": "[0.17.0rc1] Broken Docker image entrypoint\n### Describe the bug\r\n\r\nThe entrypoint for the image is invalid\r\n\r\n### Steps To Reproduce\r\n\r\n1. Pull the image: `docker pull fishtownanalytics/dbt:0.17.0rc1`\r\n2. Run the image: \r\n\r\n```\r\ndocker run -it fishtownanalytics/dbt:0.17.0rc1\r\n/bin/sh: 1: [dbt,: not found\r\n```\r\n\r\n### Expected behavior\r\n\r\nThe DBT help command is displayed\r\n\r\n\r\n### Additional context\r\n\r\nI plan on integrating DBT with our Airflow infrastructure as a container (we extend Airflow exclusively through containerized components)\r\n\n", "before_files": [{"content": "import json\nimport os\nimport re\nimport shutil\nimport subprocess\nimport sys\nimport tempfile\nimport textwrap\nimport time\nimport venv # type: ignore\nimport zipfile\n\nfrom typing import Dict\n\nfrom argparse import ArgumentParser\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom urllib.request import urlopen\n\nfrom typing import Optional, Iterator, Tuple, List\n\n\nHOMEBREW_PYTHON = (3, 8)\n\n\n# This should match the pattern in .bumpversion.cfg\nVERSION_PATTERN = re.compile(\n r'(?P<major>\\d+)\\.(?P<minor>\\d+)\\.(?P<patch>\\d+)'\n r'((?P<prerelease>[a-z]+)(?P<num>\\d+))?'\n)\n\n\nclass Version:\n def __init__(self, raw: str) -> None:\n self.raw = raw\n match = VERSION_PATTERN.match(self.raw)\n assert match is not None, f'Invalid version: {self.raw}'\n groups = match.groupdict()\n\n self.major: int = int(groups['major'])\n self.minor: int = int(groups['minor'])\n self.patch: int = int(groups['patch'])\n self.prerelease: Optional[str] = None\n self.num: Optional[int] = None\n\n if groups['num'] is not None:\n self.prerelease = groups['prerelease']\n self.num = int(groups['num'])\n\n def __str__(self):\n return self.raw\n\n def homebrew_class_name(self) -> str:\n name = f'DbtAT{self.major}{self.minor}{self.patch}'\n if self.prerelease is not None and self.num is not None:\n name = f'{name}{self.prerelease.title()}{self.num}'\n return name\n\n def homebrew_filename(self):\n version_str = f'{self.major}.{self.minor}.{self.patch}'\n if self.prerelease is not None and self.num is not None:\n version_str = f'{version_str}-{self.prerelease}{self.num}'\n return f'dbt@{version_str}.rb'\n\n\n@dataclass\nclass Arguments:\n version: Version\n part: str\n path: Path\n homebrew_path: Path\n homebrew_set_default: bool\n set_version: bool\n build_pypi: bool\n upload_pypi: bool\n test_upload: bool\n build_homebrew: bool\n build_docker: bool\n upload_docker: bool\n write_requirements: bool\n write_dockerfile: bool\n\n @classmethod\n def parse(cls) -> 'Arguments':\n parser = ArgumentParser(\n prog=\"Bump dbt's version, build packages\"\n )\n parser.add_argument(\n 'version',\n type=Version,\n help=\"The version to set\",\n )\n parser.add_argument(\n 'part',\n type=str,\n help=\"The part of the version to update\",\n )\n parser.add_argument(\n '--path',\n type=Path,\n help='The path to the dbt repository',\n default=Path.cwd(),\n )\n parser.add_argument(\n '--homebrew-path',\n type=Path,\n help='The path to the dbt homebrew install',\n default=(Path.cwd() / '../homebrew-dbt'),\n )\n parser.add_argument(\n '--homebrew-set-default',\n action='store_true',\n help='If set, make this homebrew version the default',\n )\n 
parser.add_argument(\n '--no-set-version',\n dest='set_version',\n action='store_false',\n help='Skip bumping the version',\n )\n parser.add_argument(\n '--no-build-pypi',\n dest='build_pypi',\n action='store_false',\n help='skip building pypi',\n )\n parser.add_argument(\n '--no-build-docker',\n dest='build_docker',\n action='store_false',\n help='skip building docker images',\n )\n parser.add_argument(\n '--no-upload-docker',\n dest='upload_docker',\n action='store_false',\n help='skip uploading docker images',\n )\n\n uploading = parser.add_mutually_exclusive_group()\n\n uploading.add_argument(\n '--upload-pypi',\n dest='force_upload_pypi',\n action='store_true',\n help='upload to pypi even if building is disabled'\n )\n\n uploading.add_argument(\n '--no-upload-pypi',\n dest='no_upload_pypi',\n action='store_true',\n help='skip uploading to pypi',\n )\n\n parser.add_argument(\n '--no-upload',\n dest='test_upload',\n action='store_false',\n help='Skip uploading to pypitest',\n )\n\n parser.add_argument(\n '--no-build-homebrew',\n dest='build_homebrew',\n action='store_false',\n help='Skip building homebrew packages',\n )\n parser.add_argument(\n '--no-write-requirements',\n dest='write_requirements',\n action='store_false',\n help='Skip writing the requirements file. It must exist.'\n )\n parser.add_argument(\n '--no-write-dockerfile',\n dest='write_dockerfile',\n action='store_false',\n help='Skip writing the dockerfile. It must exist.'\n )\n parsed = parser.parse_args()\n\n upload_pypi = parsed.build_pypi\n if parsed.force_upload_pypi:\n upload_pypi = True\n elif parsed.no_upload_pypi:\n upload_pypi = False\n\n return cls(\n version=parsed.version,\n part=parsed.part,\n path=parsed.path,\n homebrew_path=parsed.homebrew_path,\n homebrew_set_default=parsed.homebrew_set_default,\n set_version=parsed.set_version,\n build_pypi=parsed.build_pypi,\n upload_pypi=upload_pypi,\n test_upload=parsed.test_upload,\n build_homebrew=parsed.build_homebrew,\n build_docker=parsed.build_docker,\n upload_docker=parsed.upload_docker,\n write_requirements=parsed.write_requirements,\n write_dockerfile=parsed.write_dockerfile,\n )\n\n\ndef collect_output(cmd, cwd=None, stderr=subprocess.PIPE) -> str:\n try:\n result = subprocess.run(\n cmd, cwd=cwd, check=True, stdout=subprocess.PIPE, stderr=stderr\n )\n except subprocess.CalledProcessError as exc:\n print(f'Command {exc.cmd} failed')\n if exc.output:\n print(exc.output.decode('utf-8'))\n if exc.stderr:\n print(exc.stderr.decode('utf-8'), file=sys.stderr)\n raise\n return result.stdout.decode('utf-8')\n\n\ndef run_command(cmd, cwd=None) -> None:\n result = collect_output(cmd, stderr=subprocess.STDOUT, cwd=cwd)\n print(result)\n\n\ndef set_version(path: Path, version: Version, part: str):\n # bumpversion --commit --no-tag --new-version \"${version}\" \"${port}\"\n cmd = [\n 'bumpversion', '--commit', '--no-tag', '--new-version',\n str(version), part\n ]\n print(f'bumping version to {version}')\n run_command(cmd, cwd=path)\n print(f'bumped version to {version}')\n\n\nclass PypiBuilder:\n _SUBPACKAGES = (\n 'core',\n 'plugins/postgres',\n 'plugins/redshift',\n 'plugins/bigquery',\n 'plugins/snowflake',\n )\n\n def __init__(self, dbt_path: Path):\n self.dbt_path = dbt_path\n\n @staticmethod\n def _dist_for(path: Path, make=False) -> Path:\n dist_path = path / 'dist'\n if dist_path.exists():\n shutil.rmtree(dist_path)\n if make:\n os.makedirs(dist_path)\n build_path = path / 'build'\n if build_path.exists():\n shutil.rmtree(build_path)\n return dist_path\n\n 
@staticmethod\n def _build_pypi_package(path: Path):\n print(f'building package in {path}')\n cmd = ['python', 'setup.py', 'sdist', 'bdist_wheel']\n run_command(cmd, cwd=path)\n print(f'finished building package in {path}')\n\n @staticmethod\n def _all_packages_in(path: Path) -> Iterator[Path]:\n path = path / 'dist'\n for pattern in ('*.tar.gz', '*.whl'):\n yield from path.glob(pattern)\n\n def _build_subpackage(self, name: str) -> Iterator[Path]:\n subpath = self.dbt_path / name\n self._dist_for(subpath)\n self._build_pypi_package(subpath)\n return self._all_packages_in(subpath)\n\n def build(self):\n print('building pypi packages')\n dist_path = self._dist_for(self.dbt_path)\n sub_pkgs: List[Path] = []\n for path in self._SUBPACKAGES:\n sub_pkgs.extend(self._build_subpackage(path))\n\n # now build the main package\n self._build_pypi_package(self.dbt_path)\n # now copy everything from the subpackages in\n for package in sub_pkgs:\n shutil.copy(str(package), dist_path)\n\n print('built pypi packages')\n\n def upload(self, *, test=True):\n cmd = ['twine', 'check']\n cmd.extend(str(p) for p in self._all_packages_in(self.dbt_path))\n run_command(cmd)\n cmd = ['twine', 'upload']\n if test:\n cmd.extend(['--repository', 'pypitest'])\n cmd.extend(str(p) for p in self._all_packages_in(self.dbt_path))\n print('uploading packages: {}'.format(' '.join(cmd)))\n run_command(cmd)\n print('uploaded packages')\n\n\nclass PipInstaller(venv.EnvBuilder):\n def __init__(self, packages: List[str]) -> None:\n super().__init__(with_pip=True)\n self.packages = packages\n\n def post_setup(self, context):\n # we can't run from the dbt directory or this gets all weird, so\n # install from an empty temp directory and then remove it.\n tmp = tempfile.mkdtemp()\n cmd = [context.env_exe, '-m', 'pip', 'install', '--upgrade']\n cmd.extend(self.packages)\n print(f'installing {self.packages}')\n try:\n run_command(cmd, cwd=tmp)\n finally:\n os.rmdir(tmp)\n print(f'finished installing {self.packages}')\n\n def create(self, venv_path):\n os.makedirs(venv_path.parent, exist_ok=True)\n if venv_path.exists():\n shutil.rmtree(venv_path)\n return super().create(venv_path)\n\n\ndef _require_wheels(dbt_path: Path) -> List[Path]:\n dist_path = dbt_path / 'dist'\n wheels = list(dist_path.glob('*.whl'))\n if not wheels:\n raise ValueError(\n f'No wheels found in {dist_path} - run scripts/build-wheels.sh'\n )\n return wheels\n\n\nclass DistFolderEnv(PipInstaller):\n def __init__(self, dbt_path: Path) -> None:\n self.wheels = _require_wheels(dbt_path)\n super().__init__(packages=self.wheels)\n\n\nclass PoetVirtualenv(PipInstaller):\n def __init__(self, dbt_version: Version) -> None:\n super().__init__([f'dbt=={dbt_version}', 'homebrew-pypi-poet'])\n\n\n@dataclass\nclass HomebrewTemplate:\n url_data: str\n hash_data: str\n dependencies: str\n\n\ndef _make_venv_at(root: Path, name: str, builder: venv.EnvBuilder):\n venv_path = root / name\n os.makedirs(root, exist_ok=True)\n if venv_path.exists():\n shutil.rmtree(venv_path)\n\n builder.create(venv_path)\n return venv_path\n\n\nclass HomebrewBuilder:\n def __init__(\n self,\n dbt_path: Path,\n version: Version,\n homebrew_path: Path,\n set_default: bool,\n ) -> None:\n self.dbt_path = dbt_path\n self.version = version\n self.homebrew_path = homebrew_path\n self.set_default = set_default\n self._template: Optional[HomebrewTemplate] = None\n\n def make_venv(self) -> PoetVirtualenv:\n env = PoetVirtualenv(self.version)\n max_attempts = 10\n for attempt in range(1, max_attempts+1):\n # after 
uploading to pypi, it can take a few minutes for installing\n # to work. Retry a few times...\n try:\n env.create(self.homebrew_venv_path)\n return\n except subprocess.CalledProcessError:\n if attempt == max_attempts:\n raise\n else:\n print(\n f'installation failed - waiting 60s for pypi to see '\n f'the new version (attempt {attempt}/{max_attempts})'\n )\n time.sleep(60)\n\n return env\n\n @property\n def versioned_formula_path(self) -> Path:\n return (\n self.homebrew_path / 'Formula' / self.version.homebrew_filename()\n )\n\n @property\n def default_formula_path(self) -> Path:\n return (\n self.homebrew_path / 'Formula/dbt.rb'\n )\n\n @property\n def homebrew_venv_path(self) -> Path:\n return self.dbt_path / 'build' / 'homebrew-venv'\n\n @staticmethod\n def _dbt_homebrew_formula_fmt() -> str:\n return textwrap.dedent('''\\\n class {formula_name} < Formula\n include Language::Python::Virtualenv\n\n desc \"Data build tool\"\n homepage \"https://github.com/fishtown-analytics/dbt\"\n url \"{url_data}\"\n sha256 \"{hash_data}\"\n revision 1\n\n bottle do\n root_url \"http://bottles.getdbt.com\"\n # bottle hashes + versions go here\n end\n\n depends_on \"[email protected]\"\n depends_on \"postgresql\"\n depends_on \"python\"\n\n {dependencies}\n {trailer}\n end\n ''')\n\n @staticmethod\n def _dbt_homebrew_trailer() -> str:\n dedented = textwrap.dedent('''\\\n def install\n venv = virtualenv_create(libexec, \"python3\")\n\n res = resources.map(&:name).to_set\n\n res.each do |r|\n venv.pip_install resource(r)\n end\n\n venv.pip_install_and_link buildpath\n\n bin.install_symlink \"#{libexec}/bin/dbt\" => \"dbt\"\n end\n\n test do\n (testpath/\"dbt_project.yml\").write(\n \"{name: 'test', version: '0.0.1', profile: 'default'}\",\n )\n (testpath/\".dbt/profiles.yml\").write(\n \"{default: {outputs: {default: {type: 'postgres', threads: 1,\n host: 'localhost', port: 5432, user: 'root', pass: 'password',\n dbname: 'test', schema: 'test'}}, target: 'default'}}\",\n )\n (testpath/\"models/test.sql\").write(\"select * from test\")\n system \"#{bin}/dbt\", \"test\"\n end''')\n return textwrap.indent(dedented, ' ')\n\n def get_formula_data(\n self, versioned: bool = True\n ) -> str:\n fmt = self._dbt_homebrew_formula_fmt()\n trailer = self._dbt_homebrew_trailer()\n if versioned:\n formula_name = self.version.homebrew_class_name()\n else:\n formula_name = 'Dbt'\n\n return fmt.format(\n formula_name=formula_name,\n version=self.version,\n url_data=self.template.url_data,\n hash_data=self.template.hash_data,\n dependencies=self.template.dependencies,\n trailer=trailer,\n )\n\n @property\n def template(self) -> HomebrewTemplate:\n if self._template is None:\n self.make_venv()\n print('done setting up virtualenv')\n poet = self.homebrew_venv_path / 'bin/poet'\n\n # get the dbt package info\n url_data, hash_data = self._get_pypi_dbt_info()\n\n dependencies = self._get_recursive_dependencies(poet)\n template = HomebrewTemplate(\n url_data=url_data,\n hash_data=hash_data,\n dependencies=dependencies,\n )\n self._template = template\n else:\n template = self._template\n return template\n\n def _get_pypi_dbt_info(self) -> Tuple[str, str]:\n fp = urlopen(f'https://pypi.org/pypi/dbt/{self.version}/json')\n try:\n data = json.load(fp)\n finally:\n fp.close()\n assert 'urls' in data\n for pkginfo in data['urls']:\n assert 'packagetype' in pkginfo\n if pkginfo['packagetype'] == 'sdist':\n assert 'url' in pkginfo\n assert 'digests' in pkginfo\n assert 'sha256' in pkginfo['digests']\n url = pkginfo['url']\n digest = 
pkginfo['digests']['sha256']\n return url, digest\n raise ValueError(f'Never got a valid sdist for dbt=={self.version}')\n\n def _get_recursive_dependencies(self, poet_exe: Path) -> str:\n cmd = [str(poet_exe), '--resources', 'dbt']\n raw = collect_output(cmd).split('\\n')\n return '\\n'.join(self._remove_dbt_resource(raw))\n\n def _remove_dbt_resource(self, lines: List[str]) -> Iterator[str]:\n # TODO: fork poet or extract the good bits to avoid this\n line_iter = iter(lines)\n # don't do a double-newline or \"brew audit\" gets mad\n for line in line_iter:\n # skip the contents of the \"dbt\" resource block.\n if line.strip() == 'resource \"dbt\" do':\n for skip in line_iter:\n if skip.strip() == 'end':\n # skip the newline after 'end'\n next(line_iter)\n break\n else:\n yield line\n\n def create_versioned_formula_file(self):\n formula_contents = self.get_formula_data(versioned=True)\n if self.versioned_formula_path.exists():\n print('Homebrew formula path already exists, overwriting')\n self.versioned_formula_path.write_text(formula_contents)\n\n def commit_versioned_formula(self):\n # add a commit for the new formula\n run_command(\n ['git', 'add', self.versioned_formula_path],\n cwd=self.homebrew_path\n )\n run_command(\n ['git', 'commit', '-m', f'add dbt@{self.version}'],\n cwd=self.homebrew_path\n )\n\n def commit_default_formula(self):\n run_command(\n ['git', 'add', self.default_formula_path],\n cwd=self.homebrew_path\n )\n run_command(\n ['git', 'commit', '-m', f'upgrade dbt to {self.version}'],\n cwd=self.homebrew_path\n )\n\n @staticmethod\n def run_tests(formula_path: Path, audit: bool = True):\n path = os.path.normpath(formula_path)\n run_command(['brew', 'uninstall', '--force', path])\n versions = [\n l.strip() for l in\n collect_output(['brew', 'list']).split('\\n')\n if l.strip().startswith('dbt@') or l.strip() == 'dbt'\n ]\n if versions:\n run_command(['brew', 'unlink'] + versions)\n run_command(['brew', 'install', path])\n run_command(['brew', 'test', path])\n if audit:\n run_command(['brew', 'audit', '--strict', path])\n\n def create_default_package(self):\n os.remove(self.default_formula_path)\n formula_contents = self.get_formula_data(versioned=False)\n self.default_formula_path.write_text(formula_contents)\n\n def build(self):\n self.create_versioned_formula_file()\n # self.run_tests(formula_path=self.versioned_formula_path)\n self.commit_versioned_formula()\n\n if self.set_default:\n self.create_default_package()\n # self.run_tests(formula_path=self.default_formula_path, audit=False)\n self.commit_default_formula()\n\n\nclass WheelInfo:\n def __init__(self, path):\n self.path = path\n\n @staticmethod\n def _extract_distinfo_path(wfile: zipfile.ZipFile) -> zipfile.Path:\n zpath = zipfile.Path(root=wfile)\n for path in zpath.iterdir():\n if path.name.endswith('.dist-info'):\n return path\n raise ValueError('Wheel with no dist-info?')\n\n def get_metadata(self) -> Dict[str, str]:\n with zipfile.ZipFile(self.path) as wf:\n distinfo = self._extract_distinfo_path(wf)\n metadata = distinfo / 'METADATA'\n metadata_dict: Dict[str, str] = {}\n for line in metadata.read_text().split('\\n'):\n parts = line.split(': ', 1)\n if len(parts) == 2:\n metadata_dict[parts[0]] = parts[1]\n return metadata_dict\n\n def package_name(self) -> str:\n metadata = self.get_metadata()\n if 'Name' not in metadata:\n raise ValueError('Wheel with no name?')\n return metadata['Name']\n\n\nclass DockerBuilder:\n \"\"\"The docker builder requires the existence of a dbt package\"\"\"\n def 
__init__(self, dbt_path: Path, version: Version) -> None:\n self.dbt_path = dbt_path\n self.version = version\n\n @property\n def docker_path(self) -> Path:\n return self.dbt_path / 'docker'\n\n @property\n def dockerfile_name(self) -> str:\n return f'Dockerfile.{self.version}'\n\n @property\n def dockerfile_path(self) -> Path:\n return self.docker_path / self.dockerfile_name\n\n @property\n def requirements_path(self) -> Path:\n return self.docker_path / 'requirements'\n\n @property\n def requirements_file_name(self) -> str:\n return f'requirements.{self.version}.txt'\n\n @property\n def dockerfile_venv_path(self) -> Path:\n return self.dbt_path / 'build' / 'docker-venv'\n\n @property\n def requirements_txt_path(self) -> Path:\n return self.requirements_path / self.requirements_file_name\n\n def make_venv(self) -> DistFolderEnv:\n env = DistFolderEnv(self.dbt_path)\n\n env.create(self.dockerfile_venv_path)\n return env\n\n def get_frozen(self) -> str:\n env = self.make_venv()\n pip_path = self.dockerfile_venv_path / 'bin/pip'\n cmd = [pip_path, 'freeze']\n wheel_names = {\n WheelInfo(wheel_path).package_name() for wheel_path in env.wheels\n }\n # remove the dependencies in dbt itself\n return '\\n'.join([\n dep for dep in collect_output(cmd).split('\\n')\n if dep.split('==')[0] not in wheel_names\n ])\n\n def write_lockfile(self):\n freeze = self.get_frozen()\n path = self.requirements_txt_path\n if path.exists():\n raise ValueError(f'Found existing requirements file at {path}!')\n os.makedirs(path.parent, exist_ok=True)\n path.write_text(freeze)\n\n def get_dockerfile_contents(self):\n dist_path = (self.dbt_path / 'dist').relative_to(Path.cwd())\n wheel_paths = ' '.join(\n os.path.join('.', 'dist', p.name)\n for p in _require_wheels(self.dbt_path)\n )\n\n requirements_path = self.requirements_txt_path.relative_to(Path.cwd())\n\n return textwrap.dedent(\n f'''\\\n FROM python:3.8.1-slim-buster\n\n RUN apt-get update && \\\n apt-get dist-upgrade -y && \\\n apt-get install -y --no-install-recommends \\\n git software-properties-common make build-essential \\\n ca-certificates libpq-dev && \\\n apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*\n\n COPY {requirements_path} ./{self.requirements_file_name}\n COPY {dist_path} ./dist\n RUN pip install --upgrade pip setuptools\n RUN pip install --requirement ./{self.requirements_file_name}\n RUN pip install {wheel_paths}\n\n RUN useradd -mU dbt_user\n\n ENV PYTHONIOENCODING=utf-8\n ENV LANG C.UTF-8\n\n WORKDIR /usr/app\n VOLUME /usr/app\n\n USER dbt_user\n CMD ['dbt', 'run']\n '''\n )\n\n def write_dockerfile(self):\n dockerfile = self.get_dockerfile_contents()\n path = self.dockerfile_path\n if path.exists():\n raise ValueError(f'Found existing docker file at {path}!')\n os.makedirs(path.parent, exist_ok=True)\n path.write_text(dockerfile)\n\n @property\n def image_tag(self):\n return f'dbt:{self.version}'\n\n @property\n def remote_tag(self):\n return f'fishtownanalytics/{self.image_tag}'\n\n def create_docker_image(self):\n run_command(\n [\n 'docker', 'build',\n '-f', self.dockerfile_path,\n '--tag', self.image_tag,\n # '--no-cache',\n self.dbt_path,\n ],\n cwd=self.dbt_path\n )\n\n def set_remote_tag(self):\n # tag it\n run_command(\n ['docker', 'tag', self.image_tag, self.remote_tag],\n cwd=self.dbt_path,\n )\n\n def commit_docker_folder(self):\n # commit the contents of docker/\n run_command(\n ['git', 'add', 'docker'],\n cwd=self.dbt_path\n )\n commit_msg = f'Add {self.image_tag} dockerfiles and requirements'\n 
run_command(['git', 'commit', '-m', commit_msg], cwd=self.dbt_path)\n\n def build(\n self,\n write_requirements: bool = True,\n write_dockerfile: bool = True\n ):\n if write_requirements:\n self.write_lockfile()\n if write_dockerfile:\n self.write_dockerfile()\n self.commit_docker_folder()\n self.create_docker_image()\n self.set_remote_tag()\n\n def push(self):\n run_command(\n ['docker', 'push', self.remote_tag]\n )\n\n\ndef sanity_check():\n if sys.version_info[:len(HOMEBREW_PYTHON)] != HOMEBREW_PYTHON:\n python_version_str = '.'.join(str(i) for i in HOMEBREW_PYTHON)\n print(f'This script must be run with python {python_version_str}')\n sys.exit(1)\n\n # avoid \"what's a bdist_wheel\" errors\n try:\n import wheel # type: ignore # noqa\n except ImportError:\n print(\n 'The wheel package is required to build. Please run:\\n'\n 'pip install -r dev_requirements.txt'\n )\n sys.exit(1)\n\n\ndef upgrade_to(args: Arguments):\n if args.set_version:\n set_version(args.path, args.version, args.part)\n\n builder = PypiBuilder(args.path)\n if args.build_pypi:\n builder.build()\n\n if args.upload_pypi:\n if args.test_upload:\n builder.upload()\n input(\n f'Ensure https://test.pypi.org/project/dbt/{args.version}/ '\n 'exists and looks reasonable'\n )\n builder.upload(test=False)\n\n if args.build_homebrew:\n if args.upload_pypi:\n print('waiting a minute for pypi before trying to pip install')\n # if we uploaded to pypi, wait a minute before we bother trying to\n # pip install\n time.sleep(60)\n HomebrewBuilder(\n dbt_path=args.path,\n version=args.version,\n homebrew_path=args.homebrew_path,\n set_default=args.homebrew_set_default,\n ).build()\n\n if args.build_docker:\n builder = DockerBuilder(\n dbt_path=args.path,\n version=args.version,\n )\n builder.build(\n write_requirements=args.write_requirements,\n write_dockerfile=args.write_dockerfile,\n )\n if args.upload_docker:\n builder.push()\n\n\ndef main():\n sanity_check()\n args = Arguments.parse()\n upgrade_to(args)\n\n\nif __name__ == '__main__':\n main()\n", "path": "scripts/build-dbt.py"}], "after_files": [{"content": "import json\nimport os\nimport re\nimport shutil\nimport subprocess\nimport sys\nimport tempfile\nimport textwrap\nimport time\nimport venv # type: ignore\nimport zipfile\n\nfrom typing import Dict\n\nfrom argparse import ArgumentParser\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom urllib.request import urlopen\n\nfrom typing import Optional, Iterator, Tuple, List\n\n\nHOMEBREW_PYTHON = (3, 8)\n\n\n# This should match the pattern in .bumpversion.cfg\nVERSION_PATTERN = re.compile(\n r'(?P<major>\\d+)\\.(?P<minor>\\d+)\\.(?P<patch>\\d+)'\n r'((?P<prerelease>[a-z]+)(?P<num>\\d+))?'\n)\n\n\nclass Version:\n def __init__(self, raw: str) -> None:\n self.raw = raw\n match = VERSION_PATTERN.match(self.raw)\n assert match is not None, f'Invalid version: {self.raw}'\n groups = match.groupdict()\n\n self.major: int = int(groups['major'])\n self.minor: int = int(groups['minor'])\n self.patch: int = int(groups['patch'])\n self.prerelease: Optional[str] = None\n self.num: Optional[int] = None\n\n if groups['num'] is not None:\n self.prerelease = groups['prerelease']\n self.num = int(groups['num'])\n\n def __str__(self):\n return self.raw\n\n def homebrew_class_name(self) -> str:\n name = f'DbtAT{self.major}{self.minor}{self.patch}'\n if self.prerelease is not None and self.num is not None:\n name = f'{name}{self.prerelease.title()}{self.num}'\n return name\n\n def homebrew_filename(self):\n version_str = 
f'{self.major}.{self.minor}.{self.patch}'\n if self.prerelease is not None and self.num is not None:\n version_str = f'{version_str}-{self.prerelease}{self.num}'\n return f'dbt@{version_str}.rb'\n\n\n@dataclass\nclass Arguments:\n version: Version\n part: str\n path: Path\n homebrew_path: Path\n homebrew_set_default: bool\n set_version: bool\n build_pypi: bool\n upload_pypi: bool\n test_upload: bool\n build_homebrew: bool\n build_docker: bool\n upload_docker: bool\n write_requirements: bool\n write_dockerfile: bool\n\n @classmethod\n def parse(cls) -> 'Arguments':\n parser = ArgumentParser(\n prog=\"Bump dbt's version, build packages\"\n )\n parser.add_argument(\n 'version',\n type=Version,\n help=\"The version to set\",\n )\n parser.add_argument(\n 'part',\n type=str,\n help=\"The part of the version to update\",\n )\n parser.add_argument(\n '--path',\n type=Path,\n help='The path to the dbt repository',\n default=Path.cwd(),\n )\n parser.add_argument(\n '--homebrew-path',\n type=Path,\n help='The path to the dbt homebrew install',\n default=(Path.cwd() / '../homebrew-dbt'),\n )\n parser.add_argument(\n '--homebrew-set-default',\n action='store_true',\n help='If set, make this homebrew version the default',\n )\n parser.add_argument(\n '--no-set-version',\n dest='set_version',\n action='store_false',\n help='Skip bumping the version',\n )\n parser.add_argument(\n '--no-build-pypi',\n dest='build_pypi',\n action='store_false',\n help='skip building pypi',\n )\n parser.add_argument(\n '--no-build-docker',\n dest='build_docker',\n action='store_false',\n help='skip building docker images',\n )\n parser.add_argument(\n '--no-upload-docker',\n dest='upload_docker',\n action='store_false',\n help='skip uploading docker images',\n )\n\n uploading = parser.add_mutually_exclusive_group()\n\n uploading.add_argument(\n '--upload-pypi',\n dest='force_upload_pypi',\n action='store_true',\n help='upload to pypi even if building is disabled'\n )\n\n uploading.add_argument(\n '--no-upload-pypi',\n dest='no_upload_pypi',\n action='store_true',\n help='skip uploading to pypi',\n )\n\n parser.add_argument(\n '--no-upload',\n dest='test_upload',\n action='store_false',\n help='Skip uploading to pypitest',\n )\n\n parser.add_argument(\n '--no-build-homebrew',\n dest='build_homebrew',\n action='store_false',\n help='Skip building homebrew packages',\n )\n parser.add_argument(\n '--no-write-requirements',\n dest='write_requirements',\n action='store_false',\n help='Skip writing the requirements file. It must exist.'\n )\n parser.add_argument(\n '--no-write-dockerfile',\n dest='write_dockerfile',\n action='store_false',\n help='Skip writing the dockerfile. 
It must exist.'\n )\n parsed = parser.parse_args()\n\n upload_pypi = parsed.build_pypi\n if parsed.force_upload_pypi:\n upload_pypi = True\n elif parsed.no_upload_pypi:\n upload_pypi = False\n\n return cls(\n version=parsed.version,\n part=parsed.part,\n path=parsed.path,\n homebrew_path=parsed.homebrew_path,\n homebrew_set_default=parsed.homebrew_set_default,\n set_version=parsed.set_version,\n build_pypi=parsed.build_pypi,\n upload_pypi=upload_pypi,\n test_upload=parsed.test_upload,\n build_homebrew=parsed.build_homebrew,\n build_docker=parsed.build_docker,\n upload_docker=parsed.upload_docker,\n write_requirements=parsed.write_requirements,\n write_dockerfile=parsed.write_dockerfile,\n )\n\n\ndef collect_output(cmd, cwd=None, stderr=subprocess.PIPE) -> str:\n try:\n result = subprocess.run(\n cmd, cwd=cwd, check=True, stdout=subprocess.PIPE, stderr=stderr\n )\n except subprocess.CalledProcessError as exc:\n print(f'Command {exc.cmd} failed')\n if exc.output:\n print(exc.output.decode('utf-8'))\n if exc.stderr:\n print(exc.stderr.decode('utf-8'), file=sys.stderr)\n raise\n return result.stdout.decode('utf-8')\n\n\ndef run_command(cmd, cwd=None) -> None:\n result = collect_output(cmd, stderr=subprocess.STDOUT, cwd=cwd)\n print(result)\n\n\ndef set_version(path: Path, version: Version, part: str):\n # bumpversion --commit --no-tag --new-version \"${version}\" \"${port}\"\n cmd = [\n 'bumpversion', '--commit', '--no-tag', '--new-version',\n str(version), part\n ]\n print(f'bumping version to {version}')\n run_command(cmd, cwd=path)\n print(f'bumped version to {version}')\n\n\nclass PypiBuilder:\n _SUBPACKAGES = (\n 'core',\n 'plugins/postgres',\n 'plugins/redshift',\n 'plugins/bigquery',\n 'plugins/snowflake',\n )\n\n def __init__(self, dbt_path: Path):\n self.dbt_path = dbt_path\n\n @staticmethod\n def _dist_for(path: Path, make=False) -> Path:\n dist_path = path / 'dist'\n if dist_path.exists():\n shutil.rmtree(dist_path)\n if make:\n os.makedirs(dist_path)\n build_path = path / 'build'\n if build_path.exists():\n shutil.rmtree(build_path)\n return dist_path\n\n @staticmethod\n def _build_pypi_package(path: Path):\n print(f'building package in {path}')\n cmd = ['python', 'setup.py', 'sdist', 'bdist_wheel']\n run_command(cmd, cwd=path)\n print(f'finished building package in {path}')\n\n @staticmethod\n def _all_packages_in(path: Path) -> Iterator[Path]:\n path = path / 'dist'\n for pattern in ('*.tar.gz', '*.whl'):\n yield from path.glob(pattern)\n\n def _build_subpackage(self, name: str) -> Iterator[Path]:\n subpath = self.dbt_path / name\n self._dist_for(subpath)\n self._build_pypi_package(subpath)\n return self._all_packages_in(subpath)\n\n def build(self):\n print('building pypi packages')\n dist_path = self._dist_for(self.dbt_path)\n sub_pkgs: List[Path] = []\n for path in self._SUBPACKAGES:\n sub_pkgs.extend(self._build_subpackage(path))\n\n # now build the main package\n self._build_pypi_package(self.dbt_path)\n # now copy everything from the subpackages in\n for package in sub_pkgs:\n shutil.copy(str(package), dist_path)\n\n print('built pypi packages')\n\n def upload(self, *, test=True):\n cmd = ['twine', 'check']\n cmd.extend(str(p) for p in self._all_packages_in(self.dbt_path))\n run_command(cmd)\n cmd = ['twine', 'upload']\n if test:\n cmd.extend(['--repository', 'pypitest'])\n cmd.extend(str(p) for p in self._all_packages_in(self.dbt_path))\n print('uploading packages: {}'.format(' '.join(cmd)))\n run_command(cmd)\n print('uploaded packages')\n\n\nclass 
PipInstaller(venv.EnvBuilder):\n def __init__(self, packages: List[str]) -> None:\n super().__init__(with_pip=True)\n self.packages = packages\n\n def post_setup(self, context):\n # we can't run from the dbt directory or this gets all weird, so\n # install from an empty temp directory and then remove it.\n tmp = tempfile.mkdtemp()\n cmd = [context.env_exe, '-m', 'pip', 'install', '--upgrade']\n cmd.extend(self.packages)\n print(f'installing {self.packages}')\n try:\n run_command(cmd, cwd=tmp)\n finally:\n os.rmdir(tmp)\n print(f'finished installing {self.packages}')\n\n def create(self, venv_path):\n os.makedirs(venv_path.parent, exist_ok=True)\n if venv_path.exists():\n shutil.rmtree(venv_path)\n return super().create(venv_path)\n\n\ndef _require_wheels(dbt_path: Path) -> List[Path]:\n dist_path = dbt_path / 'dist'\n wheels = list(dist_path.glob('*.whl'))\n if not wheels:\n raise ValueError(\n f'No wheels found in {dist_path} - run scripts/build-wheels.sh'\n )\n return wheels\n\n\nclass DistFolderEnv(PipInstaller):\n def __init__(self, dbt_path: Path) -> None:\n self.wheels = _require_wheels(dbt_path)\n super().__init__(packages=self.wheels)\n\n\nclass PoetVirtualenv(PipInstaller):\n def __init__(self, dbt_version: Version) -> None:\n super().__init__([f'dbt=={dbt_version}', 'homebrew-pypi-poet'])\n\n\n@dataclass\nclass HomebrewTemplate:\n url_data: str\n hash_data: str\n dependencies: str\n\n\ndef _make_venv_at(root: Path, name: str, builder: venv.EnvBuilder):\n venv_path = root / name\n os.makedirs(root, exist_ok=True)\n if venv_path.exists():\n shutil.rmtree(venv_path)\n\n builder.create(venv_path)\n return venv_path\n\n\nclass HomebrewBuilder:\n def __init__(\n self,\n dbt_path: Path,\n version: Version,\n homebrew_path: Path,\n set_default: bool,\n ) -> None:\n self.dbt_path = dbt_path\n self.version = version\n self.homebrew_path = homebrew_path\n self.set_default = set_default\n self._template: Optional[HomebrewTemplate] = None\n\n def make_venv(self) -> PoetVirtualenv:\n env = PoetVirtualenv(self.version)\n max_attempts = 10\n for attempt in range(1, max_attempts+1):\n # after uploading to pypi, it can take a few minutes for installing\n # to work. 
Retry a few times...\n try:\n env.create(self.homebrew_venv_path)\n return\n except subprocess.CalledProcessError:\n if attempt == max_attempts:\n raise\n else:\n print(\n f'installation failed - waiting 60s for pypi to see '\n f'the new version (attempt {attempt}/{max_attempts})'\n )\n time.sleep(60)\n\n return env\n\n @property\n def versioned_formula_path(self) -> Path:\n return (\n self.homebrew_path / 'Formula' / self.version.homebrew_filename()\n )\n\n @property\n def default_formula_path(self) -> Path:\n return (\n self.homebrew_path / 'Formula/dbt.rb'\n )\n\n @property\n def homebrew_venv_path(self) -> Path:\n return self.dbt_path / 'build' / 'homebrew-venv'\n\n @staticmethod\n def _dbt_homebrew_formula_fmt() -> str:\n return textwrap.dedent('''\\\n class {formula_name} < Formula\n include Language::Python::Virtualenv\n\n desc \"Data build tool\"\n homepage \"https://github.com/fishtown-analytics/dbt\"\n url \"{url_data}\"\n sha256 \"{hash_data}\"\n revision 1\n\n bottle do\n root_url \"http://bottles.getdbt.com\"\n # bottle hashes + versions go here\n end\n\n depends_on \"[email protected]\"\n depends_on \"postgresql\"\n depends_on \"python\"\n\n {dependencies}\n {trailer}\n end\n ''')\n\n @staticmethod\n def _dbt_homebrew_trailer() -> str:\n dedented = textwrap.dedent('''\\\n def install\n venv = virtualenv_create(libexec, \"python3\")\n\n res = resources.map(&:name).to_set\n\n res.each do |r|\n venv.pip_install resource(r)\n end\n\n venv.pip_install_and_link buildpath\n\n bin.install_symlink \"#{libexec}/bin/dbt\" => \"dbt\"\n end\n\n test do\n (testpath/\"dbt_project.yml\").write(\n \"{name: 'test', version: '0.0.1', profile: 'default'}\",\n )\n (testpath/\".dbt/profiles.yml\").write(\n \"{default: {outputs: {default: {type: 'postgres', threads: 1,\n host: 'localhost', port: 5432, user: 'root', pass: 'password',\n dbname: 'test', schema: 'test'}}, target: 'default'}}\",\n )\n (testpath/\"models/test.sql\").write(\"select * from test\")\n system \"#{bin}/dbt\", \"test\"\n end''')\n return textwrap.indent(dedented, ' ')\n\n def get_formula_data(\n self, versioned: bool = True\n ) -> str:\n fmt = self._dbt_homebrew_formula_fmt()\n trailer = self._dbt_homebrew_trailer()\n if versioned:\n formula_name = self.version.homebrew_class_name()\n else:\n formula_name = 'Dbt'\n\n return fmt.format(\n formula_name=formula_name,\n version=self.version,\n url_data=self.template.url_data,\n hash_data=self.template.hash_data,\n dependencies=self.template.dependencies,\n trailer=trailer,\n )\n\n @property\n def template(self) -> HomebrewTemplate:\n if self._template is None:\n self.make_venv()\n print('done setting up virtualenv')\n poet = self.homebrew_venv_path / 'bin/poet'\n\n # get the dbt package info\n url_data, hash_data = self._get_pypi_dbt_info()\n\n dependencies = self._get_recursive_dependencies(poet)\n template = HomebrewTemplate(\n url_data=url_data,\n hash_data=hash_data,\n dependencies=dependencies,\n )\n self._template = template\n else:\n template = self._template\n return template\n\n def _get_pypi_dbt_info(self) -> Tuple[str, str]:\n fp = urlopen(f'https://pypi.org/pypi/dbt/{self.version}/json')\n try:\n data = json.load(fp)\n finally:\n fp.close()\n assert 'urls' in data\n for pkginfo in data['urls']:\n assert 'packagetype' in pkginfo\n if pkginfo['packagetype'] == 'sdist':\n assert 'url' in pkginfo\n assert 'digests' in pkginfo\n assert 'sha256' in pkginfo['digests']\n url = pkginfo['url']\n digest = pkginfo['digests']['sha256']\n return url, digest\n raise ValueError(f'Never 
got a valid sdist for dbt=={self.version}')\n\n def _get_recursive_dependencies(self, poet_exe: Path) -> str:\n cmd = [str(poet_exe), '--resources', 'dbt']\n raw = collect_output(cmd).split('\\n')\n return '\\n'.join(self._remove_dbt_resource(raw))\n\n def _remove_dbt_resource(self, lines: List[str]) -> Iterator[str]:\n # TODO: fork poet or extract the good bits to avoid this\n line_iter = iter(lines)\n # don't do a double-newline or \"brew audit\" gets mad\n for line in line_iter:\n # skip the contents of the \"dbt\" resource block.\n if line.strip() == 'resource \"dbt\" do':\n for skip in line_iter:\n if skip.strip() == 'end':\n # skip the newline after 'end'\n next(line_iter)\n break\n else:\n yield line\n\n def create_versioned_formula_file(self):\n formula_contents = self.get_formula_data(versioned=True)\n if self.versioned_formula_path.exists():\n print('Homebrew formula path already exists, overwriting')\n self.versioned_formula_path.write_text(formula_contents)\n\n def commit_versioned_formula(self):\n # add a commit for the new formula\n run_command(\n ['git', 'add', self.versioned_formula_path],\n cwd=self.homebrew_path\n )\n run_command(\n ['git', 'commit', '-m', f'add dbt@{self.version}'],\n cwd=self.homebrew_path\n )\n\n def commit_default_formula(self):\n run_command(\n ['git', 'add', self.default_formula_path],\n cwd=self.homebrew_path\n )\n run_command(\n ['git', 'commit', '-m', f'upgrade dbt to {self.version}'],\n cwd=self.homebrew_path\n )\n\n @staticmethod\n def run_tests(formula_path: Path, audit: bool = True):\n path = os.path.normpath(formula_path)\n run_command(['brew', 'uninstall', '--force', path])\n versions = [\n l.strip() for l in\n collect_output(['brew', 'list']).split('\\n')\n if l.strip().startswith('dbt@') or l.strip() == 'dbt'\n ]\n if versions:\n run_command(['brew', 'unlink'] + versions)\n run_command(['brew', 'install', path])\n run_command(['brew', 'test', path])\n if audit:\n run_command(['brew', 'audit', '--strict', path])\n\n def create_default_package(self):\n os.remove(self.default_formula_path)\n formula_contents = self.get_formula_data(versioned=False)\n self.default_formula_path.write_text(formula_contents)\n\n def build(self):\n self.create_versioned_formula_file()\n # self.run_tests(formula_path=self.versioned_formula_path)\n self.commit_versioned_formula()\n\n if self.set_default:\n self.create_default_package()\n # self.run_tests(formula_path=self.default_formula_path, audit=False)\n self.commit_default_formula()\n\n\nclass WheelInfo:\n def __init__(self, path):\n self.path = path\n\n @staticmethod\n def _extract_distinfo_path(wfile: zipfile.ZipFile) -> zipfile.Path:\n zpath = zipfile.Path(root=wfile)\n for path in zpath.iterdir():\n if path.name.endswith('.dist-info'):\n return path\n raise ValueError('Wheel with no dist-info?')\n\n def get_metadata(self) -> Dict[str, str]:\n with zipfile.ZipFile(self.path) as wf:\n distinfo = self._extract_distinfo_path(wf)\n metadata = distinfo / 'METADATA'\n metadata_dict: Dict[str, str] = {}\n for line in metadata.read_text().split('\\n'):\n parts = line.split(': ', 1)\n if len(parts) == 2:\n metadata_dict[parts[0]] = parts[1]\n return metadata_dict\n\n def package_name(self) -> str:\n metadata = self.get_metadata()\n if 'Name' not in metadata:\n raise ValueError('Wheel with no name?')\n return metadata['Name']\n\n\nclass DockerBuilder:\n \"\"\"The docker builder requires the existence of a dbt package\"\"\"\n def __init__(self, dbt_path: Path, version: Version) -> None:\n self.dbt_path = dbt_path\n 
self.version = version\n\n @property\n def docker_path(self) -> Path:\n return self.dbt_path / 'docker'\n\n @property\n def dockerfile_name(self) -> str:\n return f'Dockerfile.{self.version}'\n\n @property\n def dockerfile_path(self) -> Path:\n return self.docker_path / self.dockerfile_name\n\n @property\n def requirements_path(self) -> Path:\n return self.docker_path / 'requirements'\n\n @property\n def requirements_file_name(self) -> str:\n return f'requirements.{self.version}.txt'\n\n @property\n def dockerfile_venv_path(self) -> Path:\n return self.dbt_path / 'build' / 'docker-venv'\n\n @property\n def requirements_txt_path(self) -> Path:\n return self.requirements_path / self.requirements_file_name\n\n def make_venv(self) -> DistFolderEnv:\n env = DistFolderEnv(self.dbt_path)\n\n env.create(self.dockerfile_venv_path)\n return env\n\n def get_frozen(self) -> str:\n env = self.make_venv()\n pip_path = self.dockerfile_venv_path / 'bin/pip'\n cmd = [pip_path, 'freeze']\n wheel_names = {\n WheelInfo(wheel_path).package_name() for wheel_path in env.wheels\n }\n # remove the dependencies in dbt itself\n return '\\n'.join([\n dep for dep in collect_output(cmd).split('\\n')\n if dep.split('==')[0] not in wheel_names\n ])\n\n def write_lockfile(self):\n freeze = self.get_frozen()\n path = self.requirements_txt_path\n if path.exists():\n raise ValueError(f'Found existing requirements file at {path}!')\n os.makedirs(path.parent, exist_ok=True)\n path.write_text(freeze)\n\n def get_dockerfile_contents(self):\n dist_path = (self.dbt_path / 'dist').relative_to(Path.cwd())\n wheel_paths = ' '.join(\n os.path.join('.', 'dist', p.name)\n for p in _require_wheels(self.dbt_path)\n )\n\n requirements_path = self.requirements_txt_path.relative_to(Path.cwd())\n\n return textwrap.dedent(\n f'''\\\n FROM python:3.8.1-slim-buster\n\n RUN apt-get update && \\\n apt-get dist-upgrade -y && \\\n apt-get install -y --no-install-recommends \\\n git software-properties-common make build-essential \\\n ca-certificates libpq-dev && \\\n apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*\n\n COPY {requirements_path} ./{self.requirements_file_name}\n COPY {dist_path} ./dist\n RUN pip install --upgrade pip setuptools\n RUN pip install --requirement ./{self.requirements_file_name}\n RUN pip install {wheel_paths}\n\n RUN useradd -mU dbt_user\n\n ENV PYTHONIOENCODING=utf-8\n ENV LANG C.UTF-8\n\n WORKDIR /usr/app\n VOLUME /usr/app\n\n USER dbt_user\n ENTRYPOINT dbt\n '''\n )\n\n def write_dockerfile(self):\n dockerfile = self.get_dockerfile_contents()\n path = self.dockerfile_path\n if path.exists():\n raise ValueError(f'Found existing docker file at {path}!')\n os.makedirs(path.parent, exist_ok=True)\n path.write_text(dockerfile)\n\n @property\n def image_tag(self):\n return f'dbt:{self.version}'\n\n @property\n def remote_tag(self):\n return f'fishtownanalytics/{self.image_tag}'\n\n def create_docker_image(self):\n run_command(\n [\n 'docker', 'build',\n '-f', self.dockerfile_path,\n '--tag', self.image_tag,\n # '--no-cache',\n self.dbt_path,\n ],\n cwd=self.dbt_path\n )\n\n def set_remote_tag(self):\n # tag it\n run_command(\n ['docker', 'tag', self.image_tag, self.remote_tag],\n cwd=self.dbt_path,\n )\n\n def commit_docker_folder(self):\n # commit the contents of docker/\n run_command(\n ['git', 'add', 'docker'],\n cwd=self.dbt_path\n )\n commit_msg = f'Add {self.image_tag} dockerfiles and requirements'\n run_command(['git', 'commit', '-m', commit_msg], cwd=self.dbt_path)\n\n def build(\n self,\n 
write_requirements: bool = True,\n write_dockerfile: bool = True\n ):\n if write_requirements:\n self.write_lockfile()\n if write_dockerfile:\n self.write_dockerfile()\n self.commit_docker_folder()\n self.create_docker_image()\n self.set_remote_tag()\n\n def push(self):\n run_command(\n ['docker', 'push', self.remote_tag]\n )\n\n\ndef sanity_check():\n if sys.version_info[:len(HOMEBREW_PYTHON)] != HOMEBREW_PYTHON:\n python_version_str = '.'.join(str(i) for i in HOMEBREW_PYTHON)\n print(f'This script must be run with python {python_version_str}')\n sys.exit(1)\n\n # avoid \"what's a bdist_wheel\" errors\n try:\n import wheel # type: ignore # noqa\n except ImportError:\n print(\n 'The wheel package is required to build. Please run:\\n'\n 'pip install -r dev_requirements.txt'\n )\n sys.exit(1)\n\n\ndef upgrade_to(args: Arguments):\n if args.set_version:\n set_version(args.path, args.version, args.part)\n\n builder = PypiBuilder(args.path)\n if args.build_pypi:\n builder.build()\n\n if args.upload_pypi:\n if args.test_upload:\n builder.upload()\n input(\n f'Ensure https://test.pypi.org/project/dbt/{args.version}/ '\n 'exists and looks reasonable'\n )\n builder.upload(test=False)\n\n if args.build_homebrew:\n if args.upload_pypi:\n print('waiting a minute for pypi before trying to pip install')\n # if we uploaded to pypi, wait a minute before we bother trying to\n # pip install\n time.sleep(60)\n HomebrewBuilder(\n dbt_path=args.path,\n version=args.version,\n homebrew_path=args.homebrew_path,\n set_default=args.homebrew_set_default,\n ).build()\n\n if args.build_docker:\n builder = DockerBuilder(\n dbt_path=args.path,\n version=args.version,\n )\n builder.build(\n write_requirements=args.write_requirements,\n write_dockerfile=args.write_dockerfile,\n )\n if args.upload_docker:\n builder.push()\n\n\ndef main():\n sanity_check()\n args = Arguments.parse()\n upgrade_to(args)\n\n\nif __name__ == '__main__':\n main()\n", "path": "scripts/build-dbt.py"}]} |
gh_patches_debug_69 | rasdani/github-patches | git_diff | agconti__cookiecutter-django-rest-155 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set versatile image's field to 'create_images_on_demand' to false in production by default.
``` python
VERSATILEIMAGEFIELD_SETTINGS['create_images_on_demand'] = False
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/config/production.py`
Content:
```
1 import os
2 from configurations import values
3 from boto.s3.connection import OrdinaryCallingFormat
4 from .common import Common
5
6 try:
7 # Python 2.x
8 import urlparse
9 except ImportError:
10 # Python 3.x
11 from urllib import parse as urlparse
12
13
14 class Production(Common):
15
16 # Honor the 'X-Forwarded-Proto' header for request.is_secure()
17 # https://devcenter.heroku.com/articles/getting-started-with-django
18 SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
19
20 INSTALLED_APPS = Common.INSTALLED_APPS
21 SECRET_KEY = values.SecretValue()
22
23 # django-secure
24 # http://django-secure.readthedocs.org/en/v0.1.2/settings.html
25 INSTALLED_APPS += ("djangosecure", )
26
27 SECURE_HSTS_SECONDS = 60
28 SECURE_HSTS_INCLUDE_SUBDOMAINS = values.BooleanValue(True)
29 SECURE_FRAME_DENY = values.BooleanValue(True)
30 SECURE_CONTENT_TYPE_NOSNIFF = values.BooleanValue(True)
31 SECURE_BROWSER_XSS_FILTER = values.BooleanValue(True)
32 SESSION_COOKIE_SECURE = values.BooleanValue(False)
33 SESSION_COOKIE_HTTPONLY = values.BooleanValue(True)
34 SECURE_SSL_REDIRECT = values.BooleanValue(True)
35
36 # Site
37 # https://docs.djangoproject.com/en/1.6/ref/settings/#allowed-hosts
38 ALLOWED_HOSTS = ["*"]
39
40 INSTALLED_APPS += ("gunicorn", )
41
42 # Template
43 # https://docs.djangoproject.com/en/dev/ref/settings/#template-dirs
44 TEMPLATE_LOADERS = (
45 ('django.template.loaders.cached.Loader', (
46 'django.template.loaders.filesystem.Loader',
47 'django.template.loaders.app_directories.Loader',
48 )),
49 )
50
51 # Media files
52 # http://django-storages.readthedocs.org/en/latest/index.html
53 INSTALLED_APPS += ('storages',)
54 DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
55 AWS_ACCESS_KEY_ID = values.Value('DJANGO_AWS_ACCESS_KEY_ID')
56 AWS_SECRET_ACCESS_KEY = values.Value('DJANGO_AWS_SECRET_ACCESS_KEY')
57 AWS_STORAGE_BUCKET_NAME = values.Value('DJANGO_AWS_STORAGE_BUCKET_NAME')
58 AWS_AUTO_CREATE_BUCKET = True
59 AWS_QUERYSTRING_AUTH = False
60 MEDIA_URL = 'https://s3.amazonaws.com/{}/'.format(AWS_STORAGE_BUCKET_NAME)
61 AWS_S3_CALLING_FORMAT = OrdinaryCallingFormat()
62
63 # https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching#cache-control
64 # Response can be cached by browser and any intermediary caches (i.e. it is "public") for up to 1 day
65 # 86400 = (60 seconds x 60 minutes x 24 hours)
66 AWS_HEADERS = {
67 'Cache-Control': 'max-age=86400, s-maxage=86400, must-revalidate',
68 }
69
70 # Static files
71 STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
72
73 # Caching
74 redis_url = urlparse.urlparse(os.environ.get('REDISTOGO_URL', 'redis://localhost:6379'))
75 CACHES = {
76 'default': {
77 'BACKEND': 'redis_cache.RedisCache',
78 'LOCATION': '{}:{}'.format(redis_url.hostname, redis_url.port),
79 'OPTIONS': {
80 'DB': 0,
81 'PASSWORD': redis_url.password,
82 'PARSER_CLASS': 'redis.connection.HiredisParser',
83 'CONNECTION_POOL_CLASS': 'redis.BlockingConnectionPool',
84 'CONNECTION_POOL_CLASS_KWARGS': {
85 'max_connections': 50,
86 'timeout': 20,
87 }
88 }
89 }
90 }
91
92 # Django RQ production settings
93 RQ_QUEUES = {
94 'default': {
95 'URL': os.getenv('REDISTOGO_URL', 'redis://localhost:6379'),
96 'DB': 0,
97 'DEFAULT_TIMEOUT': 500,
98 },
99 }
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/config/production.py b/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/config/production.py
--- a/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/config/production.py
+++ b/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/config/production.py
@@ -97,3 +97,5 @@
'DEFAULT_TIMEOUT': 500,
},
}
+
+ Common.VERSATILEIMAGEFIELD_SETTINGS['create_images_on_demand'] = False
| {"golden_diff": "diff --git a/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/config/production.py b/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/config/production.py\n--- a/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/config/production.py\n+++ b/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/config/production.py\n@@ -97,3 +97,5 @@\n 'DEFAULT_TIMEOUT': 500,\n },\n }\n+\n+ Common.VERSATILEIMAGEFIELD_SETTINGS['create_images_on_demand'] = False\n", "issue": "Set versatile image's field to 'create_images_on_demand' to false in production by default.\n``` python\nVERSATILEIMAGEFIELD_SETTINGS['create_images_on_demand'] = False\n```\n\n", "before_files": [{"content": "import os\nfrom configurations import values\nfrom boto.s3.connection import OrdinaryCallingFormat\nfrom .common import Common\n\ntry:\n # Python 2.x\n import urlparse\nexcept ImportError:\n # Python 3.x\n from urllib import parse as urlparse\n\n\nclass Production(Common):\n\n # Honor the 'X-Forwarded-Proto' header for request.is_secure()\n # https://devcenter.heroku.com/articles/getting-started-with-django\n SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')\n\n INSTALLED_APPS = Common.INSTALLED_APPS\n SECRET_KEY = values.SecretValue()\n\n # django-secure\n # http://django-secure.readthedocs.org/en/v0.1.2/settings.html\n INSTALLED_APPS += (\"djangosecure\", )\n\n SECURE_HSTS_SECONDS = 60\n SECURE_HSTS_INCLUDE_SUBDOMAINS = values.BooleanValue(True)\n SECURE_FRAME_DENY = values.BooleanValue(True)\n SECURE_CONTENT_TYPE_NOSNIFF = values.BooleanValue(True)\n SECURE_BROWSER_XSS_FILTER = values.BooleanValue(True)\n SESSION_COOKIE_SECURE = values.BooleanValue(False)\n SESSION_COOKIE_HTTPONLY = values.BooleanValue(True)\n SECURE_SSL_REDIRECT = values.BooleanValue(True)\n\n # Site\n # https://docs.djangoproject.com/en/1.6/ref/settings/#allowed-hosts\n ALLOWED_HOSTS = [\"*\"]\n\n INSTALLED_APPS += (\"gunicorn\", )\n\n # Template\n # https://docs.djangoproject.com/en/dev/ref/settings/#template-dirs\n TEMPLATE_LOADERS = (\n ('django.template.loaders.cached.Loader', (\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n )),\n )\n\n # Media files\n # http://django-storages.readthedocs.org/en/latest/index.html\n INSTALLED_APPS += ('storages',)\n DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'\n AWS_ACCESS_KEY_ID = values.Value('DJANGO_AWS_ACCESS_KEY_ID')\n AWS_SECRET_ACCESS_KEY = values.Value('DJANGO_AWS_SECRET_ACCESS_KEY')\n AWS_STORAGE_BUCKET_NAME = values.Value('DJANGO_AWS_STORAGE_BUCKET_NAME')\n AWS_AUTO_CREATE_BUCKET = True\n AWS_QUERYSTRING_AUTH = False\n MEDIA_URL = 'https://s3.amazonaws.com/{}/'.format(AWS_STORAGE_BUCKET_NAME)\n AWS_S3_CALLING_FORMAT = OrdinaryCallingFormat()\n\n # https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching#cache-control\n # Response can be cached by browser and any intermediary caches (i.e. 
it is \"public\") for up to 1 day\n # 86400 = (60 seconds x 60 minutes x 24 hours)\n AWS_HEADERS = {\n 'Cache-Control': 'max-age=86400, s-maxage=86400, must-revalidate',\n }\n\n # Static files\n STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'\n\n # Caching\n redis_url = urlparse.urlparse(os.environ.get('REDISTOGO_URL', 'redis://localhost:6379'))\n CACHES = {\n 'default': {\n 'BACKEND': 'redis_cache.RedisCache',\n 'LOCATION': '{}:{}'.format(redis_url.hostname, redis_url.port),\n 'OPTIONS': {\n 'DB': 0,\n 'PASSWORD': redis_url.password,\n 'PARSER_CLASS': 'redis.connection.HiredisParser',\n 'CONNECTION_POOL_CLASS': 'redis.BlockingConnectionPool',\n 'CONNECTION_POOL_CLASS_KWARGS': {\n 'max_connections': 50,\n 'timeout': 20,\n }\n }\n }\n }\n\n # Django RQ production settings\n RQ_QUEUES = {\n 'default': {\n 'URL': os.getenv('REDISTOGO_URL', 'redis://localhost:6379'),\n 'DB': 0,\n 'DEFAULT_TIMEOUT': 500,\n },\n }\n", "path": "{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/config/production.py"}], "after_files": [{"content": "import os\nfrom configurations import values\nfrom boto.s3.connection import OrdinaryCallingFormat\nfrom .common import Common\n\ntry:\n # Python 2.x\n import urlparse\nexcept ImportError:\n # Python 3.x\n from urllib import parse as urlparse\n\n\nclass Production(Common):\n\n # Honor the 'X-Forwarded-Proto' header for request.is_secure()\n # https://devcenter.heroku.com/articles/getting-started-with-django\n SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')\n\n INSTALLED_APPS = Common.INSTALLED_APPS\n SECRET_KEY = values.SecretValue()\n\n # django-secure\n # http://django-secure.readthedocs.org/en/v0.1.2/settings.html\n INSTALLED_APPS += (\"djangosecure\", )\n\n SECURE_HSTS_SECONDS = 60\n SECURE_HSTS_INCLUDE_SUBDOMAINS = values.BooleanValue(True)\n SECURE_FRAME_DENY = values.BooleanValue(True)\n SECURE_CONTENT_TYPE_NOSNIFF = values.BooleanValue(True)\n SECURE_BROWSER_XSS_FILTER = values.BooleanValue(True)\n SESSION_COOKIE_SECURE = values.BooleanValue(False)\n SESSION_COOKIE_HTTPONLY = values.BooleanValue(True)\n SECURE_SSL_REDIRECT = values.BooleanValue(True)\n\n # Site\n # https://docs.djangoproject.com/en/1.6/ref/settings/#allowed-hosts\n ALLOWED_HOSTS = [\"*\"]\n\n INSTALLED_APPS += (\"gunicorn\", )\n\n # Template\n # https://docs.djangoproject.com/en/dev/ref/settings/#template-dirs\n TEMPLATE_LOADERS = (\n ('django.template.loaders.cached.Loader', (\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n )),\n )\n\n # Media files\n # http://django-storages.readthedocs.org/en/latest/index.html\n INSTALLED_APPS += ('storages',)\n DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'\n AWS_ACCESS_KEY_ID = values.Value('DJANGO_AWS_ACCESS_KEY_ID')\n AWS_SECRET_ACCESS_KEY = values.Value('DJANGO_AWS_SECRET_ACCESS_KEY')\n AWS_STORAGE_BUCKET_NAME = values.Value('DJANGO_AWS_STORAGE_BUCKET_NAME')\n AWS_AUTO_CREATE_BUCKET = True\n AWS_QUERYSTRING_AUTH = False\n MEDIA_URL = 'https://s3.amazonaws.com/{}/'.format(AWS_STORAGE_BUCKET_NAME)\n AWS_S3_CALLING_FORMAT = OrdinaryCallingFormat()\n\n # https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching#cache-control\n # Response can be cached by browser and any intermediary caches (i.e. 
it is \"public\") for up to 1 day\n # 86400 = (60 seconds x 60 minutes x 24 hours)\n AWS_HEADERS = {\n 'Cache-Control': 'max-age=86400, s-maxage=86400, must-revalidate',\n }\n\n # Static files\n STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'\n\n # Caching\n redis_url = urlparse.urlparse(os.environ.get('REDISTOGO_URL', 'redis://localhost:6379'))\n CACHES = {\n 'default': {\n 'BACKEND': 'redis_cache.RedisCache',\n 'LOCATION': '{}:{}'.format(redis_url.hostname, redis_url.port),\n 'OPTIONS': {\n 'DB': 0,\n 'PASSWORD': redis_url.password,\n 'PARSER_CLASS': 'redis.connection.HiredisParser',\n 'CONNECTION_POOL_CLASS': 'redis.BlockingConnectionPool',\n 'CONNECTION_POOL_CLASS_KWARGS': {\n 'max_connections': 50,\n 'timeout': 20,\n }\n }\n }\n }\n\n # Django RQ production settings\n RQ_QUEUES = {\n 'default': {\n 'URL': os.getenv('REDISTOGO_URL', 'redis://localhost:6379'),\n 'DB': 0,\n 'DEFAULT_TIMEOUT': 500,\n },\n }\n\n Common.VERSATILEIMAGEFIELD_SETTINGS['create_images_on_demand'] = False\n", "path": "{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/config/production.py"}]} |
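The record above patches cookiecutter-django-rest's `Production` settings class so that django-versatileimagefield stops creating image renditions on demand. As a minimal sketch of why one statement in the subclass body is enough — the stand-in `Common` class and its single-key dict below are assumptions standing in for the real django-configurations base class and the library's full default settings — consider:

```python
class Common:
    # Stand-in for the template's django-configurations base class; the real
    # django-versatileimagefield default dict carries more keys than this one.
    VERSATILEIMAGEFIELD_SETTINGS = {"create_images_on_demand": True}


class Production(Common):
    # Same move as the golden diff: the statement runs while the class body is
    # executed and mutates the dict object that Common already defined.
    Common.VERSATILEIMAGEFIELD_SETTINGS["create_images_on_demand"] = False


assert Production.VERSATILEIMAGEFIELD_SETTINGS["create_images_on_demand"] is False
```

Note that the mutation happens on the shared dict object, so any other configuration class inheriting that same dict sees the change as well — acceptable for a production-only settings module, but worth keeping in mind.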
gh_patches_debug_70 | rasdani/github-patches | git_diff | geopandas__geopandas-2249 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DOC: Address GeoPandas op deprecation in docs
While working on #2211 I noticed instances of the `op` parameter still being used.
This `op` parameter was deprecated in pull request #1626 in favour of `predicate`.
Locations where op is still present includes:
* [sjoin benchmark](https://github.com/geopandas/geopandas/blob/master/benchmarks/sjoin.py)
* [Spatial Joins notebook](https://github.com/geopandas/geopandas/blob/master/doc/source/gallery/spatial_joins.ipynb)
I can address the notebook instance but I don't know what the benchmark instance of `op` does so wouldn't want to change it without a thumbs up from a maintainer.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benchmarks/sjoin.py`
Content:
```
1 import random
2
3 from geopandas import GeoDataFrame, GeoSeries, sjoin
4 from shapely.geometry import Point, LineString, Polygon
5 import numpy as np
6
7
8 class Bench:
9
10 param_names = ['op']
11 params = [('intersects', 'contains', 'within')]
12
13 def setup(self, *args):
14 triangles = GeoSeries(
15 [Polygon([(random.random(), random.random()) for _ in range(3)])
16 for _ in range(1000)])
17
18 points = GeoSeries(
19 [Point(x, y) for x, y in zip(np.random.random(10000),
20 np.random.random(10000))])
21
22 df1 = GeoDataFrame({'val1': np.random.randn(len(triangles)),
23 'geometry': triangles})
24 df2 = GeoDataFrame({'val1': np.random.randn(len(points)),
25 'geometry': points})
26
27 self.df1, self.df2 = df1, df2
28
29 def time_sjoin(self, op):
30 sjoin(self.df1, self.df2, op=op)
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/benchmarks/sjoin.py b/benchmarks/sjoin.py
--- a/benchmarks/sjoin.py
+++ b/benchmarks/sjoin.py
@@ -26,5 +26,5 @@
self.df1, self.df2 = df1, df2
- def time_sjoin(self, op):
- sjoin(self.df1, self.df2, op=op)
+ def time_sjoin(self, predicate):
+ sjoin(self.df1, self.df2, predicate=predicate)
| {"golden_diff": "diff --git a/benchmarks/sjoin.py b/benchmarks/sjoin.py\n--- a/benchmarks/sjoin.py\n+++ b/benchmarks/sjoin.py\n@@ -26,5 +26,5 @@\n \n self.df1, self.df2 = df1, df2\n \n- def time_sjoin(self, op):\n- sjoin(self.df1, self.df2, op=op)\n+ def time_sjoin(self, predicate):\n+ sjoin(self.df1, self.df2, predicate=predicate)\n", "issue": "DOC: Address GeoPandas op deprecation in docs\nWhile working on #2211 I noticed instances of the `op` parameter still being used.\r\n\r\nThis `op` parameter was deprecated in pull request #1626 in favour of `predicate`.\r\n\r\nLocations where op is still present includes:\r\n* [sjoin benchmark](https://github.com/geopandas/geopandas/blob/master/benchmarks/sjoin.py)\r\n* [Spatial Joins notebook](https://github.com/geopandas/geopandas/blob/master/doc/source/gallery/spatial_joins.ipynb)\r\n \r\nI can address the notebook instance but I don't know what the benchmark instance of `op` does so wouldn't want to change it without a thumbs up from a maintainer.\n", "before_files": [{"content": "import random\n\nfrom geopandas import GeoDataFrame, GeoSeries, sjoin\nfrom shapely.geometry import Point, LineString, Polygon\nimport numpy as np\n\n\nclass Bench:\n\n param_names = ['op']\n params = [('intersects', 'contains', 'within')]\n\n def setup(self, *args):\n triangles = GeoSeries(\n [Polygon([(random.random(), random.random()) for _ in range(3)])\n for _ in range(1000)])\n\n points = GeoSeries(\n [Point(x, y) for x, y in zip(np.random.random(10000),\n np.random.random(10000))])\n\n df1 = GeoDataFrame({'val1': np.random.randn(len(triangles)),\n 'geometry': triangles})\n df2 = GeoDataFrame({'val1': np.random.randn(len(points)),\n 'geometry': points})\n\n self.df1, self.df2 = df1, df2\n\n def time_sjoin(self, op):\n sjoin(self.df1, self.df2, op=op)\n", "path": "benchmarks/sjoin.py"}], "after_files": [{"content": "import random\n\nfrom geopandas import GeoDataFrame, GeoSeries, sjoin\nfrom shapely.geometry import Point, LineString, Polygon\nimport numpy as np\n\n\nclass Bench:\n\n param_names = ['op']\n params = [('intersects', 'contains', 'within')]\n\n def setup(self, *args):\n triangles = GeoSeries(\n [Polygon([(random.random(), random.random()) for _ in range(3)])\n for _ in range(1000)])\n\n points = GeoSeries(\n [Point(x, y) for x, y in zip(np.random.random(10000),\n np.random.random(10000))])\n\n df1 = GeoDataFrame({'val1': np.random.randn(len(triangles)),\n 'geometry': triangles})\n df2 = GeoDataFrame({'val1': np.random.randn(len(points)),\n 'geometry': points})\n\n self.df1, self.df2 = df1, df2\n\n def time_sjoin(self, predicate):\n sjoin(self.df1, self.df2, predicate=predicate)\n", "path": "benchmarks/sjoin.py"}]} |
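The geopandas record above only renames the benchmark's keyword from the deprecated `op` to `predicate`. For illustration — assuming geopandas 0.10 or newer, where `sjoin()` accepts `predicate`, and using made-up toy geometries — the renamed call looks like this:

```python
import geopandas
from shapely.geometry import Point, Polygon

polys = geopandas.GeoDataFrame(
    {"name": ["box"]},
    geometry=[Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])],
)
points = geopandas.GeoDataFrame(
    {"val": [1, 2]},
    geometry=[Point(1, 1), Point(5, 5)],
)

# The pre-patch benchmark spelled this sjoin(points, polys, op="within");
# `predicate` carries exactly the same spatial relationship.
joined = geopandas.sjoin(points, polys, predicate="within")
print(joined[["val", "name"]])  # only the point at (1, 1) falls inside the box
```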
gh_patches_debug_71 | rasdani/github-patches | git_diff | typeddjango__django-stubs-640 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"field" and "exclude" arguments of model_to_dict() do not accept sets
# Bug report
## What's wrong
Our test suite contains code that simplifies to this fragment:
```py
from typing import Mapping
from django.db.models.base import Model
from django.forms import model_to_dict
def check(instance: Model, data: Mapping[str, object]) -> None:
assert data == model_to_dict(instance, fields=data.keys())
```
When checking that with mypy, it reports:
```
testcase.py:8: error: Argument "fields" to "model_to_dict" has incompatible type "AbstractSet[str]";
expected "Union[List[Union[Callable[..., Any], str]], Sequence[str], Literal['__all__'], None]"
[arg-type]
assert data == model_to_dict(instance, fields=data.keys())
```
## How it should be
The implementation of `model_to_dict()` only needs `__bool__()` and `__contains__()` to be provided by the `fields` and `exclude` arguments, so passing a keys set should not be flagged as an error.
I think a solution could be to replace `Sequence` in the stubs annotation with `Collection`.
## System information
- OS: Ubuntu Linux 18.04
- `python` version: 3.6.9
- `django` version: 2.2.1
- `mypy` version: 0.812
- `django-stubs` version: 1.8.0
--- END ISSUE ---
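To make the reporter's suggestion concrete, here is a hedged sketch of what a widened annotation could look like if `Sequence[str]` were replaced by `Collection[str]`; the real stub lives in the django-stubs `.pyi` files and may be organised differently, so treat the names and layout below as illustrative only:

```python
from typing import Any, Callable, Collection, Dict, List, Optional, Union

from django.db.models import Model
from typing_extensions import Literal

# Collection[str] accepts dict keys (a KeysView), sets and frozensets,
# all of which provide the __contains__/__bool__ the implementation needs.
_FieldSpec = Optional[
    Union[List[Union[Callable[..., Any], str]], Collection[str], Literal["__all__"]]
]

def model_to_dict(
    instance: Model,
    fields: _FieldSpec = None,
    exclude: _FieldSpec = None,
) -> Dict[str, Any]: ...
```

With that change, `model_to_dict(instance, fields=data.keys())` from the reproducer would type-check, since `KeysView[str]` is a `Collection[str]`.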
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 from distutils.core import setup
3 from typing import List
4
5 from setuptools import find_packages
6
7
8 def find_stub_files(name: str) -> List[str]:
9 result = []
10 for root, dirs, files in os.walk(name):
11 for file in files:
12 if file.endswith(".pyi"):
13 if os.path.sep in root:
14 sub_root = root.split(os.path.sep, 1)[-1]
15 file = os.path.join(sub_root, file)
16 result.append(file)
17 return result
18
19
20 with open("README.md") as f:
21 readme = f.read()
22
23 dependencies = [
24 "mypy>=0.790",
25 "typing-extensions",
26 "django",
27 "django-stubs-ext",
28 ]
29
30 setup(
31 name="django-stubs",
32 version="1.8.0",
33 description="Mypy stubs for Django",
34 long_description=readme,
35 long_description_content_type="text/markdown",
36 license="MIT",
37 url="https://github.com/typeddjango/django-stubs",
38 author="Maksim Kurnikov",
39 author_email="[email protected]",
40 py_modules=[],
41 python_requires=">=3.6",
42 install_requires=dependencies,
43 packages=["django-stubs", *find_packages(exclude=["scripts"])],
44 package_data={"django-stubs": find_stub_files("django-stubs")},
45 classifiers=[
46 "License :: OSI Approved :: MIT License",
47 "Operating System :: OS Independent",
48 "Programming Language :: Python :: 3.6",
49 "Programming Language :: Python :: 3.7",
50 "Programming Language :: Python :: 3.8",
51 "Programming Language :: Python :: 3.9",
52 "Typing :: Typed",
53 "Framework :: Django",
54 "Framework :: Django :: 2.2",
55 "Framework :: Django :: 3.0",
56 "Framework :: Django :: 3.1",
57 "Framework :: Django :: 3.2",
58 ],
59 project_urls={
60 "Release notes": "https://github.com/typeddjango/django-stubs/releases",
61 },
62 )
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -21,7 +21,7 @@
readme = f.read()
dependencies = [
- "mypy>=0.790",
+ "mypy>=0.900",
"typing-extensions",
"django",
"django-stubs-ext",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -21,7 +21,7 @@\n readme = f.read()\n \n dependencies = [\n- \"mypy>=0.790\",\n+ \"mypy>=0.900\",\n \"typing-extensions\",\n \"django\",\n \"django-stubs-ext\",\n", "issue": "\"field\" and \"exclude\" arguments of model_to_dict() do not accept sets\n# Bug report\r\n\r\n## What's wrong\r\n\r\nOur test suite contains code that simplifies to this fragment:\r\n```py\r\nfrom typing import Mapping\r\n\r\nfrom django.db.models.base import Model\r\nfrom django.forms import model_to_dict\r\n\r\n\r\ndef check(instance: Model, data: Mapping[str, object]) -> None:\r\n assert data == model_to_dict(instance, fields=data.keys())\r\n```\r\n\r\nWhen checking that with mypy, it reports:\r\n```\r\ntestcase.py:8: error: Argument \"fields\" to \"model_to_dict\" has incompatible type \"AbstractSet[str]\";\r\nexpected \"Union[List[Union[Callable[..., Any], str]], Sequence[str], Literal['__all__'], None]\"\r\n[arg-type]\r\n assert data == model_to_dict(instance, fields=data.keys())\r\n```\r\n\r\n## How is that should be\r\n\r\nThe implementation of `model_to_dict()` only needs `__bool__()` and `__contains__()` to be provided by the `fields` and `exclude` arguments, so passing a keys set should not be flagged as an error.\r\n\r\nI think a solution could be to replace `Sequence` in the stubs annotation with `Collection`.\r\n\r\n## System information\r\n\r\n- OS: Ubuntu Linux 18.04\r\n- `python` version: 3.6.9\r\n- `django` version: 2.2.1\r\n- `mypy` version: 0.812\r\n- `django-stubs` version: 1.8.0\r\n\n", "before_files": [{"content": "import os\nfrom distutils.core import setup\nfrom typing import List\n\nfrom setuptools import find_packages\n\n\ndef find_stub_files(name: str) -> List[str]:\n result = []\n for root, dirs, files in os.walk(name):\n for file in files:\n if file.endswith(\".pyi\"):\n if os.path.sep in root:\n sub_root = root.split(os.path.sep, 1)[-1]\n file = os.path.join(sub_root, file)\n result.append(file)\n return result\n\n\nwith open(\"README.md\") as f:\n readme = f.read()\n\ndependencies = [\n \"mypy>=0.790\",\n \"typing-extensions\",\n \"django\",\n \"django-stubs-ext\",\n]\n\nsetup(\n name=\"django-stubs\",\n version=\"1.8.0\",\n description=\"Mypy stubs for Django\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n license=\"MIT\",\n url=\"https://github.com/typeddjango/django-stubs\",\n author=\"Maksim Kurnikov\",\n author_email=\"[email protected]\",\n py_modules=[],\n python_requires=\">=3.6\",\n install_requires=dependencies,\n packages=[\"django-stubs\", *find_packages(exclude=[\"scripts\"])],\n package_data={\"django-stubs\": find_stub_files(\"django-stubs\")},\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Typing :: Typed\",\n \"Framework :: Django\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Django :: 3.2\",\n ],\n project_urls={\n \"Release notes\": \"https://github.com/typeddjango/django-stubs/releases\",\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nfrom distutils.core import setup\nfrom typing import List\n\nfrom setuptools import find_packages\n\n\ndef find_stub_files(name: str) -> List[str]:\n result = []\n for root, 
dirs, files in os.walk(name):\n for file in files:\n if file.endswith(\".pyi\"):\n if os.path.sep in root:\n sub_root = root.split(os.path.sep, 1)[-1]\n file = os.path.join(sub_root, file)\n result.append(file)\n return result\n\n\nwith open(\"README.md\") as f:\n readme = f.read()\n\ndependencies = [\n \"mypy>=0.900\",\n \"typing-extensions\",\n \"django\",\n \"django-stubs-ext\",\n]\n\nsetup(\n name=\"django-stubs\",\n version=\"1.8.0\",\n description=\"Mypy stubs for Django\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n license=\"MIT\",\n url=\"https://github.com/typeddjango/django-stubs\",\n author=\"Maksim Kurnikov\",\n author_email=\"[email protected]\",\n py_modules=[],\n python_requires=\">=3.6\",\n install_requires=dependencies,\n packages=[\"django-stubs\", *find_packages(exclude=[\"scripts\"])],\n package_data={\"django-stubs\": find_stub_files(\"django-stubs\")},\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Typing :: Typed\",\n \"Framework :: Django\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Django :: 3.2\",\n ],\n project_urls={\n \"Release notes\": \"https://github.com/typeddjango/django-stubs/releases\",\n },\n)\n", "path": "setup.py"}]} |
gh_patches_debug_72 | rasdani/github-patches | git_diff | sosreport__sos-724 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove hashlib import from sos/utilities.py since it is no longer used
With the removal of the hashing functions from sos/utilities.py, there is no need to import hashlib. The hashing was removed in the following commit: https://github.com/sosreport/sos/commit/6038fdf8617319a13b0b42f3283ec2066d54b283
$ gendiff sos/ .org
diff -up sos/sos/utilities.py.org sos/sos/utilities.py
--- sos/sos/utilities.py.org 2016-01-12 09:26:50.865294298 -0500
+++ sos/sos/utilities.py 2016-01-12 09:26:58.959233725 -0500
@@ -18,7 +18,6 @@ import os
import re
import inspect
from subprocess import Popen, PIPE, STDOUT
-import hashlib
import logging
import fnmatch
import errno
--- END ISSUE ---
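A quick way to double-check that nothing else in the module still uses `hashlib` before dropping the import (a throwaway script, assuming it is run from the sos repository root):

```python
import pathlib
import re

# Path taken from the issue; adjust if run from a different directory.
source = pathlib.Path("sos/utilities.py").read_text()
hits = [
    (number, line.strip())
    for number, line in enumerate(source.splitlines(), start=1)
    if re.search(r"\bhashlib\b", line)
]
for number, line in hits:
    print(f"{number}: {line}")
# If the only hit is the `import hashlib` line itself, the import is safe to remove.
```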
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sos/utilities.py`
Content:
```
1 # This program is free software; you can redistribute it and/or modify
2 # it under the terms of the GNU General Public License as published by
3 # the Free Software Foundation; either version 2 of the License, or
4 # (at your option) any later version.
5
6 # This program is distributed in the hope that it will be useful,
7 # but WITHOUT ANY WARRANTY; without even the implied warranty of
8 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
9 # GNU General Public License for more details.
10
11 # You should have received a copy of the GNU General Public License
12 # along with this program; if not, write to the Free Software
13 # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
14
15 from __future__ import with_statement
16
17 import os
18 import re
19 import inspect
20 from subprocess import Popen, PIPE, STDOUT
21 import hashlib
22 import logging
23 import fnmatch
24 import errno
25 import shlex
26
27 from contextlib import closing
28
29 # PYCOMPAT
30 import six
31 from six import StringIO
32
33
34 def tail(filename, number_of_bytes):
35 """Returns the last number_of_bytes of filename"""
36 with open(filename, "rb") as f:
37 if os.stat(filename).st_size > number_of_bytes:
38 f.seek(-number_of_bytes, 2)
39 return f.read()
40
41
42 def fileobj(path_or_file, mode='r'):
43 """Returns a file-like object that can be used as a context manager"""
44 if isinstance(path_or_file, six.string_types):
45 try:
46 return open(path_or_file, mode)
47 except:
48 log = logging.getLogger('sos')
49 log.debug("fileobj: %s could not be opened" % path_or_file)
50 return closing(StringIO())
51 else:
52 return closing(path_or_file)
53
54
55 def convert_bytes(bytes_, K=1 << 10, M=1 << 20, G=1 << 30, T=1 << 40):
56 """Converts a number of bytes to a shorter, more human friendly format"""
57 fn = float(bytes_)
58 if bytes_ >= T:
59 return '%.1fT' % (fn / T)
60 elif bytes_ >= G:
61 return '%.1fG' % (fn / G)
62 elif bytes_ >= M:
63 return '%.1fM' % (fn / M)
64 elif bytes_ >= K:
65 return '%.1fK' % (fn / K)
66 else:
67 return '%d' % bytes_
68
69
70 def find(file_pattern, top_dir, max_depth=None, path_pattern=None):
71 """generator function to find files recursively. Usage:
72
73 for filename in find("*.properties", "/var/log/foobar"):
74 print filename
75 """
76 if max_depth:
77 base_depth = os.path.dirname(top_dir).count(os.path.sep)
78 max_depth += base_depth
79
80 for path, dirlist, filelist in os.walk(top_dir):
81 if max_depth and path.count(os.path.sep) >= max_depth:
82 del dirlist[:]
83
84 if path_pattern and not fnmatch.fnmatch(path, path_pattern):
85 continue
86
87 for name in fnmatch.filter(filelist, file_pattern):
88 yield os.path.join(path, name)
89
90
91 def grep(pattern, *files_or_paths):
92 """Returns lines matched in fnames, where fnames can either be pathnames to
93 files to grep through or open file objects to grep through line by line"""
94 matches = []
95
96 for fop in files_or_paths:
97 with fileobj(fop) as fo:
98 matches.extend((line for line in fo if re.match(pattern, line)))
99
100 return matches
101
102
103 def is_executable(command):
104 """Returns if a command matches an executable on the PATH"""
105
106 paths = os.environ.get("PATH", "").split(os.path.pathsep)
107 candidates = [command] + [os.path.join(p, command) for p in paths]
108 return any(os.access(path, os.X_OK) for path in candidates)
109
110
111 def sos_get_command_output(command, timeout=300, stderr=False,
112 chroot=None, chdir=None):
113 """Execute a command and return a dictionary of status and output,
114 optionally changing root or current working directory before
115 executing command.
116 """
117 # Change root or cwd for child only. Exceptions in the prexec_fn
118 # closure are caught in the parent (chroot and chdir are bound from
119 # the enclosing scope).
120 def _child_prep_fn():
121 if (chroot):
122 os.chroot(chroot)
123 if (chdir):
124 os.chdir(chdir)
125
126 cmd_env = os.environ
127 # ensure consistent locale for collected command output
128 cmd_env['LC_ALL'] = 'C'
129 # use /usr/bin/timeout to implement a timeout
130 if timeout and is_executable("timeout"):
131 command = "timeout %ds %s" % (timeout, command)
132
133 # shlex.split() reacts badly to unicode on older python runtimes.
134 if not six.PY3:
135 command = command.encode('utf-8', 'ignore')
136 args = shlex.split(command)
137 try:
138 p = Popen(args, shell=False, stdout=PIPE,
139 stderr=STDOUT if stderr else PIPE,
140 bufsize=-1, env=cmd_env, close_fds=True,
141 preexec_fn=_child_prep_fn)
142 stdout, stderr = p.communicate()
143 except OSError as e:
144 if e.errno == errno.ENOENT:
145 return {'status': 127, 'output': ""}
146 else:
147 raise e
148
149 if p.returncode == 126 or p.returncode == 127:
150 stdout = six.binary_type(b"")
151
152 return {
153 'status': p.returncode,
154 'output': stdout.decode('utf-8', 'ignore')
155 }
156
157
158 def import_module(module_fqname, superclasses=None):
159 """Imports the module module_fqname and returns a list of defined classes
160 from that module. If superclasses is defined then the classes returned will
161 be subclasses of the specified superclass or superclasses. If superclasses
162 is plural it must be a tuple of classes."""
163 module_name = module_fqname.rpartition(".")[-1]
164 module = __import__(module_fqname, globals(), locals(), [module_name])
165 modules = [class_ for cname, class_ in
166 inspect.getmembers(module, inspect.isclass)
167 if class_.__module__ == module_fqname]
168 if superclasses:
169 modules = [m for m in modules if issubclass(m, superclasses)]
170
171 return modules
172
173
174 def shell_out(cmd, timeout=30, chroot=None, runat=None):
175 """Shell out to an external command and return the output or the empty
176 string in case of error.
177 """
178 return sos_get_command_output(cmd, timeout=timeout,
179 chroot=chroot, chdir=runat)['output']
180
181
182 class ImporterHelper(object):
183 """Provides a list of modules that can be imported in a package.
184 Importable modules are located along the module __path__ list and modules
185 are files that end in .py.
186 """
187
188 def __init__(self, package):
189 """package is a package module
190 import my.package.module
191 helper = ImporterHelper(my.package.module)"""
192 self.package = package
193
194 def _plugin_name(self, path):
195 "Returns the plugin module name given the path"
196 base = os.path.basename(path)
197 name, ext = os.path.splitext(base)
198 return name
199
200 def _get_plugins_from_list(self, list_):
201 plugins = [self._plugin_name(plugin)
202 for plugin in list_
203 if "__init__" not in plugin and plugin.endswith(".py")]
204 plugins.sort()
205 return plugins
206
207 def _find_plugins_in_dir(self, path):
208 if os.path.exists(path):
209 py_files = list(find("*.py", path))
210 pnames = self._get_plugins_from_list(py_files)
211 if pnames:
212 return pnames
213 else:
214 return []
215
216 def get_modules(self):
217 """Returns the list of importable modules in the configured python
218 package. """
219 plugins = []
220 for path in self.package.__path__:
221 if os.path.isdir(path) or path == '':
222 plugins.extend(self._find_plugins_in_dir(path))
223
224 return plugins
225
226 # vim: set et ts=4 sw=4 :
227
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sos/utilities.py b/sos/utilities.py
--- a/sos/utilities.py
+++ b/sos/utilities.py
@@ -18,7 +18,6 @@
import re
import inspect
from subprocess import Popen, PIPE, STDOUT
-import hashlib
import logging
import fnmatch
import errno
| {"golden_diff": "diff --git a/sos/utilities.py b/sos/utilities.py\n--- a/sos/utilities.py\n+++ b/sos/utilities.py\n@@ -18,7 +18,6 @@\n import re\n import inspect\n from subprocess import Popen, PIPE, STDOUT\n-import hashlib\n import logging\n import fnmatch\n import errno\n", "issue": "Remove hashlib import from sos/utilities.py since \nWith the removal of the hashing functions from sos/utilities.py there is no need to import hashlib. The hashing was removed in the following commit: https://github.com/sosreport/sos/commit/6038fdf8617319a13b0b42f3283ec2066d54b283\n\n$ gendiff sos/ .org\ndiff -up sos/sos/utilities.py.org sos/sos/utilities.py\n--- sos/sos/utilities.py.org 2016-01-12 09:26:50.865294298 -0500\n+++ sos/sos/utilities.py 2016-01-12 09:26:58.959233725 -0500\n@@ -18,7 +18,6 @@ import os\n import re\n import inspect\n from subprocess import Popen, PIPE, STDOUT\n-import hashlib\n import logging\n import fnmatch\n import errno\n\n", "before_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.\n\nfrom __future__ import with_statement\n\nimport os\nimport re\nimport inspect\nfrom subprocess import Popen, PIPE, STDOUT\nimport hashlib\nimport logging\nimport fnmatch\nimport errno\nimport shlex\n\nfrom contextlib import closing\n\n# PYCOMPAT\nimport six\nfrom six import StringIO\n\n\ndef tail(filename, number_of_bytes):\n \"\"\"Returns the last number_of_bytes of filename\"\"\"\n with open(filename, \"rb\") as f:\n if os.stat(filename).st_size > number_of_bytes:\n f.seek(-number_of_bytes, 2)\n return f.read()\n\n\ndef fileobj(path_or_file, mode='r'):\n \"\"\"Returns a file-like object that can be used as a context manager\"\"\"\n if isinstance(path_or_file, six.string_types):\n try:\n return open(path_or_file, mode)\n except:\n log = logging.getLogger('sos')\n log.debug(\"fileobj: %s could not be opened\" % path_or_file)\n return closing(StringIO())\n else:\n return closing(path_or_file)\n\n\ndef convert_bytes(bytes_, K=1 << 10, M=1 << 20, G=1 << 30, T=1 << 40):\n \"\"\"Converts a number of bytes to a shorter, more human friendly format\"\"\"\n fn = float(bytes_)\n if bytes_ >= T:\n return '%.1fT' % (fn / T)\n elif bytes_ >= G:\n return '%.1fG' % (fn / G)\n elif bytes_ >= M:\n return '%.1fM' % (fn / M)\n elif bytes_ >= K:\n return '%.1fK' % (fn / K)\n else:\n return '%d' % bytes_\n\n\ndef find(file_pattern, top_dir, max_depth=None, path_pattern=None):\n \"\"\"generator function to find files recursively. 
Usage:\n\n for filename in find(\"*.properties\", \"/var/log/foobar\"):\n print filename\n \"\"\"\n if max_depth:\n base_depth = os.path.dirname(top_dir).count(os.path.sep)\n max_depth += base_depth\n\n for path, dirlist, filelist in os.walk(top_dir):\n if max_depth and path.count(os.path.sep) >= max_depth:\n del dirlist[:]\n\n if path_pattern and not fnmatch.fnmatch(path, path_pattern):\n continue\n\n for name in fnmatch.filter(filelist, file_pattern):\n yield os.path.join(path, name)\n\n\ndef grep(pattern, *files_or_paths):\n \"\"\"Returns lines matched in fnames, where fnames can either be pathnames to\n files to grep through or open file objects to grep through line by line\"\"\"\n matches = []\n\n for fop in files_or_paths:\n with fileobj(fop) as fo:\n matches.extend((line for line in fo if re.match(pattern, line)))\n\n return matches\n\n\ndef is_executable(command):\n \"\"\"Returns if a command matches an executable on the PATH\"\"\"\n\n paths = os.environ.get(\"PATH\", \"\").split(os.path.pathsep)\n candidates = [command] + [os.path.join(p, command) for p in paths]\n return any(os.access(path, os.X_OK) for path in candidates)\n\n\ndef sos_get_command_output(command, timeout=300, stderr=False,\n chroot=None, chdir=None):\n \"\"\"Execute a command and return a dictionary of status and output,\n optionally changing root or current working directory before\n executing command.\n \"\"\"\n # Change root or cwd for child only. Exceptions in the prexec_fn\n # closure are caught in the parent (chroot and chdir are bound from\n # the enclosing scope).\n def _child_prep_fn():\n if (chroot):\n os.chroot(chroot)\n if (chdir):\n os.chdir(chdir)\n\n cmd_env = os.environ\n # ensure consistent locale for collected command output\n cmd_env['LC_ALL'] = 'C'\n # use /usr/bin/timeout to implement a timeout\n if timeout and is_executable(\"timeout\"):\n command = \"timeout %ds %s\" % (timeout, command)\n\n # shlex.split() reacts badly to unicode on older python runtimes.\n if not six.PY3:\n command = command.encode('utf-8', 'ignore')\n args = shlex.split(command)\n try:\n p = Popen(args, shell=False, stdout=PIPE,\n stderr=STDOUT if stderr else PIPE,\n bufsize=-1, env=cmd_env, close_fds=True,\n preexec_fn=_child_prep_fn)\n stdout, stderr = p.communicate()\n except OSError as e:\n if e.errno == errno.ENOENT:\n return {'status': 127, 'output': \"\"}\n else:\n raise e\n\n if p.returncode == 126 or p.returncode == 127:\n stdout = six.binary_type(b\"\")\n\n return {\n 'status': p.returncode,\n 'output': stdout.decode('utf-8', 'ignore')\n }\n\n\ndef import_module(module_fqname, superclasses=None):\n \"\"\"Imports the module module_fqname and returns a list of defined classes\n from that module. If superclasses is defined then the classes returned will\n be subclasses of the specified superclass or superclasses. 
If superclasses\n is plural it must be a tuple of classes.\"\"\"\n module_name = module_fqname.rpartition(\".\")[-1]\n module = __import__(module_fqname, globals(), locals(), [module_name])\n modules = [class_ for cname, class_ in\n inspect.getmembers(module, inspect.isclass)\n if class_.__module__ == module_fqname]\n if superclasses:\n modules = [m for m in modules if issubclass(m, superclasses)]\n\n return modules\n\n\ndef shell_out(cmd, timeout=30, chroot=None, runat=None):\n \"\"\"Shell out to an external command and return the output or the empty\n string in case of error.\n \"\"\"\n return sos_get_command_output(cmd, timeout=timeout,\n chroot=chroot, chdir=runat)['output']\n\n\nclass ImporterHelper(object):\n \"\"\"Provides a list of modules that can be imported in a package.\n Importable modules are located along the module __path__ list and modules\n are files that end in .py.\n \"\"\"\n\n def __init__(self, package):\n \"\"\"package is a package module\n import my.package.module\n helper = ImporterHelper(my.package.module)\"\"\"\n self.package = package\n\n def _plugin_name(self, path):\n \"Returns the plugin module name given the path\"\n base = os.path.basename(path)\n name, ext = os.path.splitext(base)\n return name\n\n def _get_plugins_from_list(self, list_):\n plugins = [self._plugin_name(plugin)\n for plugin in list_\n if \"__init__\" not in plugin and plugin.endswith(\".py\")]\n plugins.sort()\n return plugins\n\n def _find_plugins_in_dir(self, path):\n if os.path.exists(path):\n py_files = list(find(\"*.py\", path))\n pnames = self._get_plugins_from_list(py_files)\n if pnames:\n return pnames\n else:\n return []\n\n def get_modules(self):\n \"\"\"Returns the list of importable modules in the configured python\n package. \"\"\"\n plugins = []\n for path in self.package.__path__:\n if os.path.isdir(path) or path == '':\n plugins.extend(self._find_plugins_in_dir(path))\n\n return plugins\n\n# vim: set et ts=4 sw=4 :\n", "path": "sos/utilities.py"}], "after_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.\n\nfrom __future__ import with_statement\n\nimport os\nimport re\nimport inspect\nfrom subprocess import Popen, PIPE, STDOUT\nimport logging\nimport fnmatch\nimport errno\nimport shlex\n\nfrom contextlib import closing\n\n# PYCOMPAT\nimport six\nfrom six import StringIO\n\n\ndef tail(filename, number_of_bytes):\n \"\"\"Returns the last number_of_bytes of filename\"\"\"\n with open(filename, \"rb\") as f:\n if os.stat(filename).st_size > number_of_bytes:\n f.seek(-number_of_bytes, 2)\n return f.read()\n\n\ndef fileobj(path_or_file, mode='r'):\n \"\"\"Returns a file-like object that can be used as a context manager\"\"\"\n if isinstance(path_or_file, six.string_types):\n try:\n return open(path_or_file, mode)\n except:\n log = logging.getLogger('sos')\n log.debug(\"fileobj: %s could not be opened\" % path_or_file)\n return closing(StringIO())\n else:\n return closing(path_or_file)\n\n\ndef convert_bytes(bytes_, K=1 << 10, M=1 << 20, G=1 << 30, T=1 << 40):\n \"\"\"Converts a number of bytes to a shorter, more human friendly format\"\"\"\n fn = float(bytes_)\n if bytes_ >= T:\n return '%.1fT' % (fn / T)\n elif bytes_ >= G:\n return '%.1fG' % (fn / G)\n elif bytes_ >= M:\n return '%.1fM' % (fn / M)\n elif bytes_ >= K:\n return '%.1fK' % (fn / K)\n else:\n return '%d' % bytes_\n\n\ndef find(file_pattern, top_dir, max_depth=None, path_pattern=None):\n \"\"\"generator function to find files recursively. Usage:\n\n for filename in find(\"*.properties\", \"/var/log/foobar\"):\n print filename\n \"\"\"\n if max_depth:\n base_depth = os.path.dirname(top_dir).count(os.path.sep)\n max_depth += base_depth\n\n for path, dirlist, filelist in os.walk(top_dir):\n if max_depth and path.count(os.path.sep) >= max_depth:\n del dirlist[:]\n\n if path_pattern and not fnmatch.fnmatch(path, path_pattern):\n continue\n\n for name in fnmatch.filter(filelist, file_pattern):\n yield os.path.join(path, name)\n\n\ndef grep(pattern, *files_or_paths):\n \"\"\"Returns lines matched in fnames, where fnames can either be pathnames to\n files to grep through or open file objects to grep through line by line\"\"\"\n matches = []\n\n for fop in files_or_paths:\n with fileobj(fop) as fo:\n matches.extend((line for line in fo if re.match(pattern, line)))\n\n return matches\n\n\ndef is_executable(command):\n \"\"\"Returns if a command matches an executable on the PATH\"\"\"\n\n paths = os.environ.get(\"PATH\", \"\").split(os.path.pathsep)\n candidates = [command] + [os.path.join(p, command) for p in paths]\n return any(os.access(path, os.X_OK) for path in candidates)\n\n\ndef sos_get_command_output(command, timeout=300, stderr=False,\n chroot=None, chdir=None):\n \"\"\"Execute a command and return a dictionary of status and output,\n optionally changing root or current working directory before\n executing command.\n \"\"\"\n # Change root or cwd for child only. 
Exceptions in the prexec_fn\n # closure are caught in the parent (chroot and chdir are bound from\n # the enclosing scope).\n def _child_prep_fn():\n if (chroot):\n os.chroot(chroot)\n if (chdir):\n os.chdir(chdir)\n\n cmd_env = os.environ\n # ensure consistent locale for collected command output\n cmd_env['LC_ALL'] = 'C'\n # use /usr/bin/timeout to implement a timeout\n if timeout and is_executable(\"timeout\"):\n command = \"timeout %ds %s\" % (timeout, command)\n\n # shlex.split() reacts badly to unicode on older python runtimes.\n if not six.PY3:\n command = command.encode('utf-8', 'ignore')\n args = shlex.split(command)\n try:\n p = Popen(args, shell=False, stdout=PIPE,\n stderr=STDOUT if stderr else PIPE,\n bufsize=-1, env=cmd_env, close_fds=True,\n preexec_fn=_child_prep_fn)\n stdout, stderr = p.communicate()\n except OSError as e:\n if e.errno == errno.ENOENT:\n return {'status': 127, 'output': \"\"}\n else:\n raise e\n\n if p.returncode == 126 or p.returncode == 127:\n stdout = six.binary_type(b\"\")\n\n return {\n 'status': p.returncode,\n 'output': stdout.decode('utf-8', 'ignore')\n }\n\n\ndef import_module(module_fqname, superclasses=None):\n \"\"\"Imports the module module_fqname and returns a list of defined classes\n from that module. If superclasses is defined then the classes returned will\n be subclasses of the specified superclass or superclasses. If superclasses\n is plural it must be a tuple of classes.\"\"\"\n module_name = module_fqname.rpartition(\".\")[-1]\n module = __import__(module_fqname, globals(), locals(), [module_name])\n modules = [class_ for cname, class_ in\n inspect.getmembers(module, inspect.isclass)\n if class_.__module__ == module_fqname]\n if superclasses:\n modules = [m for m in modules if issubclass(m, superclasses)]\n\n return modules\n\n\ndef shell_out(cmd, timeout=30, chroot=None, runat=None):\n \"\"\"Shell out to an external command and return the output or the empty\n string in case of error.\n \"\"\"\n return sos_get_command_output(cmd, timeout=timeout,\n chroot=chroot, chdir=runat)['output']\n\n\nclass ImporterHelper(object):\n \"\"\"Provides a list of modules that can be imported in a package.\n Importable modules are located along the module __path__ list and modules\n are files that end in .py.\n \"\"\"\n\n def __init__(self, package):\n \"\"\"package is a package module\n import my.package.module\n helper = ImporterHelper(my.package.module)\"\"\"\n self.package = package\n\n def _plugin_name(self, path):\n \"Returns the plugin module name given the path\"\n base = os.path.basename(path)\n name, ext = os.path.splitext(base)\n return name\n\n def _get_plugins_from_list(self, list_):\n plugins = [self._plugin_name(plugin)\n for plugin in list_\n if \"__init__\" not in plugin and plugin.endswith(\".py\")]\n plugins.sort()\n return plugins\n\n def _find_plugins_in_dir(self, path):\n if os.path.exists(path):\n py_files = list(find(\"*.py\", path))\n pnames = self._get_plugins_from_list(py_files)\n if pnames:\n return pnames\n else:\n return []\n\n def get_modules(self):\n \"\"\"Returns the list of importable modules in the configured python\n package. \"\"\"\n plugins = []\n for path in self.package.__path__:\n if os.path.isdir(path) or path == '':\n plugins.extend(self._find_plugins_in_dir(path))\n\n return plugins\n\n# vim: set et ts=4 sw=4 :\n", "path": "sos/utilities.py"}]} |
gh_patches_debug_73 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-4158 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Security] saved_views API returns (hashed) user password in response
### Description
The response of `GET /api/saved_views/` includes the hashed password of the owner of the saved view.
### Steps to reproduce
```
curl -uuser:pass https://host.com/api/saved_views/ | jq .results[].owner.password
```
### Webserver logs
```bash
-
```
### Browser logs
_No response_
### Paperless-ngx version
1.16.5
### Host OS
Debian GNU/Linux 12
### Installation method
Docker - official image
### Browser
_No response_
### Configuration changes
_No response_
### Other
_No response_
--- END ISSUE ---
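The leak comes from the saved-view serializer expanding its `owner` relation with `depth = 1`, which makes DRF build an implicit nested serializer over every column of the `User` model, the password hash included. A common way to avoid that in DRF is to nest a small, explicit serializer instead of relying on `depth`; the sketch below is that generic pattern, not necessarily the fix the project shipped:

```python
from django.contrib.auth.models import User
from rest_framework import serializers


class OwnerSerializer(serializers.ModelSerializer):
    """Expose only non-sensitive user fields when nesting the owner."""

    class Meta:
        model = User
        fields = ("id", "username", "first_name", "last_name")
```

In the saved-view serializer one would then declare `owner = OwnerSerializer(read_only=True)` (or simply drop `depth = 1`), so the nested user is rendered through this whitelist rather than through the automatic depth expansion.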
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/documents/serialisers.py`
Content:
```
1 import datetime
2 import math
3 import re
4 import zoneinfo
5
6 import magic
7 from celery import states
8 from django.conf import settings
9 from django.contrib.auth.models import Group
10 from django.contrib.auth.models import User
11 from django.utils.text import slugify
12 from django.utils.translation import gettext as _
13 from guardian.core import ObjectPermissionChecker
14 from guardian.shortcuts import get_users_with_perms
15 from rest_framework import serializers
16 from rest_framework.fields import SerializerMethodField
17
18 from documents.permissions import get_groups_with_only_permission
19 from documents.permissions import set_permissions_for_object
20
21 from . import bulk_edit
22 from .models import Correspondent
23 from .models import Document
24 from .models import DocumentType
25 from .models import MatchingModel
26 from .models import PaperlessTask
27 from .models import SavedView
28 from .models import SavedViewFilterRule
29 from .models import StoragePath
30 from .models import Tag
31 from .models import UiSettings
32 from .parsers import is_mime_type_supported
33
34
35 # https://www.django-rest-framework.org/api-guide/serializers/#example
36 class DynamicFieldsModelSerializer(serializers.ModelSerializer):
37 """
38 A ModelSerializer that takes an additional `fields` argument that
39 controls which fields should be displayed.
40 """
41
42 def __init__(self, *args, **kwargs):
43 # Don't pass the 'fields' arg up to the superclass
44 fields = kwargs.pop("fields", None)
45
46 # Instantiate the superclass normally
47 super().__init__(*args, **kwargs)
48
49 if fields is not None:
50 # Drop any fields that are not specified in the `fields` argument.
51 allowed = set(fields)
52 existing = set(self.fields)
53 for field_name in existing - allowed:
54 self.fields.pop(field_name)
55
56
57 class MatchingModelSerializer(serializers.ModelSerializer):
58 document_count = serializers.IntegerField(read_only=True)
59
60 def get_slug(self, obj):
61 return slugify(obj.name)
62
63 slug = SerializerMethodField()
64
65 def validate(self, data):
66 # see https://github.com/encode/django-rest-framework/issues/7173
67 name = data["name"] if "name" in data else self.instance.name
68 owner = (
69 data["owner"]
70 if "owner" in data
71 else self.user
72 if hasattr(self, "user")
73 else None
74 )
75 pk = self.instance.pk if hasattr(self.instance, "pk") else None
76 if ("name" in data or "owner" in data) and self.Meta.model.objects.filter(
77 name=name,
78 owner=owner,
79 ).exclude(pk=pk).exists():
80 raise serializers.ValidationError(
81 {"error": "Object violates owner / name unique constraint"},
82 )
83 return data
84
85 def validate_match(self, match):
86 if (
87 "matching_algorithm" in self.initial_data
88 and self.initial_data["matching_algorithm"] == MatchingModel.MATCH_REGEX
89 ):
90 try:
91 re.compile(match)
92 except re.error as e:
93 raise serializers.ValidationError(
94 _("Invalid regular expression: %(error)s") % {"error": str(e.msg)},
95 )
96 return match
97
98
99 class SetPermissionsMixin:
100 def _validate_user_ids(self, user_ids):
101 users = User.objects.none()
102 if user_ids is not None:
103 users = User.objects.filter(id__in=user_ids)
104 if not users.count() == len(user_ids):
105 raise serializers.ValidationError(
106 "Some users in don't exist or were specified twice.",
107 )
108 return users
109
110 def _validate_group_ids(self, group_ids):
111 groups = Group.objects.none()
112 if group_ids is not None:
113 groups = Group.objects.filter(id__in=group_ids)
114 if not groups.count() == len(group_ids):
115 raise serializers.ValidationError(
116 "Some groups in don't exist or were specified twice.",
117 )
118 return groups
119
120 def validate_set_permissions(self, set_permissions=None):
121 permissions_dict = {
122 "view": {
123 "users": User.objects.none(),
124 "groups": Group.objects.none(),
125 },
126 "change": {
127 "users": User.objects.none(),
128 "groups": Group.objects.none(),
129 },
130 }
131 if set_permissions is not None:
132 for action in permissions_dict:
133 if action in set_permissions:
134 users = set_permissions[action]["users"]
135 permissions_dict[action]["users"] = self._validate_user_ids(users)
136 groups = set_permissions[action]["groups"]
137 permissions_dict[action]["groups"] = self._validate_group_ids(
138 groups,
139 )
140 return permissions_dict
141
142 def _set_permissions(self, permissions, object):
143 set_permissions_for_object(permissions, object)
144
145
146 class OwnedObjectSerializer(serializers.ModelSerializer, SetPermissionsMixin):
147 def __init__(self, *args, **kwargs):
148 self.user = kwargs.pop("user", None)
149 full_perms = kwargs.pop("full_perms", False)
150 super().__init__(*args, **kwargs)
151
152 try:
153 if full_perms:
154 self.fields.pop("user_can_change")
155 else:
156 self.fields.pop("permissions")
157 except KeyError:
158 pass
159
160 def get_permissions(self, obj):
161 view_codename = f"view_{obj.__class__.__name__.lower()}"
162 change_codename = f"change_{obj.__class__.__name__.lower()}"
163
164 return {
165 "view": {
166 "users": get_users_with_perms(
167 obj,
168 only_with_perms_in=[view_codename],
169 with_group_users=False,
170 ).values_list("id", flat=True),
171 "groups": get_groups_with_only_permission(
172 obj,
173 codename=view_codename,
174 ).values_list("id", flat=True),
175 },
176 "change": {
177 "users": get_users_with_perms(
178 obj,
179 only_with_perms_in=[change_codename],
180 with_group_users=False,
181 ).values_list("id", flat=True),
182 "groups": get_groups_with_only_permission(
183 obj,
184 codename=change_codename,
185 ).values_list("id", flat=True),
186 },
187 }
188
189 def get_user_can_change(self, obj):
190 checker = ObjectPermissionChecker(self.user) if self.user is not None else None
191 return (
192 obj.owner is None
193 or obj.owner == self.user
194 or (
195 self.user is not None
196 and checker.has_perm(f"change_{obj.__class__.__name__.lower()}", obj)
197 )
198 )
199
200 permissions = SerializerMethodField(read_only=True)
201 user_can_change = SerializerMethodField(read_only=True)
202
203 set_permissions = serializers.DictField(
204 label="Set permissions",
205 allow_empty=True,
206 required=False,
207 write_only=True,
208 )
209 # other methods in mixin
210
211 def create(self, validated_data):
212 # default to current user if not set
213 if "owner" not in validated_data and self.user:
214 validated_data["owner"] = self.user
215 permissions = None
216 if "set_permissions" in validated_data:
217 permissions = validated_data.pop("set_permissions")
218 instance = super().create(validated_data)
219 if permissions is not None:
220 self._set_permissions(permissions, instance)
221 return instance
222
223 def update(self, instance, validated_data):
224 if "set_permissions" in validated_data:
225 self._set_permissions(validated_data["set_permissions"], instance)
226 if "owner" in validated_data and "name" in self.Meta.fields:
227 name = validated_data["name"] if "name" in validated_data else instance.name
228 not_unique = (
229 self.Meta.model.objects.exclude(pk=instance.pk)
230 .filter(owner=validated_data["owner"], name=name)
231 .exists()
232 )
233 if not_unique:
234 raise serializers.ValidationError(
235 {"error": "Object violates owner / name unique constraint"},
236 )
237 return super().update(instance, validated_data)
238
239
240 class CorrespondentSerializer(MatchingModelSerializer, OwnedObjectSerializer):
241 last_correspondence = serializers.DateTimeField(read_only=True)
242
243 class Meta:
244 model = Correspondent
245 fields = (
246 "id",
247 "slug",
248 "name",
249 "match",
250 "matching_algorithm",
251 "is_insensitive",
252 "document_count",
253 "last_correspondence",
254 "owner",
255 "permissions",
256 "user_can_change",
257 "set_permissions",
258 )
259
260
261 class DocumentTypeSerializer(MatchingModelSerializer, OwnedObjectSerializer):
262 class Meta:
263 model = DocumentType
264 fields = (
265 "id",
266 "slug",
267 "name",
268 "match",
269 "matching_algorithm",
270 "is_insensitive",
271 "document_count",
272 "owner",
273 "permissions",
274 "user_can_change",
275 "set_permissions",
276 )
277
278
279 class ColorField(serializers.Field):
280 COLOURS = (
281 (1, "#a6cee3"),
282 (2, "#1f78b4"),
283 (3, "#b2df8a"),
284 (4, "#33a02c"),
285 (5, "#fb9a99"),
286 (6, "#e31a1c"),
287 (7, "#fdbf6f"),
288 (8, "#ff7f00"),
289 (9, "#cab2d6"),
290 (10, "#6a3d9a"),
291 (11, "#b15928"),
292 (12, "#000000"),
293 (13, "#cccccc"),
294 )
295
296 def to_internal_value(self, data):
297 for id, color in self.COLOURS:
298 if id == data:
299 return color
300 raise serializers.ValidationError
301
302 def to_representation(self, value):
303 for id, color in self.COLOURS:
304 if color == value:
305 return id
306 return 1
307
308
309 class TagSerializerVersion1(MatchingModelSerializer, OwnedObjectSerializer):
310 colour = ColorField(source="color", default="#a6cee3")
311
312 class Meta:
313 model = Tag
314 fields = (
315 "id",
316 "slug",
317 "name",
318 "colour",
319 "match",
320 "matching_algorithm",
321 "is_insensitive",
322 "is_inbox_tag",
323 "document_count",
324 "owner",
325 "permissions",
326 "user_can_change",
327 "set_permissions",
328 )
329
330
331 class TagSerializer(MatchingModelSerializer, OwnedObjectSerializer):
332 def get_text_color(self, obj):
333 try:
334 h = obj.color.lstrip("#")
335 rgb = tuple(int(h[i : i + 2], 16) / 256 for i in (0, 2, 4))
336 luminance = math.sqrt(
337 0.299 * math.pow(rgb[0], 2)
338 + 0.587 * math.pow(rgb[1], 2)
339 + 0.114 * math.pow(rgb[2], 2),
340 )
341 return "#ffffff" if luminance < 0.53 else "#000000"
342 except ValueError:
343 return "#000000"
344
345 text_color = serializers.SerializerMethodField()
346
347 class Meta:
348 model = Tag
349 fields = (
350 "id",
351 "slug",
352 "name",
353 "color",
354 "text_color",
355 "match",
356 "matching_algorithm",
357 "is_insensitive",
358 "is_inbox_tag",
359 "document_count",
360 "owner",
361 "permissions",
362 "user_can_change",
363 "set_permissions",
364 )
365
366 def validate_color(self, color):
367 regex = r"#[0-9a-fA-F]{6}"
368 if not re.match(regex, color):
369 raise serializers.ValidationError(_("Invalid color."))
370 return color
371
372
373 class CorrespondentField(serializers.PrimaryKeyRelatedField):
374 def get_queryset(self):
375 return Correspondent.objects.all()
376
377
378 class TagsField(serializers.PrimaryKeyRelatedField):
379 def get_queryset(self):
380 return Tag.objects.all()
381
382
383 class DocumentTypeField(serializers.PrimaryKeyRelatedField):
384 def get_queryset(self):
385 return DocumentType.objects.all()
386
387
388 class StoragePathField(serializers.PrimaryKeyRelatedField):
389 def get_queryset(self):
390 return StoragePath.objects.all()
391
392
393 class DocumentSerializer(OwnedObjectSerializer, DynamicFieldsModelSerializer):
394 correspondent = CorrespondentField(allow_null=True)
395 tags = TagsField(many=True)
396 document_type = DocumentTypeField(allow_null=True)
397 storage_path = StoragePathField(allow_null=True)
398
399 original_file_name = SerializerMethodField()
400 archived_file_name = SerializerMethodField()
401 created_date = serializers.DateField(required=False)
402
403 owner = serializers.PrimaryKeyRelatedField(
404 queryset=User.objects.all(),
405 required=False,
406 allow_null=True,
407 )
408
409 def get_original_file_name(self, obj):
410 return obj.original_filename
411
412 def get_archived_file_name(self, obj):
413 if obj.has_archive_version:
414 return obj.get_public_filename(archive=True)
415 else:
416 return None
417
418 def to_representation(self, instance):
419 doc = super().to_representation(instance)
420 if self.truncate_content and "content" in self.fields:
421 doc["content"] = doc.get("content")[0:550]
422 return doc
423
424 def update(self, instance, validated_data):
425 if "created_date" in validated_data and "created" not in validated_data:
426 new_datetime = datetime.datetime.combine(
427 validated_data.get("created_date"),
428 datetime.time(0, 0, 0, 0, zoneinfo.ZoneInfo(settings.TIME_ZONE)),
429 )
430 instance.created = new_datetime
431 instance.save()
432 if "created_date" in validated_data:
433 validated_data.pop("created_date")
434 super().update(instance, validated_data)
435 return instance
436
437 def __init__(self, *args, **kwargs):
438 self.truncate_content = kwargs.pop("truncate_content", False)
439
440 super().__init__(*args, **kwargs)
441
442 class Meta:
443 model = Document
444 depth = 1
445 fields = (
446 "id",
447 "correspondent",
448 "document_type",
449 "storage_path",
450 "title",
451 "content",
452 "tags",
453 "created",
454 "created_date",
455 "modified",
456 "added",
457 "archive_serial_number",
458 "original_file_name",
459 "archived_file_name",
460 "owner",
461 "permissions",
462 "user_can_change",
463 "set_permissions",
464 "notes",
465 )
466
467
468 class SavedViewFilterRuleSerializer(serializers.ModelSerializer):
469 class Meta:
470 model = SavedViewFilterRule
471 fields = ["rule_type", "value"]
472
473
474 class SavedViewSerializer(OwnedObjectSerializer):
475 filter_rules = SavedViewFilterRuleSerializer(many=True)
476
477 class Meta:
478 model = SavedView
479 depth = 1
480 fields = [
481 "id",
482 "name",
483 "show_on_dashboard",
484 "show_in_sidebar",
485 "sort_field",
486 "sort_reverse",
487 "filter_rules",
488 "owner",
489 "permissions",
490 "user_can_change",
491 "set_permissions",
492 ]
493
494 def update(self, instance, validated_data):
495 if "filter_rules" in validated_data:
496 rules_data = validated_data.pop("filter_rules")
497 else:
498 rules_data = None
499 if "user" in validated_data:
500 # backwards compatibility
501 validated_data["owner"] = validated_data.pop("user")
502 super().update(instance, validated_data)
503 if rules_data is not None:
504 SavedViewFilterRule.objects.filter(saved_view=instance).delete()
505 for rule_data in rules_data:
506 SavedViewFilterRule.objects.create(saved_view=instance, **rule_data)
507 return instance
508
509 def create(self, validated_data):
510 rules_data = validated_data.pop("filter_rules")
511 if "user" in validated_data:
512 # backwards compatibility
513 validated_data["owner"] = validated_data.pop("user")
514 saved_view = SavedView.objects.create(**validated_data)
515 for rule_data in rules_data:
516 SavedViewFilterRule.objects.create(saved_view=saved_view, **rule_data)
517 return saved_view
518
519
520 class DocumentListSerializer(serializers.Serializer):
521 documents = serializers.ListField(
522 required=True,
523 label="Documents",
524 write_only=True,
525 child=serializers.IntegerField(),
526 )
527
528 def _validate_document_id_list(self, documents, name="documents"):
529 if not isinstance(documents, list):
530 raise serializers.ValidationError(f"{name} must be a list")
531 if not all(isinstance(i, int) for i in documents):
532 raise serializers.ValidationError(f"{name} must be a list of integers")
533 count = Document.objects.filter(id__in=documents).count()
534 if not count == len(documents):
535 raise serializers.ValidationError(
536 f"Some documents in {name} don't exist or were specified twice.",
537 )
538
539 def validate_documents(self, documents):
540 self._validate_document_id_list(documents)
541 return documents
542
543
544 class BulkEditSerializer(DocumentListSerializer, SetPermissionsMixin):
545 method = serializers.ChoiceField(
546 choices=[
547 "set_correspondent",
548 "set_document_type",
549 "set_storage_path",
550 "add_tag",
551 "remove_tag",
552 "modify_tags",
553 "delete",
554 "redo_ocr",
555 "set_permissions",
556 ],
557 label="Method",
558 write_only=True,
559 )
560
561 parameters = serializers.DictField(allow_empty=True)
562
563 def _validate_tag_id_list(self, tags, name="tags"):
564 if not isinstance(tags, list):
565 raise serializers.ValidationError(f"{name} must be a list")
566 if not all(isinstance(i, int) for i in tags):
567 raise serializers.ValidationError(f"{name} must be a list of integers")
568 count = Tag.objects.filter(id__in=tags).count()
569 if not count == len(tags):
570 raise serializers.ValidationError(
571 f"Some tags in {name} don't exist or were specified twice.",
572 )
573
574 def validate_method(self, method):
575 if method == "set_correspondent":
576 return bulk_edit.set_correspondent
577 elif method == "set_document_type":
578 return bulk_edit.set_document_type
579 elif method == "set_storage_path":
580 return bulk_edit.set_storage_path
581 elif method == "add_tag":
582 return bulk_edit.add_tag
583 elif method == "remove_tag":
584 return bulk_edit.remove_tag
585 elif method == "modify_tags":
586 return bulk_edit.modify_tags
587 elif method == "delete":
588 return bulk_edit.delete
589 elif method == "redo_ocr":
590 return bulk_edit.redo_ocr
591 elif method == "set_permissions":
592 return bulk_edit.set_permissions
593 else:
594 raise serializers.ValidationError("Unsupported method.")
595
596 def _validate_parameters_tags(self, parameters):
597 if "tag" in parameters:
598 tag_id = parameters["tag"]
599 try:
600 Tag.objects.get(id=tag_id)
601 except Tag.DoesNotExist:
602 raise serializers.ValidationError("Tag does not exist")
603 else:
604 raise serializers.ValidationError("tag not specified")
605
606 def _validate_parameters_document_type(self, parameters):
607 if "document_type" in parameters:
608 document_type_id = parameters["document_type"]
609 if document_type_id is None:
610 # None is ok
611 return
612 try:
613 DocumentType.objects.get(id=document_type_id)
614 except DocumentType.DoesNotExist:
615 raise serializers.ValidationError("Document type does not exist")
616 else:
617 raise serializers.ValidationError("document_type not specified")
618
619 def _validate_parameters_correspondent(self, parameters):
620 if "correspondent" in parameters:
621 correspondent_id = parameters["correspondent"]
622 if correspondent_id is None:
623 return
624 try:
625 Correspondent.objects.get(id=correspondent_id)
626 except Correspondent.DoesNotExist:
627 raise serializers.ValidationError("Correspondent does not exist")
628 else:
629 raise serializers.ValidationError("correspondent not specified")
630
631 def _validate_storage_path(self, parameters):
632 if "storage_path" in parameters:
633 storage_path_id = parameters["storage_path"]
634 if storage_path_id is None:
635 return
636 try:
637 StoragePath.objects.get(id=storage_path_id)
638 except StoragePath.DoesNotExist:
639 raise serializers.ValidationError(
640 "Storage path does not exist",
641 )
642 else:
643 raise serializers.ValidationError("storage path not specified")
644
645 def _validate_parameters_modify_tags(self, parameters):
646 if "add_tags" in parameters:
647 self._validate_tag_id_list(parameters["add_tags"], "add_tags")
648 else:
649 raise serializers.ValidationError("add_tags not specified")
650
651 if "remove_tags" in parameters:
652 self._validate_tag_id_list(parameters["remove_tags"], "remove_tags")
653 else:
654 raise serializers.ValidationError("remove_tags not specified")
655
656 def _validate_owner(self, owner):
657 ownerUser = User.objects.get(pk=owner)
658 if ownerUser is None:
659 raise serializers.ValidationError("Specified owner cannot be found")
660 return ownerUser
661
662 def _validate_parameters_set_permissions(self, parameters):
663 parameters["set_permissions"] = self.validate_set_permissions(
664 parameters["set_permissions"],
665 )
666 if "owner" in parameters and parameters["owner"] is not None:
667 self._validate_owner(parameters["owner"])
668
669 def validate(self, attrs):
670 method = attrs["method"]
671 parameters = attrs["parameters"]
672
673 if method == bulk_edit.set_correspondent:
674 self._validate_parameters_correspondent(parameters)
675 elif method == bulk_edit.set_document_type:
676 self._validate_parameters_document_type(parameters)
677 elif method == bulk_edit.add_tag or method == bulk_edit.remove_tag:
678 self._validate_parameters_tags(parameters)
679 elif method == bulk_edit.modify_tags:
680 self._validate_parameters_modify_tags(parameters)
681 elif method == bulk_edit.set_storage_path:
682 self._validate_storage_path(parameters)
683 elif method == bulk_edit.set_permissions:
684 self._validate_parameters_set_permissions(parameters)
685
686 return attrs
687
688
689 class PostDocumentSerializer(serializers.Serializer):
690 created = serializers.DateTimeField(
691 label="Created",
692 allow_null=True,
693 write_only=True,
694 required=False,
695 )
696
697 document = serializers.FileField(
698 label="Document",
699 write_only=True,
700 )
701
702 title = serializers.CharField(
703 label="Title",
704 write_only=True,
705 required=False,
706 )
707
708 correspondent = serializers.PrimaryKeyRelatedField(
709 queryset=Correspondent.objects.all(),
710 label="Correspondent",
711 allow_null=True,
712 write_only=True,
713 required=False,
714 )
715
716 document_type = serializers.PrimaryKeyRelatedField(
717 queryset=DocumentType.objects.all(),
718 label="Document type",
719 allow_null=True,
720 write_only=True,
721 required=False,
722 )
723
724 tags = serializers.PrimaryKeyRelatedField(
725 many=True,
726 queryset=Tag.objects.all(),
727 label="Tags",
728 write_only=True,
729 required=False,
730 )
731
732 archive_serial_number = serializers.IntegerField(
733 label="ASN",
734 write_only=True,
735 required=False,
736 min_value=Document.ARCHIVE_SERIAL_NUMBER_MIN,
737 max_value=Document.ARCHIVE_SERIAL_NUMBER_MAX,
738 )
739
740 def validate_document(self, document):
741 document_data = document.file.read()
742 mime_type = magic.from_buffer(document_data, mime=True)
743
744 if not is_mime_type_supported(mime_type):
745 raise serializers.ValidationError(
746 _("File type %(type)s not supported") % {"type": mime_type},
747 )
748
749 return document.name, document_data
750
751 def validate_correspondent(self, correspondent):
752 if correspondent:
753 return correspondent.id
754 else:
755 return None
756
757 def validate_document_type(self, document_type):
758 if document_type:
759 return document_type.id
760 else:
761 return None
762
763 def validate_tags(self, tags):
764 if tags:
765 return [tag.id for tag in tags]
766 else:
767 return None
768
769
770 class BulkDownloadSerializer(DocumentListSerializer):
771 content = serializers.ChoiceField(
772 choices=["archive", "originals", "both"],
773 default="archive",
774 )
775
776 compression = serializers.ChoiceField(
777 choices=["none", "deflated", "bzip2", "lzma"],
778 default="none",
779 )
780
781 follow_formatting = serializers.BooleanField(
782 default=False,
783 )
784
785 def validate_compression(self, compression):
786 import zipfile
787
788 return {
789 "none": zipfile.ZIP_STORED,
790 "deflated": zipfile.ZIP_DEFLATED,
791 "bzip2": zipfile.ZIP_BZIP2,
792 "lzma": zipfile.ZIP_LZMA,
793 }[compression]
794
795
796 class StoragePathSerializer(MatchingModelSerializer, OwnedObjectSerializer):
797 class Meta:
798 model = StoragePath
799 fields = (
800 "id",
801 "slug",
802 "name",
803 "path",
804 "match",
805 "matching_algorithm",
806 "is_insensitive",
807 "document_count",
808 "owner",
809 "permissions",
810 "user_can_change",
811 "set_permissions",
812 )
813
814 def validate_path(self, path):
815 try:
816 path.format(
817 title="title",
818 correspondent="correspondent",
819 document_type="document_type",
820 created="created",
821 created_year="created_year",
822 created_year_short="created_year_short",
823 created_month="created_month",
824 created_month_name="created_month_name",
825 created_month_name_short="created_month_name_short",
826 created_day="created_day",
827 added="added",
828 added_year="added_year",
829 added_year_short="added_year_short",
830 added_month="added_month",
831 added_month_name="added_month_name",
832 added_month_name_short="added_month_name_short",
833 added_day="added_day",
834 asn="asn",
835 tags="tags",
836 tag_list="tag_list",
837 owner_username="someone",
838 original_name="testfile",
839 )
840
841 except KeyError as err:
842 raise serializers.ValidationError(_("Invalid variable detected.")) from err
843
844 return path
845
846 def update(self, instance, validated_data):
847 """
848 When a storage path is updated, see if documents
849 using it require a rename/move
850 """
851 doc_ids = [doc.id for doc in instance.documents.all()]
852 if len(doc_ids):
853 bulk_edit.bulk_update_documents.delay(doc_ids)
854
855 return super().update(instance, validated_data)
856
857
858 class UiSettingsViewSerializer(serializers.ModelSerializer):
859 class Meta:
860 model = UiSettings
861 depth = 1
862 fields = [
863 "id",
864 "settings",
865 ]
866
867 def validate_settings(self, settings):
868 # we never save update checking backend setting
869 if "update_checking" in settings:
870 try:
871 settings["update_checking"].pop("backend_setting")
872 except KeyError:
873 pass
874 return settings
875
876 def create(self, validated_data):
877 ui_settings = UiSettings.objects.update_or_create(
878 user=validated_data.get("user"),
879 defaults={"settings": validated_data.get("settings", None)},
880 )
881 return ui_settings
882
883
884 class TasksViewSerializer(serializers.ModelSerializer):
885 class Meta:
886 model = PaperlessTask
887 depth = 1
888 fields = (
889 "id",
890 "task_id",
891 "task_file_name",
892 "date_created",
893 "date_done",
894 "type",
895 "status",
896 "result",
897 "acknowledged",
898 "related_document",
899 )
900
901 type = serializers.SerializerMethodField()
902
903 def get_type(self, obj):
904 # just file tasks, for now
905 return "file"
906
907 related_document = serializers.SerializerMethodField()
908 related_doc_re = re.compile(r"New document id (\d+) created")
909
910 def get_related_document(self, obj):
911 result = None
912 if obj.status is not None and obj.status == states.SUCCESS:
913 try:
914 result = self.related_doc_re.search(obj.result).group(1)
915 except Exception:
916 pass
917
918 return result
919
920
921 class AcknowledgeTasksViewSerializer(serializers.Serializer):
922 tasks = serializers.ListField(
923 required=True,
924 label="Tasks",
925 write_only=True,
926 child=serializers.IntegerField(),
927 )
928
929 def _validate_task_id_list(self, tasks, name="tasks"):
930 pass
931 if not isinstance(tasks, list):
932 raise serializers.ValidationError(f"{name} must be a list")
933 if not all(isinstance(i, int) for i in tasks):
934 raise serializers.ValidationError(f"{name} must be a list of integers")
935 count = PaperlessTask.objects.filter(id__in=tasks).count()
936 if not count == len(tasks):
937 raise serializers.ValidationError(
938 f"Some tasks in {name} don't exist or were specified twice.",
939 )
940
941 def validate_tasks(self, tasks):
942 self._validate_task_id_list(tasks)
943 return tasks
944
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/documents/serialisers.py b/src/documents/serialisers.py
--- a/src/documents/serialisers.py
+++ b/src/documents/serialisers.py
@@ -476,7 +476,6 @@
class Meta:
model = SavedView
- depth = 1
fields = [
"id",
"name",
| {"golden_diff": "diff --git a/src/documents/serialisers.py b/src/documents/serialisers.py\n--- a/src/documents/serialisers.py\n+++ b/src/documents/serialisers.py\n@@ -476,7 +476,6 @@\n \n class Meta:\n model = SavedView\n- depth = 1\n fields = [\n \"id\",\n \"name\",\n", "issue": "[Security] saved_views API returns (hashed) user password in response\n### Description\n\nThe response of `GET /api/saved_views/` includes the hashed password of the owner of the saved view.\n\n### Steps to reproduce\n\n```\r\ncurl -uuser:pass https://host.com/api/saved_views/ | jq .results[].owner.password\r\n```\n\n### Webserver logs\n\n```bash\n-\n```\n\n\n### Browser logs\n\n_No response_\n\n### Paperless-ngx version\n\n1.16.5\n\n### Host OS\n\nDebian GNU/Linux 12\n\n### Installation method\n\nDocker - official image\n\n### Browser\n\n_No response_\n\n### Configuration changes\n\n_No response_\n\n### Other\n\n_No response_\n", "before_files": [{"content": "import datetime\nimport math\nimport re\nimport zoneinfo\n\nimport magic\nfrom celery import states\nfrom django.conf import settings\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.models import User\nfrom django.utils.text import slugify\nfrom django.utils.translation import gettext as _\nfrom guardian.core import ObjectPermissionChecker\nfrom guardian.shortcuts import get_users_with_perms\nfrom rest_framework import serializers\nfrom rest_framework.fields import SerializerMethodField\n\nfrom documents.permissions import get_groups_with_only_permission\nfrom documents.permissions import set_permissions_for_object\n\nfrom . import bulk_edit\nfrom .models import Correspondent\nfrom .models import Document\nfrom .models import DocumentType\nfrom .models import MatchingModel\nfrom .models import PaperlessTask\nfrom .models import SavedView\nfrom .models import SavedViewFilterRule\nfrom .models import StoragePath\nfrom .models import Tag\nfrom .models import UiSettings\nfrom .parsers import is_mime_type_supported\n\n\n# https://www.django-rest-framework.org/api-guide/serializers/#example\nclass DynamicFieldsModelSerializer(serializers.ModelSerializer):\n \"\"\"\n A ModelSerializer that takes an additional `fields` argument that\n controls which fields should be displayed.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n # Don't pass the 'fields' arg up to the superclass\n fields = kwargs.pop(\"fields\", None)\n\n # Instantiate the superclass normally\n super().__init__(*args, **kwargs)\n\n if fields is not None:\n # Drop any fields that are not specified in the `fields` argument.\n allowed = set(fields)\n existing = set(self.fields)\n for field_name in existing - allowed:\n self.fields.pop(field_name)\n\n\nclass MatchingModelSerializer(serializers.ModelSerializer):\n document_count = serializers.IntegerField(read_only=True)\n\n def get_slug(self, obj):\n return slugify(obj.name)\n\n slug = SerializerMethodField()\n\n def validate(self, data):\n # see https://github.com/encode/django-rest-framework/issues/7173\n name = data[\"name\"] if \"name\" in data else self.instance.name\n owner = (\n data[\"owner\"]\n if \"owner\" in data\n else self.user\n if hasattr(self, \"user\")\n else None\n )\n pk = self.instance.pk if hasattr(self.instance, \"pk\") else None\n if (\"name\" in data or \"owner\" in data) and self.Meta.model.objects.filter(\n name=name,\n owner=owner,\n ).exclude(pk=pk).exists():\n raise serializers.ValidationError(\n {\"error\": \"Object violates owner / name unique constraint\"},\n )\n return data\n\n def 
validate_match(self, match):\n if (\n \"matching_algorithm\" in self.initial_data\n and self.initial_data[\"matching_algorithm\"] == MatchingModel.MATCH_REGEX\n ):\n try:\n re.compile(match)\n except re.error as e:\n raise serializers.ValidationError(\n _(\"Invalid regular expression: %(error)s\") % {\"error\": str(e.msg)},\n )\n return match\n\n\nclass SetPermissionsMixin:\n def _validate_user_ids(self, user_ids):\n users = User.objects.none()\n if user_ids is not None:\n users = User.objects.filter(id__in=user_ids)\n if not users.count() == len(user_ids):\n raise serializers.ValidationError(\n \"Some users in don't exist or were specified twice.\",\n )\n return users\n\n def _validate_group_ids(self, group_ids):\n groups = Group.objects.none()\n if group_ids is not None:\n groups = Group.objects.filter(id__in=group_ids)\n if not groups.count() == len(group_ids):\n raise serializers.ValidationError(\n \"Some groups in don't exist or were specified twice.\",\n )\n return groups\n\n def validate_set_permissions(self, set_permissions=None):\n permissions_dict = {\n \"view\": {\n \"users\": User.objects.none(),\n \"groups\": Group.objects.none(),\n },\n \"change\": {\n \"users\": User.objects.none(),\n \"groups\": Group.objects.none(),\n },\n }\n if set_permissions is not None:\n for action in permissions_dict:\n if action in set_permissions:\n users = set_permissions[action][\"users\"]\n permissions_dict[action][\"users\"] = self._validate_user_ids(users)\n groups = set_permissions[action][\"groups\"]\n permissions_dict[action][\"groups\"] = self._validate_group_ids(\n groups,\n )\n return permissions_dict\n\n def _set_permissions(self, permissions, object):\n set_permissions_for_object(permissions, object)\n\n\nclass OwnedObjectSerializer(serializers.ModelSerializer, SetPermissionsMixin):\n def __init__(self, *args, **kwargs):\n self.user = kwargs.pop(\"user\", None)\n full_perms = kwargs.pop(\"full_perms\", False)\n super().__init__(*args, **kwargs)\n\n try:\n if full_perms:\n self.fields.pop(\"user_can_change\")\n else:\n self.fields.pop(\"permissions\")\n except KeyError:\n pass\n\n def get_permissions(self, obj):\n view_codename = f\"view_{obj.__class__.__name__.lower()}\"\n change_codename = f\"change_{obj.__class__.__name__.lower()}\"\n\n return {\n \"view\": {\n \"users\": get_users_with_perms(\n obj,\n only_with_perms_in=[view_codename],\n with_group_users=False,\n ).values_list(\"id\", flat=True),\n \"groups\": get_groups_with_only_permission(\n obj,\n codename=view_codename,\n ).values_list(\"id\", flat=True),\n },\n \"change\": {\n \"users\": get_users_with_perms(\n obj,\n only_with_perms_in=[change_codename],\n with_group_users=False,\n ).values_list(\"id\", flat=True),\n \"groups\": get_groups_with_only_permission(\n obj,\n codename=change_codename,\n ).values_list(\"id\", flat=True),\n },\n }\n\n def get_user_can_change(self, obj):\n checker = ObjectPermissionChecker(self.user) if self.user is not None else None\n return (\n obj.owner is None\n or obj.owner == self.user\n or (\n self.user is not None\n and checker.has_perm(f\"change_{obj.__class__.__name__.lower()}\", obj)\n )\n )\n\n permissions = SerializerMethodField(read_only=True)\n user_can_change = SerializerMethodField(read_only=True)\n\n set_permissions = serializers.DictField(\n label=\"Set permissions\",\n allow_empty=True,\n required=False,\n write_only=True,\n )\n # other methods in mixin\n\n def create(self, validated_data):\n # default to current user if not set\n if \"owner\" not in validated_data and 
self.user:\n validated_data[\"owner\"] = self.user\n permissions = None\n if \"set_permissions\" in validated_data:\n permissions = validated_data.pop(\"set_permissions\")\n instance = super().create(validated_data)\n if permissions is not None:\n self._set_permissions(permissions, instance)\n return instance\n\n def update(self, instance, validated_data):\n if \"set_permissions\" in validated_data:\n self._set_permissions(validated_data[\"set_permissions\"], instance)\n if \"owner\" in validated_data and \"name\" in self.Meta.fields:\n name = validated_data[\"name\"] if \"name\" in validated_data else instance.name\n not_unique = (\n self.Meta.model.objects.exclude(pk=instance.pk)\n .filter(owner=validated_data[\"owner\"], name=name)\n .exists()\n )\n if not_unique:\n raise serializers.ValidationError(\n {\"error\": \"Object violates owner / name unique constraint\"},\n )\n return super().update(instance, validated_data)\n\n\nclass CorrespondentSerializer(MatchingModelSerializer, OwnedObjectSerializer):\n last_correspondence = serializers.DateTimeField(read_only=True)\n\n class Meta:\n model = Correspondent\n fields = (\n \"id\",\n \"slug\",\n \"name\",\n \"match\",\n \"matching_algorithm\",\n \"is_insensitive\",\n \"document_count\",\n \"last_correspondence\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n )\n\n\nclass DocumentTypeSerializer(MatchingModelSerializer, OwnedObjectSerializer):\n class Meta:\n model = DocumentType\n fields = (\n \"id\",\n \"slug\",\n \"name\",\n \"match\",\n \"matching_algorithm\",\n \"is_insensitive\",\n \"document_count\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n )\n\n\nclass ColorField(serializers.Field):\n COLOURS = (\n (1, \"#a6cee3\"),\n (2, \"#1f78b4\"),\n (3, \"#b2df8a\"),\n (4, \"#33a02c\"),\n (5, \"#fb9a99\"),\n (6, \"#e31a1c\"),\n (7, \"#fdbf6f\"),\n (8, \"#ff7f00\"),\n (9, \"#cab2d6\"),\n (10, \"#6a3d9a\"),\n (11, \"#b15928\"),\n (12, \"#000000\"),\n (13, \"#cccccc\"),\n )\n\n def to_internal_value(self, data):\n for id, color in self.COLOURS:\n if id == data:\n return color\n raise serializers.ValidationError\n\n def to_representation(self, value):\n for id, color in self.COLOURS:\n if color == value:\n return id\n return 1\n\n\nclass TagSerializerVersion1(MatchingModelSerializer, OwnedObjectSerializer):\n colour = ColorField(source=\"color\", default=\"#a6cee3\")\n\n class Meta:\n model = Tag\n fields = (\n \"id\",\n \"slug\",\n \"name\",\n \"colour\",\n \"match\",\n \"matching_algorithm\",\n \"is_insensitive\",\n \"is_inbox_tag\",\n \"document_count\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n )\n\n\nclass TagSerializer(MatchingModelSerializer, OwnedObjectSerializer):\n def get_text_color(self, obj):\n try:\n h = obj.color.lstrip(\"#\")\n rgb = tuple(int(h[i : i + 2], 16) / 256 for i in (0, 2, 4))\n luminance = math.sqrt(\n 0.299 * math.pow(rgb[0], 2)\n + 0.587 * math.pow(rgb[1], 2)\n + 0.114 * math.pow(rgb[2], 2),\n )\n return \"#ffffff\" if luminance < 0.53 else \"#000000\"\n except ValueError:\n return \"#000000\"\n\n text_color = serializers.SerializerMethodField()\n\n class Meta:\n model = Tag\n fields = (\n \"id\",\n \"slug\",\n \"name\",\n \"color\",\n \"text_color\",\n \"match\",\n \"matching_algorithm\",\n \"is_insensitive\",\n \"is_inbox_tag\",\n \"document_count\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n )\n\n def validate_color(self, color):\n regex = r\"#[0-9a-fA-F]{6}\"\n if not 
re.match(regex, color):\n raise serializers.ValidationError(_(\"Invalid color.\"))\n return color\n\n\nclass CorrespondentField(serializers.PrimaryKeyRelatedField):\n def get_queryset(self):\n return Correspondent.objects.all()\n\n\nclass TagsField(serializers.PrimaryKeyRelatedField):\n def get_queryset(self):\n return Tag.objects.all()\n\n\nclass DocumentTypeField(serializers.PrimaryKeyRelatedField):\n def get_queryset(self):\n return DocumentType.objects.all()\n\n\nclass StoragePathField(serializers.PrimaryKeyRelatedField):\n def get_queryset(self):\n return StoragePath.objects.all()\n\n\nclass DocumentSerializer(OwnedObjectSerializer, DynamicFieldsModelSerializer):\n correspondent = CorrespondentField(allow_null=True)\n tags = TagsField(many=True)\n document_type = DocumentTypeField(allow_null=True)\n storage_path = StoragePathField(allow_null=True)\n\n original_file_name = SerializerMethodField()\n archived_file_name = SerializerMethodField()\n created_date = serializers.DateField(required=False)\n\n owner = serializers.PrimaryKeyRelatedField(\n queryset=User.objects.all(),\n required=False,\n allow_null=True,\n )\n\n def get_original_file_name(self, obj):\n return obj.original_filename\n\n def get_archived_file_name(self, obj):\n if obj.has_archive_version:\n return obj.get_public_filename(archive=True)\n else:\n return None\n\n def to_representation(self, instance):\n doc = super().to_representation(instance)\n if self.truncate_content and \"content\" in self.fields:\n doc[\"content\"] = doc.get(\"content\")[0:550]\n return doc\n\n def update(self, instance, validated_data):\n if \"created_date\" in validated_data and \"created\" not in validated_data:\n new_datetime = datetime.datetime.combine(\n validated_data.get(\"created_date\"),\n datetime.time(0, 0, 0, 0, zoneinfo.ZoneInfo(settings.TIME_ZONE)),\n )\n instance.created = new_datetime\n instance.save()\n if \"created_date\" in validated_data:\n validated_data.pop(\"created_date\")\n super().update(instance, validated_data)\n return instance\n\n def __init__(self, *args, **kwargs):\n self.truncate_content = kwargs.pop(\"truncate_content\", False)\n\n super().__init__(*args, **kwargs)\n\n class Meta:\n model = Document\n depth = 1\n fields = (\n \"id\",\n \"correspondent\",\n \"document_type\",\n \"storage_path\",\n \"title\",\n \"content\",\n \"tags\",\n \"created\",\n \"created_date\",\n \"modified\",\n \"added\",\n \"archive_serial_number\",\n \"original_file_name\",\n \"archived_file_name\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n \"notes\",\n )\n\n\nclass SavedViewFilterRuleSerializer(serializers.ModelSerializer):\n class Meta:\n model = SavedViewFilterRule\n fields = [\"rule_type\", \"value\"]\n\n\nclass SavedViewSerializer(OwnedObjectSerializer):\n filter_rules = SavedViewFilterRuleSerializer(many=True)\n\n class Meta:\n model = SavedView\n depth = 1\n fields = [\n \"id\",\n \"name\",\n \"show_on_dashboard\",\n \"show_in_sidebar\",\n \"sort_field\",\n \"sort_reverse\",\n \"filter_rules\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n ]\n\n def update(self, instance, validated_data):\n if \"filter_rules\" in validated_data:\n rules_data = validated_data.pop(\"filter_rules\")\n else:\n rules_data = None\n if \"user\" in validated_data:\n # backwards compatibility\n validated_data[\"owner\"] = validated_data.pop(\"user\")\n super().update(instance, validated_data)\n if rules_data is not None:\n 
SavedViewFilterRule.objects.filter(saved_view=instance).delete()\n for rule_data in rules_data:\n SavedViewFilterRule.objects.create(saved_view=instance, **rule_data)\n return instance\n\n def create(self, validated_data):\n rules_data = validated_data.pop(\"filter_rules\")\n if \"user\" in validated_data:\n # backwards compatibility\n validated_data[\"owner\"] = validated_data.pop(\"user\")\n saved_view = SavedView.objects.create(**validated_data)\n for rule_data in rules_data:\n SavedViewFilterRule.objects.create(saved_view=saved_view, **rule_data)\n return saved_view\n\n\nclass DocumentListSerializer(serializers.Serializer):\n documents = serializers.ListField(\n required=True,\n label=\"Documents\",\n write_only=True,\n child=serializers.IntegerField(),\n )\n\n def _validate_document_id_list(self, documents, name=\"documents\"):\n if not isinstance(documents, list):\n raise serializers.ValidationError(f\"{name} must be a list\")\n if not all(isinstance(i, int) for i in documents):\n raise serializers.ValidationError(f\"{name} must be a list of integers\")\n count = Document.objects.filter(id__in=documents).count()\n if not count == len(documents):\n raise serializers.ValidationError(\n f\"Some documents in {name} don't exist or were specified twice.\",\n )\n\n def validate_documents(self, documents):\n self._validate_document_id_list(documents)\n return documents\n\n\nclass BulkEditSerializer(DocumentListSerializer, SetPermissionsMixin):\n method = serializers.ChoiceField(\n choices=[\n \"set_correspondent\",\n \"set_document_type\",\n \"set_storage_path\",\n \"add_tag\",\n \"remove_tag\",\n \"modify_tags\",\n \"delete\",\n \"redo_ocr\",\n \"set_permissions\",\n ],\n label=\"Method\",\n write_only=True,\n )\n\n parameters = serializers.DictField(allow_empty=True)\n\n def _validate_tag_id_list(self, tags, name=\"tags\"):\n if not isinstance(tags, list):\n raise serializers.ValidationError(f\"{name} must be a list\")\n if not all(isinstance(i, int) for i in tags):\n raise serializers.ValidationError(f\"{name} must be a list of integers\")\n count = Tag.objects.filter(id__in=tags).count()\n if not count == len(tags):\n raise serializers.ValidationError(\n f\"Some tags in {name} don't exist or were specified twice.\",\n )\n\n def validate_method(self, method):\n if method == \"set_correspondent\":\n return bulk_edit.set_correspondent\n elif method == \"set_document_type\":\n return bulk_edit.set_document_type\n elif method == \"set_storage_path\":\n return bulk_edit.set_storage_path\n elif method == \"add_tag\":\n return bulk_edit.add_tag\n elif method == \"remove_tag\":\n return bulk_edit.remove_tag\n elif method == \"modify_tags\":\n return bulk_edit.modify_tags\n elif method == \"delete\":\n return bulk_edit.delete\n elif method == \"redo_ocr\":\n return bulk_edit.redo_ocr\n elif method == \"set_permissions\":\n return bulk_edit.set_permissions\n else:\n raise serializers.ValidationError(\"Unsupported method.\")\n\n def _validate_parameters_tags(self, parameters):\n if \"tag\" in parameters:\n tag_id = parameters[\"tag\"]\n try:\n Tag.objects.get(id=tag_id)\n except Tag.DoesNotExist:\n raise serializers.ValidationError(\"Tag does not exist\")\n else:\n raise serializers.ValidationError(\"tag not specified\")\n\n def _validate_parameters_document_type(self, parameters):\n if \"document_type\" in parameters:\n document_type_id = parameters[\"document_type\"]\n if document_type_id is None:\n # None is ok\n return\n try:\n DocumentType.objects.get(id=document_type_id)\n except 
DocumentType.DoesNotExist:\n raise serializers.ValidationError(\"Document type does not exist\")\n else:\n raise serializers.ValidationError(\"document_type not specified\")\n\n def _validate_parameters_correspondent(self, parameters):\n if \"correspondent\" in parameters:\n correspondent_id = parameters[\"correspondent\"]\n if correspondent_id is None:\n return\n try:\n Correspondent.objects.get(id=correspondent_id)\n except Correspondent.DoesNotExist:\n raise serializers.ValidationError(\"Correspondent does not exist\")\n else:\n raise serializers.ValidationError(\"correspondent not specified\")\n\n def _validate_storage_path(self, parameters):\n if \"storage_path\" in parameters:\n storage_path_id = parameters[\"storage_path\"]\n if storage_path_id is None:\n return\n try:\n StoragePath.objects.get(id=storage_path_id)\n except StoragePath.DoesNotExist:\n raise serializers.ValidationError(\n \"Storage path does not exist\",\n )\n else:\n raise serializers.ValidationError(\"storage path not specified\")\n\n def _validate_parameters_modify_tags(self, parameters):\n if \"add_tags\" in parameters:\n self._validate_tag_id_list(parameters[\"add_tags\"], \"add_tags\")\n else:\n raise serializers.ValidationError(\"add_tags not specified\")\n\n if \"remove_tags\" in parameters:\n self._validate_tag_id_list(parameters[\"remove_tags\"], \"remove_tags\")\n else:\n raise serializers.ValidationError(\"remove_tags not specified\")\n\n def _validate_owner(self, owner):\n ownerUser = User.objects.get(pk=owner)\n if ownerUser is None:\n raise serializers.ValidationError(\"Specified owner cannot be found\")\n return ownerUser\n\n def _validate_parameters_set_permissions(self, parameters):\n parameters[\"set_permissions\"] = self.validate_set_permissions(\n parameters[\"set_permissions\"],\n )\n if \"owner\" in parameters and parameters[\"owner\"] is not None:\n self._validate_owner(parameters[\"owner\"])\n\n def validate(self, attrs):\n method = attrs[\"method\"]\n parameters = attrs[\"parameters\"]\n\n if method == bulk_edit.set_correspondent:\n self._validate_parameters_correspondent(parameters)\n elif method == bulk_edit.set_document_type:\n self._validate_parameters_document_type(parameters)\n elif method == bulk_edit.add_tag or method == bulk_edit.remove_tag:\n self._validate_parameters_tags(parameters)\n elif method == bulk_edit.modify_tags:\n self._validate_parameters_modify_tags(parameters)\n elif method == bulk_edit.set_storage_path:\n self._validate_storage_path(parameters)\n elif method == bulk_edit.set_permissions:\n self._validate_parameters_set_permissions(parameters)\n\n return attrs\n\n\nclass PostDocumentSerializer(serializers.Serializer):\n created = serializers.DateTimeField(\n label=\"Created\",\n allow_null=True,\n write_only=True,\n required=False,\n )\n\n document = serializers.FileField(\n label=\"Document\",\n write_only=True,\n )\n\n title = serializers.CharField(\n label=\"Title\",\n write_only=True,\n required=False,\n )\n\n correspondent = serializers.PrimaryKeyRelatedField(\n queryset=Correspondent.objects.all(),\n label=\"Correspondent\",\n allow_null=True,\n write_only=True,\n required=False,\n )\n\n document_type = serializers.PrimaryKeyRelatedField(\n queryset=DocumentType.objects.all(),\n label=\"Document type\",\n allow_null=True,\n write_only=True,\n required=False,\n )\n\n tags = serializers.PrimaryKeyRelatedField(\n many=True,\n queryset=Tag.objects.all(),\n label=\"Tags\",\n write_only=True,\n required=False,\n )\n\n archive_serial_number = 
serializers.IntegerField(\n label=\"ASN\",\n write_only=True,\n required=False,\n min_value=Document.ARCHIVE_SERIAL_NUMBER_MIN,\n max_value=Document.ARCHIVE_SERIAL_NUMBER_MAX,\n )\n\n def validate_document(self, document):\n document_data = document.file.read()\n mime_type = magic.from_buffer(document_data, mime=True)\n\n if not is_mime_type_supported(mime_type):\n raise serializers.ValidationError(\n _(\"File type %(type)s not supported\") % {\"type\": mime_type},\n )\n\n return document.name, document_data\n\n def validate_correspondent(self, correspondent):\n if correspondent:\n return correspondent.id\n else:\n return None\n\n def validate_document_type(self, document_type):\n if document_type:\n return document_type.id\n else:\n return None\n\n def validate_tags(self, tags):\n if tags:\n return [tag.id for tag in tags]\n else:\n return None\n\n\nclass BulkDownloadSerializer(DocumentListSerializer):\n content = serializers.ChoiceField(\n choices=[\"archive\", \"originals\", \"both\"],\n default=\"archive\",\n )\n\n compression = serializers.ChoiceField(\n choices=[\"none\", \"deflated\", \"bzip2\", \"lzma\"],\n default=\"none\",\n )\n\n follow_formatting = serializers.BooleanField(\n default=False,\n )\n\n def validate_compression(self, compression):\n import zipfile\n\n return {\n \"none\": zipfile.ZIP_STORED,\n \"deflated\": zipfile.ZIP_DEFLATED,\n \"bzip2\": zipfile.ZIP_BZIP2,\n \"lzma\": zipfile.ZIP_LZMA,\n }[compression]\n\n\nclass StoragePathSerializer(MatchingModelSerializer, OwnedObjectSerializer):\n class Meta:\n model = StoragePath\n fields = (\n \"id\",\n \"slug\",\n \"name\",\n \"path\",\n \"match\",\n \"matching_algorithm\",\n \"is_insensitive\",\n \"document_count\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n )\n\n def validate_path(self, path):\n try:\n path.format(\n title=\"title\",\n correspondent=\"correspondent\",\n document_type=\"document_type\",\n created=\"created\",\n created_year=\"created_year\",\n created_year_short=\"created_year_short\",\n created_month=\"created_month\",\n created_month_name=\"created_month_name\",\n created_month_name_short=\"created_month_name_short\",\n created_day=\"created_day\",\n added=\"added\",\n added_year=\"added_year\",\n added_year_short=\"added_year_short\",\n added_month=\"added_month\",\n added_month_name=\"added_month_name\",\n added_month_name_short=\"added_month_name_short\",\n added_day=\"added_day\",\n asn=\"asn\",\n tags=\"tags\",\n tag_list=\"tag_list\",\n owner_username=\"someone\",\n original_name=\"testfile\",\n )\n\n except KeyError as err:\n raise serializers.ValidationError(_(\"Invalid variable detected.\")) from err\n\n return path\n\n def update(self, instance, validated_data):\n \"\"\"\n When a storage path is updated, see if documents\n using it require a rename/move\n \"\"\"\n doc_ids = [doc.id for doc in instance.documents.all()]\n if len(doc_ids):\n bulk_edit.bulk_update_documents.delay(doc_ids)\n\n return super().update(instance, validated_data)\n\n\nclass UiSettingsViewSerializer(serializers.ModelSerializer):\n class Meta:\n model = UiSettings\n depth = 1\n fields = [\n \"id\",\n \"settings\",\n ]\n\n def validate_settings(self, settings):\n # we never save update checking backend setting\n if \"update_checking\" in settings:\n try:\n settings[\"update_checking\"].pop(\"backend_setting\")\n except KeyError:\n pass\n return settings\n\n def create(self, validated_data):\n ui_settings = UiSettings.objects.update_or_create(\n user=validated_data.get(\"user\"),\n 
defaults={\"settings\": validated_data.get(\"settings\", None)},\n )\n return ui_settings\n\n\nclass TasksViewSerializer(serializers.ModelSerializer):\n class Meta:\n model = PaperlessTask\n depth = 1\n fields = (\n \"id\",\n \"task_id\",\n \"task_file_name\",\n \"date_created\",\n \"date_done\",\n \"type\",\n \"status\",\n \"result\",\n \"acknowledged\",\n \"related_document\",\n )\n\n type = serializers.SerializerMethodField()\n\n def get_type(self, obj):\n # just file tasks, for now\n return \"file\"\n\n related_document = serializers.SerializerMethodField()\n related_doc_re = re.compile(r\"New document id (\\d+) created\")\n\n def get_related_document(self, obj):\n result = None\n if obj.status is not None and obj.status == states.SUCCESS:\n try:\n result = self.related_doc_re.search(obj.result).group(1)\n except Exception:\n pass\n\n return result\n\n\nclass AcknowledgeTasksViewSerializer(serializers.Serializer):\n tasks = serializers.ListField(\n required=True,\n label=\"Tasks\",\n write_only=True,\n child=serializers.IntegerField(),\n )\n\n def _validate_task_id_list(self, tasks, name=\"tasks\"):\n pass\n if not isinstance(tasks, list):\n raise serializers.ValidationError(f\"{name} must be a list\")\n if not all(isinstance(i, int) for i in tasks):\n raise serializers.ValidationError(f\"{name} must be a list of integers\")\n count = PaperlessTask.objects.filter(id__in=tasks).count()\n if not count == len(tasks):\n raise serializers.ValidationError(\n f\"Some tasks in {name} don't exist or were specified twice.\",\n )\n\n def validate_tasks(self, tasks):\n self._validate_task_id_list(tasks)\n return tasks\n", "path": "src/documents/serialisers.py"}], "after_files": [{"content": "import datetime\nimport math\nimport re\nimport zoneinfo\n\nimport magic\nfrom celery import states\nfrom django.conf import settings\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.models import User\nfrom django.utils.text import slugify\nfrom django.utils.translation import gettext as _\nfrom guardian.core import ObjectPermissionChecker\nfrom guardian.shortcuts import get_users_with_perms\nfrom rest_framework import serializers\nfrom rest_framework.fields import SerializerMethodField\n\nfrom documents.permissions import get_groups_with_only_permission\nfrom documents.permissions import set_permissions_for_object\n\nfrom . 
import bulk_edit\nfrom .models import Correspondent\nfrom .models import Document\nfrom .models import DocumentType\nfrom .models import MatchingModel\nfrom .models import PaperlessTask\nfrom .models import SavedView\nfrom .models import SavedViewFilterRule\nfrom .models import StoragePath\nfrom .models import Tag\nfrom .models import UiSettings\nfrom .parsers import is_mime_type_supported\n\n\n# https://www.django-rest-framework.org/api-guide/serializers/#example\nclass DynamicFieldsModelSerializer(serializers.ModelSerializer):\n \"\"\"\n A ModelSerializer that takes an additional `fields` argument that\n controls which fields should be displayed.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n # Don't pass the 'fields' arg up to the superclass\n fields = kwargs.pop(\"fields\", None)\n\n # Instantiate the superclass normally\n super().__init__(*args, **kwargs)\n\n if fields is not None:\n # Drop any fields that are not specified in the `fields` argument.\n allowed = set(fields)\n existing = set(self.fields)\n for field_name in existing - allowed:\n self.fields.pop(field_name)\n\n\nclass MatchingModelSerializer(serializers.ModelSerializer):\n document_count = serializers.IntegerField(read_only=True)\n\n def get_slug(self, obj):\n return slugify(obj.name)\n\n slug = SerializerMethodField()\n\n def validate(self, data):\n # see https://github.com/encode/django-rest-framework/issues/7173\n name = data[\"name\"] if \"name\" in data else self.instance.name\n owner = (\n data[\"owner\"]\n if \"owner\" in data\n else self.user\n if hasattr(self, \"user\")\n else None\n )\n pk = self.instance.pk if hasattr(self.instance, \"pk\") else None\n if (\"name\" in data or \"owner\" in data) and self.Meta.model.objects.filter(\n name=name,\n owner=owner,\n ).exclude(pk=pk).exists():\n raise serializers.ValidationError(\n {\"error\": \"Object violates owner / name unique constraint\"},\n )\n return data\n\n def validate_match(self, match):\n if (\n \"matching_algorithm\" in self.initial_data\n and self.initial_data[\"matching_algorithm\"] == MatchingModel.MATCH_REGEX\n ):\n try:\n re.compile(match)\n except re.error as e:\n raise serializers.ValidationError(\n _(\"Invalid regular expression: %(error)s\") % {\"error\": str(e.msg)},\n )\n return match\n\n\nclass SetPermissionsMixin:\n def _validate_user_ids(self, user_ids):\n users = User.objects.none()\n if user_ids is not None:\n users = User.objects.filter(id__in=user_ids)\n if not users.count() == len(user_ids):\n raise serializers.ValidationError(\n \"Some users in don't exist or were specified twice.\",\n )\n return users\n\n def _validate_group_ids(self, group_ids):\n groups = Group.objects.none()\n if group_ids is not None:\n groups = Group.objects.filter(id__in=group_ids)\n if not groups.count() == len(group_ids):\n raise serializers.ValidationError(\n \"Some groups in don't exist or were specified twice.\",\n )\n return groups\n\n def validate_set_permissions(self, set_permissions=None):\n permissions_dict = {\n \"view\": {\n \"users\": User.objects.none(),\n \"groups\": Group.objects.none(),\n },\n \"change\": {\n \"users\": User.objects.none(),\n \"groups\": Group.objects.none(),\n },\n }\n if set_permissions is not None:\n for action in permissions_dict:\n if action in set_permissions:\n users = set_permissions[action][\"users\"]\n permissions_dict[action][\"users\"] = self._validate_user_ids(users)\n groups = set_permissions[action][\"groups\"]\n permissions_dict[action][\"groups\"] = self._validate_group_ids(\n groups,\n )\n return 
permissions_dict\n\n def _set_permissions(self, permissions, object):\n set_permissions_for_object(permissions, object)\n\n\nclass OwnedObjectSerializer(serializers.ModelSerializer, SetPermissionsMixin):\n def __init__(self, *args, **kwargs):\n self.user = kwargs.pop(\"user\", None)\n full_perms = kwargs.pop(\"full_perms\", False)\n super().__init__(*args, **kwargs)\n\n try:\n if full_perms:\n self.fields.pop(\"user_can_change\")\n else:\n self.fields.pop(\"permissions\")\n except KeyError:\n pass\n\n def get_permissions(self, obj):\n view_codename = f\"view_{obj.__class__.__name__.lower()}\"\n change_codename = f\"change_{obj.__class__.__name__.lower()}\"\n\n return {\n \"view\": {\n \"users\": get_users_with_perms(\n obj,\n only_with_perms_in=[view_codename],\n with_group_users=False,\n ).values_list(\"id\", flat=True),\n \"groups\": get_groups_with_only_permission(\n obj,\n codename=view_codename,\n ).values_list(\"id\", flat=True),\n },\n \"change\": {\n \"users\": get_users_with_perms(\n obj,\n only_with_perms_in=[change_codename],\n with_group_users=False,\n ).values_list(\"id\", flat=True),\n \"groups\": get_groups_with_only_permission(\n obj,\n codename=change_codename,\n ).values_list(\"id\", flat=True),\n },\n }\n\n def get_user_can_change(self, obj):\n checker = ObjectPermissionChecker(self.user) if self.user is not None else None\n return (\n obj.owner is None\n or obj.owner == self.user\n or (\n self.user is not None\n and checker.has_perm(f\"change_{obj.__class__.__name__.lower()}\", obj)\n )\n )\n\n permissions = SerializerMethodField(read_only=True)\n user_can_change = SerializerMethodField(read_only=True)\n\n set_permissions = serializers.DictField(\n label=\"Set permissions\",\n allow_empty=True,\n required=False,\n write_only=True,\n )\n # other methods in mixin\n\n def create(self, validated_data):\n # default to current user if not set\n if \"owner\" not in validated_data and self.user:\n validated_data[\"owner\"] = self.user\n permissions = None\n if \"set_permissions\" in validated_data:\n permissions = validated_data.pop(\"set_permissions\")\n instance = super().create(validated_data)\n if permissions is not None:\n self._set_permissions(permissions, instance)\n return instance\n\n def update(self, instance, validated_data):\n if \"set_permissions\" in validated_data:\n self._set_permissions(validated_data[\"set_permissions\"], instance)\n if \"owner\" in validated_data and \"name\" in self.Meta.fields:\n name = validated_data[\"name\"] if \"name\" in validated_data else instance.name\n not_unique = (\n self.Meta.model.objects.exclude(pk=instance.pk)\n .filter(owner=validated_data[\"owner\"], name=name)\n .exists()\n )\n if not_unique:\n raise serializers.ValidationError(\n {\"error\": \"Object violates owner / name unique constraint\"},\n )\n return super().update(instance, validated_data)\n\n\nclass CorrespondentSerializer(MatchingModelSerializer, OwnedObjectSerializer):\n last_correspondence = serializers.DateTimeField(read_only=True)\n\n class Meta:\n model = Correspondent\n fields = (\n \"id\",\n \"slug\",\n \"name\",\n \"match\",\n \"matching_algorithm\",\n \"is_insensitive\",\n \"document_count\",\n \"last_correspondence\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n )\n\n\nclass DocumentTypeSerializer(MatchingModelSerializer, OwnedObjectSerializer):\n class Meta:\n model = DocumentType\n fields = (\n \"id\",\n \"slug\",\n \"name\",\n \"match\",\n \"matching_algorithm\",\n \"is_insensitive\",\n \"document_count\",\n 
\"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n )\n\n\nclass ColorField(serializers.Field):\n COLOURS = (\n (1, \"#a6cee3\"),\n (2, \"#1f78b4\"),\n (3, \"#b2df8a\"),\n (4, \"#33a02c\"),\n (5, \"#fb9a99\"),\n (6, \"#e31a1c\"),\n (7, \"#fdbf6f\"),\n (8, \"#ff7f00\"),\n (9, \"#cab2d6\"),\n (10, \"#6a3d9a\"),\n (11, \"#b15928\"),\n (12, \"#000000\"),\n (13, \"#cccccc\"),\n )\n\n def to_internal_value(self, data):\n for id, color in self.COLOURS:\n if id == data:\n return color\n raise serializers.ValidationError\n\n def to_representation(self, value):\n for id, color in self.COLOURS:\n if color == value:\n return id\n return 1\n\n\nclass TagSerializerVersion1(MatchingModelSerializer, OwnedObjectSerializer):\n colour = ColorField(source=\"color\", default=\"#a6cee3\")\n\n class Meta:\n model = Tag\n fields = (\n \"id\",\n \"slug\",\n \"name\",\n \"colour\",\n \"match\",\n \"matching_algorithm\",\n \"is_insensitive\",\n \"is_inbox_tag\",\n \"document_count\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n )\n\n\nclass TagSerializer(MatchingModelSerializer, OwnedObjectSerializer):\n def get_text_color(self, obj):\n try:\n h = obj.color.lstrip(\"#\")\n rgb = tuple(int(h[i : i + 2], 16) / 256 for i in (0, 2, 4))\n luminance = math.sqrt(\n 0.299 * math.pow(rgb[0], 2)\n + 0.587 * math.pow(rgb[1], 2)\n + 0.114 * math.pow(rgb[2], 2),\n )\n return \"#ffffff\" if luminance < 0.53 else \"#000000\"\n except ValueError:\n return \"#000000\"\n\n text_color = serializers.SerializerMethodField()\n\n class Meta:\n model = Tag\n fields = (\n \"id\",\n \"slug\",\n \"name\",\n \"color\",\n \"text_color\",\n \"match\",\n \"matching_algorithm\",\n \"is_insensitive\",\n \"is_inbox_tag\",\n \"document_count\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n )\n\n def validate_color(self, color):\n regex = r\"#[0-9a-fA-F]{6}\"\n if not re.match(regex, color):\n raise serializers.ValidationError(_(\"Invalid color.\"))\n return color\n\n\nclass CorrespondentField(serializers.PrimaryKeyRelatedField):\n def get_queryset(self):\n return Correspondent.objects.all()\n\n\nclass TagsField(serializers.PrimaryKeyRelatedField):\n def get_queryset(self):\n return Tag.objects.all()\n\n\nclass DocumentTypeField(serializers.PrimaryKeyRelatedField):\n def get_queryset(self):\n return DocumentType.objects.all()\n\n\nclass StoragePathField(serializers.PrimaryKeyRelatedField):\n def get_queryset(self):\n return StoragePath.objects.all()\n\n\nclass DocumentSerializer(OwnedObjectSerializer, DynamicFieldsModelSerializer):\n correspondent = CorrespondentField(allow_null=True)\n tags = TagsField(many=True)\n document_type = DocumentTypeField(allow_null=True)\n storage_path = StoragePathField(allow_null=True)\n\n original_file_name = SerializerMethodField()\n archived_file_name = SerializerMethodField()\n created_date = serializers.DateField(required=False)\n\n owner = serializers.PrimaryKeyRelatedField(\n queryset=User.objects.all(),\n required=False,\n allow_null=True,\n )\n\n def get_original_file_name(self, obj):\n return obj.original_filename\n\n def get_archived_file_name(self, obj):\n if obj.has_archive_version:\n return obj.get_public_filename(archive=True)\n else:\n return None\n\n def to_representation(self, instance):\n doc = super().to_representation(instance)\n if self.truncate_content and \"content\" in self.fields:\n doc[\"content\"] = doc.get(\"content\")[0:550]\n return doc\n\n def update(self, instance, validated_data):\n if 
\"created_date\" in validated_data and \"created\" not in validated_data:\n new_datetime = datetime.datetime.combine(\n validated_data.get(\"created_date\"),\n datetime.time(0, 0, 0, 0, zoneinfo.ZoneInfo(settings.TIME_ZONE)),\n )\n instance.created = new_datetime\n instance.save()\n if \"created_date\" in validated_data:\n validated_data.pop(\"created_date\")\n super().update(instance, validated_data)\n return instance\n\n def __init__(self, *args, **kwargs):\n self.truncate_content = kwargs.pop(\"truncate_content\", False)\n\n super().__init__(*args, **kwargs)\n\n class Meta:\n model = Document\n depth = 1\n fields = (\n \"id\",\n \"correspondent\",\n \"document_type\",\n \"storage_path\",\n \"title\",\n \"content\",\n \"tags\",\n \"created\",\n \"created_date\",\n \"modified\",\n \"added\",\n \"archive_serial_number\",\n \"original_file_name\",\n \"archived_file_name\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n \"notes\",\n )\n\n\nclass SavedViewFilterRuleSerializer(serializers.ModelSerializer):\n class Meta:\n model = SavedViewFilterRule\n fields = [\"rule_type\", \"value\"]\n\n\nclass SavedViewSerializer(OwnedObjectSerializer):\n filter_rules = SavedViewFilterRuleSerializer(many=True)\n\n class Meta:\n model = SavedView\n fields = [\n \"id\",\n \"name\",\n \"show_on_dashboard\",\n \"show_in_sidebar\",\n \"sort_field\",\n \"sort_reverse\",\n \"filter_rules\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n ]\n\n def update(self, instance, validated_data):\n if \"filter_rules\" in validated_data:\n rules_data = validated_data.pop(\"filter_rules\")\n else:\n rules_data = None\n if \"user\" in validated_data:\n # backwards compatibility\n validated_data[\"owner\"] = validated_data.pop(\"user\")\n super().update(instance, validated_data)\n if rules_data is not None:\n SavedViewFilterRule.objects.filter(saved_view=instance).delete()\n for rule_data in rules_data:\n SavedViewFilterRule.objects.create(saved_view=instance, **rule_data)\n return instance\n\n def create(self, validated_data):\n rules_data = validated_data.pop(\"filter_rules\")\n if \"user\" in validated_data:\n # backwards compatibility\n validated_data[\"owner\"] = validated_data.pop(\"user\")\n saved_view = SavedView.objects.create(**validated_data)\n for rule_data in rules_data:\n SavedViewFilterRule.objects.create(saved_view=saved_view, **rule_data)\n return saved_view\n\n\nclass DocumentListSerializer(serializers.Serializer):\n documents = serializers.ListField(\n required=True,\n label=\"Documents\",\n write_only=True,\n child=serializers.IntegerField(),\n )\n\n def _validate_document_id_list(self, documents, name=\"documents\"):\n if not isinstance(documents, list):\n raise serializers.ValidationError(f\"{name} must be a list\")\n if not all(isinstance(i, int) for i in documents):\n raise serializers.ValidationError(f\"{name} must be a list of integers\")\n count = Document.objects.filter(id__in=documents).count()\n if not count == len(documents):\n raise serializers.ValidationError(\n f\"Some documents in {name} don't exist or were specified twice.\",\n )\n\n def validate_documents(self, documents):\n self._validate_document_id_list(documents)\n return documents\n\n\nclass BulkEditSerializer(DocumentListSerializer, SetPermissionsMixin):\n method = serializers.ChoiceField(\n choices=[\n \"set_correspondent\",\n \"set_document_type\",\n \"set_storage_path\",\n \"add_tag\",\n \"remove_tag\",\n \"modify_tags\",\n \"delete\",\n \"redo_ocr\",\n 
\"set_permissions\",\n ],\n label=\"Method\",\n write_only=True,\n )\n\n parameters = serializers.DictField(allow_empty=True)\n\n def _validate_tag_id_list(self, tags, name=\"tags\"):\n if not isinstance(tags, list):\n raise serializers.ValidationError(f\"{name} must be a list\")\n if not all(isinstance(i, int) for i in tags):\n raise serializers.ValidationError(f\"{name} must be a list of integers\")\n count = Tag.objects.filter(id__in=tags).count()\n if not count == len(tags):\n raise serializers.ValidationError(\n f\"Some tags in {name} don't exist or were specified twice.\",\n )\n\n def validate_method(self, method):\n if method == \"set_correspondent\":\n return bulk_edit.set_correspondent\n elif method == \"set_document_type\":\n return bulk_edit.set_document_type\n elif method == \"set_storage_path\":\n return bulk_edit.set_storage_path\n elif method == \"add_tag\":\n return bulk_edit.add_tag\n elif method == \"remove_tag\":\n return bulk_edit.remove_tag\n elif method == \"modify_tags\":\n return bulk_edit.modify_tags\n elif method == \"delete\":\n return bulk_edit.delete\n elif method == \"redo_ocr\":\n return bulk_edit.redo_ocr\n elif method == \"set_permissions\":\n return bulk_edit.set_permissions\n else:\n raise serializers.ValidationError(\"Unsupported method.\")\n\n def _validate_parameters_tags(self, parameters):\n if \"tag\" in parameters:\n tag_id = parameters[\"tag\"]\n try:\n Tag.objects.get(id=tag_id)\n except Tag.DoesNotExist:\n raise serializers.ValidationError(\"Tag does not exist\")\n else:\n raise serializers.ValidationError(\"tag not specified\")\n\n def _validate_parameters_document_type(self, parameters):\n if \"document_type\" in parameters:\n document_type_id = parameters[\"document_type\"]\n if document_type_id is None:\n # None is ok\n return\n try:\n DocumentType.objects.get(id=document_type_id)\n except DocumentType.DoesNotExist:\n raise serializers.ValidationError(\"Document type does not exist\")\n else:\n raise serializers.ValidationError(\"document_type not specified\")\n\n def _validate_parameters_correspondent(self, parameters):\n if \"correspondent\" in parameters:\n correspondent_id = parameters[\"correspondent\"]\n if correspondent_id is None:\n return\n try:\n Correspondent.objects.get(id=correspondent_id)\n except Correspondent.DoesNotExist:\n raise serializers.ValidationError(\"Correspondent does not exist\")\n else:\n raise serializers.ValidationError(\"correspondent not specified\")\n\n def _validate_storage_path(self, parameters):\n if \"storage_path\" in parameters:\n storage_path_id = parameters[\"storage_path\"]\n if storage_path_id is None:\n return\n try:\n StoragePath.objects.get(id=storage_path_id)\n except StoragePath.DoesNotExist:\n raise serializers.ValidationError(\n \"Storage path does not exist\",\n )\n else:\n raise serializers.ValidationError(\"storage path not specified\")\n\n def _validate_parameters_modify_tags(self, parameters):\n if \"add_tags\" in parameters:\n self._validate_tag_id_list(parameters[\"add_tags\"], \"add_tags\")\n else:\n raise serializers.ValidationError(\"add_tags not specified\")\n\n if \"remove_tags\" in parameters:\n self._validate_tag_id_list(parameters[\"remove_tags\"], \"remove_tags\")\n else:\n raise serializers.ValidationError(\"remove_tags not specified\")\n\n def _validate_owner(self, owner):\n ownerUser = User.objects.get(pk=owner)\n if ownerUser is None:\n raise serializers.ValidationError(\"Specified owner cannot be found\")\n return ownerUser\n\n def 
_validate_parameters_set_permissions(self, parameters):\n parameters[\"set_permissions\"] = self.validate_set_permissions(\n parameters[\"set_permissions\"],\n )\n if \"owner\" in parameters and parameters[\"owner\"] is not None:\n self._validate_owner(parameters[\"owner\"])\n\n def validate(self, attrs):\n method = attrs[\"method\"]\n parameters = attrs[\"parameters\"]\n\n if method == bulk_edit.set_correspondent:\n self._validate_parameters_correspondent(parameters)\n elif method == bulk_edit.set_document_type:\n self._validate_parameters_document_type(parameters)\n elif method == bulk_edit.add_tag or method == bulk_edit.remove_tag:\n self._validate_parameters_tags(parameters)\n elif method == bulk_edit.modify_tags:\n self._validate_parameters_modify_tags(parameters)\n elif method == bulk_edit.set_storage_path:\n self._validate_storage_path(parameters)\n elif method == bulk_edit.set_permissions:\n self._validate_parameters_set_permissions(parameters)\n\n return attrs\n\n\nclass PostDocumentSerializer(serializers.Serializer):\n created = serializers.DateTimeField(\n label=\"Created\",\n allow_null=True,\n write_only=True,\n required=False,\n )\n\n document = serializers.FileField(\n label=\"Document\",\n write_only=True,\n )\n\n title = serializers.CharField(\n label=\"Title\",\n write_only=True,\n required=False,\n )\n\n correspondent = serializers.PrimaryKeyRelatedField(\n queryset=Correspondent.objects.all(),\n label=\"Correspondent\",\n allow_null=True,\n write_only=True,\n required=False,\n )\n\n document_type = serializers.PrimaryKeyRelatedField(\n queryset=DocumentType.objects.all(),\n label=\"Document type\",\n allow_null=True,\n write_only=True,\n required=False,\n )\n\n tags = serializers.PrimaryKeyRelatedField(\n many=True,\n queryset=Tag.objects.all(),\n label=\"Tags\",\n write_only=True,\n required=False,\n )\n\n archive_serial_number = serializers.IntegerField(\n label=\"ASN\",\n write_only=True,\n required=False,\n min_value=Document.ARCHIVE_SERIAL_NUMBER_MIN,\n max_value=Document.ARCHIVE_SERIAL_NUMBER_MAX,\n )\n\n def validate_document(self, document):\n document_data = document.file.read()\n mime_type = magic.from_buffer(document_data, mime=True)\n\n if not is_mime_type_supported(mime_type):\n raise serializers.ValidationError(\n _(\"File type %(type)s not supported\") % {\"type\": mime_type},\n )\n\n return document.name, document_data\n\n def validate_correspondent(self, correspondent):\n if correspondent:\n return correspondent.id\n else:\n return None\n\n def validate_document_type(self, document_type):\n if document_type:\n return document_type.id\n else:\n return None\n\n def validate_tags(self, tags):\n if tags:\n return [tag.id for tag in tags]\n else:\n return None\n\n\nclass BulkDownloadSerializer(DocumentListSerializer):\n content = serializers.ChoiceField(\n choices=[\"archive\", \"originals\", \"both\"],\n default=\"archive\",\n )\n\n compression = serializers.ChoiceField(\n choices=[\"none\", \"deflated\", \"bzip2\", \"lzma\"],\n default=\"none\",\n )\n\n follow_formatting = serializers.BooleanField(\n default=False,\n )\n\n def validate_compression(self, compression):\n import zipfile\n\n return {\n \"none\": zipfile.ZIP_STORED,\n \"deflated\": zipfile.ZIP_DEFLATED,\n \"bzip2\": zipfile.ZIP_BZIP2,\n \"lzma\": zipfile.ZIP_LZMA,\n }[compression]\n\n\nclass StoragePathSerializer(MatchingModelSerializer, OwnedObjectSerializer):\n class Meta:\n model = StoragePath\n fields = (\n \"id\",\n \"slug\",\n \"name\",\n \"path\",\n \"match\",\n \"matching_algorithm\",\n 
\"is_insensitive\",\n \"document_count\",\n \"owner\",\n \"permissions\",\n \"user_can_change\",\n \"set_permissions\",\n )\n\n def validate_path(self, path):\n try:\n path.format(\n title=\"title\",\n correspondent=\"correspondent\",\n document_type=\"document_type\",\n created=\"created\",\n created_year=\"created_year\",\n created_year_short=\"created_year_short\",\n created_month=\"created_month\",\n created_month_name=\"created_month_name\",\n created_month_name_short=\"created_month_name_short\",\n created_day=\"created_day\",\n added=\"added\",\n added_year=\"added_year\",\n added_year_short=\"added_year_short\",\n added_month=\"added_month\",\n added_month_name=\"added_month_name\",\n added_month_name_short=\"added_month_name_short\",\n added_day=\"added_day\",\n asn=\"asn\",\n tags=\"tags\",\n tag_list=\"tag_list\",\n owner_username=\"someone\",\n original_name=\"testfile\",\n )\n\n except KeyError as err:\n raise serializers.ValidationError(_(\"Invalid variable detected.\")) from err\n\n return path\n\n def update(self, instance, validated_data):\n \"\"\"\n When a storage path is updated, see if documents\n using it require a rename/move\n \"\"\"\n doc_ids = [doc.id for doc in instance.documents.all()]\n if len(doc_ids):\n bulk_edit.bulk_update_documents.delay(doc_ids)\n\n return super().update(instance, validated_data)\n\n\nclass UiSettingsViewSerializer(serializers.ModelSerializer):\n class Meta:\n model = UiSettings\n depth = 1\n fields = [\n \"id\",\n \"settings\",\n ]\n\n def validate_settings(self, settings):\n # we never save update checking backend setting\n if \"update_checking\" in settings:\n try:\n settings[\"update_checking\"].pop(\"backend_setting\")\n except KeyError:\n pass\n return settings\n\n def create(self, validated_data):\n ui_settings = UiSettings.objects.update_or_create(\n user=validated_data.get(\"user\"),\n defaults={\"settings\": validated_data.get(\"settings\", None)},\n )\n return ui_settings\n\n\nclass TasksViewSerializer(serializers.ModelSerializer):\n class Meta:\n model = PaperlessTask\n depth = 1\n fields = (\n \"id\",\n \"task_id\",\n \"task_file_name\",\n \"date_created\",\n \"date_done\",\n \"type\",\n \"status\",\n \"result\",\n \"acknowledged\",\n \"related_document\",\n )\n\n type = serializers.SerializerMethodField()\n\n def get_type(self, obj):\n # just file tasks, for now\n return \"file\"\n\n related_document = serializers.SerializerMethodField()\n related_doc_re = re.compile(r\"New document id (\\d+) created\")\n\n def get_related_document(self, obj):\n result = None\n if obj.status is not None and obj.status == states.SUCCESS:\n try:\n result = self.related_doc_re.search(obj.result).group(1)\n except Exception:\n pass\n\n return result\n\n\nclass AcknowledgeTasksViewSerializer(serializers.Serializer):\n tasks = serializers.ListField(\n required=True,\n label=\"Tasks\",\n write_only=True,\n child=serializers.IntegerField(),\n )\n\n def _validate_task_id_list(self, tasks, name=\"tasks\"):\n pass\n if not isinstance(tasks, list):\n raise serializers.ValidationError(f\"{name} must be a list\")\n if not all(isinstance(i, int) for i in tasks):\n raise serializers.ValidationError(f\"{name} must be a list of integers\")\n count = PaperlessTask.objects.filter(id__in=tasks).count()\n if not count == len(tasks):\n raise serializers.ValidationError(\n f\"Some tasks in {name} don't exist or were specified twice.\",\n )\n\n def validate_tasks(self, tasks):\n self._validate_task_id_list(tasks)\n return tasks\n", "path": 
"src/documents/serialisers.py"}]} |
gh_patches_debug_74 | rasdani/github-patches | git_diff | airctic__icevision-441 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add icedata to icevision.all
## 🚀 Feature
Currently to train a dataset available with icedata the following two lines are necessary:
```python
import icedata
from icevision.all import *
```
Because icedata already depends on icevision, icevision cannot depend on icedata. **But** I guess we can add icedata as a soft dependency to `.all`, we just have to be sure not to use `icedata` internally in icevision.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `icevision/all.py`
Content:
```
1 from icevision.imports import *
2 from icevision import *
3
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/icevision/all.py b/icevision/all.py
--- a/icevision/all.py
+++ b/icevision/all.py
@@ -1,2 +1,9 @@
from icevision.imports import *
from icevision import *
+
+# soft import icedata
+try:
+ import icedata
+except ModuleNotFoundError as e:
+ if str(e) != f"No module named 'icedata'":
+ raise e
| {"golden_diff": "diff --git a/icevision/all.py b/icevision/all.py\n--- a/icevision/all.py\n+++ b/icevision/all.py\n@@ -1,2 +1,9 @@\n from icevision.imports import *\n from icevision import *\n+\n+# soft import icedata\n+try:\n+ import icedata\n+except ModuleNotFoundError as e:\n+ if str(e) != f\"No module named 'icedata'\":\n+ raise e\n", "issue": "Add icedata to icevision.all\n## \ud83d\ude80 Feature\r\nCurrently to train a dataset available with icedata the following two lines are necessary:\r\n```python\r\nimport icedata\r\nfrom icevision.all import *\r\n```\r\n\r\nBecause icedata already depends on icevision, icevision cannot depend on icedata. **But** I guess we can add icedata as a soft dependency to `.all`, we just have to be sure not to use `icedata` internally in icevision.\n", "before_files": [{"content": "from icevision.imports import *\nfrom icevision import *\n", "path": "icevision/all.py"}], "after_files": [{"content": "from icevision.imports import *\nfrom icevision import *\n\n# soft import icedata\ntry:\n import icedata\nexcept ModuleNotFoundError as e:\n if str(e) != f\"No module named 'icedata'\":\n raise e\n", "path": "icevision/all.py"}]} |
gh_patches_debug_75 | rasdani/github-patches | git_diff | pypi__warehouse-2399 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pagination for releases on Project Admin doesn't work
The pagination on releases does not appear to be functional in the project admin. It shows the first N releases, but when you click to see all, there is no pagination links and there is no text to indicate what page you're on or how many results there are.
Manually adding a ``?page=2`` *does* work, so this is likely just something wrong in the template.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/admin/views/projects.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import shlex
14
15 from paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage
16 from pyramid.httpexceptions import (
17 HTTPBadRequest,
18 HTTPMovedPermanently,
19 )
20 from pyramid.view import view_config
21 from sqlalchemy import or_
22
23 from warehouse.accounts.models import User
24 from warehouse.packaging.models import Project, Release, Role, JournalEntry
25 from warehouse.utils.paginate import paginate_url_factory
26
27
28 @view_config(
29 route_name="admin.project.list",
30 renderer="admin/projects/list.html",
31 permission="admin",
32 uses_session=True,
33 )
34 def project_list(request):
35 q = request.params.get("q")
36
37 try:
38 page_num = int(request.params.get("page", 1))
39 except ValueError:
40 raise HTTPBadRequest("'page' must be an integer.") from None
41
42 projects_query = request.db.query(Project).order_by(Project.name)
43
44 if q:
45 terms = shlex.split(q)
46
47 filters = []
48 for term in terms:
49 filters.append(Project.name.ilike(term))
50
51 projects_query = projects_query.filter(or_(*filters))
52
53 projects = SQLAlchemyORMPage(
54 projects_query,
55 page=page_num,
56 items_per_page=25,
57 url_maker=paginate_url_factory(request),
58 )
59
60 return {"projects": projects, "query": q}
61
62
63 @view_config(route_name="admin.project.detail",
64 renderer="admin/projects/detail.html",
65 permission="admin",
66 uses_session=True,
67 require_csrf=True,
68 require_methods=False)
69 def project_detail(project, request):
70 project_name = request.matchdict["project_name"]
71
72 if project_name != project.normalized_name:
73 raise HTTPMovedPermanently(
74 request.current_route_path(
75 project_name=project.normalized_name,
76 ),
77 )
78
79 maintainers = [
80 role
81 for role in (
82 request.db.query(Role)
83 .join(User)
84 .filter(Role.project == project)
85 .distinct(User.username)
86 .all()
87 )
88 ]
89 maintainers = sorted(
90 maintainers,
91 key=lambda x: (x.role_name, x.user.username),
92 )
93 journal = [
94 entry
95 for entry in (
96 request.db.query(JournalEntry)
97 .filter(JournalEntry.name == project.name)
98 .order_by(JournalEntry.submitted_date.desc())
99 .limit(50)
100 )
101 ]
102
103 return {"project": project, "maintainers": maintainers, "journal": journal}
104
105
106 @view_config(
107 route_name="admin.project.releases",
108 renderer="admin/projects/releases_list.html",
109 permission="admin",
110 uses_session=True,
111 )
112 def releases_list(project, request):
113 q = request.params.get("q")
114 project_name = request.matchdict["project_name"]
115
116 if project_name != project.normalized_name:
117 raise HTTPMovedPermanently(
118 request.current_route_path(
119 project_name=project.normalized_name,
120 ),
121 )
122
123 try:
124 page_num = int(request.params.get("page", 1))
125 except ValueError:
126 raise HTTPBadRequest("'page' must be an integer.") from None
127
128 releases_query = (request.db.query(Release)
129 .filter(Release.project == project)
130 .order_by(Release._pypi_ordering.desc()))
131
132 if q:
133 terms = shlex.split(q)
134
135 filters = []
136 for term in terms:
137 if ":" in term:
138 field, value = term.split(":", 1)
139 if field.lower() == "version":
140 filters.append(Release.version.ilike(value))
141
142 releases_query = releases_query.filter(or_(*filters))
143
144 releases = SQLAlchemyORMPage(
145 releases_query,
146 page=page_num,
147 items_per_page=25,
148 url_maker=paginate_url_factory(request),
149 )
150
151 return {
152 "releases": list(releases),
153 "project": project,
154 "query": q,
155 }
156
157
158 @view_config(
159 route_name="admin.project.journals",
160 renderer="admin/projects/journals_list.html",
161 permission="admin",
162 uses_session=True,
163 )
164 def journals_list(project, request):
165 q = request.params.get("q")
166 project_name = request.matchdict["project_name"]
167
168 if project_name != project.normalized_name:
169 raise HTTPMovedPermanently(
170 request.current_route_path(
171 project_name=project.normalized_name,
172 ),
173 )
174
175 try:
176 page_num = int(request.params.get("page", 1))
177 except ValueError:
178 raise HTTPBadRequest("'page' must be an integer.") from None
179
180 journals_query = (request.db.query(JournalEntry)
181 .filter(JournalEntry.name == project.name)
182 .order_by(JournalEntry.submitted_date.desc()))
183
184 if q:
185 terms = shlex.split(q)
186
187 filters = []
188 for term in terms:
189 if ":" in term:
190 field, value = term.split(":", 1)
191 if field.lower() == "version":
192 filters.append(JournalEntry.version.ilike(value))
193
194 journals_query = journals_query.filter(or_(*filters))
195
196 journals = SQLAlchemyORMPage(
197 journals_query,
198 page=page_num,
199 items_per_page=25,
200 url_maker=paginate_url_factory(request),
201 )
202
203 return {"journals": journals, "project": project, "query": q}
204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/admin/views/projects.py b/warehouse/admin/views/projects.py
--- a/warehouse/admin/views/projects.py
+++ b/warehouse/admin/views/projects.py
@@ -149,7 +149,7 @@
)
return {
- "releases": list(releases),
+ "releases": releases,
"project": project,
"query": q,
}
| {"golden_diff": "diff --git a/warehouse/admin/views/projects.py b/warehouse/admin/views/projects.py\n--- a/warehouse/admin/views/projects.py\n+++ b/warehouse/admin/views/projects.py\n@@ -149,7 +149,7 @@\n )\n \n return {\n- \"releases\": list(releases),\n+ \"releases\": releases,\n \"project\": project,\n \"query\": q,\n }\n", "issue": "Pagination for releases on Project Admin doesn't work\nThe pagination on releases does not appear to be functional in the project admin. It shows the first N releases, but when you click to see all, there is no pagination links and there is no text to indicate what page you're on or how many results there are.\r\n\r\nManually adding a ``?page=2`` *does* work, so this is likely just something wrong in the template.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport shlex\n\nfrom paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage\nfrom pyramid.httpexceptions import (\n HTTPBadRequest,\n HTTPMovedPermanently,\n)\nfrom pyramid.view import view_config\nfrom sqlalchemy import or_\n\nfrom warehouse.accounts.models import User\nfrom warehouse.packaging.models import Project, Release, Role, JournalEntry\nfrom warehouse.utils.paginate import paginate_url_factory\n\n\n@view_config(\n route_name=\"admin.project.list\",\n renderer=\"admin/projects/list.html\",\n permission=\"admin\",\n uses_session=True,\n)\ndef project_list(request):\n q = request.params.get(\"q\")\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n projects_query = request.db.query(Project).order_by(Project.name)\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n filters.append(Project.name.ilike(term))\n\n projects_query = projects_query.filter(or_(*filters))\n\n projects = SQLAlchemyORMPage(\n projects_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\"projects\": projects, \"query\": q}\n\n\n@view_config(route_name=\"admin.project.detail\",\n renderer=\"admin/projects/detail.html\",\n permission=\"admin\",\n uses_session=True,\n require_csrf=True,\n require_methods=False)\ndef project_detail(project, request):\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(\n project_name=project.normalized_name,\n ),\n )\n\n maintainers = [\n role\n for role in (\n request.db.query(Role)\n .join(User)\n .filter(Role.project == project)\n .distinct(User.username)\n .all()\n )\n ]\n maintainers = sorted(\n maintainers,\n key=lambda x: (x.role_name, x.user.username),\n )\n journal = [\n entry\n for entry in (\n request.db.query(JournalEntry)\n .filter(JournalEntry.name == project.name)\n .order_by(JournalEntry.submitted_date.desc())\n .limit(50)\n )\n ]\n\n return {\"project\": project, \"maintainers\": maintainers, \"journal\": journal}\n\n\n@view_config(\n 
route_name=\"admin.project.releases\",\n renderer=\"admin/projects/releases_list.html\",\n permission=\"admin\",\n uses_session=True,\n)\ndef releases_list(project, request):\n q = request.params.get(\"q\")\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(\n project_name=project.normalized_name,\n ),\n )\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n releases_query = (request.db.query(Release)\n .filter(Release.project == project)\n .order_by(Release._pypi_ordering.desc()))\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n if \":\" in term:\n field, value = term.split(\":\", 1)\n if field.lower() == \"version\":\n filters.append(Release.version.ilike(value))\n\n releases_query = releases_query.filter(or_(*filters))\n\n releases = SQLAlchemyORMPage(\n releases_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\n \"releases\": list(releases),\n \"project\": project,\n \"query\": q,\n }\n\n\n@view_config(\n route_name=\"admin.project.journals\",\n renderer=\"admin/projects/journals_list.html\",\n permission=\"admin\",\n uses_session=True,\n)\ndef journals_list(project, request):\n q = request.params.get(\"q\")\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(\n project_name=project.normalized_name,\n ),\n )\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n journals_query = (request.db.query(JournalEntry)\n .filter(JournalEntry.name == project.name)\n .order_by(JournalEntry.submitted_date.desc()))\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n if \":\" in term:\n field, value = term.split(\":\", 1)\n if field.lower() == \"version\":\n filters.append(JournalEntry.version.ilike(value))\n\n journals_query = journals_query.filter(or_(*filters))\n\n journals = SQLAlchemyORMPage(\n journals_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\"journals\": journals, \"project\": project, \"query\": q}\n", "path": "warehouse/admin/views/projects.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport shlex\n\nfrom paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage\nfrom pyramid.httpexceptions import (\n HTTPBadRequest,\n HTTPMovedPermanently,\n)\nfrom pyramid.view import view_config\nfrom sqlalchemy import or_\n\nfrom warehouse.accounts.models import User\nfrom warehouse.packaging.models import Project, Release, Role, JournalEntry\nfrom warehouse.utils.paginate import paginate_url_factory\n\n\n@view_config(\n route_name=\"admin.project.list\",\n 
renderer=\"admin/projects/list.html\",\n permission=\"admin\",\n uses_session=True,\n)\ndef project_list(request):\n q = request.params.get(\"q\")\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n projects_query = request.db.query(Project).order_by(Project.name)\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n filters.append(Project.name.ilike(term))\n\n projects_query = projects_query.filter(or_(*filters))\n\n projects = SQLAlchemyORMPage(\n projects_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\"projects\": projects, \"query\": q}\n\n\n@view_config(route_name=\"admin.project.detail\",\n renderer=\"admin/projects/detail.html\",\n permission=\"admin\",\n uses_session=True,\n require_csrf=True,\n require_methods=False)\ndef project_detail(project, request):\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(\n project_name=project.normalized_name,\n ),\n )\n\n maintainers = [\n role\n for role in (\n request.db.query(Role)\n .join(User)\n .filter(Role.project == project)\n .distinct(User.username)\n .all()\n )\n ]\n maintainers = sorted(\n maintainers,\n key=lambda x: (x.role_name, x.user.username),\n )\n journal = [\n entry\n for entry in (\n request.db.query(JournalEntry)\n .filter(JournalEntry.name == project.name)\n .order_by(JournalEntry.submitted_date.desc())\n .limit(50)\n )\n ]\n\n return {\"project\": project, \"maintainers\": maintainers, \"journal\": journal}\n\n\n@view_config(\n route_name=\"admin.project.releases\",\n renderer=\"admin/projects/releases_list.html\",\n permission=\"admin\",\n uses_session=True,\n)\ndef releases_list(project, request):\n q = request.params.get(\"q\")\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(\n project_name=project.normalized_name,\n ),\n )\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n releases_query = (request.db.query(Release)\n .filter(Release.project == project)\n .order_by(Release._pypi_ordering.desc()))\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n if \":\" in term:\n field, value = term.split(\":\", 1)\n if field.lower() == \"version\":\n filters.append(Release.version.ilike(value))\n\n releases_query = releases_query.filter(or_(*filters))\n\n releases = SQLAlchemyORMPage(\n releases_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\n \"releases\": releases,\n \"project\": project,\n \"query\": q,\n }\n\n\n@view_config(\n route_name=\"admin.project.journals\",\n renderer=\"admin/projects/journals_list.html\",\n permission=\"admin\",\n uses_session=True,\n)\ndef journals_list(project, request):\n q = request.params.get(\"q\")\n project_name = request.matchdict[\"project_name\"]\n\n if project_name != project.normalized_name:\n raise HTTPMovedPermanently(\n request.current_route_path(\n project_name=project.normalized_name,\n ),\n )\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\") from None\n\n journals_query = (request.db.query(JournalEntry)\n .filter(JournalEntry.name 
== project.name)\n .order_by(JournalEntry.submitted_date.desc()))\n\n if q:\n terms = shlex.split(q)\n\n filters = []\n for term in terms:\n if \":\" in term:\n field, value = term.split(\":\", 1)\n if field.lower() == \"version\":\n filters.append(JournalEntry.version.ilike(value))\n\n journals_query = journals_query.filter(or_(*filters))\n\n journals = SQLAlchemyORMPage(\n journals_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n return {\"journals\": journals, \"project\": project, \"query\": q}\n", "path": "warehouse/admin/views/projects.py"}]} |
gh_patches_debug_76 | rasdani/github-patches | git_diff | pytorch__TensorRT-196 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
🐛 [Bug] UnicodeDecodeError running setup.py
## Bug Description
Trying to run "python setup.py install" fails with a unicode error when reading README.md.
## To Reproduce
Steps to reproduce the behavior:
1. docker run --gpus=all -it nvcr.io/nvidia/tensorrt:20.03-py3 /bin/bash
2. (cd /usr/bin && wget -O bazel https://github.com/bazelbuild/bazelisk/releases/download/v1.7.3/bazelisk-linux-amd64 && chmod +x bazel)
3. git clone https://github.com/NVIDIA/TRTorch.git
4. cd TRTorch/py
5. pip install -r requirements.txt
6. python setup.py install
The error follows:
> root@320583666d0c:/workspace/TRTorch/py# python setup.py install
> Traceback (most recent call last):
> File "setup.py", line 194, in <module>
> long_description = fh.read()
> File "/usr/lib/python3.6/encodings/ascii.py", line 26, in decode
> return codecs.ascii_decode(input, self.errors)[0]
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 7349: ordinal not in range(128)
## Expected behavior
No unicode error
## Environment
- PyTorch Version (e.g., 1.0): 1.6.0
- CPU Architecture: x86
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): python setup.py install
- Are you using local sources or building from archives: local sources (git clone)
- Python version: 3.6.9
- CUDA version: 10.2.89
- GPU models and configuration: gtx 970
## Additional context
The following appears to resolve the issue:
```
diff --git a/py/setup.py b/py/setup.py
index 53f85da..8344c0a 100644
--- a/py/setup.py
+++ b/py/setup.py
@@ -190,7 +190,7 @@ ext_modules = [
)
]
-with open("README.md", "r") as fh:
+with open("README.md", "r", encoding="utf-8") as fh:
long_description = fh.read()
setup(
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `py/setup.py`
Content:
```
1 import os
2 import sys
3 import glob
4 import setuptools
5 from setuptools import setup, Extension, find_packages
6 from setuptools.command.build_ext import build_ext
7 from setuptools.command.develop import develop
8 from setuptools.command.install import install
9 from distutils.cmd import Command
10 from wheel.bdist_wheel import bdist_wheel
11
12 from torch.utils import cpp_extension
13 from shutil import copyfile, rmtree
14
15 import subprocess
16
17 dir_path = os.path.dirname(os.path.realpath(__file__))
18
19 __version__ = '0.1.0a0'
20
21 CXX11_ABI = False
22
23 if "--use-cxx11-abi" in sys.argv:
24 sys.argv.remove("--use-cxx11-abi")
25 CXX11_ABI = True
26
27 def which(program):
28 import os
29 def is_exe(fpath):
30 return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
31
32 fpath, fname = os.path.split(program)
33 if fpath:
34 if is_exe(program):
35 return program
36 else:
37 for path in os.environ["PATH"].split(os.pathsep):
38 exe_file = os.path.join(path, program)
39 if is_exe(exe_file):
40 return exe_file
41
42 return None
43
44 BAZEL_EXE = which("bazel")
45
46 def build_libtrtorch_pre_cxx11_abi(develop=True, use_dist_dir=True, cxx11_abi=False):
47 cmd = [BAZEL_EXE, "build"]
48 cmd.append("//cpp/api/lib:libtrtorch.so")
49 if develop:
50 cmd.append("--compilation_mode=dbg")
51 else:
52 cmd.append("--compilation_mode=opt")
53 if use_dist_dir:
54 cmd.append("--distdir=third_party/dist_dir/x86_64-linux-gnu")
55 if not cxx11_abi:
56 cmd.append("--config=python")
57 else:
58 print("using CXX11 ABI build")
59
60 print("building libtrtorch")
61 status_code = subprocess.run(cmd).returncode
62
63 if status_code != 0:
64 sys.exit(status_code)
65
66
67 def gen_version_file():
68 if not os.path.exists(dir_path + '/trtorch/_version.py'):
69 os.mknod(dir_path + '/trtorch/_version.py')
70
71 with open(dir_path + '/trtorch/_version.py', 'w') as f:
72 print("creating version file")
73 f.write("__version__ = \"" + __version__ + '\"')
74
75 def copy_libtrtorch(multilinux=False):
76 if not os.path.exists(dir_path + '/trtorch/lib'):
77 os.makedirs(dir_path + '/trtorch/lib')
78
79 print("copying library into module")
80 if multilinux:
81 copyfile(dir_path + "/build/libtrtorch_build/libtrtorch.so", dir_path + '/trtorch/lib/libtrtorch.so')
82 else:
83 copyfile(dir_path + "/../bazel-bin/cpp/api/lib/libtrtorch.so", dir_path + '/trtorch/lib/libtrtorch.so')
84
85 class DevelopCommand(develop):
86 description = "Builds the package and symlinks it into the PYTHONPATH"
87
88 def initialize_options(self):
89 develop.initialize_options(self)
90
91 def finalize_options(self):
92 develop.finalize_options(self)
93
94 def run(self):
95 global CXX11_ABI
96 build_libtrtorch_pre_cxx11_abi(develop=True, cxx11_abi=CXX11_ABI)
97 gen_version_file()
98 copy_libtrtorch()
99 develop.run(self)
100
101
102 class InstallCommand(install):
103 description = "Builds the package"
104
105 def initialize_options(self):
106 install.initialize_options(self)
107
108 def finalize_options(self):
109 install.finalize_options(self)
110
111 def run(self):
112 global CXX11_ABI
113 build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)
114 gen_version_file()
115 copy_libtrtorch()
116 install.run(self)
117
118 class BdistCommand(bdist_wheel):
119 description = "Builds the package"
120
121 def initialize_options(self):
122 bdist_wheel.initialize_options(self)
123
124 def finalize_options(self):
125 bdist_wheel.finalize_options(self)
126
127 def run(self):
128 global CXX11_ABI
129 build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)
130 gen_version_file()
131 copy_libtrtorch()
132 bdist_wheel.run(self)
133
134 class CleanCommand(Command):
135 """Custom clean command to tidy up the project root."""
136 PY_CLEAN_FILES = ['./build', './dist', './trtorch/__pycache__', './trtorch/lib', './*.pyc', './*.tgz', './*.egg-info']
137 description = "Command to tidy up the project root"
138 user_options = []
139
140 def initialize_options(self):
141 pass
142
143 def finalize_options(self):
144 pass
145
146 def run(self):
147 for path_spec in self.PY_CLEAN_FILES:
148 # Make paths absolute and relative to this path
149 abs_paths = glob.glob(os.path.normpath(os.path.join(dir_path, path_spec)))
150 for path in [str(p) for p in abs_paths]:
151 if not path.startswith(dir_path):
152 # Die if path in CLEAN_FILES is absolute + outside this directory
153 raise ValueError("%s is not a path inside %s" % (path, dir_path))
154 print('Removing %s' % os.path.relpath(path))
155 rmtree(path)
156
157 ext_modules = [
158 cpp_extension.CUDAExtension('trtorch._C',
159 [
160 'trtorch/csrc/trtorch_py.cpp',
161 'trtorch/csrc/tensorrt_backend.cpp',
162 'trtorch/csrc/tensorrt_classes.cpp',
163 'trtorch/csrc/register_tensorrt_classes.cpp',
164 ],
165 library_dirs=[
166 (dir_path + '/trtorch/lib/'),
167 "/opt/conda/lib/python3.6/config-3.6m-x86_64-linux-gnu"
168 ],
169 libraries=[
170 "trtorch"
171 ],
172 include_dirs=[
173 dir_path + "trtorch/csrc",
174 dir_path + "/../",
175 dir_path + "/../bazel-TRTorch/external/tensorrt/include",
176 ],
177 extra_compile_args=[
178 "-Wno-deprecated",
179 "-Wno-deprecated-declarations",
180 ] + (["-D_GLIBCXX_USE_CXX11_ABI=1"] if CXX11_ABI else ["-D_GLIBCXX_USE_CXX11_ABI=0"]),
181 extra_link_args=[
182 "-Wno-deprecated",
183 "-Wno-deprecated-declarations",
184 "-Wl,--no-as-needed",
185 "-ltrtorch",
186 "-Wl,-rpath,$ORIGIN/lib",
187 "-lpthread",
188 "-ldl",
189 "-lutil",
190 "-lrt",
191 "-lm",
192 "-Xlinker",
193 "-export-dynamic"
194 ] + (["-D_GLIBCXX_USE_CXX11_ABI=1"] if CXX11_ABI else ["-D_GLIBCXX_USE_CXX11_ABI=0"]),
195 undef_macros=[ "NDEBUG" ]
196 )
197 ]
198
199 with open("README.md", "r") as fh:
200 long_description = fh.read()
201
202 setup(
203 name='trtorch',
204 version=__version__,
205 author='NVIDIA',
206 author_email='[email protected]',
207 url='https://nvidia.github.io/TRTorch',
208 description='A compiler backend for PyTorch JIT targeting NVIDIA GPUs',
209 long_description_content_type='text/markdown',
210 long_description=long_description,
211 ext_modules=ext_modules,
212 install_requires=[
213 'torch==1.6.0',
214 ],
215 setup_requires=[],
216 cmdclass={
217 'install': InstallCommand,
218 'clean': CleanCommand,
219 'develop': DevelopCommand,
220 'build_ext': cpp_extension.BuildExtension,
221 'bdist_wheel': BdistCommand,
222 },
223 zip_safe=False,
224 license="BSD",
225 packages=find_packages(),
226 classifiers=[
227 "Development Status :: 4 - Beta",
228 "Environment :: GPU :: NVIDIA CUDA",
229 "License :: OSI Approved :: BSD License",
230 "Intended Audience :: Developers",
231 "Intended Audience :: Science/Research",
232 "Operating System :: POSIX :: Linux",
233 "Programming Language :: C++",
234 "Programming Language :: Python",
235 "Programming Language :: Python :: Implementation :: CPython",
236 "Topic :: Scientific/Engineering",
237 "Topic :: Scientific/Engineering :: Artificial Intelligence",
238 "Topic :: Software Development",
239 "Topic :: Software Development :: Libraries"
240 ],
241 python_requires='>=3.6',
242 include_package_data=True,
243 package_data={
244 'trtorch': ['lib/*.so'],
245 },
246 exclude_package_data={
247 '': ['*.cpp', '*.h'],
248 'trtorch': ['csrc/*.cpp'],
249 }
250 )
251
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/py/setup.py b/py/setup.py
--- a/py/setup.py
+++ b/py/setup.py
@@ -190,7 +190,7 @@
)
]
-with open("README.md", "r") as fh:
+with open("README.md", "r", encoding="utf-8") as fh:
long_description = fh.read()
setup(
| {"golden_diff": "diff --git a/py/setup.py b/py/setup.py\n--- a/py/setup.py\n+++ b/py/setup.py\n@@ -190,7 +190,7 @@\n )\n ]\n \n-with open(\"README.md\", \"r\") as fh:\n+with open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n \n setup(\n", "issue": "\ud83d\udc1b [Bug] UnicodeDecodeError running setup.py\n## Bug Description\r\n\r\nTrying to run \"python setup.py install\" fails with a unicode error when reading README.md.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. docker run --gpus=all -it nvcr.io/nvidia/tensorrt:20.03-py3 /bin/bash\r\n2. (cd /usr/bin && wget -O bazel https://github.com/bazelbuild/bazelisk/releases/download/v1.7.3/bazelisk-linux-amd64 && chmod +x bazel)\r\n3. git clone https://github.com/NVIDIA/TRTorch.git\r\n4. cd TRTorch/py\r\n5. pip install -r requirements.txt\r\n6. python setup.py install\r\n\r\nThe error follows:\r\n> root@320583666d0c:/workspace/TRTorch/py# python setup.py install \r\n> Traceback (most recent call last):\r\n> File \"setup.py\", line 194, in <module>\r\n> long_description = fh.read()\r\n> File \"/usr/lib/python3.6/encodings/ascii.py\", line 26, in decode\r\n> return codecs.ascii_decode(input, self.errors)[0]\r\n> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 7349: ordinal not in range(128)\r\n\r\n## Expected behavior\r\n\r\nNo unicode error\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): 1.6.0\r\n - CPU Architecture: x86\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): python setup.py install\r\n - Are you using local sources or building from archives: local sources (git clone)\r\n - Python version: 3.6.9\r\n - CUDA version: 10.2.89\r\n - GPU models and configuration: gtx 970\r\n\r\n## Additional context\r\n\r\nThe following appears to resolve the issue:\r\n\r\n```\r\ndiff --git a/py/setup.py b/py/setup.py\r\nindex 53f85da..8344c0a 100644\r\n--- a/py/setup.py\r\n+++ b/py/setup.py\r\n@@ -190,7 +190,7 @@ ext_modules = [\r\n )\r\n ]\r\n \r\n-with open(\"README.md\", \"r\") as fh:\r\n+with open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\r\n long_description = fh.read()\r\n \r\n setup(\r\n```\r\n\r\n\n", "before_files": [{"content": "import os\nimport sys\nimport glob\nimport setuptools\nfrom setuptools import setup, Extension, find_packages\nfrom setuptools.command.build_ext import build_ext\nfrom setuptools.command.develop import develop\nfrom setuptools.command.install import install\nfrom distutils.cmd import Command\nfrom wheel.bdist_wheel import bdist_wheel\n\nfrom torch.utils import cpp_extension\nfrom shutil import copyfile, rmtree\n\nimport subprocess\n\ndir_path = os.path.dirname(os.path.realpath(__file__))\n\n__version__ = '0.1.0a0'\n\nCXX11_ABI = False\n\nif \"--use-cxx11-abi\" in sys.argv:\n sys.argv.remove(\"--use-cxx11-abi\")\n CXX11_ABI = True\n\ndef which(program):\n import os\n def is_exe(fpath):\n return os.path.isfile(fpath) and os.access(fpath, os.X_OK)\n\n fpath, fname = os.path.split(program)\n if fpath:\n if is_exe(program):\n return program\n else:\n for path in os.environ[\"PATH\"].split(os.pathsep):\n exe_file = os.path.join(path, program)\n if is_exe(exe_file):\n return exe_file\n\n return None\n\nBAZEL_EXE = which(\"bazel\")\n\ndef build_libtrtorch_pre_cxx11_abi(develop=True, use_dist_dir=True, cxx11_abi=False):\n cmd = [BAZEL_EXE, \"build\"]\n cmd.append(\"//cpp/api/lib:libtrtorch.so\")\n if develop:\n 
cmd.append(\"--compilation_mode=dbg\")\n else:\n cmd.append(\"--compilation_mode=opt\")\n if use_dist_dir:\n cmd.append(\"--distdir=third_party/dist_dir/x86_64-linux-gnu\")\n if not cxx11_abi:\n cmd.append(\"--config=python\")\n else:\n print(\"using CXX11 ABI build\")\n\n print(\"building libtrtorch\")\n status_code = subprocess.run(cmd).returncode\n\n if status_code != 0:\n sys.exit(status_code)\n\n\ndef gen_version_file():\n if not os.path.exists(dir_path + '/trtorch/_version.py'):\n os.mknod(dir_path + '/trtorch/_version.py')\n\n with open(dir_path + '/trtorch/_version.py', 'w') as f:\n print(\"creating version file\")\n f.write(\"__version__ = \\\"\" + __version__ + '\\\"')\n\ndef copy_libtrtorch(multilinux=False):\n if not os.path.exists(dir_path + '/trtorch/lib'):\n os.makedirs(dir_path + '/trtorch/lib')\n\n print(\"copying library into module\")\n if multilinux:\n copyfile(dir_path + \"/build/libtrtorch_build/libtrtorch.so\", dir_path + '/trtorch/lib/libtrtorch.so')\n else:\n copyfile(dir_path + \"/../bazel-bin/cpp/api/lib/libtrtorch.so\", dir_path + '/trtorch/lib/libtrtorch.so')\n\nclass DevelopCommand(develop):\n description = \"Builds the package and symlinks it into the PYTHONPATH\"\n\n def initialize_options(self):\n develop.initialize_options(self)\n\n def finalize_options(self):\n develop.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n build_libtrtorch_pre_cxx11_abi(develop=True, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n develop.run(self)\n\n\nclass InstallCommand(install):\n description = \"Builds the package\"\n\n def initialize_options(self):\n install.initialize_options(self)\n\n def finalize_options(self):\n install.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n install.run(self)\n\nclass BdistCommand(bdist_wheel):\n description = \"Builds the package\"\n\n def initialize_options(self):\n bdist_wheel.initialize_options(self)\n\n def finalize_options(self):\n bdist_wheel.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n bdist_wheel.run(self)\n\nclass CleanCommand(Command):\n \"\"\"Custom clean command to tidy up the project root.\"\"\"\n PY_CLEAN_FILES = ['./build', './dist', './trtorch/__pycache__', './trtorch/lib', './*.pyc', './*.tgz', './*.egg-info']\n description = \"Command to tidy up the project root\"\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n for path_spec in self.PY_CLEAN_FILES:\n # Make paths absolute and relative to this path\n abs_paths = glob.glob(os.path.normpath(os.path.join(dir_path, path_spec)))\n for path in [str(p) for p in abs_paths]:\n if not path.startswith(dir_path):\n # Die if path in CLEAN_FILES is absolute + outside this directory\n raise ValueError(\"%s is not a path inside %s\" % (path, dir_path))\n print('Removing %s' % os.path.relpath(path))\n rmtree(path)\n\next_modules = [\n cpp_extension.CUDAExtension('trtorch._C',\n [\n 'trtorch/csrc/trtorch_py.cpp',\n 'trtorch/csrc/tensorrt_backend.cpp',\n 'trtorch/csrc/tensorrt_classes.cpp',\n 'trtorch/csrc/register_tensorrt_classes.cpp',\n ],\n library_dirs=[\n (dir_path + '/trtorch/lib/'),\n \"/opt/conda/lib/python3.6/config-3.6m-x86_64-linux-gnu\"\n ],\n libraries=[\n \"trtorch\"\n ],\n include_dirs=[\n dir_path + \"trtorch/csrc\",\n 
dir_path + \"/../\",\n dir_path + \"/../bazel-TRTorch/external/tensorrt/include\",\n ],\n extra_compile_args=[\n \"-Wno-deprecated\",\n \"-Wno-deprecated-declarations\",\n ] + ([\"-D_GLIBCXX_USE_CXX11_ABI=1\"] if CXX11_ABI else [\"-D_GLIBCXX_USE_CXX11_ABI=0\"]),\n extra_link_args=[\n \"-Wno-deprecated\",\n \"-Wno-deprecated-declarations\",\n \"-Wl,--no-as-needed\",\n \"-ltrtorch\",\n \"-Wl,-rpath,$ORIGIN/lib\",\n \"-lpthread\",\n \"-ldl\",\n \"-lutil\",\n \"-lrt\",\n \"-lm\",\n \"-Xlinker\",\n \"-export-dynamic\"\n ] + ([\"-D_GLIBCXX_USE_CXX11_ABI=1\"] if CXX11_ABI else [\"-D_GLIBCXX_USE_CXX11_ABI=0\"]),\n undef_macros=[ \"NDEBUG\" ]\n )\n]\n\nwith open(\"README.md\", \"r\") as fh:\n long_description = fh.read()\n\nsetup(\n name='trtorch',\n version=__version__,\n author='NVIDIA',\n author_email='[email protected]',\n url='https://nvidia.github.io/TRTorch',\n description='A compiler backend for PyTorch JIT targeting NVIDIA GPUs',\n long_description_content_type='text/markdown',\n long_description=long_description,\n ext_modules=ext_modules,\n install_requires=[\n 'torch==1.6.0',\n ],\n setup_requires=[],\n cmdclass={\n 'install': InstallCommand,\n 'clean': CleanCommand,\n 'develop': DevelopCommand,\n 'build_ext': cpp_extension.BuildExtension,\n 'bdist_wheel': BdistCommand,\n },\n zip_safe=False,\n license=\"BSD\",\n packages=find_packages(),\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: GPU :: NVIDIA CUDA\",\n \"License :: OSI Approved :: BSD License\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: C++\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Libraries\"\n ],\n python_requires='>=3.6',\n include_package_data=True,\n package_data={\n 'trtorch': ['lib/*.so'],\n },\n exclude_package_data={\n '': ['*.cpp', '*.h'],\n 'trtorch': ['csrc/*.cpp'],\n }\n)\n", "path": "py/setup.py"}], "after_files": [{"content": "import os\nimport sys\nimport glob\nimport setuptools\nfrom setuptools import setup, Extension, find_packages\nfrom setuptools.command.build_ext import build_ext\nfrom setuptools.command.develop import develop\nfrom setuptools.command.install import install\nfrom distutils.cmd import Command\nfrom wheel.bdist_wheel import bdist_wheel\n\nfrom torch.utils import cpp_extension\nfrom shutil import copyfile, rmtree\n\nimport subprocess\n\ndir_path = os.path.dirname(os.path.realpath(__file__))\n\n__version__ = '0.1.0a0'\n\nCXX11_ABI = False\n\nif \"--use-cxx11-abi\" in sys.argv:\n sys.argv.remove(\"--use-cxx11-abi\")\n CXX11_ABI = True\n\ndef which(program):\n import os\n def is_exe(fpath):\n return os.path.isfile(fpath) and os.access(fpath, os.X_OK)\n\n fpath, fname = os.path.split(program)\n if fpath:\n if is_exe(program):\n return program\n else:\n for path in os.environ[\"PATH\"].split(os.pathsep):\n exe_file = os.path.join(path, program)\n if is_exe(exe_file):\n return exe_file\n\n return None\n\nBAZEL_EXE = which(\"bazel\")\n\ndef build_libtrtorch_pre_cxx11_abi(develop=True, use_dist_dir=True, cxx11_abi=False):\n cmd = [BAZEL_EXE, \"build\"]\n cmd.append(\"//cpp/api/lib:libtrtorch.so\")\n if develop:\n cmd.append(\"--compilation_mode=dbg\")\n else:\n cmd.append(\"--compilation_mode=opt\")\n if use_dist_dir:\n 
cmd.append(\"--distdir=third_party/dist_dir/x86_64-linux-gnu\")\n if not cxx11_abi:\n cmd.append(\"--config=python\")\n else:\n print(\"using CXX11 ABI build\")\n\n print(\"building libtrtorch\")\n status_code = subprocess.run(cmd).returncode\n\n if status_code != 0:\n sys.exit(status_code)\n\n\ndef gen_version_file():\n if not os.path.exists(dir_path + '/trtorch/_version.py'):\n os.mknod(dir_path + '/trtorch/_version.py')\n\n with open(dir_path + '/trtorch/_version.py', 'w') as f:\n print(\"creating version file\")\n f.write(\"__version__ = \\\"\" + __version__ + '\\\"')\n\ndef copy_libtrtorch(multilinux=False):\n if not os.path.exists(dir_path + '/trtorch/lib'):\n os.makedirs(dir_path + '/trtorch/lib')\n\n print(\"copying library into module\")\n if multilinux:\n copyfile(dir_path + \"/build/libtrtorch_build/libtrtorch.so\", dir_path + '/trtorch/lib/libtrtorch.so')\n else:\n copyfile(dir_path + \"/../bazel-bin/cpp/api/lib/libtrtorch.so\", dir_path + '/trtorch/lib/libtrtorch.so')\n\nclass DevelopCommand(develop):\n description = \"Builds the package and symlinks it into the PYTHONPATH\"\n\n def initialize_options(self):\n develop.initialize_options(self)\n\n def finalize_options(self):\n develop.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n build_libtrtorch_pre_cxx11_abi(develop=True, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n develop.run(self)\n\n\nclass InstallCommand(install):\n description = \"Builds the package\"\n\n def initialize_options(self):\n install.initialize_options(self)\n\n def finalize_options(self):\n install.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n install.run(self)\n\nclass BdistCommand(bdist_wheel):\n description = \"Builds the package\"\n\n def initialize_options(self):\n bdist_wheel.initialize_options(self)\n\n def finalize_options(self):\n bdist_wheel.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n bdist_wheel.run(self)\n\nclass CleanCommand(Command):\n \"\"\"Custom clean command to tidy up the project root.\"\"\"\n PY_CLEAN_FILES = ['./build', './dist', './trtorch/__pycache__', './trtorch/lib', './*.pyc', './*.tgz', './*.egg-info']\n description = \"Command to tidy up the project root\"\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n for path_spec in self.PY_CLEAN_FILES:\n # Make paths absolute and relative to this path\n abs_paths = glob.glob(os.path.normpath(os.path.join(dir_path, path_spec)))\n for path in [str(p) for p in abs_paths]:\n if not path.startswith(dir_path):\n # Die if path in CLEAN_FILES is absolute + outside this directory\n raise ValueError(\"%s is not a path inside %s\" % (path, dir_path))\n print('Removing %s' % os.path.relpath(path))\n rmtree(path)\n\next_modules = [\n cpp_extension.CUDAExtension('trtorch._C',\n ['trtorch/csrc/trtorch_py.cpp'],\n library_dirs=[\n (dir_path + '/trtorch/lib/'),\n \"/opt/conda/lib/python3.6/config-3.6m-x86_64-linux-gnu\"\n ],\n libraries=[\n \"trtorch\"\n ],\n include_dirs=[\n dir_path + \"/../\",\n dir_path + \"/../bazel-TRTorch/external/tensorrt/include\",\n ],\n extra_compile_args=[\n \"-Wno-deprecated\",\n \"-Wno-deprecated-declarations\",\n ] + ([\"-D_GLIBCXX_USE_CXX11_ABI=1\"] if CXX11_ABI else [\"-D_GLIBCXX_USE_CXX11_ABI=0\"]),\n 
extra_link_args=[\n \"-Wno-deprecated\",\n \"-Wno-deprecated-declarations\",\n \"-Wl,--no-as-needed\",\n \"-ltrtorch\",\n \"-Wl,-rpath,$ORIGIN/lib\",\n \"-lpthread\",\n \"-ldl\",\n \"-lutil\",\n \"-lrt\",\n \"-lm\",\n \"-Xlinker\",\n \"-export-dynamic\"\n ] + ([\"-D_GLIBCXX_USE_CXX11_ABI=1\"] if CXX11_ABI else [\"-D_GLIBCXX_USE_CXX11_ABI=0\"]),\n undef_macros=[ \"NDEBUG\" ]\n )\n]\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n\nsetup(\n name='trtorch',\n version=__version__,\n author='NVIDIA',\n author_email='[email protected]',\n url='https://nvidia.github.io/TRTorch',\n description='A compiler backend for PyTorch JIT targeting NVIDIA GPUs',\n long_description_content_type='text/markdown',\n long_description=long_description,\n ext_modules=ext_modules,\n install_requires=[\n 'torch==1.6.0',\n ],\n setup_requires=[],\n cmdclass={\n 'install': InstallCommand,\n 'clean': CleanCommand,\n 'develop': DevelopCommand,\n 'build_ext': cpp_extension.BuildExtension,\n 'bdist_wheel': BdistCommand,\n },\n zip_safe=False,\n license=\"BSD\",\n packages=find_packages(),\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: GPU :: NVIDIA CUDA\",\n \"License :: OSI Approved :: BSD License\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: C++\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Libraries\"\n ],\n python_requires='>=3.6',\n include_package_data=True,\n package_data={\n 'trtorch': ['lib/*.so'],\n },\n exclude_package_data={\n '': ['*.cpp', '*.h'],\n 'trtorch': ['csrc/*.cpp'],\n }\n)\n", "path": "py/setup.py"}]} |
gh_patches_debug_77 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-1426 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix Flash CI (special examples failing)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # adapted from https://github.com/learnables/learn2learn/blob/master/examples/vision/protonet_miniimagenet.py#L154
16
17 """## Train file https://www.dropbox.com/s/9g8c6w345s2ek03/mini-imagenet-cache-train.pkl?dl=1
18
19 ## Validation File
20 https://www.dropbox.com/s/ip1b7se3gij3r1b/mini-imagenet-cache-validation.pkl?dl=1
21
22 Followed by renaming the pickle files
23 cp './mini-imagenet-cache-train.pkl?dl=1' './mini-imagenet-cache-train.pkl'
24 cp './mini-imagenet-cache-validation.pkl?dl=1' './mini-imagenet-cache-validation.pkl'
25 """
26
27 import warnings
28 from dataclasses import dataclass
29 from typing import Tuple, Union
30
31 import kornia.augmentation as Ka
32 import kornia.geometry as Kg
33 import learn2learn as l2l
34 import torch
35 import torchvision.transforms as T
36
37 import flash
38 from flash.core.data.io.input import DataKeys
39 from flash.core.data.io.input_transform import InputTransform
40 from flash.core.data.transforms import ApplyToKeys, kornia_collate
41 from flash.image import ImageClassificationData, ImageClassifier
42
43 warnings.simplefilter("ignore")
44
45 # download MiniImagenet
46 train_dataset = l2l.vision.datasets.MiniImagenet(root="./", mode="train", download=False)
47 val_dataset = l2l.vision.datasets.MiniImagenet(root="./", mode="validation", download=False)
48
49
50 @dataclass
51 class ImageClassificationInputTransform(InputTransform):
52
53 image_size: Tuple[int, int] = (196, 196)
54 mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)
55 std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)
56
57 def per_sample_transform(self):
58 return T.Compose(
59 [
60 ApplyToKeys(
61 DataKeys.INPUT,
62 T.Compose(
63 [
64 T.ToTensor(),
65 Kg.Resize((196, 196)),
66 # SPATIAL
67 Ka.RandomHorizontalFlip(p=0.25),
68 Ka.RandomRotation(degrees=90.0, p=0.25),
69 Ka.RandomAffine(degrees=1 * 5.0, shear=1 / 5, translate=1 / 20, p=0.25),
70 Ka.RandomPerspective(distortion_scale=1 / 25, p=0.25),
71 # PIXEL-LEVEL
72 Ka.ColorJitter(brightness=1 / 30, p=0.25), # brightness
73 Ka.ColorJitter(saturation=1 / 30, p=0.25), # saturation
74 Ka.ColorJitter(contrast=1 / 30, p=0.25), # contrast
75 Ka.ColorJitter(hue=1 / 30, p=0.25), # hue
76 Ka.RandomMotionBlur(kernel_size=2 * (4 // 3) + 1, angle=1, direction=1.0, p=0.25),
77 Ka.RandomErasing(scale=(1 / 100, 1 / 50), ratio=(1 / 20, 1), p=0.25),
78 ]
79 ),
80 ),
81 ApplyToKeys(DataKeys.TARGET, torch.as_tensor),
82 ]
83 )
84
85 def train_per_sample_transform(self):
86 return T.Compose(
87 [
88 ApplyToKeys(
89 DataKeys.INPUT,
90 T.Compose(
91 [
92 T.ToTensor(),
93 T.Resize(self.image_size),
94 T.Normalize(self.mean, self.std),
95 T.RandomHorizontalFlip(),
96 T.ColorJitter(),
97 T.RandomAutocontrast(),
98 T.RandomPerspective(),
99 ]
100 ),
101 ),
102 ApplyToKeys("target", torch.as_tensor),
103 ]
104 )
105
106 def per_batch_transform_on_device(self):
107 return ApplyToKeys(
108 DataKeys.INPUT,
109 Ka.RandomHorizontalFlip(p=0.25),
110 )
111
112 def collate(self):
113 return kornia_collate
114
115
116 # construct datamodule
117
118 datamodule = ImageClassificationData.from_tensors(
119 train_data=train_dataset.x,
120 train_targets=torch.from_numpy(train_dataset.y.astype(int)),
121 val_data=val_dataset.x,
122 val_targets=torch.from_numpy(val_dataset.y.astype(int)),
123 train_transform=ImageClassificationInputTransform,
124 val_transform=ImageClassificationInputTransform,
125 batch_size=1,
126 )
127
128 model = ImageClassifier(
129 backbone="resnet18",
130 training_strategy="prototypicalnetworks",
131 training_strategy_kwargs={
132 "epoch_length": 10 * 16,
133 "meta_batch_size": 1,
134 "num_tasks": 200,
135 "test_num_tasks": 2000,
136 "ways": datamodule.num_classes,
137 "shots": 1,
138 "test_ways": 5,
139 "test_shots": 1,
140 "test_queries": 15,
141 },
142 optimizer=torch.optim.Adam,
143 learning_rate=0.001,
144 )
145
146 trainer = flash.Trainer(
147 max_epochs=1,
148 gpus=1,
149 precision=16,
150 )
151
152 trainer.finetune(model, datamodule=datamodule, strategy="no_freeze")
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py b/flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py
--- a/flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py
+++ b/flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py
@@ -146,6 +146,7 @@
trainer = flash.Trainer(
max_epochs=1,
gpus=1,
+ accelerator="gpu",
precision=16,
)
| {"golden_diff": "diff --git a/flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py b/flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py\n--- a/flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py\n+++ b/flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py\n@@ -146,6 +146,7 @@\n trainer = flash.Trainer(\n max_epochs=1,\n gpus=1,\n+ accelerator=\"gpu\",\n precision=16,\n )\n", "issue": "Fix Flash CI (special examples failing)\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# adapted from https://github.com/learnables/learn2learn/blob/master/examples/vision/protonet_miniimagenet.py#L154\n\n\"\"\"## Train file https://www.dropbox.com/s/9g8c6w345s2ek03/mini-imagenet-cache-train.pkl?dl=1\n\n## Validation File\nhttps://www.dropbox.com/s/ip1b7se3gij3r1b/mini-imagenet-cache-validation.pkl?dl=1\n\nFollowed by renaming the pickle files\ncp './mini-imagenet-cache-train.pkl?dl=1' './mini-imagenet-cache-train.pkl'\ncp './mini-imagenet-cache-validation.pkl?dl=1' './mini-imagenet-cache-validation.pkl'\n\"\"\"\n\nimport warnings\nfrom dataclasses import dataclass\nfrom typing import Tuple, Union\n\nimport kornia.augmentation as Ka\nimport kornia.geometry as Kg\nimport learn2learn as l2l\nimport torch\nimport torchvision.transforms as T\n\nimport flash\nfrom flash.core.data.io.input import DataKeys\nfrom flash.core.data.io.input_transform import InputTransform\nfrom flash.core.data.transforms import ApplyToKeys, kornia_collate\nfrom flash.image import ImageClassificationData, ImageClassifier\n\nwarnings.simplefilter(\"ignore\")\n\n# download MiniImagenet\ntrain_dataset = l2l.vision.datasets.MiniImagenet(root=\"./\", mode=\"train\", download=False)\nval_dataset = l2l.vision.datasets.MiniImagenet(root=\"./\", mode=\"validation\", download=False)\n\n\n@dataclass\nclass ImageClassificationInputTransform(InputTransform):\n\n image_size: Tuple[int, int] = (196, 196)\n mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)\n std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)\n\n def per_sample_transform(self):\n return T.Compose(\n [\n ApplyToKeys(\n DataKeys.INPUT,\n T.Compose(\n [\n T.ToTensor(),\n Kg.Resize((196, 196)),\n # SPATIAL\n Ka.RandomHorizontalFlip(p=0.25),\n Ka.RandomRotation(degrees=90.0, p=0.25),\n Ka.RandomAffine(degrees=1 * 5.0, shear=1 / 5, translate=1 / 20, p=0.25),\n Ka.RandomPerspective(distortion_scale=1 / 25, p=0.25),\n # PIXEL-LEVEL\n Ka.ColorJitter(brightness=1 / 30, p=0.25), # brightness\n Ka.ColorJitter(saturation=1 / 30, p=0.25), # saturation\n Ka.ColorJitter(contrast=1 / 30, p=0.25), # contrast\n Ka.ColorJitter(hue=1 / 30, p=0.25), # hue\n Ka.RandomMotionBlur(kernel_size=2 * (4 // 3) + 1, angle=1, direction=1.0, p=0.25),\n Ka.RandomErasing(scale=(1 / 100, 1 / 50), ratio=(1 / 20, 1), p=0.25),\n ]\n ),\n ),\n ApplyToKeys(DataKeys.TARGET, torch.as_tensor),\n ]\n 
)\n\n def train_per_sample_transform(self):\n return T.Compose(\n [\n ApplyToKeys(\n DataKeys.INPUT,\n T.Compose(\n [\n T.ToTensor(),\n T.Resize(self.image_size),\n T.Normalize(self.mean, self.std),\n T.RandomHorizontalFlip(),\n T.ColorJitter(),\n T.RandomAutocontrast(),\n T.RandomPerspective(),\n ]\n ),\n ),\n ApplyToKeys(\"target\", torch.as_tensor),\n ]\n )\n\n def per_batch_transform_on_device(self):\n return ApplyToKeys(\n DataKeys.INPUT,\n Ka.RandomHorizontalFlip(p=0.25),\n )\n\n def collate(self):\n return kornia_collate\n\n\n# construct datamodule\n\ndatamodule = ImageClassificationData.from_tensors(\n train_data=train_dataset.x,\n train_targets=torch.from_numpy(train_dataset.y.astype(int)),\n val_data=val_dataset.x,\n val_targets=torch.from_numpy(val_dataset.y.astype(int)),\n train_transform=ImageClassificationInputTransform,\n val_transform=ImageClassificationInputTransform,\n batch_size=1,\n)\n\nmodel = ImageClassifier(\n backbone=\"resnet18\",\n training_strategy=\"prototypicalnetworks\",\n training_strategy_kwargs={\n \"epoch_length\": 10 * 16,\n \"meta_batch_size\": 1,\n \"num_tasks\": 200,\n \"test_num_tasks\": 2000,\n \"ways\": datamodule.num_classes,\n \"shots\": 1,\n \"test_ways\": 5,\n \"test_shots\": 1,\n \"test_queries\": 15,\n },\n optimizer=torch.optim.Adam,\n learning_rate=0.001,\n)\n\ntrainer = flash.Trainer(\n max_epochs=1,\n gpus=1,\n precision=16,\n)\n\ntrainer.finetune(model, datamodule=datamodule, strategy=\"no_freeze\")\n", "path": "flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# adapted from https://github.com/learnables/learn2learn/blob/master/examples/vision/protonet_miniimagenet.py#L154\n\n\"\"\"## Train file https://www.dropbox.com/s/9g8c6w345s2ek03/mini-imagenet-cache-train.pkl?dl=1\n\n## Validation File\nhttps://www.dropbox.com/s/ip1b7se3gij3r1b/mini-imagenet-cache-validation.pkl?dl=1\n\nFollowed by renaming the pickle files\ncp './mini-imagenet-cache-train.pkl?dl=1' './mini-imagenet-cache-train.pkl'\ncp './mini-imagenet-cache-validation.pkl?dl=1' './mini-imagenet-cache-validation.pkl'\n\"\"\"\n\nimport warnings\nfrom dataclasses import dataclass\nfrom typing import Tuple, Union\n\nimport kornia.augmentation as Ka\nimport kornia.geometry as Kg\nimport learn2learn as l2l\nimport torch\nimport torchvision.transforms as T\n\nimport flash\nfrom flash.core.data.io.input import DataKeys\nfrom flash.core.data.io.input_transform import InputTransform\nfrom flash.core.data.transforms import ApplyToKeys, kornia_collate\nfrom flash.image import ImageClassificationData, ImageClassifier\n\nwarnings.simplefilter(\"ignore\")\n\n# download MiniImagenet\ntrain_dataset = l2l.vision.datasets.MiniImagenet(root=\"./\", mode=\"train\", download=False)\nval_dataset = l2l.vision.datasets.MiniImagenet(root=\"./\", mode=\"validation\", download=False)\n\n\n@dataclass\nclass 
ImageClassificationInputTransform(InputTransform):\n\n image_size: Tuple[int, int] = (196, 196)\n mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)\n std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)\n\n def per_sample_transform(self):\n return T.Compose(\n [\n ApplyToKeys(\n DataKeys.INPUT,\n T.Compose(\n [\n T.ToTensor(),\n Kg.Resize((196, 196)),\n # SPATIAL\n Ka.RandomHorizontalFlip(p=0.25),\n Ka.RandomRotation(degrees=90.0, p=0.25),\n Ka.RandomAffine(degrees=1 * 5.0, shear=1 / 5, translate=1 / 20, p=0.25),\n Ka.RandomPerspective(distortion_scale=1 / 25, p=0.25),\n # PIXEL-LEVEL\n Ka.ColorJitter(brightness=1 / 30, p=0.25), # brightness\n Ka.ColorJitter(saturation=1 / 30, p=0.25), # saturation\n Ka.ColorJitter(contrast=1 / 30, p=0.25), # contrast\n Ka.ColorJitter(hue=1 / 30, p=0.25), # hue\n Ka.RandomMotionBlur(kernel_size=2 * (4 // 3) + 1, angle=1, direction=1.0, p=0.25),\n Ka.RandomErasing(scale=(1 / 100, 1 / 50), ratio=(1 / 20, 1), p=0.25),\n ]\n ),\n ),\n ApplyToKeys(DataKeys.TARGET, torch.as_tensor),\n ]\n )\n\n def train_per_sample_transform(self):\n return T.Compose(\n [\n ApplyToKeys(\n DataKeys.INPUT,\n T.Compose(\n [\n T.ToTensor(),\n T.Resize(self.image_size),\n T.Normalize(self.mean, self.std),\n T.RandomHorizontalFlip(),\n T.ColorJitter(),\n T.RandomAutocontrast(),\n T.RandomPerspective(),\n ]\n ),\n ),\n ApplyToKeys(\"target\", torch.as_tensor),\n ]\n )\n\n def per_batch_transform_on_device(self):\n return ApplyToKeys(\n DataKeys.INPUT,\n Ka.RandomHorizontalFlip(p=0.25),\n )\n\n def collate(self):\n return kornia_collate\n\n\n# construct datamodule\n\ndatamodule = ImageClassificationData.from_tensors(\n train_data=train_dataset.x,\n train_targets=torch.from_numpy(train_dataset.y.astype(int)),\n val_data=val_dataset.x,\n val_targets=torch.from_numpy(val_dataset.y.astype(int)),\n train_transform=ImageClassificationInputTransform,\n val_transform=ImageClassificationInputTransform,\n batch_size=1,\n)\n\nmodel = ImageClassifier(\n backbone=\"resnet18\",\n training_strategy=\"prototypicalnetworks\",\n training_strategy_kwargs={\n \"epoch_length\": 10 * 16,\n \"meta_batch_size\": 1,\n \"num_tasks\": 200,\n \"test_num_tasks\": 2000,\n \"ways\": datamodule.num_classes,\n \"shots\": 1,\n \"test_ways\": 5,\n \"test_shots\": 1,\n \"test_queries\": 15,\n },\n optimizer=torch.optim.Adam,\n learning_rate=0.001,\n)\n\ntrainer = flash.Trainer(\n max_epochs=1,\n gpus=1,\n accelerator=\"gpu\",\n precision=16,\n)\n\ntrainer.finetune(model, datamodule=datamodule, strategy=\"no_freeze\")\n", "path": "flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py"}]} |
gh_patches_debug_78 | rasdani/github-patches | git_diff | projectmesa__mesa-1860 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mesa.visualization.chartmodule doesn't work
As shown in the picture, I ran the boltzmann_wealth_model in the mesa examples, but the line chart is not displayed correctly. Can anyone help me?
<img width="788" alt="ε±εΉζͺεΎ 2023-11-04 183542" src="https://github.com/projectmesa/mesa/assets/75169342/89ba1b20-4011-471b-909e-5fea97da6b73">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import re
3 from codecs import open
4
5 from setuptools import find_packages, setup
6
7 requires = [
8 "click",
9 "cookiecutter",
10 "matplotlib",
11 "mesa_viz_tornado",
12 "networkx",
13 "numpy",
14 "pandas",
15 "solara",
16 "tqdm",
17 ]
18
19 extras_require = {
20 "dev": [
21 "black",
22 "ruff~=0.1.1", # Update periodically
23 "coverage",
24 "pytest >= 4.6",
25 "pytest-cov",
26 "sphinx",
27 ],
28 # Explicitly install ipykernel for Python 3.8.
29 # See https://stackoverflow.com/questions/28831854/how-do-i-add-python3-kernel-to-jupyter-ipython
30 # Could be removed in the future
31 "docs": [
32 "sphinx",
33 "ipython",
34 "ipykernel",
35 "pydata_sphinx_theme",
36 "seaborn",
37 "myst-nb",
38 ],
39 }
40
41 version = ""
42 with open("mesa/__init__.py") as fd:
43 version = re.search(
44 r'^__version__\s*=\s*[\'"]([^\'"]*)[\'"]', fd.read(), re.MULTILINE
45 ).group(1)
46
47 with open("README.rst", "rb", encoding="utf-8") as f:
48 readme = f.read()
49
50
51 setup(
52 name="Mesa",
53 version=version,
54 description="Agent-based modeling (ABM) in Python 3+",
55 long_description=readme,
56 author="Project Mesa Team",
57 author_email="[email protected]",
58 url="https://github.com/projectmesa/mesa",
59 packages=find_packages(),
60 package_data={
61 "cookiecutter-mesa": ["cookiecutter-mesa/*"],
62 },
63 include_package_data=True,
64 install_requires=requires,
65 extras_require=extras_require,
66 keywords="agent based modeling model ABM simulation multi-agent",
67 license="Apache 2.0",
68 zip_safe=False,
69 classifiers=[
70 "Topic :: Scientific/Engineering",
71 "Topic :: Scientific/Engineering :: Artificial Life",
72 "Topic :: Scientific/Engineering :: Artificial Intelligence",
73 "Intended Audience :: Science/Research",
74 "Programming Language :: Python :: 3 :: Only",
75 "Programming Language :: Python :: 3.8",
76 "Programming Language :: Python :: 3.9",
77 "Programming Language :: Python :: 3.10",
78 "License :: OSI Approved :: Apache Software License",
79 "Operating System :: OS Independent",
80 "Development Status :: 3 - Alpha",
81 "Natural Language :: English",
82 ],
83 entry_points="""
84 [console_scripts]
85 mesa=mesa.main:cli
86 """,
87 python_requires=">=3.8",
88 )
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -8,7 +8,7 @@
"click",
"cookiecutter",
"matplotlib",
- "mesa_viz_tornado",
+ "mesa_viz_tornado~=0.1.0,>=0.1.2",
"networkx",
"numpy",
"pandas",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -8,7 +8,7 @@\n \"click\",\n \"cookiecutter\",\n \"matplotlib\",\n- \"mesa_viz_tornado\",\n+ \"mesa_viz_tornado~=0.1.0,>=0.1.2\",\n \"networkx\",\n \"numpy\",\n \"pandas\",\n", "issue": "mesa.visualization.chartmodule doesn't work\nAs shown in the picture, I run the boltzmann_wealth_model in the mesa example, but the line chart is not displayed normally. Can anyone help me?\r\n<img width=\"788\" alt=\"\u5c4f\u5e55\u622a\u56fe 2023-11-04 183542\" src=\"https://github.com/projectmesa/mesa/assets/75169342/89ba1b20-4011-471b-909e-5fea97da6b73\">\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport re\nfrom codecs import open\n\nfrom setuptools import find_packages, setup\n\nrequires = [\n \"click\",\n \"cookiecutter\",\n \"matplotlib\",\n \"mesa_viz_tornado\",\n \"networkx\",\n \"numpy\",\n \"pandas\",\n \"solara\",\n \"tqdm\",\n]\n\nextras_require = {\n \"dev\": [\n \"black\",\n \"ruff~=0.1.1\", # Update periodically\n \"coverage\",\n \"pytest >= 4.6\",\n \"pytest-cov\",\n \"sphinx\",\n ],\n # Explicitly install ipykernel for Python 3.8.\n # See https://stackoverflow.com/questions/28831854/how-do-i-add-python3-kernel-to-jupyter-ipython\n # Could be removed in the future\n \"docs\": [\n \"sphinx\",\n \"ipython\",\n \"ipykernel\",\n \"pydata_sphinx_theme\",\n \"seaborn\",\n \"myst-nb\",\n ],\n}\n\nversion = \"\"\nwith open(\"mesa/__init__.py\") as fd:\n version = re.search(\n r'^__version__\\s*=\\s*[\\'\"]([^\\'\"]*)[\\'\"]', fd.read(), re.MULTILINE\n ).group(1)\n\nwith open(\"README.rst\", \"rb\", encoding=\"utf-8\") as f:\n readme = f.read()\n\n\nsetup(\n name=\"Mesa\",\n version=version,\n description=\"Agent-based modeling (ABM) in Python 3+\",\n long_description=readme,\n author=\"Project Mesa Team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/projectmesa/mesa\",\n packages=find_packages(),\n package_data={\n \"cookiecutter-mesa\": [\"cookiecutter-mesa/*\"],\n },\n include_package_data=True,\n install_requires=requires,\n extras_require=extras_require,\n keywords=\"agent based modeling model ABM simulation multi-agent\",\n license=\"Apache 2.0\",\n zip_safe=False,\n classifiers=[\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Life\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Intended Audience :: Science/Research\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 3 - Alpha\",\n \"Natural Language :: English\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n mesa=mesa.main:cli\n \"\"\",\n python_requires=\">=3.8\",\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport re\nfrom codecs import open\n\nfrom setuptools import find_packages, setup\n\nrequires = [\n \"click\",\n \"cookiecutter\",\n \"matplotlib\",\n \"mesa_viz_tornado~=0.1.0,>=0.1.2\",\n \"networkx\",\n \"numpy\",\n \"pandas\",\n \"solara\",\n \"tqdm\",\n]\n\nextras_require = {\n \"dev\": [\n \"black\",\n \"ruff~=0.1.1\", # Update periodically\n \"coverage\",\n \"pytest >= 4.6\",\n \"pytest-cov\",\n \"sphinx\",\n ],\n # Explicitly install ipykernel for Python 3.8.\n # See 
https://stackoverflow.com/questions/28831854/how-do-i-add-python3-kernel-to-jupyter-ipython\n # Could be removed in the future\n \"docs\": [\n \"sphinx\",\n \"ipython\",\n \"ipykernel\",\n \"pydata_sphinx_theme\",\n \"seaborn\",\n \"myst-nb\",\n ],\n}\n\nversion = \"\"\nwith open(\"mesa/__init__.py\") as fd:\n version = re.search(\n r'^__version__\\s*=\\s*[\\'\"]([^\\'\"]*)[\\'\"]', fd.read(), re.MULTILINE\n ).group(1)\n\nwith open(\"README.rst\", \"rb\", encoding=\"utf-8\") as f:\n readme = f.read()\n\n\nsetup(\n name=\"Mesa\",\n version=version,\n description=\"Agent-based modeling (ABM) in Python 3+\",\n long_description=readme,\n author=\"Project Mesa Team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/projectmesa/mesa\",\n packages=find_packages(),\n package_data={\n \"cookiecutter-mesa\": [\"cookiecutter-mesa/*\"],\n },\n include_package_data=True,\n install_requires=requires,\n extras_require=extras_require,\n keywords=\"agent based modeling model ABM simulation multi-agent\",\n license=\"Apache 2.0\",\n zip_safe=False,\n classifiers=[\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Life\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Intended Audience :: Science/Research\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 3 - Alpha\",\n \"Natural Language :: English\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n mesa=mesa.main:cli\n \"\"\",\n python_requires=\">=3.8\",\n)\n", "path": "setup.py"}]} |
gh_patches_debug_79 | rasdani/github-patches | git_diff | getsentry__sentry-45511 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The weekly reports mail is sent repeatedly 16 times, once every minute
### Environment
self-hosted (https://develop.sentry.dev/self-hosted/)
### Version
23.1.1
### Steps to Reproduce
1. Run the `Sentry cron` and `Sentry worker` services
2. The `Sentry worker` has three instances
### Expected Result
Receive only one weekly report email per week
### Actual Result
Receive 16 Sentry weekly report emails every Monday (received at one-minute intervals). All users within the organization each received 16 weekly report emails.
<img width="582" alt="image" src="https://user-images.githubusercontent.com/18591662/223436915-ab795659-3095-49f3-9aa6-73742706587b.png">
@Neo-Zhixing Hi
I suspect it has something to do with this PR, https://github.com/getsentry/sentry/pull/39911, but I cannot reproduce it in my local development environment and the problem only exists in our production environment. What is the possible cause? Can you give any useful information? Thank you very much!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/tasks/weekly_reports.py`
Content:
```
1 import heapq
2 import logging
3 from datetime import timedelta
4 from functools import partial, reduce
5
6 import sentry_sdk
7 from django.db.models import Count
8 from django.utils import dateformat, timezone
9 from sentry_sdk import set_tag
10 from snuba_sdk import Request
11 from snuba_sdk.column import Column
12 from snuba_sdk.conditions import Condition, Op
13 from snuba_sdk.entity import Entity
14 from snuba_sdk.expressions import Granularity
15 from snuba_sdk.function import Function
16 from snuba_sdk.orderby import Direction, OrderBy
17 from snuba_sdk.query import Limit, Query
18
19 from sentry.api.serializers.snuba import zerofill
20 from sentry.constants import DataCategory
21 from sentry.db.models.fields import PickledObjectField
22 from sentry.models import (
23 Activity,
24 Group,
25 GroupHistory,
26 GroupHistoryStatus,
27 GroupStatus,
28 Organization,
29 OrganizationMember,
30 OrganizationStatus,
31 User,
32 )
33 from sentry.snuba.dataset import Dataset
34 from sentry.tasks.base import instrumented_task
35 from sentry.types.activity import ActivityType
36 from sentry.utils import json
37 from sentry.utils.dates import floor_to_utc_day, to_datetime, to_timestamp
38 from sentry.utils.email import MessageBuilder
39 from sentry.utils.outcomes import Outcome
40 from sentry.utils.query import RangeQuerySetWrapper
41 from sentry.utils.snuba import parse_snuba_datetime, raw_snql_query
42
43 ONE_DAY = int(timedelta(days=1).total_seconds())
44 date_format = partial(dateformat.format, format_string="F jS, Y")
45
46 logger = logging.getLogger(__name__)
47
48
49 class OrganizationReportContext:
50 def __init__(self, timestamp, duration, organization):
51 self.timestamp = timestamp
52 self.duration = duration
53
54 self.start = to_datetime(timestamp - duration)
55 self.end = to_datetime(timestamp)
56
57 self.organization = organization
58 self.projects = {} # { project_id: ProjectContext }
59
60 self.project_ownership = {} # { user_id: set<project_id> }
61 for project in organization.project_set.all():
62 self.projects[project.id] = ProjectContext(project)
63
64 def __repr__(self):
65 return self.projects.__repr__()
66
67
68 class ProjectContext:
69 accepted_error_count = 0
70 dropped_error_count = 0
71 accepted_transaction_count = 0
72 dropped_transaction_count = 0
73
74 all_issue_count = 0
75 existing_issue_count = 0
76 reopened_issue_count = 0
77 new_issue_count = 0
78
79 def __init__(self, project):
80 self.project = project
81
82 # Array of (group_id, group_history, count)
83 self.key_errors = []
84 # Array of (transaction_name, count_this_week, p95_this_week, count_last_week, p95_last_week)
85 self.key_transactions = []
86 # Array of (Group, count)
87 self.key_performance_issues = []
88
89 # Dictionary of { timestamp: count }
90 self.error_count_by_day = {}
91 # Dictionary of { timestamp: count }
92 self.transaction_count_by_day = {}
93
94 def __repr__(self):
95 return f"{self.key_errors}, Errors: [Accepted {self.accepted_error_count}, Dropped {self.dropped_error_count}]\nTransactions: [Accepted {self.accepted_transaction_count} Dropped {self.dropped_transaction_count}]"
96
97
98 def check_if_project_is_empty(project_ctx):
99 """
100 Check if this project has any content we could show in an email.
101 """
102 return (
103 not project_ctx.key_errors
104 and not project_ctx.key_transactions
105 and not project_ctx.key_performance_issues
106 and not project_ctx.accepted_error_count
107 and not project_ctx.dropped_error_count
108 and not project_ctx.accepted_transaction_count
109 and not project_ctx.dropped_transaction_count
110 )
111
112
113 def check_if_ctx_is_empty(ctx):
114 """
115 Check if the context is empty. If it is, we don't want to send an email.
116 """
117 return all(check_if_project_is_empty(project_ctx) for project_ctx in ctx.projects.values())
118
119
120 # The entry point. This task is scheduled to run every week.
121 @instrumented_task(
122 name="sentry.tasks.weekly_reports.schedule_organizations",
123 queue="reports.prepare",
124 max_retries=5,
125 acks_late=True,
126 )
127 def schedule_organizations(dry_run=False, timestamp=None, duration=None):
128 if timestamp is None:
129 # The time that the report was generated
130 timestamp = to_timestamp(floor_to_utc_day(timezone.now()))
131
132 if duration is None:
133 # The total timespan that the task covers
134 duration = ONE_DAY * 7
135
136 organizations = Organization.objects.filter(status=OrganizationStatus.ACTIVE)
137 for organization in RangeQuerySetWrapper(
138 organizations, step=10000, result_value_getter=lambda item: item.id
139 ):
140 # Create a celery task per organization
141 prepare_organization_report.delay(timestamp, duration, organization.id, dry_run=dry_run)
142
143
144 # This task is launched per-organization.
145 @instrumented_task(
146 name="sentry.tasks.weekly_reports.prepare_organization_report",
147 queue="reports.prepare",
148 max_retries=5,
149 acks_late=True,
150 )
151 def prepare_organization_report(
152 timestamp, duration, organization_id, dry_run=False, target_user=None, email_override=None
153 ):
154 organization = Organization.objects.get(id=organization_id)
155 set_tag("org.slug", organization.slug)
156 set_tag("org.id", organization_id)
157 ctx = OrganizationReportContext(timestamp, duration, organization)
158
159 # Run organization passes
160 with sentry_sdk.start_span(op="weekly_reports.user_project_ownership"):
161 user_project_ownership(ctx)
162 with sentry_sdk.start_span(op="weekly_reports.project_event_counts_for_organization"):
163 project_event_counts_for_organization(ctx)
164 with sentry_sdk.start_span(op="weekly_reports.organization_project_issue_summaries"):
165 organization_project_issue_summaries(ctx)
166
167 with sentry_sdk.start_span(op="weekly_reports.project_passes"):
168 # Run project passes
169 for project in organization.project_set.all():
170 project_key_errors(ctx, project)
171 project_key_transactions(ctx, project)
172 project_key_performance_issues(ctx, project)
173
174 with sentry_sdk.start_span(op="weekly_reports.fetch_key_error_groups"):
175 fetch_key_error_groups(ctx)
176 with sentry_sdk.start_span(op="weekly_reports.fetch_key_performance_issue_groups"):
177 fetch_key_performance_issue_groups(ctx)
178
179 report_is_available = not check_if_ctx_is_empty(ctx)
180 set_tag("report.available", report_is_available)
181
182 if not report_is_available:
183 logger.info(
184 "prepare_organization_report.skipping_empty", extra={"organization": organization_id}
185 )
186 return
187
188 # Finally, deliver the reports
189 with sentry_sdk.start_span(op="weekly_reports.deliver_reports"):
190 deliver_reports(
191 ctx, dry_run=dry_run, target_user=target_user, email_override=email_override
192 )
193
194
195 # Organization Passes
196
197 # Find the projects associated with an user.
198 # Populates context.project_ownership which is { user_id: set<project_id> }
199 def user_project_ownership(ctx):
200 for (project_id, user_id) in OrganizationMember.objects.filter(
201 organization_id=ctx.organization.id, teams__projectteam__project__isnull=False
202 ).values_list("teams__projectteam__project_id", "user_id"):
203 ctx.project_ownership.setdefault(user_id, set()).add(project_id)
204
205
206 # Populates context.projects which is { project_id: ProjectContext }
207 def project_event_counts_for_organization(ctx):
208 def zerofill_data(data):
209 return zerofill(data, ctx.start, ctx.end, ONE_DAY, fill_default=0)
210
211 query = Query(
212 match=Entity("outcomes"),
213 select=[
214 Column("outcome"),
215 Column("category"),
216 Function("sum", [Column("quantity")], "total"),
217 ],
218 where=[
219 Condition(Column("timestamp"), Op.GTE, ctx.start),
220 Condition(Column("timestamp"), Op.LT, ctx.end + timedelta(days=1)),
221 Condition(Column("org_id"), Op.EQ, ctx.organization.id),
222 Condition(
223 Column("outcome"), Op.IN, [Outcome.ACCEPTED, Outcome.FILTERED, Outcome.RATE_LIMITED]
224 ),
225 Condition(
226 Column("category"),
227 Op.IN,
228 [*DataCategory.error_categories(), DataCategory.TRANSACTION],
229 ),
230 ],
231 groupby=[Column("outcome"), Column("category"), Column("project_id"), Column("time")],
232 granularity=Granularity(ONE_DAY),
233 orderby=[OrderBy(Column("time"), Direction.ASC)],
234 )
235 request = Request(dataset=Dataset.Outcomes.value, app_id="reports", query=query)
236 data = raw_snql_query(request, referrer="weekly_reports.outcomes")["data"]
237
238 for dat in data:
239 project_id = dat["project_id"]
240 project_ctx = ctx.projects[project_id]
241 total = dat["total"]
242 timestamp = int(to_timestamp(parse_snuba_datetime(dat["time"])))
243 if dat["category"] == DataCategory.TRANSACTION:
244 # Transaction outcome
245 if dat["outcome"] == Outcome.RATE_LIMITED or dat["outcome"] == Outcome.FILTERED:
246 project_ctx.dropped_transaction_count += total
247 else:
248 project_ctx.accepted_transaction_count += total
249 project_ctx.transaction_count_by_day[timestamp] = total
250 else:
251 # Error outcome
252 if dat["outcome"] == Outcome.RATE_LIMITED or dat["outcome"] == Outcome.FILTERED:
253 project_ctx.dropped_error_count += total
254 else:
255 project_ctx.accepted_error_count += total
256 project_ctx.error_count_by_day[timestamp] = (
257 project_ctx.error_count_by_day.get(timestamp, 0) + total
258 )
259
260
261 def organization_project_issue_summaries(ctx):
262 all_issues = Group.objects.exclude(status=GroupStatus.IGNORED)
263 new_issue_counts = (
264 all_issues.filter(
265 project__organization_id=ctx.organization.id,
266 first_seen__gte=ctx.start,
267 first_seen__lt=ctx.end,
268 )
269 .values("project_id")
270 .annotate(total=Count("*"))
271 )
272 new_issue_counts = {item["project_id"]: item["total"] for item in new_issue_counts}
273
274 # Fetch all regressions. This is a little weird, since there's no way to
275 # tell *when* a group regressed using the Group model. Instead, we query
276 # all groups that have been seen in the last week and have ever regressed
277 # and query the Activity model to find out if they regressed within the
278 # past week. (In theory, the activity table *could* be used to answer this
279 # query without the subselect, but there's no suitable indexes to make it's
280 # performance predictable.)
281 reopened_issue_counts = (
282 Activity.objects.filter(
283 project__organization_id=ctx.organization.id,
284 group__in=all_issues.filter(
285 last_seen__gte=ctx.start,
286 last_seen__lt=ctx.end,
287 resolved_at__isnull=False, # signals this has *ever* been resolved
288 ),
289 type__in=(ActivityType.SET_REGRESSION.value, ActivityType.SET_UNRESOLVED.value),
290 datetime__gte=ctx.start,
291 datetime__lt=ctx.end,
292 )
293 .values("group__project_id")
294 .annotate(total=Count("group_id", distinct=True))
295 )
296 reopened_issue_counts = {
297 item["group__project_id"]: item["total"] for item in reopened_issue_counts
298 }
299
300 # Issues seen at least once over the past week
301 active_issue_counts = (
302 all_issues.filter(
303 project__organization_id=ctx.organization.id,
304 last_seen__gte=ctx.start,
305 last_seen__lt=ctx.end,
306 )
307 .values("project_id")
308 .annotate(total=Count("*"))
309 )
310 active_issue_counts = {item["project_id"]: item["total"] for item in active_issue_counts}
311
312 for project_ctx in ctx.projects.values():
313 project_id = project_ctx.project.id
314 active_issue_count = active_issue_counts.get(project_id, 0)
315 project_ctx.reopened_issue_count = reopened_issue_counts.get(project_id, 0)
316 project_ctx.new_issue_count = new_issue_counts.get(project_id, 0)
317 project_ctx.existing_issue_count = max(
318 active_issue_count - project_ctx.reopened_issue_count - project_ctx.new_issue_count, 0
319 )
320 project_ctx.all_issue_count = (
321 project_ctx.reopened_issue_count
322 + project_ctx.new_issue_count
323 + project_ctx.existing_issue_count
324 )
325
326
327 # Project passes
328 def project_key_errors(ctx, project):
329 if not project.first_event:
330 return
331 # Take the 3 most frequently occuring events
332 with sentry_sdk.start_span(op="weekly_reports.project_key_errors"):
333 query = Query(
334 match=Entity("events"),
335 select=[Column("group_id"), Function("count", [])],
336 where=[
337 Condition(Column("timestamp"), Op.GTE, ctx.start),
338 Condition(Column("timestamp"), Op.LT, ctx.end + timedelta(days=1)),
339 Condition(Column("project_id"), Op.EQ, project.id),
340 ],
341 groupby=[Column("group_id")],
342 orderby=[OrderBy(Function("count", []), Direction.DESC)],
343 limit=Limit(3),
344 )
345 request = Request(dataset=Dataset.Events.value, app_id="reports", query=query)
346 query_result = raw_snql_query(request, referrer="reports.key_errors")
347 key_errors = query_result["data"]
348 # Set project_ctx.key_errors to be an array of (group_id, count) for now.
349 # We will query the group history later on in `fetch_key_error_groups`, batched in a per-organization basis
350 ctx.projects[project.id].key_errors = [(e["group_id"], e["count()"]) for e in key_errors]
351 if ctx.organization.slug == "sentry":
352 logger.info(
353 "project_key_errors.results",
354 extra={"project_id": project.id, "num_key_errors": len(key_errors)},
355 )
356
357
358 # Organization pass. Depends on project_key_errors.
359 def fetch_key_error_groups(ctx):
360 all_key_error_group_ids = []
361 for project_ctx in ctx.projects.values():
362 all_key_error_group_ids.extend([group_id for group_id, count in project_ctx.key_errors])
363
364 if len(all_key_error_group_ids) == 0:
365 return
366
367 group_id_to_group = {}
368 for group in Group.objects.filter(id__in=all_key_error_group_ids).all():
369 group_id_to_group[group.id] = group
370
371 group_history = (
372 GroupHistory.objects.filter(
373 group_id__in=all_key_error_group_ids, organization_id=ctx.organization.id
374 )
375 .order_by("group_id", "-date_added")
376 .distinct("group_id")
377 .all()
378 )
379 group_id_to_group_history = {g.group_id: g for g in group_history}
380
381 for project_ctx in ctx.projects.values():
382 # note Snuba might have groups that have since been deleted
383 # we should just ignore those
384 project_ctx.key_errors = list(
385 filter(
386 lambda x: x[0] is not None,
387 [
388 (
389 group_id_to_group.get(group_id),
390 group_id_to_group_history.get(group_id, None),
391 count,
392 )
393 for group_id, count in project_ctx.key_errors
394 ],
395 )
396 )
397
398
399 def project_key_transactions(ctx, project):
400 if not project.flags.has_transactions:
401 return
402 with sentry_sdk.start_span(op="weekly_reports.project_key_transactions"):
403 # Take the 3 most frequently occuring transactions this week
404 query = Query(
405 match=Entity("transactions"),
406 select=[
407 Column("transaction_name"),
408 Function("quantile(0.95)", [Column("duration")], "p95"),
409 Function("count", [], "count"),
410 ],
411 where=[
412 Condition(Column("finish_ts"), Op.GTE, ctx.start),
413 Condition(Column("finish_ts"), Op.LT, ctx.end + timedelta(days=1)),
414 Condition(Column("project_id"), Op.EQ, project.id),
415 ],
416 groupby=[Column("transaction_name")],
417 orderby=[OrderBy(Function("count", []), Direction.DESC)],
418 limit=Limit(3),
419 )
420 request = Request(dataset=Dataset.Transactions.value, app_id="reports", query=query)
421 query_result = raw_snql_query(request, referrer="weekly_reports.key_transactions.this_week")
422 key_transactions = query_result["data"]
423 ctx.projects[project.id].key_transactions_this_week = [
424 (i["transaction_name"], i["count"], i["p95"]) for i in key_transactions
425 ]
426
427 # Query the p95 for those transactions last week
428 query = Query(
429 match=Entity("transactions"),
430 select=[
431 Column("transaction_name"),
432 Function("quantile(0.95)", [Column("duration")], "p95"),
433 Function("count", [], "count"),
434 ],
435 where=[
436 Condition(Column("finish_ts"), Op.GTE, ctx.start - timedelta(days=7)),
437 Condition(Column("finish_ts"), Op.LT, ctx.end - timedelta(days=7)),
438 Condition(Column("project_id"), Op.EQ, project.id),
439 Condition(
440 Column("transaction_name"),
441 Op.IN,
442 [i["transaction_name"] for i in key_transactions],
443 ),
444 ],
445 groupby=[Column("transaction_name")],
446 )
447 request = Request(dataset=Dataset.Transactions.value, app_id="reports", query=query)
448 query_result = raw_snql_query(request, referrer="weekly_reports.key_transactions.last_week")
449
450 # Join this week with last week
451 last_week_data = {
452 i["transaction_name"]: (i["count"], i["p95"]) for i in query_result["data"]
453 }
454
455 ctx.projects[project.id].key_transactions = [
456 (i["transaction_name"], i["count"], i["p95"])
457 + last_week_data.get(i["transaction_name"], (0, 0))
458 for i in key_transactions
459 ]
460
461
462 def project_key_performance_issues(ctx, project):
463 if not project.first_event:
464 return
465
466 with sentry_sdk.start_span(op="weekly_reports.project_key_performance_issues"):
467 # Pick the 50 top frequent performance issues last seen within a month with the highest event count from all time.
468 # Then, we use this to join with snuba, hoping that the top 3 issue by volume counted in snuba would be within this list.
469 # We do this to limit the number of group_ids snuba has to join with.
470 groups = Group.objects.filter(
471 project_id=project.id,
472 status=GroupStatus.UNRESOLVED,
473 last_seen__gte=ctx.end - timedelta(days=30),
474 # performance issue range
475 type__gte=1000,
476 type__lt=2000,
477 ).order_by("-times_seen")[:50]
478 # Django doesn't have a .limit function, and this will actually do its magic to use the LIMIT statement.
479 groups = list(groups)
480 group_id_to_group = {group.id: group for group in groups}
481
482 if len(group_id_to_group) == 0:
483 return
484
485 # Fine grained query for 3 most frequent events happend during last week
486 query = Query(
487 match=Entity("transactions"),
488 select=[
489 Column("group_ids"),
490 Function("count", []),
491 ],
492 where=[
493 Condition(Column("finish_ts"), Op.GTE, ctx.start),
494 Condition(Column("finish_ts"), Op.LT, ctx.end + timedelta(days=1)),
495 # transactions.group_ids is a list of group_ids that the transaction was associated with.
496 # We want to find the transactions associated with group_id_to_group.keys()
497 # That means group_ids must intersect with group_id_to_group.keys() in order for the transaction to be counted.
498 Condition(
499 Function(
500 "notEmpty",
501 [
502 Function(
503 "arrayIntersect",
504 [Column("group_ids"), list(group_id_to_group.keys())],
505 )
506 ],
507 ),
508 Op.EQ,
509 1,
510 ),
511 Condition(Column("project_id"), Op.EQ, project.id),
512 ],
513 groupby=[Column("group_ids")],
514 orderby=[OrderBy(Function("count", []), Direction.DESC)],
515 limit=Limit(3),
516 )
517 request = Request(dataset=Dataset.Transactions.value, app_id="reports", query=query)
518 query_result = raw_snql_query(request, referrer="reports.key_performance_issues")["data"]
519
520 key_performance_issues = []
521 for d in query_result:
522 count = d["count()"]
523 group_ids = d["group_ids"]
524 for group_id in group_ids:
525 group = group_id_to_group.get(group_id)
526 if group:
527 key_performance_issues.append((group, count))
528 break
529
530 ctx.projects[project.id].key_performance_issues = key_performance_issues
531
532
533 # Organization pass. Depends on project_key_performance_issue.
534 def fetch_key_performance_issue_groups(ctx):
535 all_groups = []
536 for project_ctx in ctx.projects.values():
537 all_groups.extend([group for group, count in project_ctx.key_performance_issues])
538
539 if len(all_groups) == 0:
540 return
541
542 group_id_to_group = {group.id: group for group in all_groups}
543
544 group_history = (
545 GroupHistory.objects.filter(
546 group_id__in=group_id_to_group.keys(), organization_id=ctx.organization.id
547 )
548 .order_by("group_id", "-date_added")
549 .distinct("group_id")
550 .all()
551 )
552 group_id_to_group_history = {g.group_id: g for g in group_history}
553
554 for project_ctx in ctx.projects.values():
555 project_ctx.key_performance_issues = [
556 (group, group_id_to_group_history.get(group.id, None), count)
557 for group, count in project_ctx.key_performance_issues
558 ]
559
560
561 # Deliver reports
562 # For all users in the organization, we generate the template context for the user, and send the email.
563
564
565 def deliver_reports(ctx, dry_run=False, target_user=None, email_override=None):
566 # Specify a sentry user to send this email.
567 if email_override:
568 send_email(ctx, target_user, dry_run=dry_run, email_override=email_override)
569 else:
570 # We save the subscription status of the user in a field in UserOptions.
571 # Here we do a raw query and LEFT JOIN on a subset of UserOption table where sentry_useroption.key = 'reports:disabled-organizations'
572 user_set = User.objects.raw(
573 """SELECT auth_user.*, sentry_useroption.value as options FROM auth_user
574 INNER JOIN sentry_organizationmember on sentry_organizationmember.user_id=auth_user.id
575 LEFT JOIN sentry_useroption on sentry_useroption.user_id = auth_user.id and sentry_useroption.key = 'reports:disabled-organizations'
576 WHERE auth_user.is_active = true
577 AND "sentry_organizationmember"."flags" & %s = 0
578 AND "sentry_organizationmember"."organization_id"= %s """,
579 [OrganizationMember.flags["member-limit:restricted"], ctx.organization.id],
580 )
581
582 for user in user_set:
583 # We manually pick out user.options and use PickledObjectField to deserialize it. We get a list of organizations the user has unsubscribed from user reports
584 option = PickledObjectField().to_python(user.options) or []
585 user_subscribed_to_organization_reports = ctx.organization.id not in option
586 if user_subscribed_to_organization_reports:
587 send_email(ctx, user, dry_run=dry_run)
588
589
590 project_breakdown_colors = ["#422C6E", "#895289", "#D6567F", "#F38150", "#F2B713"]
591 total_color = """
592 linear-gradient(
593 -45deg,
594 #ccc 25%,
595 transparent 25%,
596 transparent 50%,
597 #ccc 50%,
598 #ccc 75%,
599 transparent 75%,
600 transparent
601 );
602 """
603 other_color = "#f2f0fa"
604 group_status_to_color = {
605 GroupHistoryStatus.UNRESOLVED: "#FAD473",
606 GroupHistoryStatus.RESOLVED: "#8ACBBC",
607 GroupHistoryStatus.SET_RESOLVED_IN_RELEASE: "#8ACBBC",
608 GroupHistoryStatus.SET_RESOLVED_IN_COMMIT: "#8ACBBC",
609 GroupHistoryStatus.SET_RESOLVED_IN_PULL_REQUEST: "#8ACBBC",
610 GroupHistoryStatus.AUTO_RESOLVED: "#8ACBBC",
611 GroupHistoryStatus.IGNORED: "#DBD6E1",
612 GroupHistoryStatus.UNIGNORED: "#FAD473",
613 GroupHistoryStatus.ASSIGNED: "#FAAAAC",
614 GroupHistoryStatus.UNASSIGNED: "#FAD473",
615 GroupHistoryStatus.REGRESSED: "#FAAAAC",
616 GroupHistoryStatus.DELETED: "#DBD6E1",
617 GroupHistoryStatus.DELETED_AND_DISCARDED: "#DBD6E1",
618 GroupHistoryStatus.REVIEWED: "#FAD473",
619 GroupHistoryStatus.NEW: "#FAD473",
620 }
621
622
623 # Serialize ctx for template, and calculate view parameters (like graph bar heights)
624 def render_template_context(ctx, user):
625 # Fetch the list of projects associated with the user.
626 # Projects owned by teams that the user has membership of.
627 if user and user.id in ctx.project_ownership:
628 user_projects = list(
629 filter(
630 lambda project_ctx: project_ctx.project.id in ctx.project_ownership[user.id],
631 ctx.projects.values(),
632 )
633 )
634 if len(user_projects) == 0:
635 return None
636 else:
637 # If user is None, or if the user is not a member of the organization, we assume that the email was directed to a user who joined all teams.
638 user_projects = ctx.projects.values()
639
640 # Render the first section of the email where we had the table showing the
641 # number of accepted/dropped errors/transactions for each project.
642 def trends():
643 # Given an iterator of event counts, sum up their accepted/dropped errors/transaction counts.
644 def sum_event_counts(project_ctxs):
645 return reduce(
646 lambda a, b: (a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]),
647 [
648 (
649 project_ctx.accepted_error_count,
650 project_ctx.dropped_error_count,
651 project_ctx.accepted_transaction_count,
652 project_ctx.dropped_transaction_count,
653 )
654 for project_ctx in project_ctxs
655 ],
656 (0, 0, 0, 0),
657 )
658
659 # Highest volume projects go first
660 projects_associated_with_user = sorted(
661 user_projects,
662 reverse=True,
663 key=lambda item: item.accepted_error_count + (item.accepted_transaction_count / 10),
664 )
665 # Calculate total
666 (
667 total_error,
668 total_dropped_error,
669 total_transaction,
670 total_dropped_transaction,
671 ) = sum_event_counts(projects_associated_with_user)
672 # The number of reports to keep is the same as the number of colors
673 # available to use in the legend.
674 projects_taken = projects_associated_with_user[: len(project_breakdown_colors)]
675 # All other items are merged to "Others"
676 projects_not_taken = projects_associated_with_user[len(project_breakdown_colors) :]
677
678 # Calculate legend
679 legend = [
680 {
681 "slug": project_ctx.project.slug,
682 "url": project_ctx.project.get_absolute_url(),
683 "color": project_breakdown_colors[i],
684 "dropped_error_count": project_ctx.dropped_error_count,
685 "accepted_error_count": project_ctx.accepted_error_count,
686 "dropped_transaction_count": project_ctx.dropped_transaction_count,
687 "accepted_transaction_count": project_ctx.accepted_transaction_count,
688 }
689 for i, project_ctx in enumerate(projects_taken)
690 ]
691
692 if len(projects_not_taken) > 0:
693 (
694 others_error,
695 others_dropped_error,
696 others_transaction,
697 others_dropped_transaction,
698 ) = sum_event_counts(projects_not_taken)
699 legend.append(
700 {
701 "slug": f"Other ({len(projects_not_taken)})",
702 "color": other_color,
703 "dropped_error_count": others_dropped_error,
704 "accepted_error_count": others_error,
705 "dropped_transaction_count": others_dropped_transaction,
706 "accepted_transaction_count": others_transaction,
707 }
708 )
709 if len(projects_taken) > 1:
710 legend.append(
711 {
712 "slug": f"Total ({len(projects_associated_with_user)})",
713 "color": total_color,
714 "dropped_error_count": total_dropped_error,
715 "accepted_error_count": total_error,
716 "dropped_transaction_count": total_dropped_transaction,
717 "accepted_transaction_count": total_transaction,
718 }
719 )
720
721 # Calculate series
722 series = []
723 for i in range(0, 7):
724 t = int(to_timestamp(ctx.start)) + ONE_DAY * i
725 project_series = [
726 {
727 "color": project_breakdown_colors[i],
728 "error_count": project_ctx.error_count_by_day.get(t, 0),
729 "transaction_count": project_ctx.transaction_count_by_day.get(t, 0),
730 }
731 for i, project_ctx in enumerate(projects_taken)
732 ]
733 if len(projects_not_taken) > 0:
734 project_series.append(
735 {
736 "color": other_color,
737 "error_count": sum(
738 map(
739 lambda project_ctx: project_ctx.error_count_by_day.get(t, 0),
740 projects_not_taken,
741 )
742 ),
743 "transaction_count": sum(
744 map(
745 lambda project_ctx: project_ctx.transaction_count_by_day.get(t, 0),
746 projects_not_taken,
747 )
748 ),
749 }
750 )
751 series.append((to_datetime(t), project_series))
752 return {
753 "legend": legend,
754 "series": series,
755 "total_error_count": total_error,
756 "total_transaction_count": total_transaction,
757 "error_maximum": max( # The max error count on any single day
758 sum(value["error_count"] for value in values) for timestamp, values in series
759 ),
760 "transaction_maximum": max( # The max transaction count on any single day
761 sum(value["transaction_count"] for value in values) for timestamp, values in series
762 )
763 if len(projects_taken) > 0
764 else 0,
765 }
766
767 def key_errors():
768 # TODO(Steve): Remove debug logging for Sentry
769 def all_key_errors():
770 if ctx.organization.slug == "sentry":
771 logger.info(
772 "render_template_context.all_key_errors.num_projects",
773 extra={"user_id": user.id, "num_user_projects": len(user_projects)},
774 )
775 for project_ctx in user_projects:
776 if ctx.organization.slug == "sentry":
777 logger.info(
778 "render_template_context.all_key_errors.project",
779 extra={
780 "user_id": user.id,
781 "project_id": project_ctx.project.id,
782 },
783 )
784 for group, group_history, count in project_ctx.key_errors:
785 if ctx.organization.slug == "sentry":
786 logger.info(
787 "render_template_context.all_key_errors.found_error",
788 extra={
789 "group_id": group.id,
790 "user_id": user.id,
791 "project_id": project_ctx.project.id,
792 },
793 )
794 yield {
795 "count": count,
796 "group": group,
797 "status": group_history.get_status_display()
798 if group_history
799 else "Unresolved",
800 "status_color": group_status_to_color[group_history.status]
801 if group_history
802 else group_status_to_color[GroupHistoryStatus.NEW],
803 }
804
805 return heapq.nlargest(3, all_key_errors(), lambda d: d["count"])
806
807 def key_transactions():
808 def all_key_transactions():
809 for project_ctx in user_projects:
810 for (
811 transaction_name,
812 count_this_week,
813 p95_this_week,
814 count_last_week,
815 p95_last_week,
816 ) in project_ctx.key_transactions:
817 yield {
818 "name": transaction_name,
819 "count": count_this_week,
820 "p95": p95_this_week,
821 "p95_prev_week": p95_last_week,
822 "project": project_ctx.project,
823 }
824
825 return heapq.nlargest(3, all_key_transactions(), lambda d: d["count"])
826
827 def key_performance_issues():
828 def all_key_performance_issues():
829 for project_ctx in user_projects:
830 for (group, group_history, count) in project_ctx.key_performance_issues:
831 yield {
832 "count": count,
833 "group": group,
834 "status": group_history.get_status_display()
835 if group_history
836 else "Unresolved",
837 "status_color": group_status_to_color[group_history.status]
838 if group_history
839 else group_status_to_color[GroupHistoryStatus.NEW],
840 }
841
842 return heapq.nlargest(3, all_key_performance_issues(), lambda d: d["count"])
843
844 def issue_summary():
845 all_issue_count = 0
846 existing_issue_count = 0
847 reopened_issue_count = 0
848 new_issue_count = 0
849 for project_ctx in user_projects:
850 all_issue_count += project_ctx.all_issue_count
851 existing_issue_count += project_ctx.existing_issue_count
852 reopened_issue_count += project_ctx.reopened_issue_count
853 new_issue_count += project_ctx.new_issue_count
854 return {
855 "all_issue_count": all_issue_count,
856 "existing_issue_count": existing_issue_count,
857 "reopened_issue_count": reopened_issue_count,
858 "new_issue_count": new_issue_count,
859 }
860
861 return {
862 "organization": ctx.organization,
863 "start": date_format(ctx.start),
864 "end": date_format(ctx.end),
865 "trends": trends(),
866 "key_errors": key_errors(),
867 "key_transactions": key_transactions(),
868 "key_performance_issues": key_performance_issues(),
869 "issue_summary": issue_summary(),
870 }
871
872
873 def send_email(ctx, user, dry_run=False, email_override=None):
874 template_ctx = render_template_context(ctx, user)
875 if not template_ctx:
876 logger.debug(
877 f"Skipping report for {ctx.organization.id} to {user}, no qualifying reports to deliver."
878 )
879 return
880
881 message = MessageBuilder(
882 subject=f"Weekly Report for {ctx.organization.name}: {date_format(ctx.start)} - {date_format(ctx.end)}",
883 template="sentry/emails/reports/body.txt",
884 html_template="sentry/emails/reports/body.html",
885 type="report.organization",
886 context=template_ctx,
887 headers={"X-SMTPAPI": json.dumps({"category": "organization_weekly_report"})},
888 )
889 if dry_run:
890 return
891 if email_override:
892 message.send(to=(email_override,))
893 else:
894 message.add_users((user.id,))
895 message.send()
896
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/tasks/weekly_reports.py b/src/sentry/tasks/weekly_reports.py
--- a/src/sentry/tasks/weekly_reports.py
+++ b/src/sentry/tasks/weekly_reports.py
@@ -892,4 +892,4 @@
message.send(to=(email_override,))
else:
message.add_users((user.id,))
- message.send()
+ message.send_async()
| {"golden_diff": "diff --git a/src/sentry/tasks/weekly_reports.py b/src/sentry/tasks/weekly_reports.py\n--- a/src/sentry/tasks/weekly_reports.py\n+++ b/src/sentry/tasks/weekly_reports.py\n@@ -892,4 +892,4 @@\n message.send(to=(email_override,))\n else:\n message.add_users((user.id,))\n- message.send()\n+ message.send_async()\n", "issue": "The weekly reports mail is sent repeatedly 16 times, once every minute\n### Environment\r\n\r\nself-hosted (https://develop.sentry.dev/self-hosted/)\r\n\r\n### Version\r\n\r\n23.1.1\r\n\r\n### Steps to Reproduce\r\n\r\n1\u3001Run `Sentry cron` and `Sentry worker` services\r\n2\u3001`Sentry worker` has three instances\r\n\r\n### Expected Result\r\n\r\nReceive only one weekly newsletter per week\r\n\r\n### Actual Result\r\n\r\nReceive 16 Sentry weekly emails every Monday \uff0cReceived at one minute intervals\uff0cAll users within the organization received 16 weekly report emails equally.\r\n\r\n<img width=\"582\" alt=\"image\" src=\"https://user-images.githubusercontent.com/18591662/223436915-ab795659-3095-49f3-9aa6-73742706587b.png\">\r\n\r\n@Neo-Zhixing Hi\r\nI suspect it has something to do with this pr, https://github.com/getsentry/sentry/pull/39911, but it is not reproduced in my local development environment and the problem only exists in our production environment. What is the possible cause? Can you give any useful information? Thank you very much!\n", "before_files": [{"content": "import heapq\nimport logging\nfrom datetime import timedelta\nfrom functools import partial, reduce\n\nimport sentry_sdk\nfrom django.db.models import Count\nfrom django.utils import dateformat, timezone\nfrom sentry_sdk import set_tag\nfrom snuba_sdk import Request\nfrom snuba_sdk.column import Column\nfrom snuba_sdk.conditions import Condition, Op\nfrom snuba_sdk.entity import Entity\nfrom snuba_sdk.expressions import Granularity\nfrom snuba_sdk.function import Function\nfrom snuba_sdk.orderby import Direction, OrderBy\nfrom snuba_sdk.query import Limit, Query\n\nfrom sentry.api.serializers.snuba import zerofill\nfrom sentry.constants import DataCategory\nfrom sentry.db.models.fields import PickledObjectField\nfrom sentry.models import (\n Activity,\n Group,\n GroupHistory,\n GroupHistoryStatus,\n GroupStatus,\n Organization,\n OrganizationMember,\n OrganizationStatus,\n User,\n)\nfrom sentry.snuba.dataset import Dataset\nfrom sentry.tasks.base import instrumented_task\nfrom sentry.types.activity import ActivityType\nfrom sentry.utils import json\nfrom sentry.utils.dates import floor_to_utc_day, to_datetime, to_timestamp\nfrom sentry.utils.email import MessageBuilder\nfrom sentry.utils.outcomes import Outcome\nfrom sentry.utils.query import RangeQuerySetWrapper\nfrom sentry.utils.snuba import parse_snuba_datetime, raw_snql_query\n\nONE_DAY = int(timedelta(days=1).total_seconds())\ndate_format = partial(dateformat.format, format_string=\"F jS, Y\")\n\nlogger = logging.getLogger(__name__)\n\n\nclass OrganizationReportContext:\n def __init__(self, timestamp, duration, organization):\n self.timestamp = timestamp\n self.duration = duration\n\n self.start = to_datetime(timestamp - duration)\n self.end = to_datetime(timestamp)\n\n self.organization = organization\n self.projects = {} # { project_id: ProjectContext }\n\n self.project_ownership = {} # { user_id: set<project_id> }\n for project in organization.project_set.all():\n self.projects[project.id] = ProjectContext(project)\n\n def __repr__(self):\n return self.projects.__repr__()\n\n\nclass ProjectContext:\n 
accepted_error_count = 0\n dropped_error_count = 0\n accepted_transaction_count = 0\n dropped_transaction_count = 0\n\n all_issue_count = 0\n existing_issue_count = 0\n reopened_issue_count = 0\n new_issue_count = 0\n\n def __init__(self, project):\n self.project = project\n\n # Array of (group_id, group_history, count)\n self.key_errors = []\n # Array of (transaction_name, count_this_week, p95_this_week, count_last_week, p95_last_week)\n self.key_transactions = []\n # Array of (Group, count)\n self.key_performance_issues = []\n\n # Dictionary of { timestamp: count }\n self.error_count_by_day = {}\n # Dictionary of { timestamp: count }\n self.transaction_count_by_day = {}\n\n def __repr__(self):\n return f\"{self.key_errors}, Errors: [Accepted {self.accepted_error_count}, Dropped {self.dropped_error_count}]\\nTransactions: [Accepted {self.accepted_transaction_count} Dropped {self.dropped_transaction_count}]\"\n\n\ndef check_if_project_is_empty(project_ctx):\n \"\"\"\n Check if this project has any content we could show in an email.\n \"\"\"\n return (\n not project_ctx.key_errors\n and not project_ctx.key_transactions\n and not project_ctx.key_performance_issues\n and not project_ctx.accepted_error_count\n and not project_ctx.dropped_error_count\n and not project_ctx.accepted_transaction_count\n and not project_ctx.dropped_transaction_count\n )\n\n\ndef check_if_ctx_is_empty(ctx):\n \"\"\"\n Check if the context is empty. If it is, we don't want to send an email.\n \"\"\"\n return all(check_if_project_is_empty(project_ctx) for project_ctx in ctx.projects.values())\n\n\n# The entry point. This task is scheduled to run every week.\n@instrumented_task(\n name=\"sentry.tasks.weekly_reports.schedule_organizations\",\n queue=\"reports.prepare\",\n max_retries=5,\n acks_late=True,\n)\ndef schedule_organizations(dry_run=False, timestamp=None, duration=None):\n if timestamp is None:\n # The time that the report was generated\n timestamp = to_timestamp(floor_to_utc_day(timezone.now()))\n\n if duration is None:\n # The total timespan that the task covers\n duration = ONE_DAY * 7\n\n organizations = Organization.objects.filter(status=OrganizationStatus.ACTIVE)\n for organization in RangeQuerySetWrapper(\n organizations, step=10000, result_value_getter=lambda item: item.id\n ):\n # Create a celery task per organization\n prepare_organization_report.delay(timestamp, duration, organization.id, dry_run=dry_run)\n\n\n# This task is launched per-organization.\n@instrumented_task(\n name=\"sentry.tasks.weekly_reports.prepare_organization_report\",\n queue=\"reports.prepare\",\n max_retries=5,\n acks_late=True,\n)\ndef prepare_organization_report(\n timestamp, duration, organization_id, dry_run=False, target_user=None, email_override=None\n):\n organization = Organization.objects.get(id=organization_id)\n set_tag(\"org.slug\", organization.slug)\n set_tag(\"org.id\", organization_id)\n ctx = OrganizationReportContext(timestamp, duration, organization)\n\n # Run organization passes\n with sentry_sdk.start_span(op=\"weekly_reports.user_project_ownership\"):\n user_project_ownership(ctx)\n with sentry_sdk.start_span(op=\"weekly_reports.project_event_counts_for_organization\"):\n project_event_counts_for_organization(ctx)\n with sentry_sdk.start_span(op=\"weekly_reports.organization_project_issue_summaries\"):\n organization_project_issue_summaries(ctx)\n\n with sentry_sdk.start_span(op=\"weekly_reports.project_passes\"):\n # Run project passes\n for project in organization.project_set.all():\n 
project_key_errors(ctx, project)\n project_key_transactions(ctx, project)\n project_key_performance_issues(ctx, project)\n\n with sentry_sdk.start_span(op=\"weekly_reports.fetch_key_error_groups\"):\n fetch_key_error_groups(ctx)\n with sentry_sdk.start_span(op=\"weekly_reports.fetch_key_performance_issue_groups\"):\n fetch_key_performance_issue_groups(ctx)\n\n report_is_available = not check_if_ctx_is_empty(ctx)\n set_tag(\"report.available\", report_is_available)\n\n if not report_is_available:\n logger.info(\n \"prepare_organization_report.skipping_empty\", extra={\"organization\": organization_id}\n )\n return\n\n # Finally, deliver the reports\n with sentry_sdk.start_span(op=\"weekly_reports.deliver_reports\"):\n deliver_reports(\n ctx, dry_run=dry_run, target_user=target_user, email_override=email_override\n )\n\n\n# Organization Passes\n\n# Find the projects associated with an user.\n# Populates context.project_ownership which is { user_id: set<project_id> }\ndef user_project_ownership(ctx):\n for (project_id, user_id) in OrganizationMember.objects.filter(\n organization_id=ctx.organization.id, teams__projectteam__project__isnull=False\n ).values_list(\"teams__projectteam__project_id\", \"user_id\"):\n ctx.project_ownership.setdefault(user_id, set()).add(project_id)\n\n\n# Populates context.projects which is { project_id: ProjectContext }\ndef project_event_counts_for_organization(ctx):\n def zerofill_data(data):\n return zerofill(data, ctx.start, ctx.end, ONE_DAY, fill_default=0)\n\n query = Query(\n match=Entity(\"outcomes\"),\n select=[\n Column(\"outcome\"),\n Column(\"category\"),\n Function(\"sum\", [Column(\"quantity\")], \"total\"),\n ],\n where=[\n Condition(Column(\"timestamp\"), Op.GTE, ctx.start),\n Condition(Column(\"timestamp\"), Op.LT, ctx.end + timedelta(days=1)),\n Condition(Column(\"org_id\"), Op.EQ, ctx.organization.id),\n Condition(\n Column(\"outcome\"), Op.IN, [Outcome.ACCEPTED, Outcome.FILTERED, Outcome.RATE_LIMITED]\n ),\n Condition(\n Column(\"category\"),\n Op.IN,\n [*DataCategory.error_categories(), DataCategory.TRANSACTION],\n ),\n ],\n groupby=[Column(\"outcome\"), Column(\"category\"), Column(\"project_id\"), Column(\"time\")],\n granularity=Granularity(ONE_DAY),\n orderby=[OrderBy(Column(\"time\"), Direction.ASC)],\n )\n request = Request(dataset=Dataset.Outcomes.value, app_id=\"reports\", query=query)\n data = raw_snql_query(request, referrer=\"weekly_reports.outcomes\")[\"data\"]\n\n for dat in data:\n project_id = dat[\"project_id\"]\n project_ctx = ctx.projects[project_id]\n total = dat[\"total\"]\n timestamp = int(to_timestamp(parse_snuba_datetime(dat[\"time\"])))\n if dat[\"category\"] == DataCategory.TRANSACTION:\n # Transaction outcome\n if dat[\"outcome\"] == Outcome.RATE_LIMITED or dat[\"outcome\"] == Outcome.FILTERED:\n project_ctx.dropped_transaction_count += total\n else:\n project_ctx.accepted_transaction_count += total\n project_ctx.transaction_count_by_day[timestamp] = total\n else:\n # Error outcome\n if dat[\"outcome\"] == Outcome.RATE_LIMITED or dat[\"outcome\"] == Outcome.FILTERED:\n project_ctx.dropped_error_count += total\n else:\n project_ctx.accepted_error_count += total\n project_ctx.error_count_by_day[timestamp] = (\n project_ctx.error_count_by_day.get(timestamp, 0) + total\n )\n\n\ndef organization_project_issue_summaries(ctx):\n all_issues = Group.objects.exclude(status=GroupStatus.IGNORED)\n new_issue_counts = (\n all_issues.filter(\n project__organization_id=ctx.organization.id,\n first_seen__gte=ctx.start,\n 
first_seen__lt=ctx.end,\n )\n .values(\"project_id\")\n .annotate(total=Count(\"*\"))\n )\n new_issue_counts = {item[\"project_id\"]: item[\"total\"] for item in new_issue_counts}\n\n # Fetch all regressions. This is a little weird, since there's no way to\n # tell *when* a group regressed using the Group model. Instead, we query\n # all groups that have been seen in the last week and have ever regressed\n # and query the Activity model to find out if they regressed within the\n # past week. (In theory, the activity table *could* be used to answer this\n # query without the subselect, but there's no suitable indexes to make it's\n # performance predictable.)\n reopened_issue_counts = (\n Activity.objects.filter(\n project__organization_id=ctx.organization.id,\n group__in=all_issues.filter(\n last_seen__gte=ctx.start,\n last_seen__lt=ctx.end,\n resolved_at__isnull=False, # signals this has *ever* been resolved\n ),\n type__in=(ActivityType.SET_REGRESSION.value, ActivityType.SET_UNRESOLVED.value),\n datetime__gte=ctx.start,\n datetime__lt=ctx.end,\n )\n .values(\"group__project_id\")\n .annotate(total=Count(\"group_id\", distinct=True))\n )\n reopened_issue_counts = {\n item[\"group__project_id\"]: item[\"total\"] for item in reopened_issue_counts\n }\n\n # Issues seen at least once over the past week\n active_issue_counts = (\n all_issues.filter(\n project__organization_id=ctx.organization.id,\n last_seen__gte=ctx.start,\n last_seen__lt=ctx.end,\n )\n .values(\"project_id\")\n .annotate(total=Count(\"*\"))\n )\n active_issue_counts = {item[\"project_id\"]: item[\"total\"] for item in active_issue_counts}\n\n for project_ctx in ctx.projects.values():\n project_id = project_ctx.project.id\n active_issue_count = active_issue_counts.get(project_id, 0)\n project_ctx.reopened_issue_count = reopened_issue_counts.get(project_id, 0)\n project_ctx.new_issue_count = new_issue_counts.get(project_id, 0)\n project_ctx.existing_issue_count = max(\n active_issue_count - project_ctx.reopened_issue_count - project_ctx.new_issue_count, 0\n )\n project_ctx.all_issue_count = (\n project_ctx.reopened_issue_count\n + project_ctx.new_issue_count\n + project_ctx.existing_issue_count\n )\n\n\n# Project passes\ndef project_key_errors(ctx, project):\n if not project.first_event:\n return\n # Take the 3 most frequently occuring events\n with sentry_sdk.start_span(op=\"weekly_reports.project_key_errors\"):\n query = Query(\n match=Entity(\"events\"),\n select=[Column(\"group_id\"), Function(\"count\", [])],\n where=[\n Condition(Column(\"timestamp\"), Op.GTE, ctx.start),\n Condition(Column(\"timestamp\"), Op.LT, ctx.end + timedelta(days=1)),\n Condition(Column(\"project_id\"), Op.EQ, project.id),\n ],\n groupby=[Column(\"group_id\")],\n orderby=[OrderBy(Function(\"count\", []), Direction.DESC)],\n limit=Limit(3),\n )\n request = Request(dataset=Dataset.Events.value, app_id=\"reports\", query=query)\n query_result = raw_snql_query(request, referrer=\"reports.key_errors\")\n key_errors = query_result[\"data\"]\n # Set project_ctx.key_errors to be an array of (group_id, count) for now.\n # We will query the group history later on in `fetch_key_error_groups`, batched in a per-organization basis\n ctx.projects[project.id].key_errors = [(e[\"group_id\"], e[\"count()\"]) for e in key_errors]\n if ctx.organization.slug == \"sentry\":\n logger.info(\n \"project_key_errors.results\",\n extra={\"project_id\": project.id, \"num_key_errors\": len(key_errors)},\n )\n\n\n# Organization pass. 
Depends on project_key_errors.\ndef fetch_key_error_groups(ctx):\n all_key_error_group_ids = []\n for project_ctx in ctx.projects.values():\n all_key_error_group_ids.extend([group_id for group_id, count in project_ctx.key_errors])\n\n if len(all_key_error_group_ids) == 0:\n return\n\n group_id_to_group = {}\n for group in Group.objects.filter(id__in=all_key_error_group_ids).all():\n group_id_to_group[group.id] = group\n\n group_history = (\n GroupHistory.objects.filter(\n group_id__in=all_key_error_group_ids, organization_id=ctx.organization.id\n )\n .order_by(\"group_id\", \"-date_added\")\n .distinct(\"group_id\")\n .all()\n )\n group_id_to_group_history = {g.group_id: g for g in group_history}\n\n for project_ctx in ctx.projects.values():\n # note Snuba might have groups that have since been deleted\n # we should just ignore those\n project_ctx.key_errors = list(\n filter(\n lambda x: x[0] is not None,\n [\n (\n group_id_to_group.get(group_id),\n group_id_to_group_history.get(group_id, None),\n count,\n )\n for group_id, count in project_ctx.key_errors\n ],\n )\n )\n\n\ndef project_key_transactions(ctx, project):\n if not project.flags.has_transactions:\n return\n with sentry_sdk.start_span(op=\"weekly_reports.project_key_transactions\"):\n # Take the 3 most frequently occuring transactions this week\n query = Query(\n match=Entity(\"transactions\"),\n select=[\n Column(\"transaction_name\"),\n Function(\"quantile(0.95)\", [Column(\"duration\")], \"p95\"),\n Function(\"count\", [], \"count\"),\n ],\n where=[\n Condition(Column(\"finish_ts\"), Op.GTE, ctx.start),\n Condition(Column(\"finish_ts\"), Op.LT, ctx.end + timedelta(days=1)),\n Condition(Column(\"project_id\"), Op.EQ, project.id),\n ],\n groupby=[Column(\"transaction_name\")],\n orderby=[OrderBy(Function(\"count\", []), Direction.DESC)],\n limit=Limit(3),\n )\n request = Request(dataset=Dataset.Transactions.value, app_id=\"reports\", query=query)\n query_result = raw_snql_query(request, referrer=\"weekly_reports.key_transactions.this_week\")\n key_transactions = query_result[\"data\"]\n ctx.projects[project.id].key_transactions_this_week = [\n (i[\"transaction_name\"], i[\"count\"], i[\"p95\"]) for i in key_transactions\n ]\n\n # Query the p95 for those transactions last week\n query = Query(\n match=Entity(\"transactions\"),\n select=[\n Column(\"transaction_name\"),\n Function(\"quantile(0.95)\", [Column(\"duration\")], \"p95\"),\n Function(\"count\", [], \"count\"),\n ],\n where=[\n Condition(Column(\"finish_ts\"), Op.GTE, ctx.start - timedelta(days=7)),\n Condition(Column(\"finish_ts\"), Op.LT, ctx.end - timedelta(days=7)),\n Condition(Column(\"project_id\"), Op.EQ, project.id),\n Condition(\n Column(\"transaction_name\"),\n Op.IN,\n [i[\"transaction_name\"] for i in key_transactions],\n ),\n ],\n groupby=[Column(\"transaction_name\")],\n )\n request = Request(dataset=Dataset.Transactions.value, app_id=\"reports\", query=query)\n query_result = raw_snql_query(request, referrer=\"weekly_reports.key_transactions.last_week\")\n\n # Join this week with last week\n last_week_data = {\n i[\"transaction_name\"]: (i[\"count\"], i[\"p95\"]) for i in query_result[\"data\"]\n }\n\n ctx.projects[project.id].key_transactions = [\n (i[\"transaction_name\"], i[\"count\"], i[\"p95\"])\n + last_week_data.get(i[\"transaction_name\"], (0, 0))\n for i in key_transactions\n ]\n\n\ndef project_key_performance_issues(ctx, project):\n if not project.first_event:\n return\n\n with 
sentry_sdk.start_span(op=\"weekly_reports.project_key_performance_issues\"):\n # Pick the 50 top frequent performance issues last seen within a month with the highest event count from all time.\n # Then, we use this to join with snuba, hoping that the top 3 issue by volume counted in snuba would be within this list.\n # We do this to limit the number of group_ids snuba has to join with.\n groups = Group.objects.filter(\n project_id=project.id,\n status=GroupStatus.UNRESOLVED,\n last_seen__gte=ctx.end - timedelta(days=30),\n # performance issue range\n type__gte=1000,\n type__lt=2000,\n ).order_by(\"-times_seen\")[:50]\n # Django doesn't have a .limit function, and this will actually do its magic to use the LIMIT statement.\n groups = list(groups)\n group_id_to_group = {group.id: group for group in groups}\n\n if len(group_id_to_group) == 0:\n return\n\n # Fine grained query for 3 most frequent events happend during last week\n query = Query(\n match=Entity(\"transactions\"),\n select=[\n Column(\"group_ids\"),\n Function(\"count\", []),\n ],\n where=[\n Condition(Column(\"finish_ts\"), Op.GTE, ctx.start),\n Condition(Column(\"finish_ts\"), Op.LT, ctx.end + timedelta(days=1)),\n # transactions.group_ids is a list of group_ids that the transaction was associated with.\n # We want to find the transactions associated with group_id_to_group.keys()\n # That means group_ids must intersect with group_id_to_group.keys() in order for the transaction to be counted.\n Condition(\n Function(\n \"notEmpty\",\n [\n Function(\n \"arrayIntersect\",\n [Column(\"group_ids\"), list(group_id_to_group.keys())],\n )\n ],\n ),\n Op.EQ,\n 1,\n ),\n Condition(Column(\"project_id\"), Op.EQ, project.id),\n ],\n groupby=[Column(\"group_ids\")],\n orderby=[OrderBy(Function(\"count\", []), Direction.DESC)],\n limit=Limit(3),\n )\n request = Request(dataset=Dataset.Transactions.value, app_id=\"reports\", query=query)\n query_result = raw_snql_query(request, referrer=\"reports.key_performance_issues\")[\"data\"]\n\n key_performance_issues = []\n for d in query_result:\n count = d[\"count()\"]\n group_ids = d[\"group_ids\"]\n for group_id in group_ids:\n group = group_id_to_group.get(group_id)\n if group:\n key_performance_issues.append((group, count))\n break\n\n ctx.projects[project.id].key_performance_issues = key_performance_issues\n\n\n# Organization pass. 
Depends on project_key_performance_issue.\ndef fetch_key_performance_issue_groups(ctx):\n all_groups = []\n for project_ctx in ctx.projects.values():\n all_groups.extend([group for group, count in project_ctx.key_performance_issues])\n\n if len(all_groups) == 0:\n return\n\n group_id_to_group = {group.id: group for group in all_groups}\n\n group_history = (\n GroupHistory.objects.filter(\n group_id__in=group_id_to_group.keys(), organization_id=ctx.organization.id\n )\n .order_by(\"group_id\", \"-date_added\")\n .distinct(\"group_id\")\n .all()\n )\n group_id_to_group_history = {g.group_id: g for g in group_history}\n\n for project_ctx in ctx.projects.values():\n project_ctx.key_performance_issues = [\n (group, group_id_to_group_history.get(group.id, None), count)\n for group, count in project_ctx.key_performance_issues\n ]\n\n\n# Deliver reports\n# For all users in the organization, we generate the template context for the user, and send the email.\n\n\ndef deliver_reports(ctx, dry_run=False, target_user=None, email_override=None):\n # Specify a sentry user to send this email.\n if email_override:\n send_email(ctx, target_user, dry_run=dry_run, email_override=email_override)\n else:\n # We save the subscription status of the user in a field in UserOptions.\n # Here we do a raw query and LEFT JOIN on a subset of UserOption table where sentry_useroption.key = 'reports:disabled-organizations'\n user_set = User.objects.raw(\n \"\"\"SELECT auth_user.*, sentry_useroption.value as options FROM auth_user\n INNER JOIN sentry_organizationmember on sentry_organizationmember.user_id=auth_user.id\n LEFT JOIN sentry_useroption on sentry_useroption.user_id = auth_user.id and sentry_useroption.key = 'reports:disabled-organizations'\n WHERE auth_user.is_active = true\n AND \"sentry_organizationmember\".\"flags\" & %s = 0\n AND \"sentry_organizationmember\".\"organization_id\"= %s \"\"\",\n [OrganizationMember.flags[\"member-limit:restricted\"], ctx.organization.id],\n )\n\n for user in user_set:\n # We manually pick out user.options and use PickledObjectField to deserialize it. 
We get a list of organizations the user has unsubscribed from user reports\n option = PickledObjectField().to_python(user.options) or []\n user_subscribed_to_organization_reports = ctx.organization.id not in option\n if user_subscribed_to_organization_reports:\n send_email(ctx, user, dry_run=dry_run)\n\n\nproject_breakdown_colors = [\"#422C6E\", \"#895289\", \"#D6567F\", \"#F38150\", \"#F2B713\"]\ntotal_color = \"\"\"\nlinear-gradient(\n -45deg,\n #ccc 25%,\n transparent 25%,\n transparent 50%,\n #ccc 50%,\n #ccc 75%,\n transparent 75%,\n transparent\n);\n\"\"\"\nother_color = \"#f2f0fa\"\ngroup_status_to_color = {\n GroupHistoryStatus.UNRESOLVED: \"#FAD473\",\n GroupHistoryStatus.RESOLVED: \"#8ACBBC\",\n GroupHistoryStatus.SET_RESOLVED_IN_RELEASE: \"#8ACBBC\",\n GroupHistoryStatus.SET_RESOLVED_IN_COMMIT: \"#8ACBBC\",\n GroupHistoryStatus.SET_RESOLVED_IN_PULL_REQUEST: \"#8ACBBC\",\n GroupHistoryStatus.AUTO_RESOLVED: \"#8ACBBC\",\n GroupHistoryStatus.IGNORED: \"#DBD6E1\",\n GroupHistoryStatus.UNIGNORED: \"#FAD473\",\n GroupHistoryStatus.ASSIGNED: \"#FAAAAC\",\n GroupHistoryStatus.UNASSIGNED: \"#FAD473\",\n GroupHistoryStatus.REGRESSED: \"#FAAAAC\",\n GroupHistoryStatus.DELETED: \"#DBD6E1\",\n GroupHistoryStatus.DELETED_AND_DISCARDED: \"#DBD6E1\",\n GroupHistoryStatus.REVIEWED: \"#FAD473\",\n GroupHistoryStatus.NEW: \"#FAD473\",\n}\n\n\n# Serialize ctx for template, and calculate view parameters (like graph bar heights)\ndef render_template_context(ctx, user):\n # Fetch the list of projects associated with the user.\n # Projects owned by teams that the user has membership of.\n if user and user.id in ctx.project_ownership:\n user_projects = list(\n filter(\n lambda project_ctx: project_ctx.project.id in ctx.project_ownership[user.id],\n ctx.projects.values(),\n )\n )\n if len(user_projects) == 0:\n return None\n else:\n # If user is None, or if the user is not a member of the organization, we assume that the email was directed to a user who joined all teams.\n user_projects = ctx.projects.values()\n\n # Render the first section of the email where we had the table showing the\n # number of accepted/dropped errors/transactions for each project.\n def trends():\n # Given an iterator of event counts, sum up their accepted/dropped errors/transaction counts.\n def sum_event_counts(project_ctxs):\n return reduce(\n lambda a, b: (a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]),\n [\n (\n project_ctx.accepted_error_count,\n project_ctx.dropped_error_count,\n project_ctx.accepted_transaction_count,\n project_ctx.dropped_transaction_count,\n )\n for project_ctx in project_ctxs\n ],\n (0, 0, 0, 0),\n )\n\n # Highest volume projects go first\n projects_associated_with_user = sorted(\n user_projects,\n reverse=True,\n key=lambda item: item.accepted_error_count + (item.accepted_transaction_count / 10),\n )\n # Calculate total\n (\n total_error,\n total_dropped_error,\n total_transaction,\n total_dropped_transaction,\n ) = sum_event_counts(projects_associated_with_user)\n # The number of reports to keep is the same as the number of colors\n # available to use in the legend.\n projects_taken = projects_associated_with_user[: len(project_breakdown_colors)]\n # All other items are merged to \"Others\"\n projects_not_taken = projects_associated_with_user[len(project_breakdown_colors) :]\n\n # Calculate legend\n legend = [\n {\n \"slug\": project_ctx.project.slug,\n \"url\": project_ctx.project.get_absolute_url(),\n \"color\": project_breakdown_colors[i],\n \"dropped_error_count\": 
project_ctx.dropped_error_count,\n \"accepted_error_count\": project_ctx.accepted_error_count,\n \"dropped_transaction_count\": project_ctx.dropped_transaction_count,\n \"accepted_transaction_count\": project_ctx.accepted_transaction_count,\n }\n for i, project_ctx in enumerate(projects_taken)\n ]\n\n if len(projects_not_taken) > 0:\n (\n others_error,\n others_dropped_error,\n others_transaction,\n others_dropped_transaction,\n ) = sum_event_counts(projects_not_taken)\n legend.append(\n {\n \"slug\": f\"Other ({len(projects_not_taken)})\",\n \"color\": other_color,\n \"dropped_error_count\": others_dropped_error,\n \"accepted_error_count\": others_error,\n \"dropped_transaction_count\": others_dropped_transaction,\n \"accepted_transaction_count\": others_transaction,\n }\n )\n if len(projects_taken) > 1:\n legend.append(\n {\n \"slug\": f\"Total ({len(projects_associated_with_user)})\",\n \"color\": total_color,\n \"dropped_error_count\": total_dropped_error,\n \"accepted_error_count\": total_error,\n \"dropped_transaction_count\": total_dropped_transaction,\n \"accepted_transaction_count\": total_transaction,\n }\n )\n\n # Calculate series\n series = []\n for i in range(0, 7):\n t = int(to_timestamp(ctx.start)) + ONE_DAY * i\n project_series = [\n {\n \"color\": project_breakdown_colors[i],\n \"error_count\": project_ctx.error_count_by_day.get(t, 0),\n \"transaction_count\": project_ctx.transaction_count_by_day.get(t, 0),\n }\n for i, project_ctx in enumerate(projects_taken)\n ]\n if len(projects_not_taken) > 0:\n project_series.append(\n {\n \"color\": other_color,\n \"error_count\": sum(\n map(\n lambda project_ctx: project_ctx.error_count_by_day.get(t, 0),\n projects_not_taken,\n )\n ),\n \"transaction_count\": sum(\n map(\n lambda project_ctx: project_ctx.transaction_count_by_day.get(t, 0),\n projects_not_taken,\n )\n ),\n }\n )\n series.append((to_datetime(t), project_series))\n return {\n \"legend\": legend,\n \"series\": series,\n \"total_error_count\": total_error,\n \"total_transaction_count\": total_transaction,\n \"error_maximum\": max( # The max error count on any single day\n sum(value[\"error_count\"] for value in values) for timestamp, values in series\n ),\n \"transaction_maximum\": max( # The max transaction count on any single day\n sum(value[\"transaction_count\"] for value in values) for timestamp, values in series\n )\n if len(projects_taken) > 0\n else 0,\n }\n\n def key_errors():\n # TODO(Steve): Remove debug logging for Sentry\n def all_key_errors():\n if ctx.organization.slug == \"sentry\":\n logger.info(\n \"render_template_context.all_key_errors.num_projects\",\n extra={\"user_id\": user.id, \"num_user_projects\": len(user_projects)},\n )\n for project_ctx in user_projects:\n if ctx.organization.slug == \"sentry\":\n logger.info(\n \"render_template_context.all_key_errors.project\",\n extra={\n \"user_id\": user.id,\n \"project_id\": project_ctx.project.id,\n },\n )\n for group, group_history, count in project_ctx.key_errors:\n if ctx.organization.slug == \"sentry\":\n logger.info(\n \"render_template_context.all_key_errors.found_error\",\n extra={\n \"group_id\": group.id,\n \"user_id\": user.id,\n \"project_id\": project_ctx.project.id,\n },\n )\n yield {\n \"count\": count,\n \"group\": group,\n \"status\": group_history.get_status_display()\n if group_history\n else \"Unresolved\",\n \"status_color\": group_status_to_color[group_history.status]\n if group_history\n else group_status_to_color[GroupHistoryStatus.NEW],\n }\n\n return heapq.nlargest(3, 
all_key_errors(), lambda d: d[\"count\"])\n\n def key_transactions():\n def all_key_transactions():\n for project_ctx in user_projects:\n for (\n transaction_name,\n count_this_week,\n p95_this_week,\n count_last_week,\n p95_last_week,\n ) in project_ctx.key_transactions:\n yield {\n \"name\": transaction_name,\n \"count\": count_this_week,\n \"p95\": p95_this_week,\n \"p95_prev_week\": p95_last_week,\n \"project\": project_ctx.project,\n }\n\n return heapq.nlargest(3, all_key_transactions(), lambda d: d[\"count\"])\n\n def key_performance_issues():\n def all_key_performance_issues():\n for project_ctx in user_projects:\n for (group, group_history, count) in project_ctx.key_performance_issues:\n yield {\n \"count\": count,\n \"group\": group,\n \"status\": group_history.get_status_display()\n if group_history\n else \"Unresolved\",\n \"status_color\": group_status_to_color[group_history.status]\n if group_history\n else group_status_to_color[GroupHistoryStatus.NEW],\n }\n\n return heapq.nlargest(3, all_key_performance_issues(), lambda d: d[\"count\"])\n\n def issue_summary():\n all_issue_count = 0\n existing_issue_count = 0\n reopened_issue_count = 0\n new_issue_count = 0\n for project_ctx in user_projects:\n all_issue_count += project_ctx.all_issue_count\n existing_issue_count += project_ctx.existing_issue_count\n reopened_issue_count += project_ctx.reopened_issue_count\n new_issue_count += project_ctx.new_issue_count\n return {\n \"all_issue_count\": all_issue_count,\n \"existing_issue_count\": existing_issue_count,\n \"reopened_issue_count\": reopened_issue_count,\n \"new_issue_count\": new_issue_count,\n }\n\n return {\n \"organization\": ctx.organization,\n \"start\": date_format(ctx.start),\n \"end\": date_format(ctx.end),\n \"trends\": trends(),\n \"key_errors\": key_errors(),\n \"key_transactions\": key_transactions(),\n \"key_performance_issues\": key_performance_issues(),\n \"issue_summary\": issue_summary(),\n }\n\n\ndef send_email(ctx, user, dry_run=False, email_override=None):\n template_ctx = render_template_context(ctx, user)\n if not template_ctx:\n logger.debug(\n f\"Skipping report for {ctx.organization.id} to {user}, no qualifying reports to deliver.\"\n )\n return\n\n message = MessageBuilder(\n subject=f\"Weekly Report for {ctx.organization.name}: {date_format(ctx.start)} - {date_format(ctx.end)}\",\n template=\"sentry/emails/reports/body.txt\",\n html_template=\"sentry/emails/reports/body.html\",\n type=\"report.organization\",\n context=template_ctx,\n headers={\"X-SMTPAPI\": json.dumps({\"category\": \"organization_weekly_report\"})},\n )\n if dry_run:\n return\n if email_override:\n message.send(to=(email_override,))\n else:\n message.add_users((user.id,))\n message.send()\n", "path": "src/sentry/tasks/weekly_reports.py"}], "after_files": [{"content": "import heapq\nimport logging\nfrom datetime import timedelta\nfrom functools import partial, reduce\n\nimport sentry_sdk\nfrom django.db.models import Count\nfrom django.utils import dateformat, timezone\nfrom sentry_sdk import set_tag\nfrom snuba_sdk import Request\nfrom snuba_sdk.column import Column\nfrom snuba_sdk.conditions import Condition, Op\nfrom snuba_sdk.entity import Entity\nfrom snuba_sdk.expressions import Granularity\nfrom snuba_sdk.function import Function\nfrom snuba_sdk.orderby import Direction, OrderBy\nfrom snuba_sdk.query import Limit, Query\n\nfrom sentry.api.serializers.snuba import zerofill\nfrom sentry.constants import DataCategory\nfrom sentry.db.models.fields import PickledObjectField\nfrom 
sentry.models import (\n Activity,\n Group,\n GroupHistory,\n GroupHistoryStatus,\n GroupStatus,\n Organization,\n OrganizationMember,\n OrganizationStatus,\n User,\n)\nfrom sentry.snuba.dataset import Dataset\nfrom sentry.tasks.base import instrumented_task\nfrom sentry.types.activity import ActivityType\nfrom sentry.utils import json\nfrom sentry.utils.dates import floor_to_utc_day, to_datetime, to_timestamp\nfrom sentry.utils.email import MessageBuilder\nfrom sentry.utils.outcomes import Outcome\nfrom sentry.utils.query import RangeQuerySetWrapper\nfrom sentry.utils.snuba import parse_snuba_datetime, raw_snql_query\n\nONE_DAY = int(timedelta(days=1).total_seconds())\ndate_format = partial(dateformat.format, format_string=\"F jS, Y\")\n\nlogger = logging.getLogger(__name__)\n\n\nclass OrganizationReportContext:\n def __init__(self, timestamp, duration, organization):\n self.timestamp = timestamp\n self.duration = duration\n\n self.start = to_datetime(timestamp - duration)\n self.end = to_datetime(timestamp)\n\n self.organization = organization\n self.projects = {} # { project_id: ProjectContext }\n\n self.project_ownership = {} # { user_id: set<project_id> }\n for project in organization.project_set.all():\n self.projects[project.id] = ProjectContext(project)\n\n def __repr__(self):\n return self.projects.__repr__()\n\n\nclass ProjectContext:\n accepted_error_count = 0\n dropped_error_count = 0\n accepted_transaction_count = 0\n dropped_transaction_count = 0\n\n all_issue_count = 0\n existing_issue_count = 0\n reopened_issue_count = 0\n new_issue_count = 0\n\n def __init__(self, project):\n self.project = project\n\n # Array of (group_id, group_history, count)\n self.key_errors = []\n # Array of (transaction_name, count_this_week, p95_this_week, count_last_week, p95_last_week)\n self.key_transactions = []\n # Array of (Group, count)\n self.key_performance_issues = []\n\n # Dictionary of { timestamp: count }\n self.error_count_by_day = {}\n # Dictionary of { timestamp: count }\n self.transaction_count_by_day = {}\n\n def __repr__(self):\n return f\"{self.key_errors}, Errors: [Accepted {self.accepted_error_count}, Dropped {self.dropped_error_count}]\\nTransactions: [Accepted {self.accepted_transaction_count} Dropped {self.dropped_transaction_count}]\"\n\n\ndef check_if_project_is_empty(project_ctx):\n \"\"\"\n Check if this project has any content we could show in an email.\n \"\"\"\n return (\n not project_ctx.key_errors\n and not project_ctx.key_transactions\n and not project_ctx.key_performance_issues\n and not project_ctx.accepted_error_count\n and not project_ctx.dropped_error_count\n and not project_ctx.accepted_transaction_count\n and not project_ctx.dropped_transaction_count\n )\n\n\ndef check_if_ctx_is_empty(ctx):\n \"\"\"\n Check if the context is empty. If it is, we don't want to send an email.\n \"\"\"\n return all(check_if_project_is_empty(project_ctx) for project_ctx in ctx.projects.values())\n\n\n# The entry point. 
This task is scheduled to run every week.\n@instrumented_task(\n name=\"sentry.tasks.weekly_reports.schedule_organizations\",\n queue=\"reports.prepare\",\n max_retries=5,\n acks_late=True,\n)\ndef schedule_organizations(dry_run=False, timestamp=None, duration=None):\n if timestamp is None:\n # The time that the report was generated\n timestamp = to_timestamp(floor_to_utc_day(timezone.now()))\n\n if duration is None:\n # The total timespan that the task covers\n duration = ONE_DAY * 7\n\n organizations = Organization.objects.filter(status=OrganizationStatus.ACTIVE)\n for organization in RangeQuerySetWrapper(\n organizations, step=10000, result_value_getter=lambda item: item.id\n ):\n # Create a celery task per organization\n prepare_organization_report.delay(timestamp, duration, organization.id, dry_run=dry_run)\n\n\n# This task is launched per-organization.\n@instrumented_task(\n name=\"sentry.tasks.weekly_reports.prepare_organization_report\",\n queue=\"reports.prepare\",\n max_retries=5,\n acks_late=True,\n)\ndef prepare_organization_report(\n timestamp, duration, organization_id, dry_run=False, target_user=None, email_override=None\n):\n organization = Organization.objects.get(id=organization_id)\n set_tag(\"org.slug\", organization.slug)\n set_tag(\"org.id\", organization_id)\n ctx = OrganizationReportContext(timestamp, duration, organization)\n\n # Run organization passes\n with sentry_sdk.start_span(op=\"weekly_reports.user_project_ownership\"):\n user_project_ownership(ctx)\n with sentry_sdk.start_span(op=\"weekly_reports.project_event_counts_for_organization\"):\n project_event_counts_for_organization(ctx)\n with sentry_sdk.start_span(op=\"weekly_reports.organization_project_issue_summaries\"):\n organization_project_issue_summaries(ctx)\n\n with sentry_sdk.start_span(op=\"weekly_reports.project_passes\"):\n # Run project passes\n for project in organization.project_set.all():\n project_key_errors(ctx, project)\n project_key_transactions(ctx, project)\n project_key_performance_issues(ctx, project)\n\n with sentry_sdk.start_span(op=\"weekly_reports.fetch_key_error_groups\"):\n fetch_key_error_groups(ctx)\n with sentry_sdk.start_span(op=\"weekly_reports.fetch_key_performance_issue_groups\"):\n fetch_key_performance_issue_groups(ctx)\n\n report_is_available = not check_if_ctx_is_empty(ctx)\n set_tag(\"report.available\", report_is_available)\n\n if not report_is_available:\n logger.info(\n \"prepare_organization_report.skipping_empty\", extra={\"organization\": organization_id}\n )\n return\n\n # Finally, deliver the reports\n with sentry_sdk.start_span(op=\"weekly_reports.deliver_reports\"):\n deliver_reports(\n ctx, dry_run=dry_run, target_user=target_user, email_override=email_override\n )\n\n\n# Organization Passes\n\n# Find the projects associated with an user.\n# Populates context.project_ownership which is { user_id: set<project_id> }\ndef user_project_ownership(ctx):\n for (project_id, user_id) in OrganizationMember.objects.filter(\n organization_id=ctx.organization.id, teams__projectteam__project__isnull=False\n ).values_list(\"teams__projectteam__project_id\", \"user_id\"):\n ctx.project_ownership.setdefault(user_id, set()).add(project_id)\n\n\n# Populates context.projects which is { project_id: ProjectContext }\ndef project_event_counts_for_organization(ctx):\n def zerofill_data(data):\n return zerofill(data, ctx.start, ctx.end, ONE_DAY, fill_default=0)\n\n query = Query(\n match=Entity(\"outcomes\"),\n select=[\n Column(\"outcome\"),\n Column(\"category\"),\n 
Function(\"sum\", [Column(\"quantity\")], \"total\"),\n ],\n where=[\n Condition(Column(\"timestamp\"), Op.GTE, ctx.start),\n Condition(Column(\"timestamp\"), Op.LT, ctx.end + timedelta(days=1)),\n Condition(Column(\"org_id\"), Op.EQ, ctx.organization.id),\n Condition(\n Column(\"outcome\"), Op.IN, [Outcome.ACCEPTED, Outcome.FILTERED, Outcome.RATE_LIMITED]\n ),\n Condition(\n Column(\"category\"),\n Op.IN,\n [*DataCategory.error_categories(), DataCategory.TRANSACTION],\n ),\n ],\n groupby=[Column(\"outcome\"), Column(\"category\"), Column(\"project_id\"), Column(\"time\")],\n granularity=Granularity(ONE_DAY),\n orderby=[OrderBy(Column(\"time\"), Direction.ASC)],\n )\n request = Request(dataset=Dataset.Outcomes.value, app_id=\"reports\", query=query)\n data = raw_snql_query(request, referrer=\"weekly_reports.outcomes\")[\"data\"]\n\n for dat in data:\n project_id = dat[\"project_id\"]\n project_ctx = ctx.projects[project_id]\n total = dat[\"total\"]\n timestamp = int(to_timestamp(parse_snuba_datetime(dat[\"time\"])))\n if dat[\"category\"] == DataCategory.TRANSACTION:\n # Transaction outcome\n if dat[\"outcome\"] == Outcome.RATE_LIMITED or dat[\"outcome\"] == Outcome.FILTERED:\n project_ctx.dropped_transaction_count += total\n else:\n project_ctx.accepted_transaction_count += total\n project_ctx.transaction_count_by_day[timestamp] = total\n else:\n # Error outcome\n if dat[\"outcome\"] == Outcome.RATE_LIMITED or dat[\"outcome\"] == Outcome.FILTERED:\n project_ctx.dropped_error_count += total\n else:\n project_ctx.accepted_error_count += total\n project_ctx.error_count_by_day[timestamp] = (\n project_ctx.error_count_by_day.get(timestamp, 0) + total\n )\n\n\ndef organization_project_issue_summaries(ctx):\n all_issues = Group.objects.exclude(status=GroupStatus.IGNORED)\n new_issue_counts = (\n all_issues.filter(\n project__organization_id=ctx.organization.id,\n first_seen__gte=ctx.start,\n first_seen__lt=ctx.end,\n )\n .values(\"project_id\")\n .annotate(total=Count(\"*\"))\n )\n new_issue_counts = {item[\"project_id\"]: item[\"total\"] for item in new_issue_counts}\n\n # Fetch all regressions. This is a little weird, since there's no way to\n # tell *when* a group regressed using the Group model. Instead, we query\n # all groups that have been seen in the last week and have ever regressed\n # and query the Activity model to find out if they regressed within the\n # past week. 
(In theory, the activity table *could* be used to answer this\n # query without the subselect, but there's no suitable indexes to make it's\n # performance predictable.)\n reopened_issue_counts = (\n Activity.objects.filter(\n project__organization_id=ctx.organization.id,\n group__in=all_issues.filter(\n last_seen__gte=ctx.start,\n last_seen__lt=ctx.end,\n resolved_at__isnull=False, # signals this has *ever* been resolved\n ),\n type__in=(ActivityType.SET_REGRESSION.value, ActivityType.SET_UNRESOLVED.value),\n datetime__gte=ctx.start,\n datetime__lt=ctx.end,\n )\n .values(\"group__project_id\")\n .annotate(total=Count(\"group_id\", distinct=True))\n )\n reopened_issue_counts = {\n item[\"group__project_id\"]: item[\"total\"] for item in reopened_issue_counts\n }\n\n # Issues seen at least once over the past week\n active_issue_counts = (\n all_issues.filter(\n project__organization_id=ctx.organization.id,\n last_seen__gte=ctx.start,\n last_seen__lt=ctx.end,\n )\n .values(\"project_id\")\n .annotate(total=Count(\"*\"))\n )\n active_issue_counts = {item[\"project_id\"]: item[\"total\"] for item in active_issue_counts}\n\n for project_ctx in ctx.projects.values():\n project_id = project_ctx.project.id\n active_issue_count = active_issue_counts.get(project_id, 0)\n project_ctx.reopened_issue_count = reopened_issue_counts.get(project_id, 0)\n project_ctx.new_issue_count = new_issue_counts.get(project_id, 0)\n project_ctx.existing_issue_count = max(\n active_issue_count - project_ctx.reopened_issue_count - project_ctx.new_issue_count, 0\n )\n project_ctx.all_issue_count = (\n project_ctx.reopened_issue_count\n + project_ctx.new_issue_count\n + project_ctx.existing_issue_count\n )\n\n\n# Project passes\ndef project_key_errors(ctx, project):\n if not project.first_event:\n return\n # Take the 3 most frequently occuring events\n with sentry_sdk.start_span(op=\"weekly_reports.project_key_errors\"):\n query = Query(\n match=Entity(\"events\"),\n select=[Column(\"group_id\"), Function(\"count\", [])],\n where=[\n Condition(Column(\"timestamp\"), Op.GTE, ctx.start),\n Condition(Column(\"timestamp\"), Op.LT, ctx.end + timedelta(days=1)),\n Condition(Column(\"project_id\"), Op.EQ, project.id),\n ],\n groupby=[Column(\"group_id\")],\n orderby=[OrderBy(Function(\"count\", []), Direction.DESC)],\n limit=Limit(3),\n )\n request = Request(dataset=Dataset.Events.value, app_id=\"reports\", query=query)\n query_result = raw_snql_query(request, referrer=\"reports.key_errors\")\n key_errors = query_result[\"data\"]\n # Set project_ctx.key_errors to be an array of (group_id, count) for now.\n # We will query the group history later on in `fetch_key_error_groups`, batched in a per-organization basis\n ctx.projects[project.id].key_errors = [(e[\"group_id\"], e[\"count()\"]) for e in key_errors]\n if ctx.organization.slug == \"sentry\":\n logger.info(\n \"project_key_errors.results\",\n extra={\"project_id\": project.id, \"num_key_errors\": len(key_errors)},\n )\n\n\n# Organization pass. 
Depends on project_key_errors.\ndef fetch_key_error_groups(ctx):\n all_key_error_group_ids = []\n for project_ctx in ctx.projects.values():\n all_key_error_group_ids.extend([group_id for group_id, count in project_ctx.key_errors])\n\n if len(all_key_error_group_ids) == 0:\n return\n\n group_id_to_group = {}\n for group in Group.objects.filter(id__in=all_key_error_group_ids).all():\n group_id_to_group[group.id] = group\n\n group_history = (\n GroupHistory.objects.filter(\n group_id__in=all_key_error_group_ids, organization_id=ctx.organization.id\n )\n .order_by(\"group_id\", \"-date_added\")\n .distinct(\"group_id\")\n .all()\n )\n group_id_to_group_history = {g.group_id: g for g in group_history}\n\n for project_ctx in ctx.projects.values():\n # note Snuba might have groups that have since been deleted\n # we should just ignore those\n project_ctx.key_errors = list(\n filter(\n lambda x: x[0] is not None,\n [\n (\n group_id_to_group.get(group_id),\n group_id_to_group_history.get(group_id, None),\n count,\n )\n for group_id, count in project_ctx.key_errors\n ],\n )\n )\n\n\ndef project_key_transactions(ctx, project):\n if not project.flags.has_transactions:\n return\n with sentry_sdk.start_span(op=\"weekly_reports.project_key_transactions\"):\n # Take the 3 most frequently occuring transactions this week\n query = Query(\n match=Entity(\"transactions\"),\n select=[\n Column(\"transaction_name\"),\n Function(\"quantile(0.95)\", [Column(\"duration\")], \"p95\"),\n Function(\"count\", [], \"count\"),\n ],\n where=[\n Condition(Column(\"finish_ts\"), Op.GTE, ctx.start),\n Condition(Column(\"finish_ts\"), Op.LT, ctx.end + timedelta(days=1)),\n Condition(Column(\"project_id\"), Op.EQ, project.id),\n ],\n groupby=[Column(\"transaction_name\")],\n orderby=[OrderBy(Function(\"count\", []), Direction.DESC)],\n limit=Limit(3),\n )\n request = Request(dataset=Dataset.Transactions.value, app_id=\"reports\", query=query)\n query_result = raw_snql_query(request, referrer=\"weekly_reports.key_transactions.this_week\")\n key_transactions = query_result[\"data\"]\n ctx.projects[project.id].key_transactions_this_week = [\n (i[\"transaction_name\"], i[\"count\"], i[\"p95\"]) for i in key_transactions\n ]\n\n # Query the p95 for those transactions last week\n query = Query(\n match=Entity(\"transactions\"),\n select=[\n Column(\"transaction_name\"),\n Function(\"quantile(0.95)\", [Column(\"duration\")], \"p95\"),\n Function(\"count\", [], \"count\"),\n ],\n where=[\n Condition(Column(\"finish_ts\"), Op.GTE, ctx.start - timedelta(days=7)),\n Condition(Column(\"finish_ts\"), Op.LT, ctx.end - timedelta(days=7)),\n Condition(Column(\"project_id\"), Op.EQ, project.id),\n Condition(\n Column(\"transaction_name\"),\n Op.IN,\n [i[\"transaction_name\"] for i in key_transactions],\n ),\n ],\n groupby=[Column(\"transaction_name\")],\n )\n request = Request(dataset=Dataset.Transactions.value, app_id=\"reports\", query=query)\n query_result = raw_snql_query(request, referrer=\"weekly_reports.key_transactions.last_week\")\n\n # Join this week with last week\n last_week_data = {\n i[\"transaction_name\"]: (i[\"count\"], i[\"p95\"]) for i in query_result[\"data\"]\n }\n\n ctx.projects[project.id].key_transactions = [\n (i[\"transaction_name\"], i[\"count\"], i[\"p95\"])\n + last_week_data.get(i[\"transaction_name\"], (0, 0))\n for i in key_transactions\n ]\n\n\ndef project_key_performance_issues(ctx, project):\n if not project.first_event:\n return\n\n with 
sentry_sdk.start_span(op=\"weekly_reports.project_key_performance_issues\"):\n # Pick the 50 top frequent performance issues last seen within a month with the highest event count from all time.\n # Then, we use this to join with snuba, hoping that the top 3 issue by volume counted in snuba would be within this list.\n # We do this to limit the number of group_ids snuba has to join with.\n groups = Group.objects.filter(\n project_id=project.id,\n status=GroupStatus.UNRESOLVED,\n last_seen__gte=ctx.end - timedelta(days=30),\n # performance issue range\n type__gte=1000,\n type__lt=2000,\n ).order_by(\"-times_seen\")[:50]\n # Django doesn't have a .limit function, and this will actually do its magic to use the LIMIT statement.\n groups = list(groups)\n group_id_to_group = {group.id: group for group in groups}\n\n if len(group_id_to_group) == 0:\n return\n\n # Fine grained query for 3 most frequent events happend during last week\n query = Query(\n match=Entity(\"transactions\"),\n select=[\n Column(\"group_ids\"),\n Function(\"count\", []),\n ],\n where=[\n Condition(Column(\"finish_ts\"), Op.GTE, ctx.start),\n Condition(Column(\"finish_ts\"), Op.LT, ctx.end + timedelta(days=1)),\n # transactions.group_ids is a list of group_ids that the transaction was associated with.\n # We want to find the transactions associated with group_id_to_group.keys()\n # That means group_ids must intersect with group_id_to_group.keys() in order for the transaction to be counted.\n Condition(\n Function(\n \"notEmpty\",\n [\n Function(\n \"arrayIntersect\",\n [Column(\"group_ids\"), list(group_id_to_group.keys())],\n )\n ],\n ),\n Op.EQ,\n 1,\n ),\n Condition(Column(\"project_id\"), Op.EQ, project.id),\n ],\n groupby=[Column(\"group_ids\")],\n orderby=[OrderBy(Function(\"count\", []), Direction.DESC)],\n limit=Limit(3),\n )\n request = Request(dataset=Dataset.Transactions.value, app_id=\"reports\", query=query)\n query_result = raw_snql_query(request, referrer=\"reports.key_performance_issues\")[\"data\"]\n\n key_performance_issues = []\n for d in query_result:\n count = d[\"count()\"]\n group_ids = d[\"group_ids\"]\n for group_id in group_ids:\n group = group_id_to_group.get(group_id)\n if group:\n key_performance_issues.append((group, count))\n break\n\n ctx.projects[project.id].key_performance_issues = key_performance_issues\n\n\n# Organization pass. 
Depends on project_key_performance_issue.\ndef fetch_key_performance_issue_groups(ctx):\n all_groups = []\n for project_ctx in ctx.projects.values():\n all_groups.extend([group for group, count in project_ctx.key_performance_issues])\n\n if len(all_groups) == 0:\n return\n\n group_id_to_group = {group.id: group for group in all_groups}\n\n group_history = (\n GroupHistory.objects.filter(\n group_id__in=group_id_to_group.keys(), organization_id=ctx.organization.id\n )\n .order_by(\"group_id\", \"-date_added\")\n .distinct(\"group_id\")\n .all()\n )\n group_id_to_group_history = {g.group_id: g for g in group_history}\n\n for project_ctx in ctx.projects.values():\n project_ctx.key_performance_issues = [\n (group, group_id_to_group_history.get(group.id, None), count)\n for group, count in project_ctx.key_performance_issues\n ]\n\n\n# Deliver reports\n# For all users in the organization, we generate the template context for the user, and send the email.\n\n\ndef deliver_reports(ctx, dry_run=False, target_user=None, email_override=None):\n # Specify a sentry user to send this email.\n if email_override:\n send_email(ctx, target_user, dry_run=dry_run, email_override=email_override)\n else:\n # We save the subscription status of the user in a field in UserOptions.\n # Here we do a raw query and LEFT JOIN on a subset of UserOption table where sentry_useroption.key = 'reports:disabled-organizations'\n user_set = User.objects.raw(\n \"\"\"SELECT auth_user.*, sentry_useroption.value as options FROM auth_user\n INNER JOIN sentry_organizationmember on sentry_organizationmember.user_id=auth_user.id\n LEFT JOIN sentry_useroption on sentry_useroption.user_id = auth_user.id and sentry_useroption.key = 'reports:disabled-organizations'\n WHERE auth_user.is_active = true\n AND \"sentry_organizationmember\".\"flags\" & %s = 0\n AND \"sentry_organizationmember\".\"organization_id\"= %s \"\"\",\n [OrganizationMember.flags[\"member-limit:restricted\"], ctx.organization.id],\n )\n\n for user in user_set:\n # We manually pick out user.options and use PickledObjectField to deserialize it. 
We get a list of organizations the user has unsubscribed from user reports\n option = PickledObjectField().to_python(user.options) or []\n user_subscribed_to_organization_reports = ctx.organization.id not in option\n if user_subscribed_to_organization_reports:\n send_email(ctx, user, dry_run=dry_run)\n\n\nproject_breakdown_colors = [\"#422C6E\", \"#895289\", \"#D6567F\", \"#F38150\", \"#F2B713\"]\ntotal_color = \"\"\"\nlinear-gradient(\n -45deg,\n #ccc 25%,\n transparent 25%,\n transparent 50%,\n #ccc 50%,\n #ccc 75%,\n transparent 75%,\n transparent\n);\n\"\"\"\nother_color = \"#f2f0fa\"\ngroup_status_to_color = {\n GroupHistoryStatus.UNRESOLVED: \"#FAD473\",\n GroupHistoryStatus.RESOLVED: \"#8ACBBC\",\n GroupHistoryStatus.SET_RESOLVED_IN_RELEASE: \"#8ACBBC\",\n GroupHistoryStatus.SET_RESOLVED_IN_COMMIT: \"#8ACBBC\",\n GroupHistoryStatus.SET_RESOLVED_IN_PULL_REQUEST: \"#8ACBBC\",\n GroupHistoryStatus.AUTO_RESOLVED: \"#8ACBBC\",\n GroupHistoryStatus.IGNORED: \"#DBD6E1\",\n GroupHistoryStatus.UNIGNORED: \"#FAD473\",\n GroupHistoryStatus.ASSIGNED: \"#FAAAAC\",\n GroupHistoryStatus.UNASSIGNED: \"#FAD473\",\n GroupHistoryStatus.REGRESSED: \"#FAAAAC\",\n GroupHistoryStatus.DELETED: \"#DBD6E1\",\n GroupHistoryStatus.DELETED_AND_DISCARDED: \"#DBD6E1\",\n GroupHistoryStatus.REVIEWED: \"#FAD473\",\n GroupHistoryStatus.NEW: \"#FAD473\",\n}\n\n\n# Serialize ctx for template, and calculate view parameters (like graph bar heights)\ndef render_template_context(ctx, user):\n # Fetch the list of projects associated with the user.\n # Projects owned by teams that the user has membership of.\n if user and user.id in ctx.project_ownership:\n user_projects = list(\n filter(\n lambda project_ctx: project_ctx.project.id in ctx.project_ownership[user.id],\n ctx.projects.values(),\n )\n )\n if len(user_projects) == 0:\n return None\n else:\n # If user is None, or if the user is not a member of the organization, we assume that the email was directed to a user who joined all teams.\n user_projects = ctx.projects.values()\n\n # Render the first section of the email where we had the table showing the\n # number of accepted/dropped errors/transactions for each project.\n def trends():\n # Given an iterator of event counts, sum up their accepted/dropped errors/transaction counts.\n def sum_event_counts(project_ctxs):\n return reduce(\n lambda a, b: (a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]),\n [\n (\n project_ctx.accepted_error_count,\n project_ctx.dropped_error_count,\n project_ctx.accepted_transaction_count,\n project_ctx.dropped_transaction_count,\n )\n for project_ctx in project_ctxs\n ],\n (0, 0, 0, 0),\n )\n\n # Highest volume projects go first\n projects_associated_with_user = sorted(\n user_projects,\n reverse=True,\n key=lambda item: item.accepted_error_count + (item.accepted_transaction_count / 10),\n )\n # Calculate total\n (\n total_error,\n total_dropped_error,\n total_transaction,\n total_dropped_transaction,\n ) = sum_event_counts(projects_associated_with_user)\n # The number of reports to keep is the same as the number of colors\n # available to use in the legend.\n projects_taken = projects_associated_with_user[: len(project_breakdown_colors)]\n # All other items are merged to \"Others\"\n projects_not_taken = projects_associated_with_user[len(project_breakdown_colors) :]\n\n # Calculate legend\n legend = [\n {\n \"slug\": project_ctx.project.slug,\n \"url\": project_ctx.project.get_absolute_url(),\n \"color\": project_breakdown_colors[i],\n \"dropped_error_count\": 
project_ctx.dropped_error_count,\n \"accepted_error_count\": project_ctx.accepted_error_count,\n \"dropped_transaction_count\": project_ctx.dropped_transaction_count,\n \"accepted_transaction_count\": project_ctx.accepted_transaction_count,\n }\n for i, project_ctx in enumerate(projects_taken)\n ]\n\n if len(projects_not_taken) > 0:\n (\n others_error,\n others_dropped_error,\n others_transaction,\n others_dropped_transaction,\n ) = sum_event_counts(projects_not_taken)\n legend.append(\n {\n \"slug\": f\"Other ({len(projects_not_taken)})\",\n \"color\": other_color,\n \"dropped_error_count\": others_dropped_error,\n \"accepted_error_count\": others_error,\n \"dropped_transaction_count\": others_dropped_transaction,\n \"accepted_transaction_count\": others_transaction,\n }\n )\n if len(projects_taken) > 1:\n legend.append(\n {\n \"slug\": f\"Total ({len(projects_associated_with_user)})\",\n \"color\": total_color,\n \"dropped_error_count\": total_dropped_error,\n \"accepted_error_count\": total_error,\n \"dropped_transaction_count\": total_dropped_transaction,\n \"accepted_transaction_count\": total_transaction,\n }\n )\n\n # Calculate series\n series = []\n for i in range(0, 7):\n t = int(to_timestamp(ctx.start)) + ONE_DAY * i\n project_series = [\n {\n \"color\": project_breakdown_colors[i],\n \"error_count\": project_ctx.error_count_by_day.get(t, 0),\n \"transaction_count\": project_ctx.transaction_count_by_day.get(t, 0),\n }\n for i, project_ctx in enumerate(projects_taken)\n ]\n if len(projects_not_taken) > 0:\n project_series.append(\n {\n \"color\": other_color,\n \"error_count\": sum(\n map(\n lambda project_ctx: project_ctx.error_count_by_day.get(t, 0),\n projects_not_taken,\n )\n ),\n \"transaction_count\": sum(\n map(\n lambda project_ctx: project_ctx.transaction_count_by_day.get(t, 0),\n projects_not_taken,\n )\n ),\n }\n )\n series.append((to_datetime(t), project_series))\n return {\n \"legend\": legend,\n \"series\": series,\n \"total_error_count\": total_error,\n \"total_transaction_count\": total_transaction,\n \"error_maximum\": max( # The max error count on any single day\n sum(value[\"error_count\"] for value in values) for timestamp, values in series\n ),\n \"transaction_maximum\": max( # The max transaction count on any single day\n sum(value[\"transaction_count\"] for value in values) for timestamp, values in series\n )\n if len(projects_taken) > 0\n else 0,\n }\n\n def key_errors():\n # TODO(Steve): Remove debug logging for Sentry\n def all_key_errors():\n if ctx.organization.slug == \"sentry\":\n logger.info(\n \"render_template_context.all_key_errors.num_projects\",\n extra={\"user_id\": user.id, \"num_user_projects\": len(user_projects)},\n )\n for project_ctx in user_projects:\n if ctx.organization.slug == \"sentry\":\n logger.info(\n \"render_template_context.all_key_errors.project\",\n extra={\n \"user_id\": user.id,\n \"project_id\": project_ctx.project.id,\n },\n )\n for group, group_history, count in project_ctx.key_errors:\n if ctx.organization.slug == \"sentry\":\n logger.info(\n \"render_template_context.all_key_errors.found_error\",\n extra={\n \"group_id\": group.id,\n \"user_id\": user.id,\n \"project_id\": project_ctx.project.id,\n },\n )\n yield {\n \"count\": count,\n \"group\": group,\n \"status\": group_history.get_status_display()\n if group_history\n else \"Unresolved\",\n \"status_color\": group_status_to_color[group_history.status]\n if group_history\n else group_status_to_color[GroupHistoryStatus.NEW],\n }\n\n return heapq.nlargest(3, 
all_key_errors(), lambda d: d[\"count\"])\n\n def key_transactions():\n def all_key_transactions():\n for project_ctx in user_projects:\n for (\n transaction_name,\n count_this_week,\n p95_this_week,\n count_last_week,\n p95_last_week,\n ) in project_ctx.key_transactions:\n yield {\n \"name\": transaction_name,\n \"count\": count_this_week,\n \"p95\": p95_this_week,\n \"p95_prev_week\": p95_last_week,\n \"project\": project_ctx.project,\n }\n\n return heapq.nlargest(3, all_key_transactions(), lambda d: d[\"count\"])\n\n def key_performance_issues():\n def all_key_performance_issues():\n for project_ctx in user_projects:\n for (group, group_history, count) in project_ctx.key_performance_issues:\n yield {\n \"count\": count,\n \"group\": group,\n \"status\": group_history.get_status_display()\n if group_history\n else \"Unresolved\",\n \"status_color\": group_status_to_color[group_history.status]\n if group_history\n else group_status_to_color[GroupHistoryStatus.NEW],\n }\n\n return heapq.nlargest(3, all_key_performance_issues(), lambda d: d[\"count\"])\n\n def issue_summary():\n all_issue_count = 0\n existing_issue_count = 0\n reopened_issue_count = 0\n new_issue_count = 0\n for project_ctx in user_projects:\n all_issue_count += project_ctx.all_issue_count\n existing_issue_count += project_ctx.existing_issue_count\n reopened_issue_count += project_ctx.reopened_issue_count\n new_issue_count += project_ctx.new_issue_count\n return {\n \"all_issue_count\": all_issue_count,\n \"existing_issue_count\": existing_issue_count,\n \"reopened_issue_count\": reopened_issue_count,\n \"new_issue_count\": new_issue_count,\n }\n\n return {\n \"organization\": ctx.organization,\n \"start\": date_format(ctx.start),\n \"end\": date_format(ctx.end),\n \"trends\": trends(),\n \"key_errors\": key_errors(),\n \"key_transactions\": key_transactions(),\n \"key_performance_issues\": key_performance_issues(),\n \"issue_summary\": issue_summary(),\n }\n\n\ndef send_email(ctx, user, dry_run=False, email_override=None):\n template_ctx = render_template_context(ctx, user)\n if not template_ctx:\n logger.debug(\n f\"Skipping report for {ctx.organization.id} to {user}, no qualifying reports to deliver.\"\n )\n return\n\n message = MessageBuilder(\n subject=f\"Weekly Report for {ctx.organization.name}: {date_format(ctx.start)} - {date_format(ctx.end)}\",\n template=\"sentry/emails/reports/body.txt\",\n html_template=\"sentry/emails/reports/body.html\",\n type=\"report.organization\",\n context=template_ctx,\n headers={\"X-SMTPAPI\": json.dumps({\"category\": \"organization_weekly_report\"})},\n )\n if dry_run:\n return\n if email_override:\n message.send(to=(email_override,))\n else:\n message.add_users((user.id,))\n message.send_async()\n", "path": "src/sentry/tasks/weekly_reports.py"}]} |
gh_patches_debug_80 | rasdani/github-patches | git_diff | pwndbg__pwndbg-495 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong context regs display on [remote?] targets that use multiple threads
### Description
While I was debugging a 32-bit process on an ARM Android device, I sometimes noticed that at least one of the register addresses shown in pwndbg's view was wrong.
This has happened several times for different registers.
Examples (happened in two different debugging sessions):
```
pwndbg> regs r0
R0 0xee69a868 ββΈ 0xee460a00 ββ 0x0
pwndbg> i r r0
r0 0xee4335c8 3997382088
pwndbg> i r sp
sp 0xf136d698 0xf136d698
pwndbg> regs sp
*SP 0xf007a820 ββΈ 0xf007a834 ββ 0xffffffff
```
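For comparison, here is a minimal way to read the same register through both paths from the GDB prompt (just a sketch; it assumes the standard `gdb` Python API and that `pwndbg.regs` exposes registers as attributes, which the rest of pwndbg's code relies on):
```
pwndbg> python
import gdb, pwndbg
cached = pwndbg.regs.r0                               # value returned by pwndbg's accessor (may come from its cache)
fresh  = int(gdb.parse_and_eval('$r0')) & 0xffffffff  # value GDB itself reports right now (32-bit target)
print(hex(cached), hex(fresh))                        # if these differ, the cached copy is stale
end
```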
It happened to me again today while debugging, so I tried to ask about this in the IRC channel before opening an issue. While we were debugging it in the channel, one guy said that this problem looks somewhat similar to [Issue 460](https://github.com/pwndbg/pwndbg/issues/460) and asked me if I could try the things mentioned there.
After trying to disable caching with:
```
pwndbg> python import pwndbg; pwndbg.memoize.memoize.caching = False
```
The pwndbg register view was immediately updated with the correct addresses for the registers.
Unfortunately, disabling caching makes pwndbg really slow.
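A possible middle ground (only a sketch built on the same caching flag as above, not a real fix) is to turn caching off just long enough to refresh the view and then turn it back on:
```
pwndbg> python
import gdb, pwndbg
pwndbg.memoize.memoize.caching = False   # next reads bypass the (stale) cache
gdb.execute('regs')                      # register view now matches `info registers`
pwndbg.memoize.memoize.caching = True    # re-enable caching so pwndbg stays fast
end
```
The old cache entries can of course go stale again after re-enabling, so this only papers over whatever invalidation is being missed.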
### Steps to reproduce
I don't have a consistent way to reproduce the issue, as it doesn't always happen and isn't easy to notice.
### My setup
pwndbg> version
Gdb: 8.1
Python: 3.6.5 (default, May 11 2018, 04:00:52) [GCC 8.1.0]
Pwndbg: 1.0.0 build: 71d29df
Capstone: 4.0.1024
Unicorn: 1.0.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/regs.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 Reading register value from the inferior, and provides a
5 standardized interface to registers like "sp" and "pc".
6 """
7 from __future__ import absolute_import
8 from __future__ import division
9 from __future__ import print_function
10 from __future__ import unicode_literals
11
12 import collections
13 import ctypes
14 import re
15 import sys
16 from types import ModuleType
17
18 import gdb
19 import six
20
21 import pwndbg.arch
22 import pwndbg.events
23 import pwndbg.memoize
24 import pwndbg.proc
25 import pwndbg.remote
26
27 try:
28 long
29 except NameError:
30 long=int
31
32
33 class RegisterSet(object):
34 #: Program counter register
35 pc = None
36
37 #: Stack pointer register
38 stack = None
39
40 #: Frame pointer register
41 frame = None
42
43 #: Return address register
44 retaddr = None
45
46 #: Flags register (eflags, cpsr)
47 flags = None
48
49 #: List of native-size general-purpose registers
50 gpr = None
51
52 #: List of miscellaneous, valid registers
53 misc = None
54
55 #: Register-based arguments for most common ABI
56 regs = None
57
58 #: Return value register
59 retval = None
60
61 #: Common registers which should be displayed in the register context
62 common = None
63
64 #: All valid registers
65 all = None
66
67 def __init__(self,
68 pc='pc',
69 stack='sp',
70 frame=None,
71 retaddr=tuple(),
72 flags=dict(),
73 gpr=tuple(),
74 misc=tuple(),
75 args=tuple(),
76 retval=None):
77 self.pc = pc
78 self.stack = stack
79 self.frame = frame
80 self.retaddr = retaddr
81 self.flags = flags
82 self.gpr = gpr
83 self.misc = misc
84 self.args = args
85 self.retval = retval
86
87 # In 'common', we don't want to lose the ordering of:
88 self.common = []
89 for reg in gpr + (frame, stack, pc) + tuple(flags):
90 if reg and reg not in self.common:
91 self.common.append(reg)
92
93 self.all = set(i for i in misc) | set(flags) | set(self.retaddr) | set(self.common)
94 self.all -= {None}
95
96 def __iter__(self):
97 for r in self.all:
98 yield r
99
100 arm = RegisterSet( retaddr = ('lr',),
101 flags = {'cpsr':{}},
102 gpr = tuple('r%i' % i for i in range(13)),
103 args = ('r0','r1','r2','r3'),
104 retval = 'r0')
105
106 aarch64 = RegisterSet( retaddr = ('lr',),
107 flags = {'cpsr':{}},
108 frame = 'x29',
109 gpr = tuple('x%i' % i for i in range(29)),
110 misc = tuple('w%i' % i for i in range(29)),
111 args = ('x0','x1','x2','x3'),
112 retval = 'x0')
113
114 x86flags = {'eflags': {
115 'CF': 0,
116 'PF': 2,
117 'AF': 4,
118 'ZF': 6,
119 'SF': 7,
120 'IF': 9,
121 'DF': 10,
122 'OF': 11,
123 }}
124
125 amd64 = RegisterSet(pc = 'rip',
126 stack = 'rsp',
127 frame = 'rbp',
128 flags = x86flags,
129 gpr = ('rax','rbx','rcx','rdx','rdi','rsi',
130 'r8', 'r9', 'r10','r11','r12',
131 'r13','r14','r15'),
132 misc = ('cs','ss','ds','es','fs','gs',
133 'fsbase', 'gsbase',
134 'ax','ah','al',
135 'bx','bh','bl',
136 'cx','ch','cl',
137 'dx','dh','dl',
138 'dil','sil','spl','bpl',
139 'di','si','bp','sp','ip'),
140 args = ('rdi','rsi','rdx','rcx','r8','r9'),
141 retval = 'rax')
142
143 i386 = RegisterSet( pc = 'eip',
144 stack = 'esp',
145 frame = 'ebp',
146 flags = x86flags,
147 gpr = ('eax','ebx','ecx','edx','edi','esi'),
148 misc = ('cs','ss','ds','es','fs','gs',
149 'fsbase', 'gsbase',
150 'ax','ah','al',
151 'bx','bh','bl',
152 'cx','ch','cl',
153 'dx','dh','dl',
154 'dil','sil','spl','bpl',
155 'di','si','bp','sp','ip'),
156 retval = 'eax')
157
158
159 # http://math-atlas.sourceforge.net/devel/assembly/elfspec_ppc.pdf
160 # r0 Volatile register which may be modified during function linkage
161 # r1 Stack frame pointer, always valid
162 # r2 System-reserved register (points at GOT)
163 # r3-r4 Volatile registers used for parameter passing and return values
164 # r5-r10 Volatile registers used for parameter passing
165 # r11-r12 Volatile registers which may be modified during function linkage
166 # r13 Small data area pointer register (points to TLS)
167 # r14-r30 Registers used for local variables
168 # r31 Used for local variables or "environment pointers"
169 powerpc = RegisterSet( retaddr = ('lr','r0'),
170 flags = {'msr':{},'xer':{}},
171 gpr = tuple('r%i' % i for i in range(3,32)),
172 misc = ('cr','lr','r2'),
173 args = tuple('r%i' for i in range(3,11)),
174 retval = 'r3')
175
176 # http://people.cs.clemson.edu/~mark/sparc/sparc_arch_desc.txt
177 # http://people.cs.clemson.edu/~mark/subroutines/sparc.html
178 # https://www.utdallas.edu/~edsha/security/sparcoverflow.htm
179 #
180 # http://people.cs.clemson.edu/~mark/sparc/assembly.txt
181 # ____________________________________
182 # %g0 == %r0 (always zero) \
183 # %g1 == %r1 | g stands for global
184 # ... |
185 # %g7 == %r7 |
186 # ____________________________________/
187 # %o0 == %r8 \
188 # ... | o stands for output (note: not 0)
189 # %o6 == %r14 == %sp (stack ptr) |
190 # %o7 == %r15 == for return aaddress |
191 # ____________________________________/
192 # %l0 == %r16 \
193 # ... | l stands for local (note: not 1)
194 # %l7 == %r23 |
195 # ____________________________________/
196 # %i0 == %r24 \
197 # ... | i stands for input
198 # %i6 == %r30 == %fp (frame ptr) |
199 # %i7 == %r31 == for return address |
200 # ____________________________________/
201
202 sparc_gp = tuple(['g%i' % i for i in range(1,8)]
203 +['o%i' % i for i in range(0,6)]
204 +['l%i' % i for i in range(0,8)]
205 +['i%i' % i for i in range(0,6)])
206 sparc = RegisterSet(stack = 'o6',
207 frame = 'i6',
208 retaddr = ('o7',),
209 flags = {'psr':{}},
210 gpr = sparc_gp,
211 args = ('i0','i1','i2','i3','i4','i5'),
212 retval = 'o0')
213
214
215 # http://logos.cs.uic.edu/366/notes/mips%20quick%20tutorial.htm
216 # r0 => zero
217 # r1 => temporary
218 # r2-r3 => values
219 # r4-r7 => arguments
220 # r8-r15 => temporary
221 # r16-r23 => saved values
222 # r24-r25 => temporary
223 # r26-r27 => interrupt/trap handler
224 # r28 => global pointer
225 # r29 => stack pointer
226 # r30 => frame pointer
227 # r31 => return address
228 mips = RegisterSet( frame = 'fp',
229 retaddr = ('ra',),
230 gpr = ('v0','v1','a0','a1','a2','a3') \
231 + tuple('t%i' % i for i in range(10)) \
232 + tuple('s%i' % i for i in range(9)),
233 args = ('a0','a1','a2','a3'),
234 retval = 'v0')
235
236 arch_to_regs = {
237 'i386': i386,
238 'x86-64': amd64,
239 'mips': mips,
240 'sparc': sparc,
241 'arm': arm,
242 'aarch64': aarch64,
243 'powerpc': powerpc,
244 }
245
246 @pwndbg.proc.OnlyWhenRunning
247 def gdb77_get_register(name):
248 return gdb.parse_and_eval('$' + name)
249
250 @pwndbg.proc.OnlyWhenRunning
251 def gdb79_get_register(name):
252 return gdb.newest_frame().read_register(name)
253
254 try:
255 gdb.Frame.read_register
256 get_register = gdb79_get_register
257 except AttributeError:
258 get_register = gdb77_get_register
259
260
261 # We need to manually make some ptrace calls to get fs/gs bases on Intel
262 PTRACE_ARCH_PRCTL = 30
263 ARCH_GET_FS = 0x1003
264 ARCH_GET_GS = 0x1004
265
266 class module(ModuleType):
267 last = {}
268
269 @pwndbg.memoize.reset_on_stop
270 @pwndbg.memoize.reset_on_prompt
271 def __getattr__(self, attr):
272 attr = attr.lstrip('$')
273 try:
274 # Seriously, gdb? Only accepts uint32.
275 if 'eflags' in attr:
276 value = gdb77_get_register(attr)
277 value = value.cast(pwndbg.typeinfo.uint32)
278 else:
279 value = get_register(attr)
280 value = value.cast(pwndbg.typeinfo.ptrdiff)
281
282 value = int(value)
283 return value & pwndbg.arch.ptrmask
284 except (ValueError, gdb.error):
285 return None
286
287 @pwndbg.memoize.reset_on_stop
288 def __getitem__(self, item):
289 if isinstance(item, six.integer_types):
290 return arch_to_regs[pwndbg.arch.current][item]
291
292 if not isinstance(item, six.string_types):
293 print("Unknown register type: %r" % (item))
294 import pdb, traceback
295 traceback.print_stack()
296 pdb.set_trace()
297 return None
298
299 # e.g. if we're looking for register "$rax", turn it into "rax"
300 item = item.lstrip('$')
301 item = getattr(self, item.lower())
302
303 if isinstance(item, six.integer_types):
304 return int(item) & pwndbg.arch.ptrmask
305
306 return item
307
308 def __iter__(self):
309 regs = set(arch_to_regs[pwndbg.arch.current]) | set(['pc','sp'])
310 for item in regs:
311 yield item
312
313 @property
314 def current(self):
315 return arch_to_regs[pwndbg.arch.current]
316
317 @property
318 def gpr(self):
319 return arch_to_regs[pwndbg.arch.current].gpr
320
321 @property
322 def common(self):
323 return arch_to_regs[pwndbg.arch.current].common
324
325 @property
326 def frame(self):
327 return arch_to_regs[pwndbg.arch.current].frame
328
329 @property
330 def retaddr(self):
331 return arch_to_regs[pwndbg.arch.current].retaddr
332
333 @property
334 def flags(self):
335 return arch_to_regs[pwndbg.arch.current].flags
336
337 @property
338 def stack(self):
339 return arch_to_regs[pwndbg.arch.current].stack
340
341 @property
342 def retval(self):
343 return arch_to_regs[pwndbg.arch.current].retval
344
345 @property
346 def all(self):
347 regs = arch_to_regs[pwndbg.arch.current]
348 retval = []
349 for regset in (regs.pc, regs.stack, regs.frame, regs.retaddr, regs.flags, regs.gpr, regs.misc):
350 if regset is None:
351 continue
352 elif isinstance(regset, (list, tuple)):
353 retval.extend(regset)
354 elif isinstance(regset, dict):
355 retval.extend(regset.keys())
356 else:
357 retval.append(regset)
358 return retval
359
360 def fix(self, expression):
361 for regname in set(self.all + ['sp','pc']):
362 expression = re.sub(r'\$?\b%s\b' % regname, r'$'+regname, expression)
363 return expression
364
365 def items(self):
366 for regname in self.all:
367 yield regname, self[regname]
368
369 arch_to_regs = arch_to_regs
370
371 @property
372 def changed(self):
373 delta = []
374 for reg, value in self.last.items():
375 if self[reg] != value:
376 delta.append(reg)
377 return delta
378
379 @property
380 @pwndbg.memoize.reset_on_stop
381 def fsbase(self):
382 return self._fs_gs_helper(ARCH_GET_FS)
383
384 @property
385 @pwndbg.memoize.reset_on_stop
386 def gsbase(self):
387 return self._fs_gs_helper(ARCH_GET_GS)
388
389 @pwndbg.memoize.reset_on_stop
390 def _fs_gs_helper(self, which):
391 """Supports fetching based on segmented addressing, a la fs:[0x30].
392
393 Requires ptrace'ing the child directly."""
394
395 # We can't really do anything if the process is remote.
396 if pwndbg.remote.is_remote(): return 0
397
398 # Use the lightweight process ID
399 pid, lwpid, tid = gdb.selected_thread().ptid
400
401 # Get the register
402 ppvoid = ctypes.POINTER(ctypes.c_void_p)
403 value = ppvoid(ctypes.c_void_p())
404 value.contents.value = 0
405
406 libc = ctypes.CDLL('libc.so.6')
407 result = libc.ptrace(PTRACE_ARCH_PRCTL,
408 lwpid,
409 value,
410 which)
411
412 if result == 0:
413 return (value.contents.value or 0) & pwndbg.arch.ptrmask
414
415 return 0
416
417 def __repr__(self):
418 return ('<module pwndbg.regs>')
419
420 # To prevent garbage collection
421 tether = sys.modules[__name__]
422 sys.modules[__name__] = module(__name__, '')
423
424
425 @pwndbg.events.cont
426 def update_last():
427 M = sys.modules[__name__]
428 M.last = {k:M[k] for k in M.common}
429 if pwndbg.config.show_retaddr_reg:
430 M.last.update({k:M[k] for k in M.retaddr})
431
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/regs.py b/pwndbg/regs.py
--- a/pwndbg/regs.py
+++ b/pwndbg/regs.py
@@ -423,6 +423,7 @@
@pwndbg.events.cont
[email protected]
def update_last():
M = sys.modules[__name__]
M.last = {k:M[k] for k in M.common}
| {"golden_diff": "diff --git a/pwndbg/regs.py b/pwndbg/regs.py\n--- a/pwndbg/regs.py\n+++ b/pwndbg/regs.py\n@@ -423,6 +423,7 @@\n \n \n @pwndbg.events.cont\[email protected]\n def update_last():\n M = sys.modules[__name__]\n M.last = {k:M[k] for k in M.common}\n", "issue": "Wrong context regs display on [remote?] targets that use multiple threads\n### Description\r\n\r\nWhile I was debugging a 32-bit process on an ARM Android device, I sometimes noticed that atleast one of the register addresses in pwndbg view is wrong.\r\nThis has happened several times for different registers.\r\n\r\nExamples (happened in two different debugging sessions):\r\n```\r\npwndbg> regs r0\r\n R0 0xee69a868 \u2014\u25b8 0xee460a00 \u25c2\u2014 0x0\r\npwndbg> i r r0\r\nr0 0xee4335c8 3997382088\r\n\r\npwndbg> i r sp\r\nsp 0xf136d698 0xf136d698\r\npwndbg> regs sp\r\n*SP 0xf007a820 \u2014\u25b8 0xf007a834 \u25c2\u2014 0xffffffff\r\n```\r\n\r\nIt happened to me again today while debugging so I tried to ask about this in the IRC channel before opening an issue and when trying to debug the problem, one guy said that this problem is somehow similar with [Issue 460](https://github.com/pwndbg/pwndbg/issues/460) and asked me if I could try the things mentioned there.\r\n\r\nAfter trying to disable caching with:\r\n```\r\npwndbg> python import pwndbg; pwndbg.memoize.memoize.caching = False\r\n```\r\n\r\nThe pwndbg registers view got immediately updated with the correct addresses for the registers.\r\nUnfortunately, disabling caching make pwndbg really slow.\r\n\r\n\r\n### Steps to reproduce\r\n\r\nI don't have any consistent way to reproduce the issue as it's not always happening and not easy to notice.\r\n\r\n### My setup\r\n\r\npwndbg> version\r\nGdb: 8.1\r\nPython: 3.6.5 (default, May 11 2018, 04:00:52) [GCC 8.1.0]\r\nPwndbg: 1.0.0 build: 71d29df\r\nCapstone: 4.0.1024\r\nUnicorn: 1.0.1\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nReading register value from the inferior, and provides a\nstandardized interface to registers like \"sp\" and \"pc\".\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport collections\nimport ctypes\nimport re\nimport sys\nfrom types import ModuleType\n\nimport gdb\nimport six\n\nimport pwndbg.arch\nimport pwndbg.events\nimport pwndbg.memoize\nimport pwndbg.proc\nimport pwndbg.remote\n\ntry:\n long\nexcept NameError:\n long=int\n\n\nclass RegisterSet(object):\n #: Program counter register\n pc = None\n\n #: Stack pointer register\n stack = None\n\n #: Frame pointer register\n frame = None\n\n #: Return address register\n retaddr = None\n\n #: Flags register (eflags, cpsr)\n flags = None\n\n #: List of native-size generalp-purpose registers\n gpr = None\n\n #: List of miscellaneous, valid registers\n misc = None\n\n #: Register-based arguments for most common ABI\n regs = None\n\n #: Return value register\n retval = None\n\n #: Common registers which should be displayed in the register context\n common = None\n\n #: All valid registers\n all = None\n\n def __init__(self,\n pc='pc',\n stack='sp',\n frame=None,\n retaddr=tuple(),\n flags=dict(),\n gpr=tuple(),\n misc=tuple(),\n args=tuple(),\n retval=None):\n self.pc = pc\n self.stack = stack\n self.frame = frame\n self.retaddr = retaddr\n self.flags = flags\n self.gpr = gpr\n self.misc = misc\n self.args = args\n self.retval = retval\n\n # In 'common', we don't want to lose the ordering 
of:\n self.common = []\n for reg in gpr + (frame, stack, pc) + tuple(flags):\n if reg and reg not in self.common:\n self.common.append(reg)\n\n self.all = set(i for i in misc) | set(flags) | set(self.retaddr) | set(self.common)\n self.all -= {None}\n\n def __iter__(self):\n for r in self.all:\n yield r\n\narm = RegisterSet( retaddr = ('lr',),\n flags = {'cpsr':{}},\n gpr = tuple('r%i' % i for i in range(13)),\n args = ('r0','r1','r2','r3'),\n retval = 'r0')\n\naarch64 = RegisterSet( retaddr = ('lr',),\n flags = {'cpsr':{}},\n frame = 'x29',\n gpr = tuple('x%i' % i for i in range(29)),\n misc = tuple('w%i' % i for i in range(29)),\n args = ('x0','x1','x2','x3'),\n retval = 'x0')\n\nx86flags = {'eflags': {\n 'CF': 0,\n 'PF': 2,\n 'AF': 4,\n 'ZF': 6,\n 'SF': 7,\n 'IF': 9,\n 'DF': 10,\n 'OF': 11,\n}}\n\namd64 = RegisterSet(pc = 'rip',\n stack = 'rsp',\n frame = 'rbp',\n flags = x86flags,\n gpr = ('rax','rbx','rcx','rdx','rdi','rsi',\n 'r8', 'r9', 'r10','r11','r12',\n 'r13','r14','r15'),\n misc = ('cs','ss','ds','es','fs','gs',\n 'fsbase', 'gsbase',\n 'ax','ah','al',\n 'bx','bh','bl',\n 'cx','ch','cl',\n 'dx','dh','dl',\n 'dil','sil','spl','bpl',\n 'di','si','bp','sp','ip'),\n args = ('rdi','rsi','rdx','rcx','r8','r9'),\n retval = 'rax')\n\ni386 = RegisterSet( pc = 'eip',\n stack = 'esp',\n frame = 'ebp',\n flags = x86flags,\n gpr = ('eax','ebx','ecx','edx','edi','esi'),\n misc = ('cs','ss','ds','es','fs','gs',\n 'fsbase', 'gsbase',\n 'ax','ah','al',\n 'bx','bh','bl',\n 'cx','ch','cl',\n 'dx','dh','dl',\n 'dil','sil','spl','bpl',\n 'di','si','bp','sp','ip'),\n retval = 'eax')\n\n\n# http://math-atlas.sourceforge.net/devel/assembly/elfspec_ppc.pdf\n# r0 Volatile register which may be modified during function linkage\n# r1 Stack frame pointer, always valid\n# r2 System-reserved register (points at GOT)\n# r3-r4 Volatile registers used for parameter passing and return values\n# r5-r10 Volatile registers used for parameter passing\n# r11-r12 Volatile registers which may be modified during function linkage\n# r13 Small data area pointer register (points to TLS)\n# r14-r30 Registers used for local variables\n# r31 Used for local variables or \"environment pointers\"\npowerpc = RegisterSet( retaddr = ('lr','r0'),\n flags = {'msr':{},'xer':{}},\n gpr = tuple('r%i' % i for i in range(3,32)),\n misc = ('cr','lr','r2'),\n args = tuple('r%i' for i in range(3,11)),\n retval = 'r3')\n\n# http://people.cs.clemson.edu/~mark/sparc/sparc_arch_desc.txt\n# http://people.cs.clemson.edu/~mark/subroutines/sparc.html\n# https://www.utdallas.edu/~edsha/security/sparcoverflow.htm\n#\n# http://people.cs.clemson.edu/~mark/sparc/assembly.txt\n# ____________________________________\n# %g0 == %r0 (always zero) \\\n# %g1 == %r1 | g stands for global\n# ... |\n# %g7 == %r7 |\n# ____________________________________/\n# %o0 == %r8 \\\n# ... | o stands for output (note: not 0)\n# %o6 == %r14 == %sp (stack ptr) |\n# %o7 == %r15 == for return aaddress |\n# ____________________________________/\n# %l0 == %r16 \\\n# ... | l stands for local (note: not 1)\n# %l7 == %r23 |\n# ____________________________________/\n# %i0 == %r24 \\\n# ... 
| i stands for input\n# %i6 == %r30 == %fp (frame ptr) |\n# %i7 == %r31 == for return address |\n# ____________________________________/\n\nsparc_gp = tuple(['g%i' % i for i in range(1,8)]\n +['o%i' % i for i in range(0,6)]\n +['l%i' % i for i in range(0,8)]\n +['i%i' % i for i in range(0,6)])\nsparc = RegisterSet(stack = 'o6',\n frame = 'i6',\n retaddr = ('o7',),\n flags = {'psr':{}},\n gpr = sparc_gp,\n args = ('i0','i1','i2','i3','i4','i5'),\n retval = 'o0')\n\n\n# http://logos.cs.uic.edu/366/notes/mips%20quick%20tutorial.htm\n# r0 => zero\n# r1 => temporary\n# r2-r3 => values\n# r4-r7 => arguments\n# r8-r15 => temporary\n# r16-r23 => saved values\n# r24-r25 => temporary\n# r26-r27 => interrupt/trap handler\n# r28 => global pointer\n# r29 => stack pointer\n# r30 => frame pointer\n# r31 => return address\nmips = RegisterSet( frame = 'fp',\n retaddr = ('ra',),\n gpr = ('v0','v1','a0','a1','a2','a3') \\\n + tuple('t%i' % i for i in range(10)) \\\n + tuple('s%i' % i for i in range(9)),\n args = ('a0','a1','a2','a3'),\n retval = 'v0')\n\narch_to_regs = {\n 'i386': i386,\n 'x86-64': amd64,\n 'mips': mips,\n 'sparc': sparc,\n 'arm': arm,\n 'aarch64': aarch64,\n 'powerpc': powerpc,\n}\n\[email protected]\ndef gdb77_get_register(name):\n return gdb.parse_and_eval('$' + name)\n\[email protected]\ndef gdb79_get_register(name):\n return gdb.newest_frame().read_register(name)\n\ntry:\n gdb.Frame.read_register\n get_register = gdb79_get_register\nexcept AttributeError:\n get_register = gdb77_get_register\n\n\n# We need to manually make some ptrace calls to get fs/gs bases on Intel\nPTRACE_ARCH_PRCTL = 30\nARCH_GET_FS = 0x1003\nARCH_GET_GS = 0x1004\n\nclass module(ModuleType):\n last = {}\n\n @pwndbg.memoize.reset_on_stop\n @pwndbg.memoize.reset_on_prompt\n def __getattr__(self, attr):\n attr = attr.lstrip('$')\n try:\n # Seriously, gdb? Only accepts uint32.\n if 'eflags' in attr:\n value = gdb77_get_register(attr)\n value = value.cast(pwndbg.typeinfo.uint32)\n else:\n value = get_register(attr)\n value = value.cast(pwndbg.typeinfo.ptrdiff)\n\n value = int(value)\n return value & pwndbg.arch.ptrmask\n except (ValueError, gdb.error):\n return None\n\n @pwndbg.memoize.reset_on_stop\n def __getitem__(self, item):\n if isinstance(item, six.integer_types):\n return arch_to_regs[pwndbg.arch.current][item]\n\n if not isinstance(item, six.string_types):\n print(\"Unknown register type: %r\" % (item))\n import pdb, traceback\n traceback.print_stack()\n pdb.set_trace()\n return None\n\n # e.g. 
if we're looking for register \"$rax\", turn it into \"rax\"\n item = item.lstrip('$')\n item = getattr(self, item.lower())\n\n if isinstance(item, six.integer_types):\n return int(item) & pwndbg.arch.ptrmask\n\n return item\n\n def __iter__(self):\n regs = set(arch_to_regs[pwndbg.arch.current]) | set(['pc','sp'])\n for item in regs:\n yield item\n\n @property\n def current(self):\n return arch_to_regs[pwndbg.arch.current]\n\n @property\n def gpr(self):\n return arch_to_regs[pwndbg.arch.current].gpr\n\n @property\n def common(self):\n return arch_to_regs[pwndbg.arch.current].common\n\n @property\n def frame(self):\n return arch_to_regs[pwndbg.arch.current].frame\n\n @property\n def retaddr(self):\n return arch_to_regs[pwndbg.arch.current].retaddr\n\n @property\n def flags(self):\n return arch_to_regs[pwndbg.arch.current].flags\n\n @property\n def stack(self):\n return arch_to_regs[pwndbg.arch.current].stack\n\n @property\n def retval(self):\n return arch_to_regs[pwndbg.arch.current].retval\n\n @property\n def all(self):\n regs = arch_to_regs[pwndbg.arch.current]\n retval = []\n for regset in (regs.pc, regs.stack, regs.frame, regs.retaddr, regs.flags, regs.gpr, regs.misc):\n if regset is None:\n continue\n elif isinstance(regset, (list, tuple)):\n retval.extend(regset)\n elif isinstance(regset, dict):\n retval.extend(regset.keys())\n else:\n retval.append(regset)\n return retval\n\n def fix(self, expression):\n for regname in set(self.all + ['sp','pc']):\n expression = re.sub(r'\\$?\\b%s\\b' % regname, r'$'+regname, expression)\n return expression\n\n def items(self):\n for regname in self.all:\n yield regname, self[regname]\n\n arch_to_regs = arch_to_regs\n\n @property\n def changed(self):\n delta = []\n for reg, value in self.last.items():\n if self[reg] != value:\n delta.append(reg)\n return delta\n\n @property\n @pwndbg.memoize.reset_on_stop\n def fsbase(self):\n return self._fs_gs_helper(ARCH_GET_FS)\n\n @property\n @pwndbg.memoize.reset_on_stop\n def gsbase(self):\n return self._fs_gs_helper(ARCH_GET_GS)\n\n @pwndbg.memoize.reset_on_stop\n def _fs_gs_helper(self, which):\n \"\"\"Supports fetching based on segmented addressing, a la fs:[0x30].\n\n Requires ptrace'ing the child directly.\"\"\"\n\n # We can't really do anything if the process is remote.\n if pwndbg.remote.is_remote(): return 0\n\n # Use the lightweight process ID\n pid, lwpid, tid = gdb.selected_thread().ptid\n\n # Get the register\n ppvoid = ctypes.POINTER(ctypes.c_void_p)\n value = ppvoid(ctypes.c_void_p())\n value.contents.value = 0\n\n libc = ctypes.CDLL('libc.so.6')\n result = libc.ptrace(PTRACE_ARCH_PRCTL,\n lwpid,\n value,\n which)\n\n if result == 0:\n return (value.contents.value or 0) & pwndbg.arch.ptrmask\n\n return 0\n\n def __repr__(self):\n return ('<module pwndbg.regs>')\n\n# To prevent garbage collection\ntether = sys.modules[__name__]\nsys.modules[__name__] = module(__name__, '')\n\n\[email protected]\ndef update_last():\n M = sys.modules[__name__]\n M.last = {k:M[k] for k in M.common}\n if pwndbg.config.show_retaddr_reg:\n M.last.update({k:M[k] for k in M.retaddr})\n", "path": "pwndbg/regs.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nReading register value from the inferior, and provides a\nstandardized interface to registers like \"sp\" and \"pc\".\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport collections\nimport ctypes\nimport 
re\nimport sys\nfrom types import ModuleType\n\nimport gdb\nimport six\n\nimport pwndbg.arch\nimport pwndbg.events\nimport pwndbg.memoize\nimport pwndbg.proc\nimport pwndbg.remote\n\ntry:\n long\nexcept NameError:\n long=int\n\n\nclass RegisterSet(object):\n #: Program counter register\n pc = None\n\n #: Stack pointer register\n stack = None\n\n #: Frame pointer register\n frame = None\n\n #: Return address register\n retaddr = None\n\n #: Flags register (eflags, cpsr)\n flags = None\n\n #: List of native-size generalp-purpose registers\n gpr = None\n\n #: List of miscellaneous, valid registers\n misc = None\n\n #: Register-based arguments for most common ABI\n regs = None\n\n #: Return value register\n retval = None\n\n #: Common registers which should be displayed in the register context\n common = None\n\n #: All valid registers\n all = None\n\n def __init__(self,\n pc='pc',\n stack='sp',\n frame=None,\n retaddr=tuple(),\n flags=dict(),\n gpr=tuple(),\n misc=tuple(),\n args=tuple(),\n retval=None):\n self.pc = pc\n self.stack = stack\n self.frame = frame\n self.retaddr = retaddr\n self.flags = flags\n self.gpr = gpr\n self.misc = misc\n self.args = args\n self.retval = retval\n\n # In 'common', we don't want to lose the ordering of:\n self.common = []\n for reg in gpr + (frame, stack, pc) + tuple(flags):\n if reg and reg not in self.common:\n self.common.append(reg)\n\n self.all = set(i for i in misc) | set(flags) | set(self.retaddr) | set(self.common)\n self.all -= {None}\n\n def __iter__(self):\n for r in self.all:\n yield r\n\narm = RegisterSet( retaddr = ('lr',),\n flags = {'cpsr':{}},\n gpr = tuple('r%i' % i for i in range(13)),\n args = ('r0','r1','r2','r3'),\n retval = 'r0')\n\naarch64 = RegisterSet( retaddr = ('lr',),\n flags = {'cpsr':{}},\n frame = 'x29',\n gpr = tuple('x%i' % i for i in range(29)),\n misc = tuple('w%i' % i for i in range(29)),\n args = ('x0','x1','x2','x3'),\n retval = 'x0')\n\nx86flags = {'eflags': {\n 'CF': 0,\n 'PF': 2,\n 'AF': 4,\n 'ZF': 6,\n 'SF': 7,\n 'IF': 9,\n 'DF': 10,\n 'OF': 11,\n}}\n\namd64 = RegisterSet(pc = 'rip',\n stack = 'rsp',\n frame = 'rbp',\n flags = x86flags,\n gpr = ('rax','rbx','rcx','rdx','rdi','rsi',\n 'r8', 'r9', 'r10','r11','r12',\n 'r13','r14','r15'),\n misc = ('cs','ss','ds','es','fs','gs',\n 'fsbase', 'gsbase',\n 'ax','ah','al',\n 'bx','bh','bl',\n 'cx','ch','cl',\n 'dx','dh','dl',\n 'dil','sil','spl','bpl',\n 'di','si','bp','sp','ip'),\n args = ('rdi','rsi','rdx','rcx','r8','r9'),\n retval = 'rax')\n\ni386 = RegisterSet( pc = 'eip',\n stack = 'esp',\n frame = 'ebp',\n flags = x86flags,\n gpr = ('eax','ebx','ecx','edx','edi','esi'),\n misc = ('cs','ss','ds','es','fs','gs',\n 'fsbase', 'gsbase',\n 'ax','ah','al',\n 'bx','bh','bl',\n 'cx','ch','cl',\n 'dx','dh','dl',\n 'dil','sil','spl','bpl',\n 'di','si','bp','sp','ip'),\n retval = 'eax')\n\n\n# http://math-atlas.sourceforge.net/devel/assembly/elfspec_ppc.pdf\n# r0 Volatile register which may be modified during function linkage\n# r1 Stack frame pointer, always valid\n# r2 System-reserved register (points at GOT)\n# r3-r4 Volatile registers used for parameter passing and return values\n# r5-r10 Volatile registers used for parameter passing\n# r11-r12 Volatile registers which may be modified during function linkage\n# r13 Small data area pointer register (points to TLS)\n# r14-r30 Registers used for local variables\n# r31 Used for local variables or \"environment pointers\"\npowerpc = RegisterSet( retaddr = ('lr','r0'),\n flags = {'msr':{},'xer':{}},\n gpr = tuple('r%i' % i for i 
in range(3,32)),\n misc = ('cr','lr','r2'),\n args = tuple('r%i' for i in range(3,11)),\n retval = 'r3')\n\n# http://people.cs.clemson.edu/~mark/sparc/sparc_arch_desc.txt\n# http://people.cs.clemson.edu/~mark/subroutines/sparc.html\n# https://www.utdallas.edu/~edsha/security/sparcoverflow.htm\n#\n# http://people.cs.clemson.edu/~mark/sparc/assembly.txt\n# ____________________________________\n# %g0 == %r0 (always zero) \\\n# %g1 == %r1 | g stands for global\n# ... |\n# %g7 == %r7 |\n# ____________________________________/\n# %o0 == %r8 \\\n# ... | o stands for output (note: not 0)\n# %o6 == %r14 == %sp (stack ptr) |\n# %o7 == %r15 == for return aaddress |\n# ____________________________________/\n# %l0 == %r16 \\\n# ... | l stands for local (note: not 1)\n# %l7 == %r23 |\n# ____________________________________/\n# %i0 == %r24 \\\n# ... | i stands for input\n# %i6 == %r30 == %fp (frame ptr) |\n# %i7 == %r31 == for return address |\n# ____________________________________/\n\nsparc_gp = tuple(['g%i' % i for i in range(1,8)]\n +['o%i' % i for i in range(0,6)]\n +['l%i' % i for i in range(0,8)]\n +['i%i' % i for i in range(0,6)])\nsparc = RegisterSet(stack = 'o6',\n frame = 'i6',\n retaddr = ('o7',),\n flags = {'psr':{}},\n gpr = sparc_gp,\n args = ('i0','i1','i2','i3','i4','i5'),\n retval = 'o0')\n\n\n# http://logos.cs.uic.edu/366/notes/mips%20quick%20tutorial.htm\n# r0 => zero\n# r1 => temporary\n# r2-r3 => values\n# r4-r7 => arguments\n# r8-r15 => temporary\n# r16-r23 => saved values\n# r24-r25 => temporary\n# r26-r27 => interrupt/trap handler\n# r28 => global pointer\n# r29 => stack pointer\n# r30 => frame pointer\n# r31 => return address\nmips = RegisterSet( frame = 'fp',\n retaddr = ('ra',),\n gpr = ('v0','v1','a0','a1','a2','a3') \\\n + tuple('t%i' % i for i in range(10)) \\\n + tuple('s%i' % i for i in range(9)),\n args = ('a0','a1','a2','a3'),\n retval = 'v0')\n\narch_to_regs = {\n 'i386': i386,\n 'x86-64': amd64,\n 'mips': mips,\n 'sparc': sparc,\n 'arm': arm,\n 'aarch64': aarch64,\n 'powerpc': powerpc,\n}\n\[email protected]\ndef gdb77_get_register(name):\n return gdb.parse_and_eval('$' + name)\n\[email protected]\ndef gdb79_get_register(name):\n return gdb.newest_frame().read_register(name)\n\ntry:\n gdb.Frame.read_register\n get_register = gdb79_get_register\nexcept AttributeError:\n get_register = gdb77_get_register\n\n\n# We need to manually make some ptrace calls to get fs/gs bases on Intel\nPTRACE_ARCH_PRCTL = 30\nARCH_GET_FS = 0x1003\nARCH_GET_GS = 0x1004\n\nclass module(ModuleType):\n last = {}\n\n @pwndbg.memoize.reset_on_stop\n @pwndbg.memoize.reset_on_prompt\n def __getattr__(self, attr):\n attr = attr.lstrip('$')\n try:\n # Seriously, gdb? Only accepts uint32.\n if 'eflags' in attr:\n value = gdb77_get_register(attr)\n value = value.cast(pwndbg.typeinfo.uint32)\n else:\n value = get_register(attr)\n value = value.cast(pwndbg.typeinfo.ptrdiff)\n\n value = int(value)\n return value & pwndbg.arch.ptrmask\n except (ValueError, gdb.error):\n return None\n\n @pwndbg.memoize.reset_on_stop\n def __getitem__(self, item):\n if isinstance(item, six.integer_types):\n return arch_to_regs[pwndbg.arch.current][item]\n\n if not isinstance(item, six.string_types):\n print(\"Unknown register type: %r\" % (item))\n import pdb, traceback\n traceback.print_stack()\n pdb.set_trace()\n return None\n\n # e.g. 
if we're looking for register \"$rax\", turn it into \"rax\"\n item = item.lstrip('$')\n item = getattr(self, item.lower())\n\n if isinstance(item, six.integer_types):\n return int(item) & pwndbg.arch.ptrmask\n\n return item\n\n def __iter__(self):\n regs = set(arch_to_regs[pwndbg.arch.current]) | set(['pc','sp'])\n for item in regs:\n yield item\n\n @property\n def current(self):\n return arch_to_regs[pwndbg.arch.current]\n\n @property\n def gpr(self):\n return arch_to_regs[pwndbg.arch.current].gpr\n\n @property\n def common(self):\n return arch_to_regs[pwndbg.arch.current].common\n\n @property\n def frame(self):\n return arch_to_regs[pwndbg.arch.current].frame\n\n @property\n def retaddr(self):\n return arch_to_regs[pwndbg.arch.current].retaddr\n\n @property\n def flags(self):\n return arch_to_regs[pwndbg.arch.current].flags\n\n @property\n def stack(self):\n return arch_to_regs[pwndbg.arch.current].stack\n\n @property\n def retval(self):\n return arch_to_regs[pwndbg.arch.current].retval\n\n @property\n def all(self):\n regs = arch_to_regs[pwndbg.arch.current]\n retval = []\n for regset in (regs.pc, regs.stack, regs.frame, regs.retaddr, regs.flags, regs.gpr, regs.misc):\n if regset is None:\n continue\n elif isinstance(regset, (list, tuple)):\n retval.extend(regset)\n elif isinstance(regset, dict):\n retval.extend(regset.keys())\n else:\n retval.append(regset)\n return retval\n\n def fix(self, expression):\n for regname in set(self.all + ['sp','pc']):\n expression = re.sub(r'\\$?\\b%s\\b' % regname, r'$'+regname, expression)\n return expression\n\n def items(self):\n for regname in self.all:\n yield regname, self[regname]\n\n arch_to_regs = arch_to_regs\n\n @property\n def changed(self):\n delta = []\n for reg, value in self.last.items():\n if self[reg] != value:\n delta.append(reg)\n return delta\n\n @property\n @pwndbg.memoize.reset_on_stop\n def fsbase(self):\n return self._fs_gs_helper(ARCH_GET_FS)\n\n @property\n @pwndbg.memoize.reset_on_stop\n def gsbase(self):\n return self._fs_gs_helper(ARCH_GET_GS)\n\n @pwndbg.memoize.reset_on_stop\n def _fs_gs_helper(self, which):\n \"\"\"Supports fetching based on segmented addressing, a la fs:[0x30].\n\n Requires ptrace'ing the child directly.\"\"\"\n\n # We can't really do anything if the process is remote.\n if pwndbg.remote.is_remote(): return 0\n\n # Use the lightweight process ID\n pid, lwpid, tid = gdb.selected_thread().ptid\n\n # Get the register\n ppvoid = ctypes.POINTER(ctypes.c_void_p)\n value = ppvoid(ctypes.c_void_p())\n value.contents.value = 0\n\n libc = ctypes.CDLL('libc.so.6')\n result = libc.ptrace(PTRACE_ARCH_PRCTL,\n lwpid,\n value,\n which)\n\n if result == 0:\n return (value.contents.value or 0) & pwndbg.arch.ptrmask\n\n return 0\n\n def __repr__(self):\n return ('<module pwndbg.regs>')\n\n# To prevent garbage collection\ntether = sys.modules[__name__]\nsys.modules[__name__] = module(__name__, '')\n\n\[email protected]\[email protected]\ndef update_last():\n M = sys.modules[__name__]\n M.last = {k:M[k] for k in M.common}\n if pwndbg.config.show_retaddr_reg:\n M.last.update({k:M[k] for k in M.retaddr})\n", "path": "pwndbg/regs.py"}]} |
gh_patches_debug_81 | rasdani/github-patches | git_diff | zulip__zulip-13067 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean up `update-locked-requirements` and `requirements.in` files to remove `-e` hackery.
It looks like https://github.com/jazzband/pip-tools/pull/807 was included in the latest `pip-tools` release 12 days ago. I think this may mean we can get rid of our semantically incorrect usage of `-e` in our requirements files, which in turn may mean we can remove most of the messy code in `tools/update-locked-requirements` related to hackily removing the `-e` lines.
See `compile_requirements` in that file for details.
My guess is that this means if we upgrade pip-tools, we can delete 50% of the code in `update-locked-requirements` and clean up our `requirements.in` files to not use `-e`.
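For illustration only, the kind of `requirements.in` entry this affects looks roughly like the following (the package name, URL, and commit are placeholders, not actual lines from Zulip's requirements files):
```
# Old workaround: flag a Git/URL dependency as "editable" so pip-compile would
# accept it; tools/update-locked-requirements then hackily stripped the -e again.
-e git+https://github.com/example/somepkg.git@<commit>#egg=somepkg

# With a pip-tools release that includes jazzband/pip-tools#807, the same pin
# can be expressed as a plain, non-editable URL requirement:
git+https://github.com/example/somepkg.git@<commit>#egg=somepkg
```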
@hackerkid this might be a good project for you.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `version.py`
Content:
```
1 import os
2
3 ZULIP_VERSION = "2.0.4+git"
4 # Add information on number of commits and commit hash to version, if available
5 zulip_git_version_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'zulip-git-version')
6 if os.path.exists(zulip_git_version_file):
7 with open(zulip_git_version_file) as f:
8 version = f.read().strip()
9 if version:
10 ZULIP_VERSION = version
11
12 LATEST_MAJOR_VERSION = "2.0"
13 LATEST_RELEASE_VERSION = "2.0.4"
14 LATEST_RELEASE_ANNOUNCEMENT = "https://blog.zulip.org/2019/03/01/zulip-2-0-released/"
15
16 # Bump the minor PROVISION_VERSION to indicate that folks should provision
17 # only when going from an old version of the code to a newer version. Bump
18 # the major version to indicate that folks should provision in both
19 # directions.
20
21 # Typically,
22 # * adding a dependency only requires a minor version bump;
23 # * removing a dependency requires a major version bump;
24 # * upgrading a dependency requires a major version bump, unless the
25 # upgraded dependency is backwards compatible with all of our
26 # historical commits sharing the same major version, in which case a
27 # minor version bump suffices.
28
29 PROVISION_VERSION = '49.1'
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -26,4 +26,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = '49.1'
+PROVISION_VERSION = '49.2'
| {"golden_diff": "diff --git a/version.py b/version.py\n--- a/version.py\n+++ b/version.py\n@@ -26,4 +26,4 @@\n # historical commits sharing the same major version, in which case a\n # minor version bump suffices.\n \n-PROVISION_VERSION = '49.1'\n+PROVISION_VERSION = '49.2'\n", "issue": "Clean up `update-locked-requirements` and `requirements.in` files to remove `-e` hackery.\nIt looks like https://github.com/jazzband/pip-tools/pull/807 was included in the latest `pip-tools` release 12 days ago. I think this may mean we can get rid of our semantically incorrect usage of `-e` in our requirements files, which in turn may mean we can remove most of the messy code in `tools/update-locked-requirements` related to hackily removing the `-e` lines. \r\n See `compile_requirements` in that file for details. \r\n\r\nMy guess is that this means if we upgrade pip-tools, we can delete 50% of the code in `update-locked-requirements` and clean up our `requirements.in` files to not use `-e`. \r\n\r\n@hackerkid this might be a good project for you.\nClean up `update-locked-requirements` and `requirements.in` files to remove `-e` hackery.\nIt looks like https://github.com/jazzband/pip-tools/pull/807 was included in the latest `pip-tools` release 12 days ago. I think this may mean we can get rid of our semantically incorrect usage of `-e` in our requirements files, which in turn may mean we can remove most of the messy code in `tools/update-locked-requirements` related to hackily removing the `-e` lines. \r\n See `compile_requirements` in that file for details. \r\n\r\nMy guess is that this means if we upgrade pip-tools, we can delete 50% of the code in `update-locked-requirements` and clean up our `requirements.in` files to not use `-e`. \r\n\r\n@hackerkid this might be a good project for you.\n", "before_files": [{"content": "import os\n\nZULIP_VERSION = \"2.0.4+git\"\n# Add information on number of commits and commit hash to version, if available\nzulip_git_version_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'zulip-git-version')\nif os.path.exists(zulip_git_version_file):\n with open(zulip_git_version_file) as f:\n version = f.read().strip()\n if version:\n ZULIP_VERSION = version\n\nLATEST_MAJOR_VERSION = \"2.0\"\nLATEST_RELEASE_VERSION = \"2.0.4\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.org/2019/03/01/zulip-2-0-released/\"\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. 
Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically,\n# * adding a dependency only requires a minor version bump;\n# * removing a dependency requires a major version bump;\n# * upgrading a dependency requires a major version bump, unless the\n# upgraded dependency is backwards compatible with all of our\n# historical commits sharing the same major version, in which case a\n# minor version bump suffices.\n\nPROVISION_VERSION = '49.1'\n", "path": "version.py"}], "after_files": [{"content": "import os\n\nZULIP_VERSION = \"2.0.4+git\"\n# Add information on number of commits and commit hash to version, if available\nzulip_git_version_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'zulip-git-version')\nif os.path.exists(zulip_git_version_file):\n with open(zulip_git_version_file) as f:\n version = f.read().strip()\n if version:\n ZULIP_VERSION = version\n\nLATEST_MAJOR_VERSION = \"2.0\"\nLATEST_RELEASE_VERSION = \"2.0.4\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.org/2019/03/01/zulip-2-0-released/\"\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically,\n# * adding a dependency only requires a minor version bump;\n# * removing a dependency requires a major version bump;\n# * upgrading a dependency requires a major version bump, unless the\n# upgraded dependency is backwards compatible with all of our\n# historical commits sharing the same major version, in which case a\n# minor version bump suffices.\n\nPROVISION_VERSION = '49.2'\n", "path": "version.py"}]} |
gh_patches_debug_82 | rasdani/github-patches | git_diff | spyder-ide__spyder-8896 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
spyder 3.3.3 icon theme Spyder 3 problem with PyQt 5.12
## Problem Description
After updating to Spyder 3.3.3 (on Linux, with Python 3.6.7 64-bit | Qt 5.12.1 | PyQt5 5.12), the "Spyder 3" icon theme stopped working (probably because of the PyQt upgrade that ships with this version). Only the "Spyder 2" icon theme still works.
Below is how the Spyder 3 icon theme looks:

After reverting to PyQt 5.9.2, the Spyder 3 icon set works again.
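A quick way to confirm which versions are in play (a minimal sketch; it assumes QtAwesome and PyQt5 import cleanly in the environment Spyder runs from, since the Spyder 3 icon theme draws its icons through QtAwesome):

```python
# Print the versions of the pieces involved in icon rendering.
import qtawesome
from PyQt5.QtCore import PYQT_VERSION_STR, QT_VERSION_STR

print("QtAwesome:", qtawesome.__version__)
print("PyQt5:    ", PYQT_VERSION_STR)
print("Qt:       ", QT_VERSION_STR)
```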
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright © Spyder Project Contributors
4 # Licensed under the terms of the MIT License
5 # (see spyder/__init__.py for details)
6
7 """
8 Spyder
9 ======
10
11 The Scientific Python Development Environment
12
13 Spyder is a powerful scientific environment written in Python, for Python,
14 and designed by and for scientists, engineers and data analysts.
15
16 It features a unique combination of the advanced editing, analysis, debugging
17 and profiling functionality of a comprehensive development tool with the data
18 exploration, interactive execution, deep inspection and beautiful visualization
19 capabilities of a scientific package.
20 """
21
22 from __future__ import print_function
23
24 import os
25 import os.path as osp
26 import subprocess
27 import sys
28 import shutil
29
30 from distutils.core import setup
31 from distutils.command.install_data import install_data
32
33
34 #==============================================================================
35 # Check for Python 3
36 #==============================================================================
37 PY3 = sys.version_info[0] == 3
38
39
40 #==============================================================================
41 # Minimal Python version sanity check
42 # Taken from the notebook setup.py -- Modified BSD License
43 #==============================================================================
44 v = sys.version_info
45 if v[:2] < (2, 7) or (v[0] >= 3 and v[:2] < (3, 4)):
46 error = "ERROR: Spyder requires Python version 2.7 or 3.4 and above."
47 print(error, file=sys.stderr)
48 sys.exit(1)
49
50
51 #==============================================================================
52 # Constants
53 #==============================================================================
54 NAME = 'spyder'
55 LIBNAME = 'spyder'
56 from spyder import __version__, __website_url__ #analysis:ignore
57
58
59 #==============================================================================
60 # Auxiliary functions
61 #==============================================================================
62 def get_package_data(name, extlist):
63 """Return data files for package *name* with extensions in *extlist*"""
64 flist = []
65 # Workaround to replace os.path.relpath (not available until Python 2.6):
66 offset = len(name)+len(os.pathsep)
67 for dirpath, _dirnames, filenames in os.walk(name):
68 for fname in filenames:
69 if not fname.startswith('.') and osp.splitext(fname)[1] in extlist:
70 flist.append(osp.join(dirpath, fname)[offset:])
71 return flist
72
73
74 def get_subpackages(name):
75 """Return subpackages of package *name*"""
76 splist = []
77 for dirpath, _dirnames, _filenames in os.walk(name):
78 if osp.isfile(osp.join(dirpath, '__init__.py')):
79 splist.append(".".join(dirpath.split(os.sep)))
80 return splist
81
82
83 def get_data_files():
84 """Return data_files in a platform dependent manner"""
85 if sys.platform.startswith('linux'):
86 if PY3:
87 data_files = [('share/applications', ['scripts/spyder3.desktop']),
88 ('share/icons', ['img_src/spyder3.png']),
89 ('share/metainfo', ['scripts/spyder3.appdata.xml'])]
90 else:
91 data_files = [('share/applications', ['scripts/spyder.desktop']),
92 ('share/icons', ['img_src/spyder.png'])]
93 elif os.name == 'nt':
94 data_files = [('scripts', ['img_src/spyder.ico',
95 'img_src/spyder_reset.ico'])]
96 else:
97 data_files = []
98 return data_files
99
100
101 def get_packages():
102 """Return package list"""
103 packages = (
104 get_subpackages(LIBNAME)
105 + get_subpackages('spyder_breakpoints')
106 + get_subpackages('spyder_profiler')
107 + get_subpackages('spyder_pylint')
108 + get_subpackages('spyder_io_dcm')
109 + get_subpackages('spyder_io_hdf5')
110 )
111 return packages
112
113
114 #==============================================================================
115 # Make Linux detect Spyder desktop file
116 #==============================================================================
117 class MyInstallData(install_data):
118 def run(self):
119 install_data.run(self)
120 if sys.platform.startswith('linux'):
121 try:
122 subprocess.call(['update-desktop-database'])
123 except:
124 print("ERROR: unable to update desktop database",
125 file=sys.stderr)
126 CMDCLASS = {'install_data': MyInstallData}
127
128
129 #==============================================================================
130 # Main scripts
131 #==============================================================================
132 # NOTE: the '[...]_win_post_install.py' script is installed even on non-Windows
133 # platforms due to a bug in pip installation process (see Issue 1158)
134 SCRIPTS = ['%s_win_post_install.py' % NAME]
135 if PY3 and sys.platform.startswith('linux'):
136 SCRIPTS.append('spyder3')
137 else:
138 SCRIPTS.append('spyder')
139
140
141 #==============================================================================
142 # Files added to the package
143 #==============================================================================
144 EXTLIST = ['.mo', '.svg', '.png', '.css', '.html', '.js', '.chm', '.ini',
145 '.txt', '.rst', '.qss', '.ttf', '.json', '.c', '.cpp', '.java',
146 '.md', '.R', '.csv', '.pyx', '.ipynb', '.xml']
147 if os.name == 'nt':
148 SCRIPTS += ['spyder.bat']
149 EXTLIST += ['.ico']
150
151
152 #==============================================================================
153 # Setup arguments
154 #==============================================================================
155 setup_args = dict(
156 name=NAME,
157 version=__version__,
158 description='The Scientific Python Development Environment',
159 long_description=(
160 """Spyder is a powerful scientific environment written in Python, for Python,
161 and designed by and for scientists, engineers and data analysts.
162 It features a unique combination of the advanced editing, analysis, debugging
163 and profiling functionality of a comprehensive development tool with the data
164 exploration, interactive execution, deep inspection and beautiful visualization
165 capabilities of a scientific package.\n
166 Furthermore, Spyder offers built-in integration with many popular
167 scientific packages, including NumPy, SciPy, Pandas, IPython, QtConsole,
168 Matplotlib, SymPy, and more.\n
169 Beyond its many built-in features, Spyder's abilities can be extended even
170 further via first- and third-party plugins.\n
171 Spyder can also be used as a PyQt5 extension library, allowing you to build
172 upon its functionality and embed its components, such as the interactive
173 console or advanced editor, in your own software.
174 """),
175 download_url=__website_url__ + "#fh5co-download",
176 author="The Spyder Project Contributors",
177 author_email="[email protected]",
178 url=__website_url__,
179 license='MIT',
180 keywords='PyQt5 editor console widgets IDE science data analysis IPython',
181 platforms=["Windows", "Linux", "Mac OS-X"],
182 packages=get_packages(),
183 package_data={LIBNAME: get_package_data(LIBNAME, EXTLIST),
184 'spyder_breakpoints': get_package_data('spyder_breakpoints',
185 EXTLIST),
186 'spyder_profiler': get_package_data('spyder_profiler',
187 EXTLIST),
188 'spyder_pylint': get_package_data('spyder_pylint',
189 EXTLIST),
190 'spyder_io_dcm': get_package_data('spyder_io_dcm',
191 EXTLIST),
192 'spyder_io_hdf5': get_package_data('spyder_io_hdf5',
193 EXTLIST),
194 },
195 scripts=[osp.join('scripts', fname) for fname in SCRIPTS],
196 data_files=get_data_files(),
197 classifiers=['License :: OSI Approved :: MIT License',
198 'Operating System :: MacOS',
199 'Operating System :: Microsoft :: Windows',
200 'Operating System :: POSIX :: Linux',
201 'Programming Language :: Python :: 2',
202 'Programming Language :: Python :: 2.7',
203 'Programming Language :: Python :: 3',
204 'Programming Language :: Python :: 3.4',
205 'Programming Language :: Python :: 3.5',
206 'Programming Language :: Python :: 3.6',
207 'Programming Language :: Python :: 3.7',
208 'Development Status :: 5 - Production/Stable',
209 'Intended Audience :: Education',
210 'Intended Audience :: Science/Research',
211 'Intended Audience :: Developers',
212 'Topic :: Scientific/Engineering',
213 'Topic :: Software Development :: Widget Sets'],
214 cmdclass=CMDCLASS)
215
216
217 #==============================================================================
218 # Setuptools deps
219 #==============================================================================
220 if any(arg == 'bdist_wheel' for arg in sys.argv):
221 import setuptools # analysis:ignore
222
223 install_requires = [
224 'cloudpickle',
225 'rope>=0.10.5',
226 'jedi>=0.9.0',
227 'pyflakes',
228 'pygments>=2.0',
229 'qtconsole>=4.2.0',
230 'nbconvert',
231 'sphinx',
232 'pycodestyle',
233 'pylint',
234 'psutil',
235 'qtawesome>=0.4.1',
236 'qtpy>=1.5.0',
237 'pickleshare',
238 'pyzmq',
239 'chardet>=2.0.0',
240 'numpydoc',
241 'spyder-kernels>=0.4.2,<1.0',
242 # Don't require keyring for Python 2 and Linux
243 # because it depends on system packages
244 'keyring;sys_platform!="linux2"',
245 # Packages for pyqt5 are only available in
246 # Python 3
247 'pyqt5<5.13;python_version>="3"',
248 # pyqt5 5.12 split WebEngine into the
249 # pyqtwebengine module
250 'pyqtwebengine<5.13'
251 ]
252
253 extras_require = {
254 'test:python_version == "2.7"': ['mock'],
255 'test': ['pytest<4.1',
256 'pytest-qt',
257 'pytest-mock',
258 'pytest-cov',
259 'pytest-xvfb',
260 'mock',
261 'flaky',
262 'pandas',
263 'scipy',
264 'sympy',
265 'pillow',
266 'matplotlib',
267 'cython'],
268 }
269
270 if 'setuptools' in sys.modules:
271 setup_args['install_requires'] = install_requires
272 setup_args['extras_require'] = extras_require
273
274 setup_args['entry_points'] = {
275 'gui_scripts': [
276 '{} = spyder.app.start:main'.format(
277 'spyder3' if PY3 else 'spyder')
278 ]
279 }
280
281 setup_args.pop('scripts', None)
282
283
284 #==============================================================================
285 # Main setup
286 #==============================================================================
287 setup(**setup_args)
288
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -232,7 +232,7 @@
'pycodestyle',
'pylint',
'psutil',
- 'qtawesome>=0.4.1',
+ 'qtawesome>=0.5.7',
'qtpy>=1.5.0',
'pickleshare',
'pyzmq',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -232,7 +232,7 @@\n 'pycodestyle',\n 'pylint',\n 'psutil',\n- 'qtawesome>=0.4.1',\n+ 'qtawesome>=0.5.7',\n 'qtpy>=1.5.0',\n 'pickleshare',\n 'pyzmq',\n", "issue": "spyder 3.3.3 icon theme Spyder 3 problem with PyQt 5.12\n## Problem Description\r\nAfter updating to Spyder 3.3.3 (on Linux, with Python 3.6.7 64-bit | | Qt 5.12.1 | PyQt5 5.12 ) spyder icon theme \"Spyder 3\" stopped working (because of coming with this version PyQt upgrade probably) . Only the \"Spyder 2\" icon theme is working.\r\nBelow the look of Spyder3 icon theme\r\n\r\n\r\nAfter reverting to PyQt 5.9.2 the icon set Spyder3 is working again.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n\"\"\"\nSpyder\n======\n\nThe Scientific Python Development Environment\n\nSpyder is a powerful scientific environment written in Python, for Python,\nand designed by and for scientists, engineers and data analysts.\n\nIt features a unique combination of the advanced editing, analysis, debugging\nand profiling functionality of a comprehensive development tool with the data\nexploration, interactive execution, deep inspection and beautiful visualization\ncapabilities of a scientific package.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport os\nimport os.path as osp\nimport subprocess\nimport sys\nimport shutil\n\nfrom distutils.core import setup\nfrom distutils.command.install_data import install_data\n\n\n#==============================================================================\n# Check for Python 3\n#==============================================================================\nPY3 = sys.version_info[0] == 3\n\n\n#==============================================================================\n# Minimal Python version sanity check\n# Taken from the notebook setup.py -- Modified BSD License\n#==============================================================================\nv = sys.version_info\nif v[:2] < (2, 7) or (v[0] >= 3 and v[:2] < (3, 4)):\n error = \"ERROR: Spyder requires Python version 2.7 or 3.4 and above.\"\n print(error, file=sys.stderr)\n sys.exit(1)\n\n\n#==============================================================================\n# Constants\n#==============================================================================\nNAME = 'spyder'\nLIBNAME = 'spyder'\nfrom spyder import __version__, __website_url__ #analysis:ignore\n\n\n#==============================================================================\n# Auxiliary functions\n#==============================================================================\ndef get_package_data(name, extlist):\n \"\"\"Return data files for package *name* with extensions in *extlist*\"\"\"\n flist = []\n # Workaround to replace os.path.relpath (not available until Python 2.6):\n offset = len(name)+len(os.pathsep)\n for dirpath, _dirnames, filenames in os.walk(name):\n for fname in filenames:\n if not fname.startswith('.') and osp.splitext(fname)[1] in extlist:\n flist.append(osp.join(dirpath, fname)[offset:])\n return flist\n\n\ndef get_subpackages(name):\n \"\"\"Return subpackages of package *name*\"\"\"\n splist = []\n for dirpath, _dirnames, _filenames in os.walk(name):\n if osp.isfile(osp.join(dirpath, '__init__.py')):\n splist.append(\".\".join(dirpath.split(os.sep)))\n return splist\n\n\ndef get_data_files():\n \"\"\"Return 
data_files in a platform dependent manner\"\"\"\n if sys.platform.startswith('linux'):\n if PY3:\n data_files = [('share/applications', ['scripts/spyder3.desktop']),\n ('share/icons', ['img_src/spyder3.png']),\n ('share/metainfo', ['scripts/spyder3.appdata.xml'])]\n else:\n data_files = [('share/applications', ['scripts/spyder.desktop']),\n ('share/icons', ['img_src/spyder.png'])]\n elif os.name == 'nt':\n data_files = [('scripts', ['img_src/spyder.ico',\n 'img_src/spyder_reset.ico'])]\n else:\n data_files = []\n return data_files\n\n\ndef get_packages():\n \"\"\"Return package list\"\"\"\n packages = (\n get_subpackages(LIBNAME)\n + get_subpackages('spyder_breakpoints')\n + get_subpackages('spyder_profiler')\n + get_subpackages('spyder_pylint')\n + get_subpackages('spyder_io_dcm')\n + get_subpackages('spyder_io_hdf5')\n )\n return packages\n\n\n#==============================================================================\n# Make Linux detect Spyder desktop file\n#==============================================================================\nclass MyInstallData(install_data):\n def run(self):\n install_data.run(self)\n if sys.platform.startswith('linux'):\n try:\n subprocess.call(['update-desktop-database'])\n except:\n print(\"ERROR: unable to update desktop database\",\n file=sys.stderr)\nCMDCLASS = {'install_data': MyInstallData}\n\n\n#==============================================================================\n# Main scripts\n#==============================================================================\n# NOTE: the '[...]_win_post_install.py' script is installed even on non-Windows\n# platforms due to a bug in pip installation process (see Issue 1158)\nSCRIPTS = ['%s_win_post_install.py' % NAME]\nif PY3 and sys.platform.startswith('linux'):\n SCRIPTS.append('spyder3')\nelse:\n SCRIPTS.append('spyder')\n\n\n#==============================================================================\n# Files added to the package\n#==============================================================================\nEXTLIST = ['.mo', '.svg', '.png', '.css', '.html', '.js', '.chm', '.ini',\n '.txt', '.rst', '.qss', '.ttf', '.json', '.c', '.cpp', '.java',\n '.md', '.R', '.csv', '.pyx', '.ipynb', '.xml']\nif os.name == 'nt':\n SCRIPTS += ['spyder.bat']\n EXTLIST += ['.ico']\n\n\n#==============================================================================\n# Setup arguments\n#==============================================================================\nsetup_args = dict(\n name=NAME,\n version=__version__,\n description='The Scientific Python Development Environment',\n long_description=(\n\"\"\"Spyder is a powerful scientific environment written in Python, for Python,\nand designed by and for scientists, engineers and data analysts.\nIt features a unique combination of the advanced editing, analysis, debugging\nand profiling functionality of a comprehensive development tool with the data\nexploration, interactive execution, deep inspection and beautiful visualization\ncapabilities of a scientific package.\\n\nFurthermore, Spyder offers built-in integration with many popular\nscientific packages, including NumPy, SciPy, Pandas, IPython, QtConsole,\nMatplotlib, SymPy, and more.\\n\nBeyond its many built-in features, Spyder's abilities can be extended even\nfurther via first- and third-party plugins.\\n\nSpyder can also be used as a PyQt5 extension library, allowing you to build\nupon its functionality and embed its components, such as the interactive\nconsole or advanced editor, in your own 
software.\n\"\"\"),\n download_url=__website_url__ + \"#fh5co-download\",\n author=\"The Spyder Project Contributors\",\n author_email=\"[email protected]\",\n url=__website_url__,\n license='MIT',\n keywords='PyQt5 editor console widgets IDE science data analysis IPython',\n platforms=[\"Windows\", \"Linux\", \"Mac OS-X\"],\n packages=get_packages(),\n package_data={LIBNAME: get_package_data(LIBNAME, EXTLIST),\n 'spyder_breakpoints': get_package_data('spyder_breakpoints',\n EXTLIST),\n 'spyder_profiler': get_package_data('spyder_profiler',\n EXTLIST),\n 'spyder_pylint': get_package_data('spyder_pylint',\n EXTLIST),\n 'spyder_io_dcm': get_package_data('spyder_io_dcm',\n EXTLIST),\n 'spyder_io_hdf5': get_package_data('spyder_io_hdf5',\n EXTLIST),\n },\n scripts=[osp.join('scripts', fname) for fname in SCRIPTS],\n data_files=get_data_files(),\n classifiers=['License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Software Development :: Widget Sets'],\n cmdclass=CMDCLASS)\n\n\n#==============================================================================\n# Setuptools deps\n#==============================================================================\nif any(arg == 'bdist_wheel' for arg in sys.argv):\n import setuptools # analysis:ignore\n\ninstall_requires = [\n 'cloudpickle',\n 'rope>=0.10.5',\n 'jedi>=0.9.0',\n 'pyflakes',\n 'pygments>=2.0',\n 'qtconsole>=4.2.0',\n 'nbconvert',\n 'sphinx',\n 'pycodestyle',\n 'pylint',\n 'psutil',\n 'qtawesome>=0.4.1',\n 'qtpy>=1.5.0',\n 'pickleshare',\n 'pyzmq',\n 'chardet>=2.0.0',\n 'numpydoc',\n 'spyder-kernels>=0.4.2,<1.0',\n # Don't require keyring for Python 2 and Linux\n # because it depends on system packages\n 'keyring;sys_platform!=\"linux2\"',\n # Packages for pyqt5 are only available in\n # Python 3\n 'pyqt5<5.13;python_version>=\"3\"',\n # pyqt5 5.12 split WebEngine into the\n # pyqtwebengine module\n 'pyqtwebengine<5.13'\n]\n\nextras_require = {\n 'test:python_version == \"2.7\"': ['mock'],\n 'test': ['pytest<4.1',\n 'pytest-qt',\n 'pytest-mock',\n 'pytest-cov',\n 'pytest-xvfb',\n 'mock',\n 'flaky',\n 'pandas',\n 'scipy',\n 'sympy',\n 'pillow',\n 'matplotlib',\n 'cython'],\n}\n\nif 'setuptools' in sys.modules:\n setup_args['install_requires'] = install_requires\n setup_args['extras_require'] = extras_require\n\n setup_args['entry_points'] = {\n 'gui_scripts': [\n '{} = spyder.app.start:main'.format(\n 'spyder3' if PY3 else 'spyder')\n ]\n }\n\n setup_args.pop('scripts', None)\n\n\n#==============================================================================\n# Main setup\n#==============================================================================\nsetup(**setup_args)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n\"\"\"\nSpyder\n======\n\nThe Scientific Python 
Development Environment\n\nSpyder is a powerful scientific environment written in Python, for Python,\nand designed by and for scientists, engineers and data analysts.\n\nIt features a unique combination of the advanced editing, analysis, debugging\nand profiling functionality of a comprehensive development tool with the data\nexploration, interactive execution, deep inspection and beautiful visualization\ncapabilities of a scientific package.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport os\nimport os.path as osp\nimport subprocess\nimport sys\nimport shutil\n\nfrom distutils.core import setup\nfrom distutils.command.install_data import install_data\n\n\n#==============================================================================\n# Check for Python 3\n#==============================================================================\nPY3 = sys.version_info[0] == 3\n\n\n#==============================================================================\n# Minimal Python version sanity check\n# Taken from the notebook setup.py -- Modified BSD License\n#==============================================================================\nv = sys.version_info\nif v[:2] < (2, 7) or (v[0] >= 3 and v[:2] < (3, 4)):\n error = \"ERROR: Spyder requires Python version 2.7 or 3.4 and above.\"\n print(error, file=sys.stderr)\n sys.exit(1)\n\n\n#==============================================================================\n# Constants\n#==============================================================================\nNAME = 'spyder'\nLIBNAME = 'spyder'\nfrom spyder import __version__, __website_url__ #analysis:ignore\n\n\n#==============================================================================\n# Auxiliary functions\n#==============================================================================\ndef get_package_data(name, extlist):\n \"\"\"Return data files for package *name* with extensions in *extlist*\"\"\"\n flist = []\n # Workaround to replace os.path.relpath (not available until Python 2.6):\n offset = len(name)+len(os.pathsep)\n for dirpath, _dirnames, filenames in os.walk(name):\n for fname in filenames:\n if not fname.startswith('.') and osp.splitext(fname)[1] in extlist:\n flist.append(osp.join(dirpath, fname)[offset:])\n return flist\n\n\ndef get_subpackages(name):\n \"\"\"Return subpackages of package *name*\"\"\"\n splist = []\n for dirpath, _dirnames, _filenames in os.walk(name):\n if osp.isfile(osp.join(dirpath, '__init__.py')):\n splist.append(\".\".join(dirpath.split(os.sep)))\n return splist\n\n\ndef get_data_files():\n \"\"\"Return data_files in a platform dependent manner\"\"\"\n if sys.platform.startswith('linux'):\n if PY3:\n data_files = [('share/applications', ['scripts/spyder3.desktop']),\n ('share/icons', ['img_src/spyder3.png']),\n ('share/metainfo', ['scripts/spyder3.appdata.xml'])]\n else:\n data_files = [('share/applications', ['scripts/spyder.desktop']),\n ('share/icons', ['img_src/spyder.png'])]\n elif os.name == 'nt':\n data_files = [('scripts', ['img_src/spyder.ico',\n 'img_src/spyder_reset.ico'])]\n else:\n data_files = []\n return data_files\n\n\ndef get_packages():\n \"\"\"Return package list\"\"\"\n packages = (\n get_subpackages(LIBNAME)\n + get_subpackages('spyder_breakpoints')\n + get_subpackages('spyder_profiler')\n + get_subpackages('spyder_pylint')\n + get_subpackages('spyder_io_dcm')\n + get_subpackages('spyder_io_hdf5')\n )\n return packages\n\n\n#==============================================================================\n# Make Linux detect 
Spyder desktop file\n#==============================================================================\nclass MyInstallData(install_data):\n def run(self):\n install_data.run(self)\n if sys.platform.startswith('linux'):\n try:\n subprocess.call(['update-desktop-database'])\n except:\n print(\"ERROR: unable to update desktop database\",\n file=sys.stderr)\nCMDCLASS = {'install_data': MyInstallData}\n\n\n#==============================================================================\n# Main scripts\n#==============================================================================\n# NOTE: the '[...]_win_post_install.py' script is installed even on non-Windows\n# platforms due to a bug in pip installation process (see Issue 1158)\nSCRIPTS = ['%s_win_post_install.py' % NAME]\nif PY3 and sys.platform.startswith('linux'):\n SCRIPTS.append('spyder3')\nelse:\n SCRIPTS.append('spyder')\n\n\n#==============================================================================\n# Files added to the package\n#==============================================================================\nEXTLIST = ['.mo', '.svg', '.png', '.css', '.html', '.js', '.chm', '.ini',\n '.txt', '.rst', '.qss', '.ttf', '.json', '.c', '.cpp', '.java',\n '.md', '.R', '.csv', '.pyx', '.ipynb', '.xml']\nif os.name == 'nt':\n SCRIPTS += ['spyder.bat']\n EXTLIST += ['.ico']\n\n\n#==============================================================================\n# Setup arguments\n#==============================================================================\nsetup_args = dict(\n name=NAME,\n version=__version__,\n description='The Scientific Python Development Environment',\n long_description=(\n\"\"\"Spyder is a powerful scientific environment written in Python, for Python,\nand designed by and for scientists, engineers and data analysts.\nIt features a unique combination of the advanced editing, analysis, debugging\nand profiling functionality of a comprehensive development tool with the data\nexploration, interactive execution, deep inspection and beautiful visualization\ncapabilities of a scientific package.\\n\nFurthermore, Spyder offers built-in integration with many popular\nscientific packages, including NumPy, SciPy, Pandas, IPython, QtConsole,\nMatplotlib, SymPy, and more.\\n\nBeyond its many built-in features, Spyder's abilities can be extended even\nfurther via first- and third-party plugins.\\n\nSpyder can also be used as a PyQt5 extension library, allowing you to build\nupon its functionality and embed its components, such as the interactive\nconsole or advanced editor, in your own software.\n\"\"\"),\n download_url=__website_url__ + \"#fh5co-download\",\n author=\"The Spyder Project Contributors\",\n author_email=\"[email protected]\",\n url=__website_url__,\n license='MIT',\n keywords='PyQt5 editor console widgets IDE science data analysis IPython',\n platforms=[\"Windows\", \"Linux\", \"Mac OS-X\"],\n packages=get_packages(),\n package_data={LIBNAME: get_package_data(LIBNAME, EXTLIST),\n 'spyder_breakpoints': get_package_data('spyder_breakpoints',\n EXTLIST),\n 'spyder_profiler': get_package_data('spyder_profiler',\n EXTLIST),\n 'spyder_pylint': get_package_data('spyder_pylint',\n EXTLIST),\n 'spyder_io_dcm': get_package_data('spyder_io_dcm',\n EXTLIST),\n 'spyder_io_hdf5': get_package_data('spyder_io_hdf5',\n EXTLIST),\n },\n scripts=[osp.join('scripts', fname) for fname in SCRIPTS],\n data_files=get_data_files(),\n classifiers=['License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS',\n 'Operating System :: 
Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Software Development :: Widget Sets'],\n cmdclass=CMDCLASS)\n\n\n#==============================================================================\n# Setuptools deps\n#==============================================================================\nif any(arg == 'bdist_wheel' for arg in sys.argv):\n import setuptools # analysis:ignore\n\ninstall_requires = [\n 'cloudpickle',\n 'rope>=0.10.5',\n 'jedi>=0.9.0',\n 'pyflakes',\n 'pygments>=2.0',\n 'qtconsole>=4.2.0',\n 'nbconvert',\n 'sphinx',\n 'pycodestyle',\n 'pylint',\n 'psutil',\n 'qtawesome>=0.5.7',\n 'qtpy>=1.5.0',\n 'pickleshare',\n 'pyzmq',\n 'chardet>=2.0.0',\n 'numpydoc',\n 'spyder-kernels>=0.4.2,<1.0',\n # Don't require keyring for Python 2 and Linux\n # because it depends on system packages\n 'keyring;sys_platform!=\"linux2\"',\n # Packages for pyqt5 are only available in\n # Python 3\n 'pyqt5<5.13;python_version>=\"3\"',\n # pyqt5 5.12 split WebEngine into the\n # pyqtwebengine module\n 'pyqtwebengine<5.13'\n]\n\nextras_require = {\n 'test:python_version == \"2.7\"': ['mock'],\n 'test': ['pytest<4.1',\n 'pytest-qt',\n 'pytest-mock',\n 'pytest-cov',\n 'pytest-xvfb',\n 'mock',\n 'flaky',\n 'pandas',\n 'scipy',\n 'sympy',\n 'pillow',\n 'matplotlib',\n 'cython'],\n}\n\nif 'setuptools' in sys.modules:\n setup_args['install_requires'] = install_requires\n setup_args['extras_require'] = extras_require\n\n setup_args['entry_points'] = {\n 'gui_scripts': [\n '{} = spyder.app.start:main'.format(\n 'spyder3' if PY3 else 'spyder')\n ]\n }\n\n setup_args.pop('scripts', None)\n\n\n#==============================================================================\n# Main setup\n#==============================================================================\nsetup(**setup_args)\n", "path": "setup.py"}]} |
gh_patches_debug_83 | rasdani/github-patches | git_diff | saulpw__visidata-2036 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: cannot read .vds with expression column
**Small description**
Visidata cannot read back sheet(s) it saved as `.vds` if they contain an
expression column.
"KeyError: 'ExprColumn'" shows as error, resulting in a partial read.
**Expected result**
It should be able to read those files.
**Actual result with ~~screenshot~~ stacktrace**
```
Traceback (most recent call last):
File "/nix/store/z4xjb4j8i73894r2wqjvlnps9j60rjr0-visidata-2.11/lib/python3.10/site-packages/visidata/threads.py", line 198, in _toplevelTryFunc
t.status = func(*args, **kwargs)
File "/nix/store/z4xjb4j8i73894r2wqjvlnps9j60rjr0-visidata-2.11/lib/python3.10/site-packages/visidata/pyobj.py", line 26, in reload
for r in self.iterload():
File "/nix/store/z4xjb4j8i73894r2wqjvlnps9j60rjr0-visidata-2.11/lib/python3.10/site-packages/visidata/loaders/vds.py", line 76, in iterload
c = globals()[classname](d.pop('name'), sheet=self)
KeyError: 'ExprColumn'
```
**Steps to reproduce with sample data and a .vd**
Create and save a test sheet with an expression column by replaying this `cmdlog.vdj`:
```
#!vd -p
{"col": "", "row": "", "longname": "open-new", "input": "", "keystrokes": "Shift+A", "comment": "Open new empty sheet"}
{"sheet": "unnamed", "col": "A", "row": "", "longname": "type-int", "input": "", "keystrokes": "#", "comment": "set type of current column to int"}
{"sheet": "unnamed", "col": "", "row": "", "longname": "add-row", "input": "", "keystrokes": "a", "comment": "append a blank row"}
{"sheet": "unnamed", "col": "A", "row": "0", "longname": "edit-cell", "input": "2", "keystrokes": "e", "comment": "edit contents of current cell"}
{"sheet": "unnamed", "col": "A", "row": "", "longname": "addcol-expr", "input": "A*2", "keystrokes": "=", "comment": "create new column from Python expression, with column names as variables"}
{"sheet": "unnamed", "col": "", "row": "", "longname": "save-sheet", "input": "sheet.vds", "keystrokes": "Ctrl+S", "comment": "save current sheet to filename in format determined by extension (default .tsv)"}
```
This produces `sheet.vds` as follows, which seems valid:
```
#{"name": "unnamed"}
#{"name": "A", "width": 4, "height": 1, "expr": null, "keycol": 0, "formatter": "", "fmtstr": "", "voffset": 0, "hoffset": 0, "aggstr": "", "type": "int", "col": "Column"}
#{"name": "A*2", "width": 5, "height": 1, "expr": "A*2", "keycol": 0, "formatter": "", "fmtstr": "", "voffset": 0, "hoffset": 0, "aggstr": "", "type": "", "col": "ExprColumn"}
{"A": 2, "A*2": 4}
```
Quit visidata and open that file again with `vd sheet.vds`,
and observe the loading error.
**Additional context**
- visidata v2.11
- python 3.10.12
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `visidata/loaders/vds.py`
Content:
```
1 'Custom VisiData save format'
2
3 import json
4
5 from visidata import VisiData, JsonSheet, Progress, IndexSheet, SettableColumn, ItemColumn
6
7
8 NL='\n'
9
10 @VisiData.api
11 def open_vds(vd, p):
12 return VdsIndexSheet(p.name, source=p)
13
14
15 @VisiData.api
16 def save_vds(vd, p, *sheets):
17 'Save in custom VisiData format, preserving columns and their attributes.'
18
19 with p.open(mode='w', encoding='utf-8') as fp:
20 for vs in sheets:
21 # class and attrs for vs
22 d = { 'name': vs.name, }
23 fp.write('#'+json.dumps(d)+NL)
24
25 # class and attrs for each column in vs
26 for col in vs.visibleCols:
27 d = col.__getstate__()
28 if isinstance(col, SettableColumn):
29 d['col'] = 'Column'
30 else:
31 d['col'] = type(col).__name__
32 fp.write('#'+json.dumps(d)+NL)
33
34 with Progress(gerund='saving'):
35 for row in vs.iterdispvals(*vs.visibleCols, format=False):
36 d = {col.name:val for col, val in row.items()}
37 fp.write(json.dumps(d, default=str)+NL)
38
39
40 class VdsIndexSheet(IndexSheet):
41 def iterload(self):
42 vs = None
43 with self.source.open(encoding='utf-8') as fp:
44 line = fp.readline()
45 while line:
46 if line.startswith('#{'):
47 d = json.loads(line[1:])
48 if 'col' not in d:
49 vs = VdsSheet(d.pop('name'), columns=[], source=self.source, source_fpos=fp.tell())
50 yield vs
51 line = fp.readline()
52
53
54 class VdsSheet(JsonSheet):
55 def newRow(self):
56 return {} # rowdef: dict
57
58 def iterload(self):
59 self.colnames = {}
60 self.columns = []
61
62 with self.source.open(encoding='utf-8') as fp:
63 fp.seek(self.source_fpos)
64
65 # consume all metadata, create columns
66 line = fp.readline()
67 while line and line.startswith('#{'):
68 d = json.loads(line[1:])
69 if 'col' not in d:
70 raise Exception(d)
71 classname = d.pop('col')
72 if classname == 'Column':
73 classname = 'ItemColumn'
74 d['expr'] = d['name']
75
76 c = globals()[classname](d.pop('name'), sheet=self)
77 self.addColumn(c)
78 self.colnames[c.name] = c
79 for k, v in d.items():
80 setattr(c, k, v)
81
82 line = fp.readline()
83
84 while line and not line.startswith('#{'):
85 d = json.loads(line)
86 yield d
87 line = fp.readline()
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/visidata/loaders/vds.py b/visidata/loaders/vds.py
--- a/visidata/loaders/vds.py
+++ b/visidata/loaders/vds.py
@@ -2,7 +2,7 @@
import json
-from visidata import VisiData, JsonSheet, Progress, IndexSheet, SettableColumn, ItemColumn
+from visidata import VisiData, JsonSheet, Progress, IndexSheet, SettableColumn, ItemColumn, ExprColumn
NL='\n'
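
The one-line import is enough because `VdsSheet.iterload()` resolves every saved column class by name from the loader module's own namespace (`globals()[classname]`, line 76 of the listing above), so any class name that can appear in a `.vds` header must be imported into `visidata/loaders/vds.py`. A small self-contained sketch of that lookup, assuming visidata is installed and using the header row from the issue's `sheet.vds`:

```python
# Illustrative only: mimics the name-based class lookup done in VdsSheet.iterload().
from visidata import ExprColumn  # the patch adds this name to the imports in vds.py

saved_header = {"name": "A*2", "expr": "A*2", "col": "ExprColumn"}  # header line from the issue's sheet.vds

classname = saved_header.pop("col")
column_cls = globals()[classname]  # KeyError unless 'ExprColumn' exists in this module's globals
print(column_cls.__name__)         # -> ExprColumn
```

The same lookup would still fail for any other column class that gets saved but is not imported into that module, which is worth keeping in mind if more column types are ever added to the `.vds` format.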
| {"golden_diff": "diff --git a/visidata/loaders/vds.py b/visidata/loaders/vds.py\n--- a/visidata/loaders/vds.py\n+++ b/visidata/loaders/vds.py\n@@ -2,7 +2,7 @@\n \n import json\n \n-from visidata import VisiData, JsonSheet, Progress, IndexSheet, SettableColumn, ItemColumn\n+from visidata import VisiData, JsonSheet, Progress, IndexSheet, SettableColumn, ItemColumn, ExprColumn\n \n \n NL='\\n'\n", "issue": "Bug: cannot read .vds with expression column\n**Small description**\n\nVisidata cannot read back sheet(s) it saved as `.vds` if they contain an\nexpression column.\n\n\"KeyError: 'ExprColumn'\" shows as error, resulting in a partial read.\n\n\n**Expected result**\n\nIt should be able to read those files.\n\n\n**Actual result with ~~screenshot~~ stacktrace**\n\n```\nTraceback (most recent call last):\n File \"/nix/store/z4xjb4j8i73894r2wqjvlnps9j60rjr0-visidata-2.11/lib/python3.10/site-packages/visidata/threads.py\", line 198, in _toplevelTryFunc\n t.status = func(*args, **kwargs)\n File \"/nix/store/z4xjb4j8i73894r2wqjvlnps9j60rjr0-visidata-2.11/lib/python3.10/site-packages/visidata/pyobj.py\", line 26, in reload\n for r in self.iterload():\n File \"/nix/store/z4xjb4j8i73894r2wqjvlnps9j60rjr0-visidata-2.11/lib/python3.10/site-packages/visidata/loaders/vds.py\", line 76, in iterload\n c = globals()[classname](d.pop('name'), sheet=self)\nKeyError: 'ExprColumn'\n```\n\n\n**Steps to reproduce with sample data and a .vd**\n\nCreate and save some test sheet with an expr column with this `cmdlog.vdj`:\n\n```\n#!vd -p\n{\"col\": \"\", \"row\": \"\", \"longname\": \"open-new\", \"input\": \"\", \"keystrokes\": \"Shift+A\", \"comment\": \"Open new empty sheet\"}\n{\"sheet\": \"unnamed\", \"col\": \"A\", \"row\": \"\", \"longname\": \"type-int\", \"input\": \"\", \"keystrokes\": \"#\", \"comment\": \"set type of current column to int\"}\n{\"sheet\": \"unnamed\", \"col\": \"\", \"row\": \"\", \"longname\": \"add-row\", \"input\": \"\", \"keystrokes\": \"a\", \"comment\": \"append a blank row\"}\n{\"sheet\": \"unnamed\", \"col\": \"A\", \"row\": \"0\", \"longname\": \"edit-cell\", \"input\": \"2\", \"keystrokes\": \"e\", \"comment\": \"edit contents of current cell\"}\n{\"sheet\": \"unnamed\", \"col\": \"A\", \"row\": \"\", \"longname\": \"addcol-expr\", \"input\": \"A*2\", \"keystrokes\": \"=\", \"comment\": \"create new column from Python expression, with column names as variables\"}\n{\"sheet\": \"unnamed\", \"col\": \"\", \"row\": \"\", \"longname\": \"save-sheet\", \"input\": \"sheet.vds\", \"keystrokes\": \"Ctrl+S\", \"comment\": \"save current sheet to filename in format determined by extension (default .tsv)\"}\n```\n\nThis produces `sheet.vds` as follows, which seems valid:\n\n```\n#{\"name\": \"unnamed\"}\n#{\"name\": \"A\", \"width\": 4, \"height\": 1, \"expr\": null, \"keycol\": 0, \"formatter\": \"\", \"fmtstr\": \"\", \"voffset\": 0, \"hoffset\": 0, \"aggstr\": \"\", \"type\": \"int\", \"col\": \"Column\"}\n#{\"name\": \"A*2\", \"width\": 5, \"height\": 1, \"expr\": \"A*2\", \"keycol\": 0, \"formatter\": \"\", \"fmtstr\": \"\", \"voffset\": 0, \"hoffset\": 0, \"aggstr\": \"\", \"type\": \"\", \"col\": \"ExprColumn\"}\n{\"A\": 2, \"A*2\": 4}\n```\n\nQuit visidata and open that file again with `vd sheet.vds`,\nand observe the loading error.\n\n\n**Additional context**\n\n- visidata v2.11\n- python 3.10.12\n\n", "before_files": [{"content": "'Custom VisiData save format'\n\nimport json\n\nfrom visidata import VisiData, JsonSheet, Progress, IndexSheet, SettableColumn, 
ItemColumn\n\n\nNL='\\n'\n\[email protected]\ndef open_vds(vd, p):\n return VdsIndexSheet(p.name, source=p)\n\n\[email protected]\ndef save_vds(vd, p, *sheets):\n 'Save in custom VisiData format, preserving columns and their attributes.'\n\n with p.open(mode='w', encoding='utf-8') as fp:\n for vs in sheets:\n # class and attrs for vs\n d = { 'name': vs.name, }\n fp.write('#'+json.dumps(d)+NL)\n\n # class and attrs for each column in vs\n for col in vs.visibleCols:\n d = col.__getstate__()\n if isinstance(col, SettableColumn):\n d['col'] = 'Column'\n else:\n d['col'] = type(col).__name__\n fp.write('#'+json.dumps(d)+NL)\n\n with Progress(gerund='saving'):\n for row in vs.iterdispvals(*vs.visibleCols, format=False):\n d = {col.name:val for col, val in row.items()}\n fp.write(json.dumps(d, default=str)+NL)\n\n\nclass VdsIndexSheet(IndexSheet):\n def iterload(self):\n vs = None\n with self.source.open(encoding='utf-8') as fp:\n line = fp.readline()\n while line:\n if line.startswith('#{'):\n d = json.loads(line[1:])\n if 'col' not in d:\n vs = VdsSheet(d.pop('name'), columns=[], source=self.source, source_fpos=fp.tell())\n yield vs\n line = fp.readline()\n\n\nclass VdsSheet(JsonSheet):\n def newRow(self):\n return {} # rowdef: dict\n\n def iterload(self):\n self.colnames = {}\n self.columns = []\n\n with self.source.open(encoding='utf-8') as fp:\n fp.seek(self.source_fpos)\n\n # consume all metadata, create columns\n line = fp.readline()\n while line and line.startswith('#{'):\n d = json.loads(line[1:])\n if 'col' not in d:\n raise Exception(d)\n classname = d.pop('col')\n if classname == 'Column':\n classname = 'ItemColumn'\n d['expr'] = d['name']\n\n c = globals()[classname](d.pop('name'), sheet=self)\n self.addColumn(c)\n self.colnames[c.name] = c\n for k, v in d.items():\n setattr(c, k, v)\n\n line = fp.readline()\n\n while line and not line.startswith('#{'):\n d = json.loads(line)\n yield d\n line = fp.readline()\n", "path": "visidata/loaders/vds.py"}], "after_files": [{"content": "'Custom VisiData save format'\n\nimport json\n\nfrom visidata import VisiData, JsonSheet, Progress, IndexSheet, SettableColumn, ItemColumn, ExprColumn\n\n\nNL='\\n'\n\[email protected]\ndef open_vds(vd, p):\n return VdsIndexSheet(p.name, source=p)\n\n\[email protected]\ndef save_vds(vd, p, *sheets):\n 'Save in custom VisiData format, preserving columns and their attributes.'\n\n with p.open(mode='w', encoding='utf-8') as fp:\n for vs in sheets:\n # class and attrs for vs\n d = { 'name': vs.name, }\n fp.write('#'+json.dumps(d)+NL)\n\n # class and attrs for each column in vs\n for col in vs.visibleCols:\n d = col.__getstate__()\n if isinstance(col, SettableColumn):\n d['col'] = 'Column'\n else:\n d['col'] = type(col).__name__\n fp.write('#'+json.dumps(d)+NL)\n\n with Progress(gerund='saving'):\n for row in vs.iterdispvals(*vs.visibleCols, format=False):\n d = {col.name:val for col, val in row.items()}\n fp.write(json.dumps(d, default=str)+NL)\n\n\nclass VdsIndexSheet(IndexSheet):\n def iterload(self):\n vs = None\n with self.source.open(encoding='utf-8') as fp:\n line = fp.readline()\n while line:\n if line.startswith('#{'):\n d = json.loads(line[1:])\n if 'col' not in d:\n vs = VdsSheet(d.pop('name'), columns=[], source=self.source, source_fpos=fp.tell())\n yield vs\n line = fp.readline()\n\n\nclass VdsSheet(JsonSheet):\n def newRow(self):\n return {} # rowdef: dict\n\n def iterload(self):\n self.colnames = {}\n self.columns = []\n\n with self.source.open(encoding='utf-8') as fp:\n fp.seek(self.source_fpos)\n\n 
# consume all metadata, create columns\n line = fp.readline()\n while line and line.startswith('#{'):\n d = json.loads(line[1:])\n if 'col' not in d:\n raise Exception(d)\n classname = d.pop('col')\n if classname == 'Column':\n classname = 'ItemColumn'\n d['expr'] = d['name']\n\n c = globals()[classname](d.pop('name'), sheet=self)\n self.addColumn(c)\n self.colnames[c.name] = c\n for k, v in d.items():\n setattr(c, k, v)\n\n line = fp.readline()\n\n while line and not line.startswith('#{'):\n d = json.loads(line)\n yield d\n line = fp.readline()\n", "path": "visidata/loaders/vds.py"}]} |
gh_patches_debug_84 | rasdani/github-patches | git_diff | cupy__cupy-5857 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Drop support for NumPy 1.17 in v10 (NEP 29)
CuPy should drop support for these legacy NumPy versions (1.17 and older), following [NEP 29](https://numpy.org/neps/nep-0029-deprecation_policy.html#support-table).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 import glob
4 import os
5 from setuptools import setup, find_packages
6 import sys
7
8 source_root = os.path.abspath(os.path.dirname(__file__))
9 sys.path.append(os.path.join(source_root, 'install'))
10
11 import cupy_builder # NOQA
12 from cupy_builder import cupy_setup_build # NOQA
13
14 ctx = cupy_builder.Context(source_root)
15 cupy_builder.initialize(ctx)
16 if not cupy_builder.preflight_check(ctx):
17 sys.exit(1)
18
19
20 # TODO(kmaehashi): migrate to pyproject.toml (see #4727, #4619)
21 setup_requires = [
22 'Cython>=0.29.22,<3',
23 'fastrlock>=0.5',
24 ]
25 install_requires = [
26 'numpy>=1.17,<1.24', # see #4773
27 'fastrlock>=0.5',
28 ]
29 extras_require = {
30 'all': [
31 'scipy>=1.4,<1.10', # see #4773
32 'Cython>=0.29.22,<3',
33 'optuna>=2.0',
34 ],
35 'stylecheck': [
36 'autopep8==1.5.5',
37 'flake8==3.8.4',
38 'pbr==5.5.1',
39 'pycodestyle==2.6.0',
40 ],
41 'test': [
42 # 4.2 <= pytest < 6.2 is slow collecting tests and times out on CI.
43 'pytest>=6.2',
44 ],
45 # TODO(kmaehashi): Remove 'jenkins' requirements.
46 'jenkins': [
47 'pytest>=6.2',
48 'pytest-timeout',
49 'pytest-cov',
50 'coveralls',
51 'codecov',
52 'coverage<5', # Otherwise, Python must be built with sqlite
53 ],
54 }
55 tests_require = extras_require['test']
56
57
58 # List of files that needs to be in the distribution (sdist/wheel).
59 # Notes:
60 # - Files only needed in sdist should be added to `MANIFEST.in`.
61 # - The following glob (`**`) ignores items starting with `.`.
62 cupy_package_data = [
63 'cupy/cuda/cupy_thrust.cu',
64 'cupy/cuda/cupy_cub.cu',
65 'cupy/cuda/cupy_cufftXt.cu', # for cuFFT callback
66 'cupy/cuda/cupy_cufftXt.h', # for cuFFT callback
67 'cupy/cuda/cupy_cufft.h', # for cuFFT callback
68 'cupy/cuda/cufft.pxd', # for cuFFT callback
69 'cupy/cuda/cufft.pyx', # for cuFFT callback
70 'cupy/random/cupy_distributions.cu',
71 'cupy/random/cupy_distributions.cuh',
72 ] + [
73 x for x in glob.glob('cupy/_core/include/cupy/**', recursive=True)
74 if os.path.isfile(x)
75 ]
76
77 package_data = {
78 'cupy': [
79 os.path.relpath(x, 'cupy') for x in cupy_package_data
80 ],
81 }
82
83 package_data['cupy'] += cupy_setup_build.prepare_wheel_libs(ctx)
84
85 ext_modules = cupy_setup_build.get_ext_modules(False, ctx)
86 build_ext = cupy_setup_build.custom_build_ext
87
88 # Get __version__ variable
89 with open(os.path.join(source_root, 'cupy', '_version.py')) as f:
90 exec(f.read())
91
92 long_description = None
93 if ctx.long_description_path is not None:
94 with open(ctx.long_description_path) as f:
95 long_description = f.read()
96
97
98 CLASSIFIERS = """\
99 Development Status :: 5 - Production/Stable
100 Intended Audience :: Science/Research
101 Intended Audience :: Developers
102 License :: OSI Approved :: MIT License
103 Programming Language :: Python
104 Programming Language :: Python :: 3
105 Programming Language :: Python :: 3.7
106 Programming Language :: Python :: 3.8
107 Programming Language :: Python :: 3.9
108 Programming Language :: Python :: 3 :: Only
109 Programming Language :: Cython
110 Topic :: Software Development
111 Topic :: Scientific/Engineering
112 Operating System :: POSIX
113 Operating System :: Microsoft :: Windows
114 """
115
116
117 setup(
118 name=ctx.package_name,
119 version=__version__, # NOQA
120 description='CuPy: NumPy & SciPy for GPU',
121 long_description=long_description,
122 author='Seiya Tokui',
123 author_email='[email protected]',
124 maintainer='CuPy Developers',
125 url='https://cupy.dev/',
126 license='MIT License',
127 project_urls={
128 "Bug Tracker": "https://github.com/cupy/cupy/issues",
129 "Documentation": "https://docs.cupy.dev/",
130 "Source Code": "https://github.com/cupy/cupy",
131 },
132 classifiers=[_f for _f in CLASSIFIERS.split('\n') if _f],
133 packages=find_packages(exclude=['install', 'tests']),
134 package_data=package_data,
135 zip_safe=False,
136 python_requires='>=3.7',
137 setup_requires=setup_requires,
138 install_requires=install_requires,
139 tests_require=tests_require,
140 extras_require=extras_require,
141 ext_modules=ext_modules,
142 cmdclass={'build_ext': build_ext},
143 )
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -23,7 +23,7 @@
'fastrlock>=0.5',
]
install_requires = [
- 'numpy>=1.17,<1.24', # see #4773
+ 'numpy>=1.18,<1.24', # see #4773
'fastrlock>=0.5',
]
extras_require = {
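
The only functional change is the NumPy floor in `install_requires` (`>=1.17` raised to `>=1.18`), matching the NEP 29 support table referenced in the issue. Purely as an illustration (this is not CuPy code), the updated constraint can be expressed as a runtime check:

```python
# Hypothetical sanity check mirroring the updated requirement 'numpy>=1.18,<1.24'.
import numpy

major, minor = (int(part) for part in numpy.__version__.split(".")[:2])
if not ((1, 18) <= (major, minor) < (1, 24)):
    raise RuntimeError(f"NumPy {numpy.__version__} is outside the declared range >=1.18,<1.24")
print(f"NumPy {numpy.__version__} satisfies the declared range")
```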
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -23,7 +23,7 @@\n 'fastrlock>=0.5',\n ]\n install_requires = [\n- 'numpy>=1.17,<1.24', # see #4773\n+ 'numpy>=1.18,<1.24', # see #4773\n 'fastrlock>=0.5',\n ]\n extras_require = {\n", "issue": "Drop support for NumPy 1.17 in v10 (NEP 29)\nCuPy should drop support for these legacy versions, following [NEP 29](https://numpy.org/neps/nep-0029-deprecation_policy.html#support-table).\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport glob\nimport os\nfrom setuptools import setup, find_packages\nimport sys\n\nsource_root = os.path.abspath(os.path.dirname(__file__))\nsys.path.append(os.path.join(source_root, 'install'))\n\nimport cupy_builder # NOQA\nfrom cupy_builder import cupy_setup_build # NOQA\n\nctx = cupy_builder.Context(source_root)\ncupy_builder.initialize(ctx)\nif not cupy_builder.preflight_check(ctx):\n sys.exit(1)\n\n\n# TODO(kmaehashi): migrate to pyproject.toml (see #4727, #4619)\nsetup_requires = [\n 'Cython>=0.29.22,<3',\n 'fastrlock>=0.5',\n]\ninstall_requires = [\n 'numpy>=1.17,<1.24', # see #4773\n 'fastrlock>=0.5',\n]\nextras_require = {\n 'all': [\n 'scipy>=1.4,<1.10', # see #4773\n 'Cython>=0.29.22,<3',\n 'optuna>=2.0',\n ],\n 'stylecheck': [\n 'autopep8==1.5.5',\n 'flake8==3.8.4',\n 'pbr==5.5.1',\n 'pycodestyle==2.6.0',\n ],\n 'test': [\n # 4.2 <= pytest < 6.2 is slow collecting tests and times out on CI.\n 'pytest>=6.2',\n ],\n # TODO(kmaehashi): Remove 'jenkins' requirements.\n 'jenkins': [\n 'pytest>=6.2',\n 'pytest-timeout',\n 'pytest-cov',\n 'coveralls',\n 'codecov',\n 'coverage<5', # Otherwise, Python must be built with sqlite\n ],\n}\ntests_require = extras_require['test']\n\n\n# List of files that needs to be in the distribution (sdist/wheel).\n# Notes:\n# - Files only needed in sdist should be added to `MANIFEST.in`.\n# - The following glob (`**`) ignores items starting with `.`.\ncupy_package_data = [\n 'cupy/cuda/cupy_thrust.cu',\n 'cupy/cuda/cupy_cub.cu',\n 'cupy/cuda/cupy_cufftXt.cu', # for cuFFT callback\n 'cupy/cuda/cupy_cufftXt.h', # for cuFFT callback\n 'cupy/cuda/cupy_cufft.h', # for cuFFT callback\n 'cupy/cuda/cufft.pxd', # for cuFFT callback\n 'cupy/cuda/cufft.pyx', # for cuFFT callback\n 'cupy/random/cupy_distributions.cu',\n 'cupy/random/cupy_distributions.cuh',\n] + [\n x for x in glob.glob('cupy/_core/include/cupy/**', recursive=True)\n if os.path.isfile(x)\n]\n\npackage_data = {\n 'cupy': [\n os.path.relpath(x, 'cupy') for x in cupy_package_data\n ],\n}\n\npackage_data['cupy'] += cupy_setup_build.prepare_wheel_libs(ctx)\n\next_modules = cupy_setup_build.get_ext_modules(False, ctx)\nbuild_ext = cupy_setup_build.custom_build_ext\n\n# Get __version__ variable\nwith open(os.path.join(source_root, 'cupy', '_version.py')) as f:\n exec(f.read())\n\nlong_description = None\nif ctx.long_description_path is not None:\n with open(ctx.long_description_path) as f:\n long_description = f.read()\n\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 5 - Production/Stable\nIntended Audience :: Science/Research\nIntended Audience :: Developers\nLicense :: OSI Approved :: MIT License\nProgramming Language :: Python\nProgramming Language :: Python :: 3\nProgramming Language :: Python :: 3.7\nProgramming Language :: Python :: 3.8\nProgramming Language :: Python :: 3.9\nProgramming Language :: Python :: 3 :: Only\nProgramming Language :: Cython\nTopic :: Software Development\nTopic :: Scientific/Engineering\nOperating System :: POSIX\nOperating System :: Microsoft :: 
Windows\n\"\"\"\n\n\nsetup(\n name=ctx.package_name,\n version=__version__, # NOQA\n description='CuPy: NumPy & SciPy for GPU',\n long_description=long_description,\n author='Seiya Tokui',\n author_email='[email protected]',\n maintainer='CuPy Developers',\n url='https://cupy.dev/',\n license='MIT License',\n project_urls={\n \"Bug Tracker\": \"https://github.com/cupy/cupy/issues\",\n \"Documentation\": \"https://docs.cupy.dev/\",\n \"Source Code\": \"https://github.com/cupy/cupy\",\n },\n classifiers=[_f for _f in CLASSIFIERS.split('\\n') if _f],\n packages=find_packages(exclude=['install', 'tests']),\n package_data=package_data,\n zip_safe=False,\n python_requires='>=3.7',\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require=extras_require,\n ext_modules=ext_modules,\n cmdclass={'build_ext': build_ext},\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport glob\nimport os\nfrom setuptools import setup, find_packages\nimport sys\n\nsource_root = os.path.abspath(os.path.dirname(__file__))\nsys.path.append(os.path.join(source_root, 'install'))\n\nimport cupy_builder # NOQA\nfrom cupy_builder import cupy_setup_build # NOQA\n\nctx = cupy_builder.Context(source_root)\ncupy_builder.initialize(ctx)\nif not cupy_builder.preflight_check(ctx):\n sys.exit(1)\n\n\n# TODO(kmaehashi): migrate to pyproject.toml (see #4727, #4619)\nsetup_requires = [\n 'Cython>=0.29.22,<3',\n 'fastrlock>=0.5',\n]\ninstall_requires = [\n 'numpy>=1.18,<1.24', # see #4773\n 'fastrlock>=0.5',\n]\nextras_require = {\n 'all': [\n 'scipy>=1.4,<1.10', # see #4773\n 'Cython>=0.29.22,<3',\n 'optuna>=2.0',\n ],\n 'stylecheck': [\n 'autopep8==1.5.5',\n 'flake8==3.8.4',\n 'pbr==5.5.1',\n 'pycodestyle==2.6.0',\n ],\n 'test': [\n # 4.2 <= pytest < 6.2 is slow collecting tests and times out on CI.\n 'pytest>=6.2',\n ],\n # TODO(kmaehashi): Remove 'jenkins' requirements.\n 'jenkins': [\n 'pytest>=6.2',\n 'pytest-timeout',\n 'pytest-cov',\n 'coveralls',\n 'codecov',\n 'coverage<5', # Otherwise, Python must be built with sqlite\n ],\n}\ntests_require = extras_require['test']\n\n\n# List of files that needs to be in the distribution (sdist/wheel).\n# Notes:\n# - Files only needed in sdist should be added to `MANIFEST.in`.\n# - The following glob (`**`) ignores items starting with `.`.\ncupy_package_data = [\n 'cupy/cuda/cupy_thrust.cu',\n 'cupy/cuda/cupy_cub.cu',\n 'cupy/cuda/cupy_cufftXt.cu', # for cuFFT callback\n 'cupy/cuda/cupy_cufftXt.h', # for cuFFT callback\n 'cupy/cuda/cupy_cufft.h', # for cuFFT callback\n 'cupy/cuda/cufft.pxd', # for cuFFT callback\n 'cupy/cuda/cufft.pyx', # for cuFFT callback\n 'cupy/random/cupy_distributions.cu',\n 'cupy/random/cupy_distributions.cuh',\n] + [\n x for x in glob.glob('cupy/_core/include/cupy/**', recursive=True)\n if os.path.isfile(x)\n]\n\npackage_data = {\n 'cupy': [\n os.path.relpath(x, 'cupy') for x in cupy_package_data\n ],\n}\n\npackage_data['cupy'] += cupy_setup_build.prepare_wheel_libs(ctx)\n\next_modules = cupy_setup_build.get_ext_modules(False, ctx)\nbuild_ext = cupy_setup_build.custom_build_ext\n\n# Get __version__ variable\nwith open(os.path.join(source_root, 'cupy', '_version.py')) as f:\n exec(f.read())\n\nlong_description = None\nif ctx.long_description_path is not None:\n with open(ctx.long_description_path) as f:\n long_description = f.read()\n\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 5 - Production/Stable\nIntended Audience :: Science/Research\nIntended Audience :: 
Developers\nLicense :: OSI Approved :: MIT License\nProgramming Language :: Python\nProgramming Language :: Python :: 3\nProgramming Language :: Python :: 3.7\nProgramming Language :: Python :: 3.8\nProgramming Language :: Python :: 3.9\nProgramming Language :: Python :: 3 :: Only\nProgramming Language :: Cython\nTopic :: Software Development\nTopic :: Scientific/Engineering\nOperating System :: POSIX\nOperating System :: Microsoft :: Windows\n\"\"\"\n\n\nsetup(\n name=ctx.package_name,\n version=__version__, # NOQA\n description='CuPy: NumPy & SciPy for GPU',\n long_description=long_description,\n author='Seiya Tokui',\n author_email='[email protected]',\n maintainer='CuPy Developers',\n url='https://cupy.dev/',\n license='MIT License',\n project_urls={\n \"Bug Tracker\": \"https://github.com/cupy/cupy/issues\",\n \"Documentation\": \"https://docs.cupy.dev/\",\n \"Source Code\": \"https://github.com/cupy/cupy\",\n },\n classifiers=[_f for _f in CLASSIFIERS.split('\\n') if _f],\n packages=find_packages(exclude=['install', 'tests']),\n package_data=package_data,\n zip_safe=False,\n python_requires='>=3.7',\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require=extras_require,\n ext_modules=ext_modules,\n cmdclass={'build_ext': build_ext},\n)\n", "path": "setup.py"}]} |
gh_patches_debug_85 | rasdani/github-patches | git_diff | saulpw__visidata-2307 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[cmdlog] sheets created with no keypresses show errors
**Small description**
Clicking on `dir_hidden` in the DirSheet guide raises an exception.
**Actual result with screenshot**
```
File "/home/midichef/.local/lib/python3.10/site-packages/visidata/statusbar.py", line 56, in sheetlist
if len(vs.shortcut) == 1:
TypeError: object of type 'NoneType' has no len()
```
**Steps to reproduce with sample data and a .vd**
Run `vd .`, navigate to the `filename` column to bring up the DirSheet guide, then click on `dir_hidden`.
**Additional context**
visidata 3.1dev
It looks like `vs.shortcut` is `None` because part of `shortcut()` is obsolete: its fallback returns `cmdlog_sheet.rows[0].keystrokes` directly, which can be `None`:
https://github.com/saulpw/visidata/blob/aa9d2615f3b2773001cf75a1b24219903a91c1bb/visidata/cmdlog.py#L415
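
A minimal reproduction of just the failing call (plain Python, not visidata code):

```python
shortcut = None  # what `vs.shortcut` evaluates to for such a sheet
try:
    len(shortcut)
except TypeError as err:
    print(err)  # object of type 'NoneType' has no len()
```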
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `visidata/cmdlog.py`
Content:
```
1 import threading
2
3 from visidata import vd, UNLOADED, namedlist, vlen, asyncthread, globalCommand, date
4 from visidata import VisiData, BaseSheet, Sheet, ColumnAttr, VisiDataMetaSheet, JsonLinesSheet, TypedWrapper, AttrDict, Progress, ErrorSheet, CompleteKey, Path
5 import visidata
6
7 vd.option('replay_wait', 0.0, 'time to wait between replayed commands, in seconds', sheettype=None)
8 vd.theme_option('disp_replay_play', '▶', 'status indicator for active replay')
9 vd.theme_option('color_status_replay', 'green', 'color of replay status indicator')
10
11 # prefixes which should not be logged
12 nonLogged = '''forget exec-longname undo redo quit
13 show error errors statuses options threads jump
14 replay cancel save-cmdlog macro cmdlog-sheet menu repeat reload-every
15 go- search scroll prev next page start end zoom resize visibility sidebar
16 mouse suspend redraw no-op help syscopy sysopen profile toggle'''.split()
17
18 vd.option('rowkey_prefix', 'キ', 'string prefix for rowkey in the cmdlog', sheettype=None)
19
20 vd.activeCommand = UNLOADED
21 vd._nextCommands = [] # list[str|CommandLogRow] for vd.queueCommand
22
23 CommandLogRow = namedlist('CommandLogRow', 'sheet col row longname input keystrokes comment undofuncs'.split())
24
25 @VisiData.api
26 def queueCommand(vd, longname, input=None, sheet=None, col=None, row=None):
27 'Add command to queue of next commands to execute.'
28 vd._nextCommands.append(CommandLogRow(longname=longname, input=input, sheet=sheet, col=col, row=row))
29
30
31 @VisiData.api
32 def open_vd(vd, p):
33 return CommandLog(p.base_stem, source=p, precious=True)
34
35 @VisiData.api
36 def open_vdj(vd, p):
37 return CommandLogJsonl(p.base_stem, source=p, precious=True)
38
39 VisiData.save_vd = VisiData.save_tsv
40
41
42 @VisiData.api
43 def save_vdj(vd, p, *vsheets):
44 with p.open(mode='w', encoding=vsheets[0].options.save_encoding) as fp:
45 fp.write("#!vd -p\n")
46 for vs in vsheets:
47 vs.write_jsonl(fp)
48
49
50 @VisiData.api
51 def checkVersion(vd, desired_version):
52 if desired_version != visidata.__version_info__:
53 vd.fail("version %s required" % desired_version)
54
55 @VisiData.api
56 def fnSuffix(vd, prefix:str):
57 i = 0
58 fn = prefix + '.vdj'
59 while Path(fn).exists():
60 i += 1
61 fn = f'{prefix}-{i}.vdj'
62
63 return fn
64
65 def indexMatch(L, func):
66 'returns the smallest i for which func(L[i]) is true'
67 for i, x in enumerate(L):
68 if func(x):
69 return i
70
71 def keystr(k):
72 return vd.options.rowkey_prefix+','.join(map(str, k))
73
74 @VisiData.api
75 def isLoggableCommand(vd, longname):
76 for n in nonLogged:
77 if longname.startswith(n):
78 return False
79 return True
80
81 def isLoggableSheet(sheet):
82 return sheet is not vd.cmdlog and not isinstance(sheet, (vd.OptionsSheet, ErrorSheet))
83
84
85 @Sheet.api
86 def moveToRow(vs, rowstr):
87 'Move cursor to row given by *rowstr*, which can be either the row number or keystr.'
88 rowidx = vs.getRowIndexFromStr(rowstr)
89 if rowidx is None:
90 return False
91
92 vs.cursorRowIndex = rowidx
93
94 return True
95
96 @Sheet.api
97 def getRowIndexFromStr(vs, rowstr):
98 index = indexMatch(vs.rows, lambda r,vs=vs,rowstr=rowstr: keystr(vs.rowkey(r)) == rowstr)
99 if index is not None:
100 return index
101
102 try:
103 return int(rowstr)
104 except ValueError:
105 return None
106
107 @Sheet.api
108 def moveToCol(vs, col):
109 'Move cursor to column given by *col*, which can be either the column number or column name.'
110 if isinstance(col, str):
111 vcolidx = indexMatch(vs.visibleCols, lambda c,name=col: name == c.name)
112 elif isinstance(col, int):
113 vcolidx = col
114
115 if vcolidx is None or vcolidx >= vs.nVisibleCols:
116 return False
117
118 vs.cursorVisibleColIndex = vcolidx
119
120 return True
121
122
123 @BaseSheet.api
124 def commandCursor(sheet, execstr):
125 'Return (col, row) of cursor suitable for cmdlog replay of execstr.'
126 colname, rowname = '', ''
127 contains = lambda s, *substrs: any((a in s) for a in substrs)
128 if contains(execstr, 'cursorTypedValue', 'cursorDisplay', 'cursorValue', 'cursorCell', 'cursorRow') and sheet.nRows > 0:
129 k = sheet.rowkey(sheet.cursorRow)
130 rowname = keystr(k) if k else sheet.cursorRowIndex
131
132 if contains(execstr, 'cursorTypedValue', 'cursorDisplay', 'cursorValue', 'cursorCell', 'cursorCol', 'cursorVisibleCol', 'ColumnAtCursor'):
133 if sheet.cursorCol:
134 colname = sheet.cursorCol.name or sheet.visibleCols.index(sheet.cursorCol)
135 else:
136 colname = None
137 return colname, rowname
138
139
140 # rowdef: namedlist (like TsvSheet)
141 class CommandLogBase:
142 'Log of commands for current session.'
143 rowtype = 'logged commands'
144 precious = False
145 _rowtype = CommandLogRow
146 columns = [
147 ColumnAttr('sheet'),
148 ColumnAttr('col'),
149 ColumnAttr('row'),
150 ColumnAttr('longname'),
151 ColumnAttr('input'),
152 ColumnAttr('keystrokes'),
153 ColumnAttr('comment'),
154 ColumnAttr('undo', 'undofuncs', type=vlen, width=0)
155 ]
156
157 filetype = 'vd'
158
159 def newRow(self, **fields):
160 return self._rowtype(**fields)
161
162 def beforeExecHook(self, sheet, cmd, args, keystrokes):
163 if vd.activeCommand:
164 self.afterExecSheet(sheet, False, '')
165
166 colname, rowname, sheetname = '', '', None
167 if sheet and not (cmd.longname.startswith('open-') and not cmd.longname in ('open-row', 'open-cell')):
168 sheetname = sheet.name
169
170 colname, rowname = sheet.commandCursor(cmd.execstr)
171
172 contains = lambda s, *substrs: any((a in s) for a in substrs)
173 if contains(cmd.execstr, 'pasteFromClipboard'):
174 args = vd.sysclipValue().strip()
175
176
177 comment = vd.currentReplayRow.comment if vd.currentReplayRow else cmd.helpstr
178 vd.activeCommand = self.newRow(sheet=sheetname,
179 col=colname,
180 row=str(rowname),
181 keystrokes=keystrokes,
182 input=args,
183 longname=cmd.longname,
184 comment=comment,
185 undofuncs=[])
186
187 def afterExecSheet(self, sheet, escaped, err):
188 'Records vd.activeCommand'
189 if not vd.activeCommand: # nothing to record
190 return
191
192 if err:
193 vd.activeCommand[-1] += ' [%s]' % err
194
195 if escaped:
196 vd.activeCommand = None
197 return
198
199 # remove user-aborted commands and simple movements (unless first command on the sheet, which created the sheet)
200 if not sheet.cmdlog_sheet.rows or vd.isLoggableCommand(vd.activeCommand.longname):
201 if isLoggableSheet(sheet): # don't record actions from cmdlog or other internal sheets on global cmdlog
202 self.addRow(vd.activeCommand) # add to global cmdlog
203 sheet.cmdlog_sheet.addRow(vd.activeCommand) # add to sheet-specific cmdlog
204
205 vd.activeCommand = None
206
207 def openHook(self, vs, src):
208 while isinstance(src, BaseSheet):
209 src = src.source
210 r = self.newRow(keystrokes='o', input=str(src), longname='open-file')
211 vs.cmdlog_sheet.addRow(r)
212 self.addRow(r)
213
214 class CommandLog(CommandLogBase, VisiDataMetaSheet):
215 pass
216
217 class CommandLogJsonl(CommandLogBase, JsonLinesSheet):
218
219 filetype = 'vdj'
220
221 def newRow(self, **fields):
222 return AttrDict(JsonLinesSheet.newRow(self, **fields))
223
224 def iterload(self):
225 for r in JsonLinesSheet.iterload(self):
226 if isinstance(r, TypedWrapper):
227 yield r
228 else:
229 yield AttrDict(r)
230
231
232 ### replay
233
234 vd.paused = False
235 vd.currentReplay = None # CommandLog replaying currently
236 vd.currentReplayRow = None # must be global, to allow replay
237
238
239 @VisiData.api
240 def replay_cancel(vd):
241 vd.currentReplayRow = None
242 vd.currentReplay = None
243 vd._nextCommands.clear()
244
245
246 @VisiData.api
247 def moveToReplayContext(vd, r, vs):
248 'set the sheet/row/col to the values in the replay row'
249 vs.ensureLoaded()
250 vd.sync()
251 vd.clearCaches()
252
253 if r.row not in [None, '']:
254 vs.moveToRow(r.row) or vd.error(f'no "{r.row}" row on {vs}')
255
256 if r.col not in [None, '']:
257 vs.moveToCol(r.col) or vd.error(f'no "{r.col}" column on {vs}')
258
259
260 @VisiData.api
261 def replayOne(vd, r):
262 'Replay the command in one given row.'
263 vd.currentReplayRow = r
264 longname = getattr(r, 'longname', None)
265 if longname is None and getattr(r, 'keystrokes', None) is None:
266 vd.fail('failed to find command to replay')
267
268 if r.sheet and longname not in ['set-option', 'unset-option']:
269 vs = vd.getSheet(r.sheet) or vd.error('no sheet named %s' % r.sheet)
270 else:
271 vs = None
272
273 if longname in ['set-option', 'unset-option']:
274 try:
275 context = vs if r.sheet and vs else vd
276 option_scope = r.sheet or r.col or 'global'
277 if option_scope == 'override': option_scope = 'global' # override is deprecated, is now global
278 if longname == 'set-option':
279 context.options.set(r.row, r.input, option_scope)
280 else:
281 context.options.unset(r.row, option_scope)
282
283 escaped = False
284 except Exception as e:
285 vd.exceptionCaught(e)
286 escaped = True
287 else:
288 vs = vs or vd.activeSheet
289 if vs:
290 if vs in vd.sheets: # if already on sheet stack, push to top
291 vd.push(vs)
292 else:
293 vs = vd.cmdlog
294
295 try:
296 vd.moveToReplayContext(r, vs)
297 if r.comment:
298 vd.status(r.comment)
299
300 # <=v1.2 used keystrokes in longname column; getCommand fetches both
301 escaped = vs.execCommand(longname if longname else r.keystrokes, keystrokes=r.keystrokes)
302 except Exception as e:
303 vd.exceptionCaught(e)
304 escaped = True
305
306 vd.currentReplayRow = None
307
308 if escaped: # escape during replay aborts replay
309 vd.warning('replay aborted during %s' % (longname or r.keystrokes))
310 return escaped
311
312
313 @VisiData.api
314 class DisableAsync:
315 def __enter__(self):
316 vd.execAsync = vd.execSync
317
318 def __exit__(self, exc_type, exc_val, tb):
319 vd.execAsync = lambda *args, vd=vd, **kwargs: visidata.VisiData.execAsync(vd, *args, **kwargs)
320
321
322 @VisiData.api
323 def replay_sync(vd, cmdlog):
324 'Replay all commands in *cmdlog*.'
325 with vd.DisableAsync():
326 cmdlog.cursorRowIndex = 0
327 vd.currentReplay = cmdlog
328
329 with Progress(total=len(cmdlog.rows)) as prog:
330 while cmdlog.cursorRowIndex < len(cmdlog.rows):
331 if vd.currentReplay is None:
332 vd.status('replay canceled')
333 return
334
335 vd.statuses.clear()
336 try:
337 if vd.replayOne(cmdlog.cursorRow):
338 vd.replay_cancel()
339 return True
340 except Exception as e:
341 vd.replay_cancel()
342 vd.exceptionCaught(e)
343 vd.status('replay canceled')
344 return True
345
346 cmdlog.cursorRowIndex += 1
347 prog.addProgress(1)
348
349 if vd.activeSheet:
350 vd.activeSheet.ensureLoaded()
351
352 vd.status('replay complete')
353 vd.currentReplay = None
354
355
356 @VisiData.api
357 def replay(vd, cmdlog):
358 'Inject commands into live execution with interface.'
359 vd.push(cmdlog)
360 vd._nextCommands.extend(cmdlog.rows)
361
362
363 @VisiData.api
364 def getLastArgs(vd):
365 'Get user input for the currently playing command.'
366 if vd.currentReplayRow:
367 return vd.currentReplayRow.input
368 return None
369
370
371 @VisiData.api
372 def setLastArgs(vd, args):
373 'Set user input on last command, if not already set.'
374 # only set if not already set (second input usually confirmation)
375 if (vd.activeCommand is not None) and (vd.activeCommand is not UNLOADED):
376 if not vd.activeCommand.input:
377 vd.activeCommand.input = args
378
379
380 @VisiData.property
381 def replayStatus(vd):
382 if vd._nextCommands:
383 return f' | [:status_replay] {len(vd._nextCommands)} {vd.options.disp_replay_play}[:]'
384 return ''
385
386
387 @BaseSheet.property
388 def cmdlog(sheet):
389 rows = sheet.cmdlog_sheet.rows
390 if isinstance(sheet.source, BaseSheet):
391 rows = sheet.source.cmdlog.rows + rows
392 return CommandLogJsonl(sheet.name+'_cmdlog', source=sheet, rows=rows)
393
394
395 @BaseSheet.lazy_property
396 def cmdlog_sheet(sheet):
397 c = CommandLogJsonl(sheet.name+'_cmdlog', source=sheet, rows=[])
398 # copy over all existing globally set options
399 # you only need to do this for the first BaseSheet in a tree
400 if not isinstance(sheet.source, BaseSheet):
401 for r in vd.cmdlog.rows:
402 if r.sheet == 'global' and (r.longname == 'set-option') or (r.longname == 'unset-option'):
403 c.addRow(r)
404 return c
405
406
407 @BaseSheet.property
408 def shortcut(self):
409 if self._shortcut:
410 return self._shortcut
411 try:
412 return str(vd.allSheets.index(self)+1)
413 except ValueError:
414 pass
415
416 try:
417 return self.cmdlog_sheet.rows[0].keystrokes
418 except Exception:
419 pass
420
421 return ''
422
423
424 @VisiData.property
425 def cmdlog(vd):
426 if not vd._cmdlog:
427 vd._cmdlog = CommandLogJsonl('cmdlog', rows=[]) # no reload
428 vd._cmdlog.resetCols()
429 vd.beforeExecHooks.append(vd._cmdlog.beforeExecHook)
430 return vd._cmdlog
431
432 @VisiData.property
433 def modifyCommand(vd):
434 if vd.activeCommand is not None and vd.isLoggableCommand(vd.activeCommand.longname):
435 return vd.activeCommand
436 if not vd.cmdlog.rows:
437 return None
438 return vd.cmdlog.rows[-1]
439
440
441 @CommandLogJsonl.api
442 @asyncthread
443 def repeat_for_n(cmdlog, r, n=1):
444 r.sheet = r.row = r.col = ""
445 for i in range(n):
446 vd.replayOne(r)
447
448 @CommandLogJsonl.api
449 @asyncthread
450 def repeat_for_selected(cmdlog, r):
451 r.sheet = r.row = r.col = ""
452
453 for idx, r in enumerate(vd.sheet.rows):
454 if vd.sheet.isSelected(r):
455 vd.sheet.cursorRowIndex = idx
456 vd.replayOne(r)
457
458
459 BaseSheet.init('_shortcut')
460
461
462 globalCommand('gD', 'cmdlog-all', 'vd.push(vd.cmdlog)', 'open global CommandLog for all commands executed in current session')
463 globalCommand('D', 'cmdlog-sheet', 'vd.push(sheet.cmdlog)', "open current sheet's CommandLog with all other loose ends removed; includes commands from parent sheets")
464 globalCommand('zD', 'cmdlog-sheet-only', 'vd.push(sheet.cmdlog_sheet)', 'open CommandLog for current sheet with commands from parent sheets removed')
465 BaseSheet.addCommand('^D', 'save-cmdlog', 'saveSheets(inputPath("save cmdlog to: ", value=fnSuffix(name)), vd.cmdlog)', 'save CommandLog to filename.vdj file')
466 BaseSheet.bindkey('^N', 'no-op')
467 BaseSheet.addCommand('^K', 'replay-stop', 'vd.replay_cancel(); vd.warning("replay canceled")', 'cancel current replay')
468
469 globalCommand(None, 'show-status', 'status(input("status: "))', 'show given message on status line')
470 globalCommand('^V', 'show-version', 'status(__version_info__);', 'Show version and copyright information on status line')
471 globalCommand('z^V', 'check-version', 'checkVersion(input("require version: ", value=__version_info__))', 'check VisiData version against given version')
472
473 CommandLog.addCommand('x', 'replay-row', 'vd.replayOne(cursorRow); status("replayed one row")', 'replay command in current row')
474 CommandLog.addCommand('gx', 'replay-all', 'vd.replay(sheet)', 'replay contents of entire CommandLog')
475
476 CommandLogJsonl.addCommand('x', 'replay-row', 'vd.replayOne(cursorRow); status("replayed one row")', 'replay command in current row')
477 CommandLogJsonl.addCommand('gx', 'replay-all', 'vd.replay(sheet)', 'replay contents of entire CommandLog')
478
479 CommandLog.options.json_sort_keys = False
480 CommandLog.options.encoding = 'utf-8'
481 CommandLogJsonl.options.json_sort_keys = False
482
483 vd.addGlobals(CommandLogBase=CommandLogBase, CommandLogRow=CommandLogRow)
484
485 vd.addMenuItems('''
486 View > Command log > this sheet > cmdlog-sheet
487 View > Command log > this sheet only > cmdlog-sheet-only
488 View > Command log > all commands > cmdlog-all
489 System > Execute longname > exec-longname
490 Help > Version > show-version
491 ''')
492
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/visidata/cmdlog.py b/visidata/cmdlog.py
--- a/visidata/cmdlog.py
+++ b/visidata/cmdlog.py
@@ -414,7 +414,7 @@
pass
try:
- return self.cmdlog_sheet.rows[0].keystrokes
+ return self.cmdlog_sheet.rows[0].keystrokes or '' #2293
except Exception:
pass
| {"golden_diff": "diff --git a/visidata/cmdlog.py b/visidata/cmdlog.py\n--- a/visidata/cmdlog.py\n+++ b/visidata/cmdlog.py\n@@ -414,7 +414,7 @@\n pass\n \n try:\n- return self.cmdlog_sheet.rows[0].keystrokes\n+ return self.cmdlog_sheet.rows[0].keystrokes or '' #2293\n except Exception:\n pass\n", "issue": "[cmdlog] sheets created with no keypresses show errors \n**Small description**\r\nclicking on `dir_hidden` in the DirSheet guide raises an exception\r\n\r\n**Actual result with screenshot**\r\n```\r\nFile \"/home/midichef/.local/lib/python3.10/site-packages/visidata/statusbar.py\", line 56, in sheetlist\r\nif len(vs.shortcut) == 1:\r\nTypeError: object of type 'NoneType' has no len()\r\n```\r\n\r\n**Steps to reproduce with sample data and a .vd**\r\n`vd .`, navigate to `filename` column to bring the DirSheet guide up, click on `dir_hidden`\r\n\r\n**Additional context**\r\nvisidata 3.1dev\r\n\r\nIt looks like vs.shortcut is `None` because some code for `shortcut()` is obsolete, where it checks `cmdlog.rows[0].keystrokes`:\r\nhttps://github.com/saulpw/visidata/blob/aa9d2615f3b2773001cf75a1b24219903a91c1bb/visidata/cmdlog.py#L415\n", "before_files": [{"content": "import threading\n\nfrom visidata import vd, UNLOADED, namedlist, vlen, asyncthread, globalCommand, date\nfrom visidata import VisiData, BaseSheet, Sheet, ColumnAttr, VisiDataMetaSheet, JsonLinesSheet, TypedWrapper, AttrDict, Progress, ErrorSheet, CompleteKey, Path\nimport visidata\n\nvd.option('replay_wait', 0.0, 'time to wait between replayed commands, in seconds', sheettype=None)\nvd.theme_option('disp_replay_play', '\u25b6', 'status indicator for active replay')\nvd.theme_option('color_status_replay', 'green', 'color of replay status indicator')\n\n# prefixes which should not be logged\nnonLogged = '''forget exec-longname undo redo quit\nshow error errors statuses options threads jump\nreplay cancel save-cmdlog macro cmdlog-sheet menu repeat reload-every\ngo- search scroll prev next page start end zoom resize visibility sidebar\nmouse suspend redraw no-op help syscopy sysopen profile toggle'''.split()\n\nvd.option('rowkey_prefix', '\u30ad', 'string prefix for rowkey in the cmdlog', sheettype=None)\n\nvd.activeCommand = UNLOADED\nvd._nextCommands = [] # list[str|CommandLogRow] for vd.queueCommand\n\nCommandLogRow = namedlist('CommandLogRow', 'sheet col row longname input keystrokes comment undofuncs'.split())\n\[email protected]\ndef queueCommand(vd, longname, input=None, sheet=None, col=None, row=None):\n 'Add command to queue of next commands to execute.'\n vd._nextCommands.append(CommandLogRow(longname=longname, input=input, sheet=sheet, col=col, row=row))\n\n\[email protected]\ndef open_vd(vd, p):\n return CommandLog(p.base_stem, source=p, precious=True)\n\[email protected]\ndef open_vdj(vd, p):\n return CommandLogJsonl(p.base_stem, source=p, precious=True)\n\nVisiData.save_vd = VisiData.save_tsv\n\n\[email protected]\ndef save_vdj(vd, p, *vsheets):\n with p.open(mode='w', encoding=vsheets[0].options.save_encoding) as fp:\n fp.write(\"#!vd -p\\n\")\n for vs in vsheets:\n vs.write_jsonl(fp)\n\n\[email protected]\ndef checkVersion(vd, desired_version):\n if desired_version != visidata.__version_info__:\n vd.fail(\"version %s required\" % desired_version)\n\[email protected]\ndef fnSuffix(vd, prefix:str):\n i = 0\n fn = prefix + '.vdj'\n while Path(fn).exists():\n i += 1\n fn = f'{prefix}-{i}.vdj'\n\n return fn\n\ndef indexMatch(L, func):\n 'returns the smallest i for which func(L[i]) is true'\n for i, x in enumerate(L):\n if 
func(x):\n return i\n\ndef keystr(k):\n return vd.options.rowkey_prefix+','.join(map(str, k))\n\[email protected]\ndef isLoggableCommand(vd, longname):\n for n in nonLogged:\n if longname.startswith(n):\n return False\n return True\n\ndef isLoggableSheet(sheet):\n return sheet is not vd.cmdlog and not isinstance(sheet, (vd.OptionsSheet, ErrorSheet))\n\n\[email protected]\ndef moveToRow(vs, rowstr):\n 'Move cursor to row given by *rowstr*, which can be either the row number or keystr.'\n rowidx = vs.getRowIndexFromStr(rowstr)\n if rowidx is None:\n return False\n\n vs.cursorRowIndex = rowidx\n\n return True\n\[email protected]\ndef getRowIndexFromStr(vs, rowstr):\n index = indexMatch(vs.rows, lambda r,vs=vs,rowstr=rowstr: keystr(vs.rowkey(r)) == rowstr)\n if index is not None:\n return index\n\n try:\n return int(rowstr)\n except ValueError:\n return None\n\[email protected]\ndef moveToCol(vs, col):\n 'Move cursor to column given by *col*, which can be either the column number or column name.'\n if isinstance(col, str):\n vcolidx = indexMatch(vs.visibleCols, lambda c,name=col: name == c.name)\n elif isinstance(col, int):\n vcolidx = col\n\n if vcolidx is None or vcolidx >= vs.nVisibleCols:\n return False\n\n vs.cursorVisibleColIndex = vcolidx\n\n return True\n\n\[email protected]\ndef commandCursor(sheet, execstr):\n 'Return (col, row) of cursor suitable for cmdlog replay of execstr.'\n colname, rowname = '', ''\n contains = lambda s, *substrs: any((a in s) for a in substrs)\n if contains(execstr, 'cursorTypedValue', 'cursorDisplay', 'cursorValue', 'cursorCell', 'cursorRow') and sheet.nRows > 0:\n k = sheet.rowkey(sheet.cursorRow)\n rowname = keystr(k) if k else sheet.cursorRowIndex\n\n if contains(execstr, 'cursorTypedValue', 'cursorDisplay', 'cursorValue', 'cursorCell', 'cursorCol', 'cursorVisibleCol', 'ColumnAtCursor'):\n if sheet.cursorCol:\n colname = sheet.cursorCol.name or sheet.visibleCols.index(sheet.cursorCol)\n else:\n colname = None\n return colname, rowname\n\n\n# rowdef: namedlist (like TsvSheet)\nclass CommandLogBase:\n 'Log of commands for current session.'\n rowtype = 'logged commands'\n precious = False\n _rowtype = CommandLogRow\n columns = [\n ColumnAttr('sheet'),\n ColumnAttr('col'),\n ColumnAttr('row'),\n ColumnAttr('longname'),\n ColumnAttr('input'),\n ColumnAttr('keystrokes'),\n ColumnAttr('comment'),\n ColumnAttr('undo', 'undofuncs', type=vlen, width=0)\n ]\n\n filetype = 'vd'\n\n def newRow(self, **fields):\n return self._rowtype(**fields)\n\n def beforeExecHook(self, sheet, cmd, args, keystrokes):\n if vd.activeCommand:\n self.afterExecSheet(sheet, False, '')\n\n colname, rowname, sheetname = '', '', None\n if sheet and not (cmd.longname.startswith('open-') and not cmd.longname in ('open-row', 'open-cell')):\n sheetname = sheet.name\n\n colname, rowname = sheet.commandCursor(cmd.execstr)\n\n contains = lambda s, *substrs: any((a in s) for a in substrs)\n if contains(cmd.execstr, 'pasteFromClipboard'):\n args = vd.sysclipValue().strip()\n\n\n comment = vd.currentReplayRow.comment if vd.currentReplayRow else cmd.helpstr\n vd.activeCommand = self.newRow(sheet=sheetname,\n col=colname,\n row=str(rowname),\n keystrokes=keystrokes,\n input=args,\n longname=cmd.longname,\n comment=comment,\n undofuncs=[])\n\n def afterExecSheet(self, sheet, escaped, err):\n 'Records vd.activeCommand'\n if not vd.activeCommand: # nothing to record\n return\n\n if err:\n vd.activeCommand[-1] += ' [%s]' % err\n\n if escaped:\n vd.activeCommand = None\n return\n\n # remove user-aborted 
commands and simple movements (unless first command on the sheet, which created the sheet)\n if not sheet.cmdlog_sheet.rows or vd.isLoggableCommand(vd.activeCommand.longname):\n if isLoggableSheet(sheet): # don't record actions from cmdlog or other internal sheets on global cmdlog\n self.addRow(vd.activeCommand) # add to global cmdlog\n sheet.cmdlog_sheet.addRow(vd.activeCommand) # add to sheet-specific cmdlog\n\n vd.activeCommand = None\n\n def openHook(self, vs, src):\n while isinstance(src, BaseSheet):\n src = src.source\n r = self.newRow(keystrokes='o', input=str(src), longname='open-file')\n vs.cmdlog_sheet.addRow(r)\n self.addRow(r)\n\nclass CommandLog(CommandLogBase, VisiDataMetaSheet):\n pass\n\nclass CommandLogJsonl(CommandLogBase, JsonLinesSheet):\n\n filetype = 'vdj'\n\n def newRow(self, **fields):\n return AttrDict(JsonLinesSheet.newRow(self, **fields))\n\n def iterload(self):\n for r in JsonLinesSheet.iterload(self):\n if isinstance(r, TypedWrapper):\n yield r\n else:\n yield AttrDict(r)\n\n\n### replay\n\nvd.paused = False\nvd.currentReplay = None # CommandLog replaying currently\nvd.currentReplayRow = None # must be global, to allow replay\n\n\[email protected]\ndef replay_cancel(vd):\n vd.currentReplayRow = None\n vd.currentReplay = None\n vd._nextCommands.clear()\n\n\[email protected]\ndef moveToReplayContext(vd, r, vs):\n 'set the sheet/row/col to the values in the replay row'\n vs.ensureLoaded()\n vd.sync()\n vd.clearCaches()\n\n if r.row not in [None, '']:\n vs.moveToRow(r.row) or vd.error(f'no \"{r.row}\" row on {vs}')\n\n if r.col not in [None, '']:\n vs.moveToCol(r.col) or vd.error(f'no \"{r.col}\" column on {vs}')\n\n\[email protected]\ndef replayOne(vd, r):\n 'Replay the command in one given row.'\n vd.currentReplayRow = r\n longname = getattr(r, 'longname', None)\n if longname is None and getattr(r, 'keystrokes', None) is None:\n vd.fail('failed to find command to replay')\n\n if r.sheet and longname not in ['set-option', 'unset-option']:\n vs = vd.getSheet(r.sheet) or vd.error('no sheet named %s' % r.sheet)\n else:\n vs = None\n\n if longname in ['set-option', 'unset-option']:\n try:\n context = vs if r.sheet and vs else vd\n option_scope = r.sheet or r.col or 'global'\n if option_scope == 'override': option_scope = 'global' # override is deprecated, is now global\n if longname == 'set-option':\n context.options.set(r.row, r.input, option_scope)\n else:\n context.options.unset(r.row, option_scope)\n\n escaped = False\n except Exception as e:\n vd.exceptionCaught(e)\n escaped = True\n else:\n vs = vs or vd.activeSheet\n if vs:\n if vs in vd.sheets: # if already on sheet stack, push to top\n vd.push(vs)\n else:\n vs = vd.cmdlog\n\n try:\n vd.moveToReplayContext(r, vs)\n if r.comment:\n vd.status(r.comment)\n\n # <=v1.2 used keystrokes in longname column; getCommand fetches both\n escaped = vs.execCommand(longname if longname else r.keystrokes, keystrokes=r.keystrokes)\n except Exception as e:\n vd.exceptionCaught(e)\n escaped = True\n\n vd.currentReplayRow = None\n\n if escaped: # escape during replay aborts replay\n vd.warning('replay aborted during %s' % (longname or r.keystrokes))\n return escaped\n\n\[email protected]\nclass DisableAsync:\n def __enter__(self):\n vd.execAsync = vd.execSync\n\n def __exit__(self, exc_type, exc_val, tb):\n vd.execAsync = lambda *args, vd=vd, **kwargs: visidata.VisiData.execAsync(vd, *args, **kwargs)\n\n\[email protected]\ndef replay_sync(vd, cmdlog):\n 'Replay all commands in *cmdlog*.'\n with vd.DisableAsync():\n 
cmdlog.cursorRowIndex = 0\n vd.currentReplay = cmdlog\n\n with Progress(total=len(cmdlog.rows)) as prog:\n while cmdlog.cursorRowIndex < len(cmdlog.rows):\n if vd.currentReplay is None:\n vd.status('replay canceled')\n return\n\n vd.statuses.clear()\n try:\n if vd.replayOne(cmdlog.cursorRow):\n vd.replay_cancel()\n return True\n except Exception as e:\n vd.replay_cancel()\n vd.exceptionCaught(e)\n vd.status('replay canceled')\n return True\n\n cmdlog.cursorRowIndex += 1\n prog.addProgress(1)\n\n if vd.activeSheet:\n vd.activeSheet.ensureLoaded()\n\n vd.status('replay complete')\n vd.currentReplay = None\n\n\[email protected]\ndef replay(vd, cmdlog):\n 'Inject commands into live execution with interface.'\n vd.push(cmdlog)\n vd._nextCommands.extend(cmdlog.rows)\n\n\[email protected]\ndef getLastArgs(vd):\n 'Get user input for the currently playing command.'\n if vd.currentReplayRow:\n return vd.currentReplayRow.input\n return None\n\n\[email protected]\ndef setLastArgs(vd, args):\n 'Set user input on last command, if not already set.'\n # only set if not already set (second input usually confirmation)\n if (vd.activeCommand is not None) and (vd.activeCommand is not UNLOADED):\n if not vd.activeCommand.input:\n vd.activeCommand.input = args\n\n\[email protected]\ndef replayStatus(vd):\n if vd._nextCommands:\n return f' | [:status_replay] {len(vd._nextCommands)} {vd.options.disp_replay_play}[:]'\n return ''\n\n\[email protected]\ndef cmdlog(sheet):\n rows = sheet.cmdlog_sheet.rows\n if isinstance(sheet.source, BaseSheet):\n rows = sheet.source.cmdlog.rows + rows\n return CommandLogJsonl(sheet.name+'_cmdlog', source=sheet, rows=rows)\n\n\[email protected]_property\ndef cmdlog_sheet(sheet):\n c = CommandLogJsonl(sheet.name+'_cmdlog', source=sheet, rows=[])\n # copy over all existing globally set options\n # you only need to do this for the first BaseSheet in a tree\n if not isinstance(sheet.source, BaseSheet):\n for r in vd.cmdlog.rows:\n if r.sheet == 'global' and (r.longname == 'set-option') or (r.longname == 'unset-option'):\n c.addRow(r)\n return c\n\n\[email protected]\ndef shortcut(self):\n if self._shortcut:\n return self._shortcut\n try:\n return str(vd.allSheets.index(self)+1)\n except ValueError:\n pass\n\n try:\n return self.cmdlog_sheet.rows[0].keystrokes\n except Exception:\n pass\n\n return ''\n\n\[email protected]\ndef cmdlog(vd):\n if not vd._cmdlog:\n vd._cmdlog = CommandLogJsonl('cmdlog', rows=[]) # no reload\n vd._cmdlog.resetCols()\n vd.beforeExecHooks.append(vd._cmdlog.beforeExecHook)\n return vd._cmdlog\n\[email protected]\ndef modifyCommand(vd):\n if vd.activeCommand is not None and vd.isLoggableCommand(vd.activeCommand.longname):\n return vd.activeCommand\n if not vd.cmdlog.rows:\n return None\n return vd.cmdlog.rows[-1]\n\n\[email protected]\n@asyncthread\ndef repeat_for_n(cmdlog, r, n=1):\n r.sheet = r.row = r.col = \"\"\n for i in range(n):\n vd.replayOne(r)\n\[email protected]\n@asyncthread\ndef repeat_for_selected(cmdlog, r):\n r.sheet = r.row = r.col = \"\"\n\n for idx, r in enumerate(vd.sheet.rows):\n if vd.sheet.isSelected(r):\n vd.sheet.cursorRowIndex = idx\n vd.replayOne(r)\n\n\nBaseSheet.init('_shortcut')\n\n\nglobalCommand('gD', 'cmdlog-all', 'vd.push(vd.cmdlog)', 'open global CommandLog for all commands executed in current session')\nglobalCommand('D', 'cmdlog-sheet', 'vd.push(sheet.cmdlog)', \"open current sheet's CommandLog with all other loose ends removed; includes commands from parent sheets\")\nglobalCommand('zD', 'cmdlog-sheet-only', 
'vd.push(sheet.cmdlog_sheet)', 'open CommandLog for current sheet with commands from parent sheets removed')\nBaseSheet.addCommand('^D', 'save-cmdlog', 'saveSheets(inputPath(\"save cmdlog to: \", value=fnSuffix(name)), vd.cmdlog)', 'save CommandLog to filename.vdj file')\nBaseSheet.bindkey('^N', 'no-op')\nBaseSheet.addCommand('^K', 'replay-stop', 'vd.replay_cancel(); vd.warning(\"replay canceled\")', 'cancel current replay')\n\nglobalCommand(None, 'show-status', 'status(input(\"status: \"))', 'show given message on status line')\nglobalCommand('^V', 'show-version', 'status(__version_info__);', 'Show version and copyright information on status line')\nglobalCommand('z^V', 'check-version', 'checkVersion(input(\"require version: \", value=__version_info__))', 'check VisiData version against given version')\n\nCommandLog.addCommand('x', 'replay-row', 'vd.replayOne(cursorRow); status(\"replayed one row\")', 'replay command in current row')\nCommandLog.addCommand('gx', 'replay-all', 'vd.replay(sheet)', 'replay contents of entire CommandLog')\n\nCommandLogJsonl.addCommand('x', 'replay-row', 'vd.replayOne(cursorRow); status(\"replayed one row\")', 'replay command in current row')\nCommandLogJsonl.addCommand('gx', 'replay-all', 'vd.replay(sheet)', 'replay contents of entire CommandLog')\n\nCommandLog.options.json_sort_keys = False\nCommandLog.options.encoding = 'utf-8'\nCommandLogJsonl.options.json_sort_keys = False\n\nvd.addGlobals(CommandLogBase=CommandLogBase, CommandLogRow=CommandLogRow)\n\nvd.addMenuItems('''\n View > Command log > this sheet > cmdlog-sheet\n View > Command log > this sheet only > cmdlog-sheet-only\n View > Command log > all commands > cmdlog-all\n System > Execute longname > exec-longname\n Help > Version > show-version\n''')\n", "path": "visidata/cmdlog.py"}], "after_files": [{"content": "import threading\n\nfrom visidata import vd, UNLOADED, namedlist, vlen, asyncthread, globalCommand, date\nfrom visidata import VisiData, BaseSheet, Sheet, ColumnAttr, VisiDataMetaSheet, JsonLinesSheet, TypedWrapper, AttrDict, Progress, ErrorSheet, CompleteKey, Path\nimport visidata\n\nvd.option('replay_wait', 0.0, 'time to wait between replayed commands, in seconds', sheettype=None)\nvd.theme_option('disp_replay_play', '\u25b6', 'status indicator for active replay')\nvd.theme_option('color_status_replay', 'green', 'color of replay status indicator')\n\n# prefixes which should not be logged\nnonLogged = '''forget exec-longname undo redo quit\nshow error errors statuses options threads jump\nreplay cancel save-cmdlog macro cmdlog-sheet menu repeat reload-every\ngo- search scroll prev next page start end zoom resize visibility sidebar\nmouse suspend redraw no-op help syscopy sysopen profile toggle'''.split()\n\nvd.option('rowkey_prefix', '\u30ad', 'string prefix for rowkey in the cmdlog', sheettype=None)\n\nvd.activeCommand = UNLOADED\nvd._nextCommands = [] # list[str|CommandLogRow] for vd.queueCommand\n\nCommandLogRow = namedlist('CommandLogRow', 'sheet col row longname input keystrokes comment undofuncs'.split())\n\[email protected]\ndef queueCommand(vd, longname, input=None, sheet=None, col=None, row=None):\n 'Add command to queue of next commands to execute.'\n vd._nextCommands.append(CommandLogRow(longname=longname, input=input, sheet=sheet, col=col, row=row))\n\n\[email protected]\ndef open_vd(vd, p):\n return CommandLog(p.base_stem, source=p, precious=True)\n\[email protected]\ndef open_vdj(vd, p):\n return CommandLogJsonl(p.base_stem, source=p, precious=True)\n\nVisiData.save_vd = 
VisiData.save_tsv\n\n\[email protected]\ndef save_vdj(vd, p, *vsheets):\n with p.open(mode='w', encoding=vsheets[0].options.save_encoding) as fp:\n fp.write(\"#!vd -p\\n\")\n for vs in vsheets:\n vs.write_jsonl(fp)\n\n\[email protected]\ndef checkVersion(vd, desired_version):\n if desired_version != visidata.__version_info__:\n vd.fail(\"version %s required\" % desired_version)\n\[email protected]\ndef fnSuffix(vd, prefix:str):\n i = 0\n fn = prefix + '.vdj'\n while Path(fn).exists():\n i += 1\n fn = f'{prefix}-{i}.vdj'\n\n return fn\n\ndef indexMatch(L, func):\n 'returns the smallest i for which func(L[i]) is true'\n for i, x in enumerate(L):\n if func(x):\n return i\n\ndef keystr(k):\n return vd.options.rowkey_prefix+','.join(map(str, k))\n\[email protected]\ndef isLoggableCommand(vd, longname):\n for n in nonLogged:\n if longname.startswith(n):\n return False\n return True\n\ndef isLoggableSheet(sheet):\n return sheet is not vd.cmdlog and not isinstance(sheet, (vd.OptionsSheet, ErrorSheet))\n\n\[email protected]\ndef moveToRow(vs, rowstr):\n 'Move cursor to row given by *rowstr*, which can be either the row number or keystr.'\n rowidx = vs.getRowIndexFromStr(rowstr)\n if rowidx is None:\n return False\n\n vs.cursorRowIndex = rowidx\n\n return True\n\[email protected]\ndef getRowIndexFromStr(vs, rowstr):\n index = indexMatch(vs.rows, lambda r,vs=vs,rowstr=rowstr: keystr(vs.rowkey(r)) == rowstr)\n if index is not None:\n return index\n\n try:\n return int(rowstr)\n except ValueError:\n return None\n\[email protected]\ndef moveToCol(vs, col):\n 'Move cursor to column given by *col*, which can be either the column number or column name.'\n if isinstance(col, str):\n vcolidx = indexMatch(vs.visibleCols, lambda c,name=col: name == c.name)\n elif isinstance(col, int):\n vcolidx = col\n\n if vcolidx is None or vcolidx >= vs.nVisibleCols:\n return False\n\n vs.cursorVisibleColIndex = vcolidx\n\n return True\n\n\[email protected]\ndef commandCursor(sheet, execstr):\n 'Return (col, row) of cursor suitable for cmdlog replay of execstr.'\n colname, rowname = '', ''\n contains = lambda s, *substrs: any((a in s) for a in substrs)\n if contains(execstr, 'cursorTypedValue', 'cursorDisplay', 'cursorValue', 'cursorCell', 'cursorRow') and sheet.nRows > 0:\n k = sheet.rowkey(sheet.cursorRow)\n rowname = keystr(k) if k else sheet.cursorRowIndex\n\n if contains(execstr, 'cursorTypedValue', 'cursorDisplay', 'cursorValue', 'cursorCell', 'cursorCol', 'cursorVisibleCol', 'ColumnAtCursor'):\n if sheet.cursorCol:\n colname = sheet.cursorCol.name or sheet.visibleCols.index(sheet.cursorCol)\n else:\n colname = None\n return colname, rowname\n\n\n# rowdef: namedlist (like TsvSheet)\nclass CommandLogBase:\n 'Log of commands for current session.'\n rowtype = 'logged commands'\n precious = False\n _rowtype = CommandLogRow\n columns = [\n ColumnAttr('sheet'),\n ColumnAttr('col'),\n ColumnAttr('row'),\n ColumnAttr('longname'),\n ColumnAttr('input'),\n ColumnAttr('keystrokes'),\n ColumnAttr('comment'),\n ColumnAttr('undo', 'undofuncs', type=vlen, width=0)\n ]\n\n filetype = 'vd'\n\n def newRow(self, **fields):\n return self._rowtype(**fields)\n\n def beforeExecHook(self, sheet, cmd, args, keystrokes):\n if vd.activeCommand:\n self.afterExecSheet(sheet, False, '')\n\n colname, rowname, sheetname = '', '', None\n if sheet and not (cmd.longname.startswith('open-') and not cmd.longname in ('open-row', 'open-cell')):\n sheetname = sheet.name\n\n colname, rowname = sheet.commandCursor(cmd.execstr)\n\n contains = lambda s, 
*substrs: any((a in s) for a in substrs)\n if contains(cmd.execstr, 'pasteFromClipboard'):\n args = vd.sysclipValue().strip()\n\n\n comment = vd.currentReplayRow.comment if vd.currentReplayRow else cmd.helpstr\n vd.activeCommand = self.newRow(sheet=sheetname,\n col=colname,\n row=str(rowname),\n keystrokes=keystrokes,\n input=args,\n longname=cmd.longname,\n comment=comment,\n undofuncs=[])\n\n def afterExecSheet(self, sheet, escaped, err):\n 'Records vd.activeCommand'\n if not vd.activeCommand: # nothing to record\n return\n\n if err:\n vd.activeCommand[-1] += ' [%s]' % err\n\n if escaped:\n vd.activeCommand = None\n return\n\n # remove user-aborted commands and simple movements (unless first command on the sheet, which created the sheet)\n if not sheet.cmdlog_sheet.rows or vd.isLoggableCommand(vd.activeCommand.longname):\n if isLoggableSheet(sheet): # don't record actions from cmdlog or other internal sheets on global cmdlog\n self.addRow(vd.activeCommand) # add to global cmdlog\n sheet.cmdlog_sheet.addRow(vd.activeCommand) # add to sheet-specific cmdlog\n\n vd.activeCommand = None\n\n def openHook(self, vs, src):\n while isinstance(src, BaseSheet):\n src = src.source\n r = self.newRow(keystrokes='o', input=str(src), longname='open-file')\n vs.cmdlog_sheet.addRow(r)\n self.addRow(r)\n\nclass CommandLog(CommandLogBase, VisiDataMetaSheet):\n pass\n\nclass CommandLogJsonl(CommandLogBase, JsonLinesSheet):\n\n filetype = 'vdj'\n\n def newRow(self, **fields):\n return AttrDict(JsonLinesSheet.newRow(self, **fields))\n\n def iterload(self):\n for r in JsonLinesSheet.iterload(self):\n if isinstance(r, TypedWrapper):\n yield r\n else:\n yield AttrDict(r)\n\n\n### replay\n\nvd.paused = False\nvd.currentReplay = None # CommandLog replaying currently\nvd.currentReplayRow = None # must be global, to allow replay\n\n\[email protected]\ndef replay_cancel(vd):\n vd.currentReplayRow = None\n vd.currentReplay = None\n vd._nextCommands.clear()\n\n\[email protected]\ndef moveToReplayContext(vd, r, vs):\n 'set the sheet/row/col to the values in the replay row'\n vs.ensureLoaded()\n vd.sync()\n vd.clearCaches()\n\n if r.row not in [None, '']:\n vs.moveToRow(r.row) or vd.error(f'no \"{r.row}\" row on {vs}')\n\n if r.col not in [None, '']:\n vs.moveToCol(r.col) or vd.error(f'no \"{r.col}\" column on {vs}')\n\n\[email protected]\ndef replayOne(vd, r):\n 'Replay the command in one given row.'\n vd.currentReplayRow = r\n longname = getattr(r, 'longname', None)\n if longname is None and getattr(r, 'keystrokes', None) is None:\n vd.fail('failed to find command to replay')\n\n if r.sheet and longname not in ['set-option', 'unset-option']:\n vs = vd.getSheet(r.sheet) or vd.error('no sheet named %s' % r.sheet)\n else:\n vs = None\n\n if longname in ['set-option', 'unset-option']:\n try:\n context = vs if r.sheet and vs else vd\n option_scope = r.sheet or r.col or 'global'\n if option_scope == 'override': option_scope = 'global' # override is deprecated, is now global\n if longname == 'set-option':\n context.options.set(r.row, r.input, option_scope)\n else:\n context.options.unset(r.row, option_scope)\n\n escaped = False\n except Exception as e:\n vd.exceptionCaught(e)\n escaped = True\n else:\n vs = vs or vd.activeSheet\n if vs:\n if vs in vd.sheets: # if already on sheet stack, push to top\n vd.push(vs)\n else:\n vs = vd.cmdlog\n\n try:\n vd.moveToReplayContext(r, vs)\n if r.comment:\n vd.status(r.comment)\n\n # <=v1.2 used keystrokes in longname column; getCommand fetches both\n escaped = vs.execCommand(longname if 
longname else r.keystrokes, keystrokes=r.keystrokes)\n except Exception as e:\n vd.exceptionCaught(e)\n escaped = True\n\n vd.currentReplayRow = None\n\n if escaped: # escape during replay aborts replay\n vd.warning('replay aborted during %s' % (longname or r.keystrokes))\n return escaped\n\n\[email protected]\nclass DisableAsync:\n def __enter__(self):\n vd.execAsync = vd.execSync\n\n def __exit__(self, exc_type, exc_val, tb):\n vd.execAsync = lambda *args, vd=vd, **kwargs: visidata.VisiData.execAsync(vd, *args, **kwargs)\n\n\[email protected]\ndef replay_sync(vd, cmdlog):\n 'Replay all commands in *cmdlog*.'\n with vd.DisableAsync():\n cmdlog.cursorRowIndex = 0\n vd.currentReplay = cmdlog\n\n with Progress(total=len(cmdlog.rows)) as prog:\n while cmdlog.cursorRowIndex < len(cmdlog.rows):\n if vd.currentReplay is None:\n vd.status('replay canceled')\n return\n\n vd.statuses.clear()\n try:\n if vd.replayOne(cmdlog.cursorRow):\n vd.replay_cancel()\n return True\n except Exception as e:\n vd.replay_cancel()\n vd.exceptionCaught(e)\n vd.status('replay canceled')\n return True\n\n cmdlog.cursorRowIndex += 1\n prog.addProgress(1)\n\n if vd.activeSheet:\n vd.activeSheet.ensureLoaded()\n\n vd.status('replay complete')\n vd.currentReplay = None\n\n\[email protected]\ndef replay(vd, cmdlog):\n 'Inject commands into live execution with interface.'\n vd.push(cmdlog)\n vd._nextCommands.extend(cmdlog.rows)\n\n\[email protected]\ndef getLastArgs(vd):\n 'Get user input for the currently playing command.'\n if vd.currentReplayRow:\n return vd.currentReplayRow.input\n return None\n\n\[email protected]\ndef setLastArgs(vd, args):\n 'Set user input on last command, if not already set.'\n # only set if not already set (second input usually confirmation)\n if (vd.activeCommand is not None) and (vd.activeCommand is not UNLOADED):\n if not vd.activeCommand.input:\n vd.activeCommand.input = args\n\n\[email protected]\ndef replayStatus(vd):\n if vd._nextCommands:\n return f' | [:status_replay] {len(vd._nextCommands)} {vd.options.disp_replay_play}[:]'\n return ''\n\n\[email protected]\ndef cmdlog(sheet):\n rows = sheet.cmdlog_sheet.rows\n if isinstance(sheet.source, BaseSheet):\n rows = sheet.source.cmdlog.rows + rows\n return CommandLogJsonl(sheet.name+'_cmdlog', source=sheet, rows=rows)\n\n\[email protected]_property\ndef cmdlog_sheet(sheet):\n c = CommandLogJsonl(sheet.name+'_cmdlog', source=sheet, rows=[])\n # copy over all existing globally set options\n # you only need to do this for the first BaseSheet in a tree\n if not isinstance(sheet.source, BaseSheet):\n for r in vd.cmdlog.rows:\n if r.sheet == 'global' and (r.longname == 'set-option') or (r.longname == 'unset-option'):\n c.addRow(r)\n return c\n\n\[email protected]\ndef shortcut(self):\n if self._shortcut:\n return self._shortcut\n try:\n return str(vd.allSheets.index(self)+1)\n except ValueError:\n pass\n\n try:\n return self.cmdlog_sheet.rows[0].keystrokes or '' #2293\n except Exception:\n pass\n\n return ''\n\n\[email protected]\ndef cmdlog(vd):\n if not vd._cmdlog:\n vd._cmdlog = CommandLogJsonl('cmdlog', rows=[]) # no reload\n vd._cmdlog.resetCols()\n vd.beforeExecHooks.append(vd._cmdlog.beforeExecHook)\n return vd._cmdlog\n\[email protected]\ndef modifyCommand(vd):\n if vd.activeCommand is not None and vd.isLoggableCommand(vd.activeCommand.longname):\n return vd.activeCommand\n if not vd.cmdlog.rows:\n return None\n return vd.cmdlog.rows[-1]\n\n\[email protected]\n@asyncthread\ndef repeat_for_n(cmdlog, r, n=1):\n r.sheet = r.row = r.col = \"\"\n 
for i in range(n):\n vd.replayOne(r)\n\[email protected]\n@asyncthread\ndef repeat_for_selected(cmdlog, r):\n r.sheet = r.row = r.col = \"\"\n\n for idx, r in enumerate(vd.sheet.rows):\n if vd.sheet.isSelected(r):\n vd.sheet.cursorRowIndex = idx\n vd.replayOne(r)\n\n\nBaseSheet.init('_shortcut')\n\n\nglobalCommand('gD', 'cmdlog-all', 'vd.push(vd.cmdlog)', 'open global CommandLog for all commands executed in current session')\nglobalCommand('D', 'cmdlog-sheet', 'vd.push(sheet.cmdlog)', \"open current sheet's CommandLog with all other loose ends removed; includes commands from parent sheets\")\nglobalCommand('zD', 'cmdlog-sheet-only', 'vd.push(sheet.cmdlog_sheet)', 'open CommandLog for current sheet with commands from parent sheets removed')\nBaseSheet.addCommand('^D', 'save-cmdlog', 'saveSheets(inputPath(\"save cmdlog to: \", value=fnSuffix(name)), vd.cmdlog)', 'save CommandLog to filename.vdj file')\nBaseSheet.bindkey('^N', 'no-op')\nBaseSheet.addCommand('^K', 'replay-stop', 'vd.replay_cancel(); vd.warning(\"replay canceled\")', 'cancel current replay')\n\nglobalCommand(None, 'show-status', 'status(input(\"status: \"))', 'show given message on status line')\nglobalCommand('^V', 'show-version', 'status(__version_info__);', 'Show version and copyright information on status line')\nglobalCommand('z^V', 'check-version', 'checkVersion(input(\"require version: \", value=__version_info__))', 'check VisiData version against given version')\n\nCommandLog.addCommand('x', 'replay-row', 'vd.replayOne(cursorRow); status(\"replayed one row\")', 'replay command in current row')\nCommandLog.addCommand('gx', 'replay-all', 'vd.replay(sheet)', 'replay contents of entire CommandLog')\n\nCommandLogJsonl.addCommand('x', 'replay-row', 'vd.replayOne(cursorRow); status(\"replayed one row\")', 'replay command in current row')\nCommandLogJsonl.addCommand('gx', 'replay-all', 'vd.replay(sheet)', 'replay contents of entire CommandLog')\n\nCommandLog.options.json_sort_keys = False\nCommandLog.options.encoding = 'utf-8'\nCommandLogJsonl.options.json_sort_keys = False\n\nvd.addGlobals(CommandLogBase=CommandLogBase, CommandLogRow=CommandLogRow)\n\nvd.addMenuItems('''\n View > Command log > this sheet > cmdlog-sheet\n View > Command log > this sheet only > cmdlog-sheet-only\n View > Command log > all commands > cmdlog-all\n System > Execute longname > exec-longname\n Help > Version > show-version\n''')\n", "path": "visidata/cmdlog.py"}]} |
gh_patches_debug_86 | rasdani/github-patches | git_diff | cupy__cupy-2938 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Drop support of older NumPy (<=1.14)?
According to [NEP 29](https://numpy.org/neps/nep-0029-deprecation_policy.html), an unusual NumPy Enhancement Proposal that declares a community-wide policy instead of merely proposing changes to NumPy itself, the support of NumPy <=1.14 will be dropped in early January, 2020, which is a few days later:
> Drop Schedule
> ...
> On Jan 07, 2020 drop support for Numpy 1.14 (initially released on Jan 06, 2018)
Would CuPy consider following NEP 29 so that some test codes can be simplified without worrying too much about backward compatibilities? I've seen this cause a hard time for a few PRs.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 import os
4 from setuptools import setup
5 import sys
6
7 import cupy_setup_build
8
9
10 if sys.version_info[:3] == (3, 5, 0):
11 if not int(os.getenv('CUPY_PYTHON_350_FORCE', '0')):
12 msg = """
13 CuPy does not work with Python 3.5.0.
14
15 We strongly recommend to use another version of Python.
16 If you want to use CuPy with Python 3.5.0 at your own risk,
17 set 1 to CUPY_PYTHON_350_FORCE environment variable."""
18 print(msg)
19 sys.exit(1)
20
21
22 requirements = {
23 'setup': [
24 'fastrlock>=0.3',
25 ],
26 'install': [
27 'numpy>=1.9.0',
28 'fastrlock>=0.3',
29 ],
30 'stylecheck': [
31 'autopep8==1.3.5',
32 'flake8==3.5.0',
33 'pbr==4.0.4',
34 'pycodestyle==2.3.1',
35 ],
36 'test': [
37 'pytest<4.2.0', # 4.2.0 is slow collecting tests and times out on CI.
38 'attrs<19.2.0', # pytest 4.1.1 does not run with attrs==19.2.0
39 'mock',
40 ],
41 'doctest': [
42 'matplotlib',
43 'theano',
44 ],
45 'docs': [
46 'sphinx',
47 'sphinx_rtd_theme',
48 ],
49 'travis': [
50 '-r stylecheck',
51 '-r docs',
52 ],
53 'appveyor': [
54 '-r test',
55 ],
56 'jenkins': [
57 '-r test',
58 'pytest-timeout',
59 'pytest-cov',
60 'coveralls',
61 'codecov',
62 ],
63 }
64
65
66 def reduce_requirements(key):
67 # Resolve recursive requirements notation (-r)
68 reqs = requirements[key]
69 resolved_reqs = []
70 for req in reqs:
71 if req.startswith('-r'):
72 depend_key = req[2:].lstrip()
73 reduce_requirements(depend_key)
74 resolved_reqs += requirements[depend_key]
75 else:
76 resolved_reqs.append(req)
77 requirements[key] = resolved_reqs
78
79
80 for k in requirements.keys():
81 reduce_requirements(k)
82
83
84 extras_require = {k: v for k, v in requirements.items() if k != 'install'}
85
86
87 setup_requires = requirements['setup']
88 install_requires = requirements['install']
89 tests_require = requirements['test']
90
91
92 package_data = {
93 'cupy': [
94 'core/include/cupy/complex/arithmetic.h',
95 'core/include/cupy/complex/catrig.h',
96 'core/include/cupy/complex/catrigf.h',
97 'core/include/cupy/complex/ccosh.h',
98 'core/include/cupy/complex/ccoshf.h',
99 'core/include/cupy/complex/cexp.h',
100 'core/include/cupy/complex/cexpf.h',
101 'core/include/cupy/complex/clog.h',
102 'core/include/cupy/complex/clogf.h',
103 'core/include/cupy/complex/complex.h',
104 'core/include/cupy/complex/complex_inl.h',
105 'core/include/cupy/complex/cpow.h',
106 'core/include/cupy/complex/cproj.h',
107 'core/include/cupy/complex/csinh.h',
108 'core/include/cupy/complex/csinhf.h',
109 'core/include/cupy/complex/csqrt.h',
110 'core/include/cupy/complex/csqrtf.h',
111 'core/include/cupy/complex/ctanh.h',
112 'core/include/cupy/complex/ctanhf.h',
113 'core/include/cupy/complex/math_private.h',
114 'core/include/cupy/carray.cuh',
115 'core/include/cupy/complex.cuh',
116 'core/include/cupy/atomics.cuh',
117 'core/include/cupy/cuComplex_bridge.h',
118 'core/include/cupy/_cuda/cuda-*/*.h',
119 'core/include/cupy/_cuda/cuda-*/*.hpp',
120 'cuda/cupy_thrust.cu',
121 ],
122 }
123
124 package_data['cupy'] += cupy_setup_build.prepare_wheel_libs()
125
126 package_name = cupy_setup_build.get_package_name()
127 long_description = cupy_setup_build.get_long_description()
128 ext_modules = cupy_setup_build.get_ext_modules()
129 build_ext = cupy_setup_build.custom_build_ext
130 sdist = cupy_setup_build.sdist_with_cython
131
132 here = os.path.abspath(os.path.dirname(__file__))
133 # Get __version__ variable
134 exec(open(os.path.join(here, 'cupy', '_version.py')).read())
135
136 CLASSIFIERS = """\
137 Development Status :: 5 - Production/Stable
138 Intended Audience :: Science/Research
139 Intended Audience :: Developers
140 License :: OSI Approved :: MIT License
141 Programming Language :: Python
142 Programming Language :: Python :: 3
143 Programming Language :: Python :: 3.5
144 Programming Language :: Python :: 3.6
145 Programming Language :: Python :: 3.7
146 Programming Language :: Python :: 3 :: Only
147 Programming Language :: Cython
148 Topic :: Software Development
149 Topic :: Scientific/Engineering
150 Operating System :: Microsoft :: Windows
151 Operating System :: POSIX
152 Operating System :: MacOS
153 """
154
155
156 setup(
157 name=package_name,
158 version=__version__, # NOQA
159 description='CuPy: NumPy-like API accelerated with CUDA',
160 long_description=long_description,
161 author='Seiya Tokui',
162 author_email='[email protected]',
163 url='https://cupy.chainer.org/',
164 license='MIT License',
165 project_urls={
166 "Bug Tracker": "https://github.com/cupy/cupy/issues",
167 "Documentation": "https://docs-cupy.chainer.org/",
168 "Source Code": "https://github.com/cupy/cupy",
169 },
170 classifiers=[_f for _f in CLASSIFIERS.split('\n') if _f],
171 packages=[
172 'cupy',
173 'cupy.binary',
174 'cupy.core',
175 'cupy.creation',
176 'cupy.cuda',
177 'cupy.cuda.memory_hooks',
178 'cupy.ext',
179 'cupy.fft',
180 'cupy.indexing',
181 'cupy.io',
182 'cupy.lib',
183 'cupy.linalg',
184 'cupy.logic',
185 'cupy.manipulation',
186 'cupy.math',
187 'cupy.misc',
188 'cupy.padding',
189 'cupy.prof',
190 'cupy.random',
191 'cupy._sorting',
192 'cupy.sparse',
193 'cupy.sparse.linalg',
194 'cupy.statistics',
195 'cupy.testing',
196 'cupyx',
197 'cupyx.fallback_mode',
198 'cupyx.scipy',
199 'cupyx.scipy.fft',
200 'cupyx.scipy.fftpack',
201 'cupyx.scipy.ndimage',
202 'cupyx.scipy.sparse',
203 'cupyx.scipy.sparse.linalg',
204 'cupyx.scipy.special',
205 'cupyx.scipy.linalg',
206 'cupyx.linalg',
207 'cupyx.linalg.sparse'
208 ],
209 package_data=package_data,
210 zip_safe=False,
211 python_requires='>=3.5.0',
212 setup_requires=setup_requires,
213 install_requires=install_requires,
214 tests_require=tests_require,
215 extras_require=extras_require,
216 ext_modules=ext_modules,
217 cmdclass={'build_ext': build_ext,
218 'sdist': sdist},
219 )
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,7 +24,7 @@
'fastrlock>=0.3',
],
'install': [
- 'numpy>=1.9.0',
+ 'numpy>=1.15',
'fastrlock>=0.3',
],
'stylecheck': [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,7 +24,7 @@\n 'fastrlock>=0.3',\n ],\n 'install': [\n- 'numpy>=1.9.0',\n+ 'numpy>=1.15',\n 'fastrlock>=0.3',\n ],\n 'stylecheck': [\n", "issue": "Drop support of older NumPy (<=1.14)?\nAccording to [NEP 29](https://numpy.org/neps/nep-0029-deprecation_policy.html), an unusual NumPy Enhancement Proposal that declares a community-wide policy instead of merely proposing changes to NumPy itself, the support of NumPy <=1.14 will be dropped in early January, 2020, which is a few days later:\r\n> Drop Schedule\r\n> ...\r\n> On Jan 07, 2020 drop support for Numpy 1.14 (initially released on Jan 06, 2018)\r\n\r\nWould CuPy consider following NEP 29 so that some test codes can be simplified without worrying too much about backward compatibilities? I've seen this caused hard time for a few PRs.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nfrom setuptools import setup\nimport sys\n\nimport cupy_setup_build\n\n\nif sys.version_info[:3] == (3, 5, 0):\n if not int(os.getenv('CUPY_PYTHON_350_FORCE', '0')):\n msg = \"\"\"\nCuPy does not work with Python 3.5.0.\n\nWe strongly recommend to use another version of Python.\nIf you want to use CuPy with Python 3.5.0 at your own risk,\nset 1 to CUPY_PYTHON_350_FORCE environment variable.\"\"\"\n print(msg)\n sys.exit(1)\n\n\nrequirements = {\n 'setup': [\n 'fastrlock>=0.3',\n ],\n 'install': [\n 'numpy>=1.9.0',\n 'fastrlock>=0.3',\n ],\n 'stylecheck': [\n 'autopep8==1.3.5',\n 'flake8==3.5.0',\n 'pbr==4.0.4',\n 'pycodestyle==2.3.1',\n ],\n 'test': [\n 'pytest<4.2.0', # 4.2.0 is slow collecting tests and times out on CI.\n 'attrs<19.2.0', # pytest 4.1.1 does not run with attrs==19.2.0\n 'mock',\n ],\n 'doctest': [\n 'matplotlib',\n 'theano',\n ],\n 'docs': [\n 'sphinx',\n 'sphinx_rtd_theme',\n ],\n 'travis': [\n '-r stylecheck',\n '-r docs',\n ],\n 'appveyor': [\n '-r test',\n ],\n 'jenkins': [\n '-r test',\n 'pytest-timeout',\n 'pytest-cov',\n 'coveralls',\n 'codecov',\n ],\n}\n\n\ndef reduce_requirements(key):\n # Resolve recursive requirements notation (-r)\n reqs = requirements[key]\n resolved_reqs = []\n for req in reqs:\n if req.startswith('-r'):\n depend_key = req[2:].lstrip()\n reduce_requirements(depend_key)\n resolved_reqs += requirements[depend_key]\n else:\n resolved_reqs.append(req)\n requirements[key] = resolved_reqs\n\n\nfor k in requirements.keys():\n reduce_requirements(k)\n\n\nextras_require = {k: v for k, v in requirements.items() if k != 'install'}\n\n\nsetup_requires = requirements['setup']\ninstall_requires = requirements['install']\ntests_require = requirements['test']\n\n\npackage_data = {\n 'cupy': [\n 'core/include/cupy/complex/arithmetic.h',\n 'core/include/cupy/complex/catrig.h',\n 'core/include/cupy/complex/catrigf.h',\n 'core/include/cupy/complex/ccosh.h',\n 'core/include/cupy/complex/ccoshf.h',\n 'core/include/cupy/complex/cexp.h',\n 'core/include/cupy/complex/cexpf.h',\n 'core/include/cupy/complex/clog.h',\n 'core/include/cupy/complex/clogf.h',\n 'core/include/cupy/complex/complex.h',\n 'core/include/cupy/complex/complex_inl.h',\n 'core/include/cupy/complex/cpow.h',\n 'core/include/cupy/complex/cproj.h',\n 'core/include/cupy/complex/csinh.h',\n 'core/include/cupy/complex/csinhf.h',\n 'core/include/cupy/complex/csqrt.h',\n 'core/include/cupy/complex/csqrtf.h',\n 'core/include/cupy/complex/ctanh.h',\n 'core/include/cupy/complex/ctanhf.h',\n 'core/include/cupy/complex/math_private.h',\n 'core/include/cupy/carray.cuh',\n 
'core/include/cupy/complex.cuh',\n 'core/include/cupy/atomics.cuh',\n 'core/include/cupy/cuComplex_bridge.h',\n 'core/include/cupy/_cuda/cuda-*/*.h',\n 'core/include/cupy/_cuda/cuda-*/*.hpp',\n 'cuda/cupy_thrust.cu',\n ],\n}\n\npackage_data['cupy'] += cupy_setup_build.prepare_wheel_libs()\n\npackage_name = cupy_setup_build.get_package_name()\nlong_description = cupy_setup_build.get_long_description()\next_modules = cupy_setup_build.get_ext_modules()\nbuild_ext = cupy_setup_build.custom_build_ext\nsdist = cupy_setup_build.sdist_with_cython\n\nhere = os.path.abspath(os.path.dirname(__file__))\n# Get __version__ variable\nexec(open(os.path.join(here, 'cupy', '_version.py')).read())\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 5 - Production/Stable\nIntended Audience :: Science/Research\nIntended Audience :: Developers\nLicense :: OSI Approved :: MIT License\nProgramming Language :: Python\nProgramming Language :: Python :: 3\nProgramming Language :: Python :: 3.5\nProgramming Language :: Python :: 3.6\nProgramming Language :: Python :: 3.7\nProgramming Language :: Python :: 3 :: Only\nProgramming Language :: Cython\nTopic :: Software Development\nTopic :: Scientific/Engineering\nOperating System :: Microsoft :: Windows\nOperating System :: POSIX\nOperating System :: MacOS\n\"\"\"\n\n\nsetup(\n name=package_name,\n version=__version__, # NOQA\n description='CuPy: NumPy-like API accelerated with CUDA',\n long_description=long_description,\n author='Seiya Tokui',\n author_email='[email protected]',\n url='https://cupy.chainer.org/',\n license='MIT License',\n project_urls={\n \"Bug Tracker\": \"https://github.com/cupy/cupy/issues\",\n \"Documentation\": \"https://docs-cupy.chainer.org/\",\n \"Source Code\": \"https://github.com/cupy/cupy\",\n },\n classifiers=[_f for _f in CLASSIFIERS.split('\\n') if _f],\n packages=[\n 'cupy',\n 'cupy.binary',\n 'cupy.core',\n 'cupy.creation',\n 'cupy.cuda',\n 'cupy.cuda.memory_hooks',\n 'cupy.ext',\n 'cupy.fft',\n 'cupy.indexing',\n 'cupy.io',\n 'cupy.lib',\n 'cupy.linalg',\n 'cupy.logic',\n 'cupy.manipulation',\n 'cupy.math',\n 'cupy.misc',\n 'cupy.padding',\n 'cupy.prof',\n 'cupy.random',\n 'cupy._sorting',\n 'cupy.sparse',\n 'cupy.sparse.linalg',\n 'cupy.statistics',\n 'cupy.testing',\n 'cupyx',\n 'cupyx.fallback_mode',\n 'cupyx.scipy',\n 'cupyx.scipy.fft',\n 'cupyx.scipy.fftpack',\n 'cupyx.scipy.ndimage',\n 'cupyx.scipy.sparse',\n 'cupyx.scipy.sparse.linalg',\n 'cupyx.scipy.special',\n 'cupyx.scipy.linalg',\n 'cupyx.linalg',\n 'cupyx.linalg.sparse'\n ],\n package_data=package_data,\n zip_safe=False,\n python_requires='>=3.5.0',\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require=extras_require,\n ext_modules=ext_modules,\n cmdclass={'build_ext': build_ext,\n 'sdist': sdist},\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport os\nfrom setuptools import setup\nimport sys\n\nimport cupy_setup_build\n\n\nif sys.version_info[:3] == (3, 5, 0):\n if not int(os.getenv('CUPY_PYTHON_350_FORCE', '0')):\n msg = \"\"\"\nCuPy does not work with Python 3.5.0.\n\nWe strongly recommend to use another version of Python.\nIf you want to use CuPy with Python 3.5.0 at your own risk,\nset 1 to CUPY_PYTHON_350_FORCE environment variable.\"\"\"\n print(msg)\n sys.exit(1)\n\n\nrequirements = {\n 'setup': [\n 'fastrlock>=0.3',\n ],\n 'install': [\n 'numpy>=1.15',\n 'fastrlock>=0.3',\n ],\n 'stylecheck': [\n 'autopep8==1.3.5',\n 'flake8==3.5.0',\n 'pbr==4.0.4',\n 
'pycodestyle==2.3.1',\n ],\n 'test': [\n 'pytest<4.2.0', # 4.2.0 is slow collecting tests and times out on CI.\n 'attrs<19.2.0', # pytest 4.1.1 does not run with attrs==19.2.0\n 'mock',\n ],\n 'doctest': [\n 'matplotlib',\n 'theano',\n ],\n 'docs': [\n 'sphinx',\n 'sphinx_rtd_theme',\n ],\n 'travis': [\n '-r stylecheck',\n '-r docs',\n ],\n 'appveyor': [\n '-r test',\n ],\n 'jenkins': [\n '-r test',\n 'pytest-timeout',\n 'pytest-cov',\n 'coveralls',\n 'codecov',\n ],\n}\n\n\ndef reduce_requirements(key):\n # Resolve recursive requirements notation (-r)\n reqs = requirements[key]\n resolved_reqs = []\n for req in reqs:\n if req.startswith('-r'):\n depend_key = req[2:].lstrip()\n reduce_requirements(depend_key)\n resolved_reqs += requirements[depend_key]\n else:\n resolved_reqs.append(req)\n requirements[key] = resolved_reqs\n\n\nfor k in requirements.keys():\n reduce_requirements(k)\n\n\nextras_require = {k: v for k, v in requirements.items() if k != 'install'}\n\n\nsetup_requires = requirements['setup']\ninstall_requires = requirements['install']\ntests_require = requirements['test']\n\n\npackage_data = {\n 'cupy': [\n 'core/include/cupy/complex/arithmetic.h',\n 'core/include/cupy/complex/catrig.h',\n 'core/include/cupy/complex/catrigf.h',\n 'core/include/cupy/complex/ccosh.h',\n 'core/include/cupy/complex/ccoshf.h',\n 'core/include/cupy/complex/cexp.h',\n 'core/include/cupy/complex/cexpf.h',\n 'core/include/cupy/complex/clog.h',\n 'core/include/cupy/complex/clogf.h',\n 'core/include/cupy/complex/complex.h',\n 'core/include/cupy/complex/complex_inl.h',\n 'core/include/cupy/complex/cpow.h',\n 'core/include/cupy/complex/cproj.h',\n 'core/include/cupy/complex/csinh.h',\n 'core/include/cupy/complex/csinhf.h',\n 'core/include/cupy/complex/csqrt.h',\n 'core/include/cupy/complex/csqrtf.h',\n 'core/include/cupy/complex/ctanh.h',\n 'core/include/cupy/complex/ctanhf.h',\n 'core/include/cupy/complex/math_private.h',\n 'core/include/cupy/carray.cuh',\n 'core/include/cupy/complex.cuh',\n 'core/include/cupy/atomics.cuh',\n 'core/include/cupy/cuComplex_bridge.h',\n 'core/include/cupy/_cuda/cuda-*/*.h',\n 'core/include/cupy/_cuda/cuda-*/*.hpp',\n 'cuda/cupy_thrust.cu',\n ],\n}\n\npackage_data['cupy'] += cupy_setup_build.prepare_wheel_libs()\n\npackage_name = cupy_setup_build.get_package_name()\nlong_description = cupy_setup_build.get_long_description()\next_modules = cupy_setup_build.get_ext_modules()\nbuild_ext = cupy_setup_build.custom_build_ext\nsdist = cupy_setup_build.sdist_with_cython\n\nhere = os.path.abspath(os.path.dirname(__file__))\n# Get __version__ variable\nexec(open(os.path.join(here, 'cupy', '_version.py')).read())\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 5 - Production/Stable\nIntended Audience :: Science/Research\nIntended Audience :: Developers\nLicense :: OSI Approved :: MIT License\nProgramming Language :: Python\nProgramming Language :: Python :: 3\nProgramming Language :: Python :: 3.5\nProgramming Language :: Python :: 3.6\nProgramming Language :: Python :: 3.7\nProgramming Language :: Python :: 3 :: Only\nProgramming Language :: Cython\nTopic :: Software Development\nTopic :: Scientific/Engineering\nOperating System :: Microsoft :: Windows\nOperating System :: POSIX\nOperating System :: MacOS\n\"\"\"\n\n\nsetup(\n name=package_name,\n version=__version__, # NOQA\n description='CuPy: NumPy-like API accelerated with CUDA',\n long_description=long_description,\n author='Seiya Tokui',\n author_email='[email protected]',\n url='https://cupy.chainer.org/',\n license='MIT 
License',\n project_urls={\n \"Bug Tracker\": \"https://github.com/cupy/cupy/issues\",\n \"Documentation\": \"https://docs-cupy.chainer.org/\",\n \"Source Code\": \"https://github.com/cupy/cupy\",\n },\n classifiers=[_f for _f in CLASSIFIERS.split('\\n') if _f],\n packages=[\n 'cupy',\n 'cupy.binary',\n 'cupy.core',\n 'cupy.creation',\n 'cupy.cuda',\n 'cupy.cuda.memory_hooks',\n 'cupy.ext',\n 'cupy.fft',\n 'cupy.indexing',\n 'cupy.io',\n 'cupy.lib',\n 'cupy.linalg',\n 'cupy.logic',\n 'cupy.manipulation',\n 'cupy.math',\n 'cupy.misc',\n 'cupy.padding',\n 'cupy.prof',\n 'cupy.random',\n 'cupy._sorting',\n 'cupy.sparse',\n 'cupy.sparse.linalg',\n 'cupy.statistics',\n 'cupy.testing',\n 'cupyx',\n 'cupyx.fallback_mode',\n 'cupyx.scipy',\n 'cupyx.scipy.fft',\n 'cupyx.scipy.fftpack',\n 'cupyx.scipy.ndimage',\n 'cupyx.scipy.sparse',\n 'cupyx.scipy.sparse.linalg',\n 'cupyx.scipy.special',\n 'cupyx.scipy.linalg',\n 'cupyx.linalg',\n 'cupyx.linalg.sparse'\n ],\n package_data=package_data,\n zip_safe=False,\n python_requires='>=3.5.0',\n setup_requires=setup_requires,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require=extras_require,\n ext_modules=ext_modules,\n cmdclass={'build_ext': build_ext,\n 'sdist': sdist},\n)\n", "path": "setup.py"}]} |
gh_patches_debug_87 | rasdani/github-patches | git_diff | flask-admin__flask-admin-1732 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Regression: Batch actions not working
On the master branch, batch actions fail with a JS error: `TypeError: undefined is not an object (evaluating 'modelActions.execute')`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/sqla/app.py`
Content:
```
1 import os
2 import os.path as op
3 from flask import Flask
4 from flask_sqlalchemy import SQLAlchemy
5 from sqlalchemy.ext.hybrid import hybrid_property
6
7 from wtforms import validators
8
9 import flask_admin as admin
10 from flask_admin.base import MenuLink
11 from flask_admin.contrib import sqla
12 from flask_admin.contrib.sqla import filters
13 from flask_admin.contrib.sqla.form import InlineModelConverter
14 from flask_admin.contrib.sqla.fields import InlineModelFormList
15 from flask_admin.contrib.sqla.filters import BaseSQLAFilter, FilterEqual
16
17
18 # Create application
19 app = Flask(__name__)
20
21 # set optional bootswatch theme
22 # see http://bootswatch.com/3/ for available swatches
23 app.config['FLASK_ADMIN_SWATCH'] = 'cerulean'
24
25 # Create dummy secret key so we can use sessions
26 app.config['SECRET_KEY'] = '123456790'
27
28 # Create in-memory database
29 app.config['DATABASE_FILE'] = 'sample_db.sqlite'
30 app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + app.config['DATABASE_FILE']
31 app.config['SQLALCHEMY_ECHO'] = True
32 db = SQLAlchemy(app)
33
34
35 # Create models
36 class User(db.Model):
37 id = db.Column(db.Integer, primary_key=True)
38 first_name = db.Column(db.String(100))
39 last_name = db.Column(db.String(100))
40 email = db.Column(db.String(120), unique=True)
41 pets = db.relationship('Pet', backref='owner')
42
43 def __str__(self):
44 return "{}, {}".format(self.last_name, self.first_name)
45
46
47 class Pet(db.Model):
48 id = db.Column(db.Integer, primary_key=True)
49 name = db.Column(db.String(50), nullable=False)
50 person_id = db.Column(db.Integer, db.ForeignKey('user.id'))
51 available = db.Column(db.Boolean)
52
53 def __str__(self):
54 return self.name
55
56
57 # Create M2M table
58 post_tags_table = db.Table('post_tags', db.Model.metadata,
59 db.Column('post_id', db.Integer, db.ForeignKey('post.id')),
60 db.Column('tag_id', db.Integer, db.ForeignKey('tag.id'))
61 )
62
63
64 class Post(db.Model):
65 id = db.Column(db.Integer, primary_key=True)
66 title = db.Column(db.String(120))
67 text = db.Column(db.Text, nullable=False)
68 date = db.Column(db.Date)
69
70 user_id = db.Column(db.Integer(), db.ForeignKey(User.id))
71 user = db.relationship(User, backref='posts')
72
73 tags = db.relationship('Tag', secondary=post_tags_table)
74
75 def __str__(self):
76 return "{}".format(self.title)
77
78
79 class Tag(db.Model):
80 id = db.Column(db.Integer, primary_key=True)
81 name = db.Column(db.Unicode(64))
82
83 def __str__(self):
84 return "{}".format(self.name)
85
86
87 class UserInfo(db.Model):
88 id = db.Column(db.Integer, primary_key=True)
89
90 key = db.Column(db.String(64), nullable=False)
91 value = db.Column(db.String(64))
92
93 user_id = db.Column(db.Integer(), db.ForeignKey(User.id))
94 user = db.relationship(User, backref='info')
95
96 def __str__(self):
97 return "{} - {}".format(self.key, self.value)
98
99
100 class Tree(db.Model):
101 id = db.Column(db.Integer, primary_key=True)
102 name = db.Column(db.String(64))
103 parent_id = db.Column(db.Integer, db.ForeignKey('tree.id'))
104 parent = db.relationship('Tree', remote_side=[id], backref='children')
105
106 def __str__(self):
107 return "{}".format(self.name)
108
109
110 class Screen(db.Model):
111 __tablename__ = 'screen'
112 id = db.Column(db.Integer, primary_key=True)
113 width = db.Column(db.Integer, nullable=False)
114 height = db.Column(db.Integer, nullable=False)
115
116 @hybrid_property
117 def number_of_pixels(self):
118 return self.width * self.height
119
120
121 # Flask views
122 @app.route('/')
123 def index():
124 return '<a href="/admin/">Click me to get to Admin!</a>'
125
126
127 # Custom filter class
128 class FilterLastNameBrown(BaseSQLAFilter):
129 def apply(self, query, value, alias=None):
130 if value == '1':
131 return query.filter(self.column == "Brown")
132 else:
133 return query.filter(self.column != "Brown")
134
135 def operation(self):
136 return 'is Brown'
137
138
139 # Customized User model admin
140 inline_form_options = {
141 'form_label': "Info item",
142 'form_columns': ['id', 'key', 'value'],
143 'form_args': None,
144 'form_extra_fields': None,
145 }
146
147 class UserAdmin(sqla.ModelView):
148 column_display_pk = True
149 column_list = [
150 'id',
151 'last_name',
152 'first_name',
153 'email',
154 'pets',
155 ]
156 column_default_sort = [('last_name', False), ('first_name', False)] # sort on multiple columns
157
158 # custom filter: each filter in the list is a filter operation (equals, not equals, etc)
159 # filters with the same name will appear as operations under the same filter
160 column_filters = [
161 FilterEqual(column=User.last_name, name='Last Name'),
162 FilterLastNameBrown(column=User.last_name, name='Last Name',
163 options=(('1', 'Yes'), ('0', 'No')))
164 ]
165 inline_models = [(UserInfo, inline_form_options), ]
166
167 # setup create & edit forms so that only 'available' pets can be selected
168 def create_form(self):
169 return self._use_filtered_parent(
170 super(UserAdmin, self).create_form()
171 )
172
173 def edit_form(self, obj):
174 return self._use_filtered_parent(
175 super(UserAdmin, self).edit_form(obj)
176 )
177
178 def _use_filtered_parent(self, form):
179 form.pets.query_factory = self._get_parent_list
180 return form
181
182 def _get_parent_list(self):
183 # only show available pets in the form
184 return Pet.query.filter_by(available=True).all()
185
186
187
188 # Customized Post model admin
189 class PostAdmin(sqla.ModelView):
190 column_exclude_list = ['text']
191 column_default_sort = ('date', True)
192 column_sortable_list = [
193 'title',
194 'date',
195 ('user', ('user.last_name', 'user.first_name')), # sort on multiple columns
196 ]
197 column_labels = dict(title='Post Title') # Rename 'title' column in list view
198 column_searchable_list = [
199 'title',
200 User.first_name,
201 User.last_name,
202 'tags.name',
203 ]
204 column_filters = [
205 'user',
206 'title',
207 'date',
208 'tags',
209 filters.FilterLike(Post.title, 'Fixed Title', options=(('test1', 'Test 1'), ('test2', 'Test 2'))),
210 ]
211
212 # Pass arguments to WTForms. In this case, change label for text field to
213 # be 'Big Text' and add required() validator.
214 form_args = dict(
215 text=dict(label='Big Text', validators=[validators.required()])
216 )
217
218 form_ajax_refs = {
219 'user': {
220 'fields': (User.first_name, User.last_name)
221 },
222 'tags': {
223 'fields': (Tag.name,),
224 'minimum_input_length': 0, # show suggestions, even before any user input
225 'placeholder': 'Please select',
226 'page_size': 5,
227 },
228 }
229
230 def __init__(self, session):
231 # Just call parent class with predefined model.
232 super(PostAdmin, self).__init__(Post, session)
233
234
235 class TreeView(sqla.ModelView):
236 form_excluded_columns = ['children', ]
237
238
239 class ScreenView(sqla.ModelView):
240 column_list = ['id', 'width', 'height', 'number_of_pixels'] # note that 'number_of_pixels' is a hybrid property, not a field
241 column_sortable_list = ['id', 'width', 'height', 'number_of_pixels']
242
243 # Flask-admin can automatically detect the relevant filters for hybrid properties.
244 column_filters = ('number_of_pixels', )
245
246
247 # Create admin
248 admin = admin.Admin(app, name='Example: SQLAlchemy', template_mode='bootstrap3')
249
250 # Add views
251 admin.add_view(UserAdmin(User, db.session))
252 admin.add_view(sqla.ModelView(Tag, db.session))
253 admin.add_view(PostAdmin(db.session))
254 admin.add_view(sqla.ModelView(Pet, db.session, category="Other"))
255 admin.add_view(sqla.ModelView(UserInfo, db.session, category="Other"))
256 admin.add_view(TreeView(Tree, db.session, category="Other"))
257 admin.add_view(ScreenView(Screen, db.session, category="Other"))
258 admin.add_sub_category(name="Links", parent_name="Other")
259 admin.add_link(MenuLink(name='Back Home', url='/', category='Links'))
260 admin.add_link(MenuLink(name='Google', url='http://www.google.com/', category='Links'))
261 admin.add_link(MenuLink(name='Mozilla', url='http://mozilla.org/', category='Links'))
262
263
264 def build_sample_db():
265 """
266 Populate a small db with some example entries.
267 """
268
269 import random
270 import datetime
271
272 db.drop_all()
273 db.create_all()
274
275 # Create sample Users
276 first_names = [
277 'Harry', 'Amelia', 'Oliver', 'Jack', 'Isabella', 'Charlie', 'Sophie', 'Mia',
278 'Jacob', 'Thomas', 'Emily', 'Lily', 'Ava', 'Isla', 'Alfie', 'Olivia', 'Jessica',
279 'Riley', 'William', 'James', 'Geoffrey', 'Lisa', 'Benjamin', 'Stacey', 'Lucy'
280 ]
281 last_names = [
282 'Brown', 'Brown', 'Patel', 'Jones', 'Williams', 'Johnson', 'Taylor', 'Thomas',
283 'Roberts', 'Khan', 'Clarke', 'Clarke', 'Clarke', 'James', 'Phillips', 'Wilson',
284 'Ali', 'Mason', 'Mitchell', 'Rose', 'Davis', 'Davies', 'Rodriguez', 'Cox', 'Alexander'
285 ]
286
287 user_list = []
288 for i in range(len(first_names)):
289 user = User()
290 user.first_name = first_names[i]
291 user.last_name = last_names[i]
292 user.email = first_names[i].lower() + "@example.com"
293 user.info.append(UserInfo(key="foo", value="bar"))
294 user_list.append(user)
295 db.session.add(user)
296
297 # Create sample Tags
298 tag_list = []
299 for tmp in ["YELLOW", "WHITE", "BLUE", "GREEN", "RED", "BLACK", "BROWN", "PURPLE", "ORANGE"]:
300 tag = Tag()
301 tag.name = tmp
302 tag_list.append(tag)
303 db.session.add(tag)
304
305 # Create sample Posts
306 sample_text = [
307 {
308 'title': "de Finibus Bonorum et Malorum - Part I",
309 'content': "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor \
310 incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud \
311 exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure \
312 dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. \
313 Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt \
314 mollit anim id est laborum."
315 },
316 {
317 'title': "de Finibus Bonorum et Malorum - Part II",
318 'content': "Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque \
319 laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto \
320 beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur \
321 aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi \
322 nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, \
323 adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam \
324 aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam \
325 corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum \
326 iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum \
327 qui dolorem eum fugiat quo voluptas nulla pariatur?"
328 },
329 {
330 'title': "de Finibus Bonorum et Malorum - Part III",
331 'content': "At vero eos et accusamus et iusto odio dignissimos ducimus qui blanditiis praesentium \
332 voluptatum deleniti atque corrupti quos dolores et quas molestias excepturi sint occaecati \
333 cupiditate non provident, similique sunt in culpa qui officia deserunt mollitia animi, id \
334 est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam \
335 libero tempore, cum soluta nobis est eligendi optio cumque nihil impedit quo minus id quod \
336 maxime placeat facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. \
337 Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet \
338 ut et voluptates repudiandae sint et molestiae non recusandae. Itaque earum rerum hic tenetur \
339 a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis \
340 doloribus asperiores repellat."
341 }
342 ]
343
344 for user in user_list:
345 entry = random.choice(sample_text) # select text at random
346 post = Post()
347 post.user = user
348 post.title = entry['title']
349 post.text = entry['content']
350 tmp = int(1000*random.random()) # random number between 0 and 1000:
351 post.date = datetime.datetime.now() - datetime.timedelta(days=tmp)
352 post.tags = random.sample(tag_list, 2) # select a couple of tags at random
353 db.session.add(post)
354
355 # Create a sample Tree structure
356 trunk = Tree(name="Trunk")
357 db.session.add(trunk)
358 for i in range(5):
359 branch = Tree()
360 branch.name = "Branch " + str(i+1)
361 branch.parent = trunk
362 db.session.add(branch)
363 for j in range(5):
364 leaf = Tree()
365 leaf.name = "Leaf " + str(j+1)
366 leaf.parent = branch
367 db.session.add(leaf)
368
369 db.session.add(Pet(name='Dog', available=True))
370 db.session.add(Pet(name='Fish', available=True))
371 db.session.add(Pet(name='Cat', available=True))
372 db.session.add(Pet(name='Parrot', available=True))
373 db.session.add(Pet(name='Ocelot', available=False))
374
375 db.session.add(Screen(width=500, height=2000))
376 db.session.add(Screen(width=550, height=1900))
377
378 db.session.commit()
379 return
380
381 if __name__ == '__main__':
382 # Build a sample db on the fly, if one does not exist yet.
383 app_dir = op.realpath(os.path.dirname(__file__))
384 database_path = op.join(app_dir, app.config['DATABASE_FILE'])
385 if not os.path.exists(database_path):
386 build_sample_db()
387
388 # Start app
389 app.run(debug=True)
390
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/sqla/app.py b/examples/sqla/app.py
--- a/examples/sqla/app.py
+++ b/examples/sqla/app.py
@@ -145,6 +145,7 @@
}
class UserAdmin(sqla.ModelView):
+ action_disallowed_list = ['delete', ]
column_display_pk = True
column_list = [
'id',
| {"golden_diff": "diff --git a/examples/sqla/app.py b/examples/sqla/app.py\n--- a/examples/sqla/app.py\n+++ b/examples/sqla/app.py\n@@ -145,6 +145,7 @@\n }\n \n class UserAdmin(sqla.ModelView):\n+ action_disallowed_list = ['delete', ]\n column_display_pk = True\n column_list = [\n 'id',\n", "issue": "Regression: Batch actions not working\nOn the master branch, batch actions fail with a JS error: `TypeError: undefined is not an object (evaluating 'modelActions.execute')`\n", "before_files": [{"content": "import os\nimport os.path as op\nfrom flask import Flask\nfrom flask_sqlalchemy import SQLAlchemy\nfrom sqlalchemy.ext.hybrid import hybrid_property\n\nfrom wtforms import validators\n\nimport flask_admin as admin\nfrom flask_admin.base import MenuLink\nfrom flask_admin.contrib import sqla\nfrom flask_admin.contrib.sqla import filters\nfrom flask_admin.contrib.sqla.form import InlineModelConverter\nfrom flask_admin.contrib.sqla.fields import InlineModelFormList\nfrom flask_admin.contrib.sqla.filters import BaseSQLAFilter, FilterEqual\n\n\n# Create application\napp = Flask(__name__)\n\n# set optional bootswatch theme\n# see http://bootswatch.com/3/ for available swatches\napp.config['FLASK_ADMIN_SWATCH'] = 'cerulean'\n\n# Create dummy secrey key so we can use sessions\napp.config['SECRET_KEY'] = '123456790'\n\n# Create in-memory database\napp.config['DATABASE_FILE'] = 'sample_db.sqlite'\napp.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + app.config['DATABASE_FILE']\napp.config['SQLALCHEMY_ECHO'] = True\ndb = SQLAlchemy(app)\n\n\n# Create models\nclass User(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n first_name = db.Column(db.String(100))\n last_name = db.Column(db.String(100))\n email = db.Column(db.String(120), unique=True)\n pets = db.relationship('Pet', backref='owner')\n\n def __str__(self):\n return \"{}, {}\".format(self.last_name, self.first_name)\n\n\nclass Pet(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n name = db.Column(db.String(50), nullable=False)\n person_id = db.Column(db.Integer, db.ForeignKey('user.id'))\n available = db.Column(db.Boolean)\n\n def __str__(self):\n return self.name\n\n\n# Create M2M table\npost_tags_table = db.Table('post_tags', db.Model.metadata,\n db.Column('post_id', db.Integer, db.ForeignKey('post.id')),\n db.Column('tag_id', db.Integer, db.ForeignKey('tag.id'))\n )\n\n\nclass Post(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n title = db.Column(db.String(120))\n text = db.Column(db.Text, nullable=False)\n date = db.Column(db.Date)\n\n user_id = db.Column(db.Integer(), db.ForeignKey(User.id))\n user = db.relationship(User, backref='posts')\n\n tags = db.relationship('Tag', secondary=post_tags_table)\n\n def __str__(self):\n return \"{}\".format(self.title)\n\n\nclass Tag(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n name = db.Column(db.Unicode(64))\n\n def __str__(self):\n return \"{}\".format(self.name)\n\n\nclass UserInfo(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n\n key = db.Column(db.String(64), nullable=False)\n value = db.Column(db.String(64))\n\n user_id = db.Column(db.Integer(), db.ForeignKey(User.id))\n user = db.relationship(User, backref='info')\n\n def __str__(self):\n return \"{} - {}\".format(self.key, self.value)\n\n\nclass Tree(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n name = db.Column(db.String(64))\n parent_id = db.Column(db.Integer, db.ForeignKey('tree.id'))\n parent = db.relationship('Tree', remote_side=[id], backref='children')\n\n def 
__str__(self):\n return \"{}\".format(self.name)\n\n\nclass Screen(db.Model):\n __tablename__ = 'screen'\n id = db.Column(db.Integer, primary_key=True)\n width = db.Column(db.Integer, nullable=False)\n height = db.Column(db.Integer, nullable=False)\n\n @hybrid_property\n def number_of_pixels(self):\n return self.width * self.height\n\n\n# Flask views\[email protected]('/')\ndef index():\n return '<a href=\"/admin/\">Click me to get to Admin!</a>'\n\n\n# Custom filter class\nclass FilterLastNameBrown(BaseSQLAFilter):\n def apply(self, query, value, alias=None):\n if value == '1':\n return query.filter(self.column == \"Brown\")\n else:\n return query.filter(self.column != \"Brown\")\n\n def operation(self):\n return 'is Brown'\n\n\n# Customized User model admin\ninline_form_options = {\n 'form_label': \"Info item\",\n 'form_columns': ['id', 'key', 'value'],\n 'form_args': None,\n 'form_extra_fields': None,\n}\n\nclass UserAdmin(sqla.ModelView):\n column_display_pk = True\n column_list = [\n 'id',\n 'last_name',\n 'first_name',\n 'email',\n 'pets',\n ]\n column_default_sort = [('last_name', False), ('first_name', False)] # sort on multiple columns\n\n # custom filter: each filter in the list is a filter operation (equals, not equals, etc)\n # filters with the same name will appear as operations under the same filter\n column_filters = [\n FilterEqual(column=User.last_name, name='Last Name'),\n FilterLastNameBrown(column=User.last_name, name='Last Name',\n options=(('1', 'Yes'), ('0', 'No')))\n ]\n inline_models = [(UserInfo, inline_form_options), ]\n\n # setup create & edit forms so that only 'available' pets can be selected\n def create_form(self):\n return self._use_filtered_parent(\n super(UserAdmin, self).create_form()\n )\n\n def edit_form(self, obj):\n return self._use_filtered_parent(\n super(UserAdmin, self).edit_form(obj)\n )\n\n def _use_filtered_parent(self, form):\n form.pets.query_factory = self._get_parent_list\n return form\n\n def _get_parent_list(self):\n # only show available pets in the form\n return Pet.query.filter_by(available=True).all()\n\n\n\n# Customized Post model admin\nclass PostAdmin(sqla.ModelView):\n column_exclude_list = ['text']\n column_default_sort = ('date', True)\n column_sortable_list = [\n 'title',\n 'date',\n ('user', ('user.last_name', 'user.first_name')), # sort on multiple columns\n ]\n column_labels = dict(title='Post Title') # Rename 'title' column in list view\n column_searchable_list = [\n 'title',\n User.first_name,\n User.last_name,\n 'tags.name',\n ]\n column_filters = [\n 'user',\n 'title',\n 'date',\n 'tags',\n filters.FilterLike(Post.title, 'Fixed Title', options=(('test1', 'Test 1'), ('test2', 'Test 2'))),\n ]\n\n # Pass arguments to WTForms. 
In this case, change label for text field to\n # be 'Big Text' and add required() validator.\n form_args = dict(\n text=dict(label='Big Text', validators=[validators.required()])\n )\n\n form_ajax_refs = {\n 'user': {\n 'fields': (User.first_name, User.last_name)\n },\n 'tags': {\n 'fields': (Tag.name,),\n 'minimum_input_length': 0, # show suggestions, even before any user input\n 'placeholder': 'Please select',\n 'page_size': 5,\n },\n }\n\n def __init__(self, session):\n # Just call parent class with predefined model.\n super(PostAdmin, self).__init__(Post, session)\n\n\nclass TreeView(sqla.ModelView):\n form_excluded_columns = ['children', ]\n\n\nclass ScreenView(sqla.ModelView):\n column_list = ['id', 'width', 'height', 'number_of_pixels'] # not that 'number_of_pixels' is a hybrid property, not a field\n column_sortable_list = ['id', 'width', 'height', 'number_of_pixels']\n\n # Flask-admin can automatically detect the relevant filters for hybrid properties.\n column_filters = ('number_of_pixels', )\n\n\n# Create admin\nadmin = admin.Admin(app, name='Example: SQLAlchemy', template_mode='bootstrap3')\n\n# Add views\nadmin.add_view(UserAdmin(User, db.session))\nadmin.add_view(sqla.ModelView(Tag, db.session))\nadmin.add_view(PostAdmin(db.session))\nadmin.add_view(sqla.ModelView(Pet, db.session, category=\"Other\"))\nadmin.add_view(sqla.ModelView(UserInfo, db.session, category=\"Other\"))\nadmin.add_view(TreeView(Tree, db.session, category=\"Other\"))\nadmin.add_view(ScreenView(Screen, db.session, category=\"Other\"))\nadmin.add_sub_category(name=\"Links\", parent_name=\"Other\")\nadmin.add_link(MenuLink(name='Back Home', url='/', category='Links'))\nadmin.add_link(MenuLink(name='Google', url='http://www.google.com/', category='Links'))\nadmin.add_link(MenuLink(name='Mozilla', url='http://mozilla.org/', category='Links'))\n\n\ndef build_sample_db():\n \"\"\"\n Populate a small db with some example entries.\n \"\"\"\n\n import random\n import datetime\n\n db.drop_all()\n db.create_all()\n\n # Create sample Users\n first_names = [\n 'Harry', 'Amelia', 'Oliver', 'Jack', 'Isabella', 'Charlie', 'Sophie', 'Mia',\n 'Jacob', 'Thomas', 'Emily', 'Lily', 'Ava', 'Isla', 'Alfie', 'Olivia', 'Jessica',\n 'Riley', 'William', 'James', 'Geoffrey', 'Lisa', 'Benjamin', 'Stacey', 'Lucy'\n ]\n last_names = [\n 'Brown', 'Brown', 'Patel', 'Jones', 'Williams', 'Johnson', 'Taylor', 'Thomas',\n 'Roberts', 'Khan', 'Clarke', 'Clarke', 'Clarke', 'James', 'Phillips', 'Wilson',\n 'Ali', 'Mason', 'Mitchell', 'Rose', 'Davis', 'Davies', 'Rodriguez', 'Cox', 'Alexander'\n ]\n\n user_list = []\n for i in range(len(first_names)):\n user = User()\n user.first_name = first_names[i]\n user.last_name = last_names[i]\n user.email = first_names[i].lower() + \"@example.com\"\n user.info.append(UserInfo(key=\"foo\", value=\"bar\"))\n user_list.append(user)\n db.session.add(user)\n\n # Create sample Tags\n tag_list = []\n for tmp in [\"YELLOW\", \"WHITE\", \"BLUE\", \"GREEN\", \"RED\", \"BLACK\", \"BROWN\", \"PURPLE\", \"ORANGE\"]:\n tag = Tag()\n tag.name = tmp\n tag_list.append(tag)\n db.session.add(tag)\n\n # Create sample Posts\n sample_text = [\n {\n 'title': \"de Finibus Bonorum et Malorum - Part I\",\n 'content': \"Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor \\\n incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud \\\n exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. 
Duis aute irure \\\n dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. \\\n Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt \\\n mollit anim id est laborum.\"\n },\n {\n 'title': \"de Finibus Bonorum et Malorum - Part II\",\n 'content': \"Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque \\\n laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto \\\n beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur \\\n aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi \\\n nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, \\\n adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam \\\n aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam \\\n corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum \\\n iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum \\\n qui dolorem eum fugiat quo voluptas nulla pariatur?\"\n },\n {\n 'title': \"de Finibus Bonorum et Malorum - Part III\",\n 'content': \"At vero eos et accusamus et iusto odio dignissimos ducimus qui blanditiis praesentium \\\n voluptatum deleniti atque corrupti quos dolores et quas molestias excepturi sint occaecati \\\n cupiditate non provident, similique sunt in culpa qui officia deserunt mollitia animi, id \\\n est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam \\\n libero tempore, cum soluta nobis est eligendi optio cumque nihil impedit quo minus id quod \\\n maxime placeat facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. \\\n Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet \\\n ut et voluptates repudiandae sint et molestiae non recusandae. 
Itaque earum rerum hic tenetur \\\n a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis \\\n doloribus asperiores repellat.\"\n }\n ]\n\n for user in user_list:\n entry = random.choice(sample_text) # select text at random\n post = Post()\n post.user = user\n post.title = entry['title']\n post.text = entry['content']\n tmp = int(1000*random.random()) # random number between 0 and 1000:\n post.date = datetime.datetime.now() - datetime.timedelta(days=tmp)\n post.tags = random.sample(tag_list, 2) # select a couple of tags at random\n db.session.add(post)\n\n # Create a sample Tree structure\n trunk = Tree(name=\"Trunk\")\n db.session.add(trunk)\n for i in range(5):\n branch = Tree()\n branch.name = \"Branch \" + str(i+1)\n branch.parent = trunk\n db.session.add(branch)\n for j in range(5):\n leaf = Tree()\n leaf.name = \"Leaf \" + str(j+1)\n leaf.parent = branch\n db.session.add(leaf)\n\n db.session.add(Pet(name='Dog', available=True))\n db.session.add(Pet(name='Fish', available=True))\n db.session.add(Pet(name='Cat', available=True))\n db.session.add(Pet(name='Parrot', available=True))\n db.session.add(Pet(name='Ocelot', available=False))\n\n db.session.add(Screen(width=500, height=2000))\n db.session.add(Screen(width=550, height=1900))\n\n db.session.commit()\n return\n\nif __name__ == '__main__':\n # Build a sample db on the fly, if one does not exist yet.\n app_dir = op.realpath(os.path.dirname(__file__))\n database_path = op.join(app_dir, app.config['DATABASE_FILE'])\n if not os.path.exists(database_path):\n build_sample_db()\n\n # Start app\n app.run(debug=True)\n", "path": "examples/sqla/app.py"}], "after_files": [{"content": "import os\nimport os.path as op\nfrom flask import Flask\nfrom flask_sqlalchemy import SQLAlchemy\nfrom sqlalchemy.ext.hybrid import hybrid_property\n\nfrom wtforms import validators\n\nimport flask_admin as admin\nfrom flask_admin.base import MenuLink\nfrom flask_admin.contrib import sqla\nfrom flask_admin.contrib.sqla import filters\nfrom flask_admin.contrib.sqla.form import InlineModelConverter\nfrom flask_admin.contrib.sqla.fields import InlineModelFormList\nfrom flask_admin.contrib.sqla.filters import BaseSQLAFilter, FilterEqual\n\n\n# Create application\napp = Flask(__name__)\n\n# set optional bootswatch theme\n# see http://bootswatch.com/3/ for available swatches\napp.config['FLASK_ADMIN_SWATCH'] = 'cerulean'\n\n# Create dummy secrey key so we can use sessions\napp.config['SECRET_KEY'] = '123456790'\n\n# Create in-memory database\napp.config['DATABASE_FILE'] = 'sample_db.sqlite'\napp.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + app.config['DATABASE_FILE']\napp.config['SQLALCHEMY_ECHO'] = True\ndb = SQLAlchemy(app)\n\n\n# Create models\nclass User(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n first_name = db.Column(db.String(100))\n last_name = db.Column(db.String(100))\n email = db.Column(db.String(120), unique=True)\n pets = db.relationship('Pet', backref='owner')\n\n def __str__(self):\n return \"{}, {}\".format(self.last_name, self.first_name)\n\n\nclass Pet(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n name = db.Column(db.String(50), nullable=False)\n person_id = db.Column(db.Integer, db.ForeignKey('user.id'))\n available = db.Column(db.Boolean)\n\n def __str__(self):\n return self.name\n\n\n# Create M2M table\npost_tags_table = db.Table('post_tags', db.Model.metadata,\n db.Column('post_id', db.Integer, db.ForeignKey('post.id')),\n db.Column('tag_id', db.Integer, 
db.ForeignKey('tag.id'))\n )\n\n\nclass Post(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n title = db.Column(db.String(120))\n text = db.Column(db.Text, nullable=False)\n date = db.Column(db.Date)\n\n user_id = db.Column(db.Integer(), db.ForeignKey(User.id))\n user = db.relationship(User, backref='posts')\n\n tags = db.relationship('Tag', secondary=post_tags_table)\n\n def __str__(self):\n return \"{}\".format(self.title)\n\n\nclass Tag(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n name = db.Column(db.Unicode(64))\n\n def __str__(self):\n return \"{}\".format(self.name)\n\n\nclass UserInfo(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n\n key = db.Column(db.String(64), nullable=False)\n value = db.Column(db.String(64))\n\n user_id = db.Column(db.Integer(), db.ForeignKey(User.id))\n user = db.relationship(User, backref='info')\n\n def __str__(self):\n return \"{} - {}\".format(self.key, self.value)\n\n\nclass Tree(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n name = db.Column(db.String(64))\n parent_id = db.Column(db.Integer, db.ForeignKey('tree.id'))\n parent = db.relationship('Tree', remote_side=[id], backref='children')\n\n def __str__(self):\n return \"{}\".format(self.name)\n\n\nclass Screen(db.Model):\n __tablename__ = 'screen'\n id = db.Column(db.Integer, primary_key=True)\n width = db.Column(db.Integer, nullable=False)\n height = db.Column(db.Integer, nullable=False)\n\n @hybrid_property\n def number_of_pixels(self):\n return self.width * self.height\n\n\n# Flask views\[email protected]('/')\ndef index():\n return '<a href=\"/admin/\">Click me to get to Admin!</a>'\n\n\n# Custom filter class\nclass FilterLastNameBrown(BaseSQLAFilter):\n def apply(self, query, value, alias=None):\n if value == '1':\n return query.filter(self.column == \"Brown\")\n else:\n return query.filter(self.column != \"Brown\")\n\n def operation(self):\n return 'is Brown'\n\n\n# Customized User model admin\ninline_form_options = {\n 'form_label': \"Info item\",\n 'form_columns': ['id', 'key', 'value'],\n 'form_args': None,\n 'form_extra_fields': None,\n}\n\nclass UserAdmin(sqla.ModelView):\n action_disallowed_list = ['delete', ]\n column_display_pk = True\n column_list = [\n 'id',\n 'last_name',\n 'first_name',\n 'email',\n 'pets',\n ]\n column_default_sort = [('last_name', False), ('first_name', False)] # sort on multiple columns\n\n # custom filter: each filter in the list is a filter operation (equals, not equals, etc)\n # filters with the same name will appear as operations under the same filter\n column_filters = [\n FilterEqual(column=User.last_name, name='Last Name'),\n FilterLastNameBrown(column=User.last_name, name='Last Name',\n options=(('1', 'Yes'), ('0', 'No')))\n ]\n inline_models = [(UserInfo, inline_form_options), ]\n\n # setup create & edit forms so that only 'available' pets can be selected\n def create_form(self):\n return self._use_filtered_parent(\n super(UserAdmin, self).create_form()\n )\n\n def edit_form(self, obj):\n return self._use_filtered_parent(\n super(UserAdmin, self).edit_form(obj)\n )\n\n def _use_filtered_parent(self, form):\n form.pets.query_factory = self._get_parent_list\n return form\n\n def _get_parent_list(self):\n # only show available pets in the form\n return Pet.query.filter_by(available=True).all()\n\n\n\n# Customized Post model admin\nclass PostAdmin(sqla.ModelView):\n column_exclude_list = ['text']\n column_default_sort = ('date', True)\n column_sortable_list = [\n 'title',\n 'date',\n ('user', 
('user.last_name', 'user.first_name')), # sort on multiple columns\n ]\n column_labels = dict(title='Post Title') # Rename 'title' column in list view\n column_searchable_list = [\n 'title',\n User.first_name,\n User.last_name,\n 'tags.name',\n ]\n column_filters = [\n 'user',\n 'title',\n 'date',\n 'tags',\n filters.FilterLike(Post.title, 'Fixed Title', options=(('test1', 'Test 1'), ('test2', 'Test 2'))),\n ]\n\n # Pass arguments to WTForms. In this case, change label for text field to\n # be 'Big Text' and add required() validator.\n form_args = dict(\n text=dict(label='Big Text', validators=[validators.required()])\n )\n\n form_ajax_refs = {\n 'user': {\n 'fields': (User.first_name, User.last_name)\n },\n 'tags': {\n 'fields': (Tag.name,),\n 'minimum_input_length': 0, # show suggestions, even before any user input\n 'placeholder': 'Please select',\n 'page_size': 5,\n },\n }\n\n def __init__(self, session):\n # Just call parent class with predefined model.\n super(PostAdmin, self).__init__(Post, session)\n\n\nclass TreeView(sqla.ModelView):\n form_excluded_columns = ['children', ]\n\n\nclass ScreenView(sqla.ModelView):\n column_list = ['id', 'width', 'height', 'number_of_pixels'] # not that 'number_of_pixels' is a hybrid property, not a field\n column_sortable_list = ['id', 'width', 'height', 'number_of_pixels']\n\n # Flask-admin can automatically detect the relevant filters for hybrid properties.\n column_filters = ('number_of_pixels', )\n\n\n# Create admin\nadmin = admin.Admin(app, name='Example: SQLAlchemy', template_mode='bootstrap3')\n\n# Add views\nadmin.add_view(UserAdmin(User, db.session))\nadmin.add_view(sqla.ModelView(Tag, db.session))\nadmin.add_view(PostAdmin(db.session))\nadmin.add_view(sqla.ModelView(Pet, db.session, category=\"Other\"))\nadmin.add_view(sqla.ModelView(UserInfo, db.session, category=\"Other\"))\nadmin.add_view(TreeView(Tree, db.session, category=\"Other\"))\nadmin.add_view(ScreenView(Screen, db.session, category=\"Other\"))\nadmin.add_sub_category(name=\"Links\", parent_name=\"Other\")\nadmin.add_link(MenuLink(name='Back Home', url='/', category='Links'))\nadmin.add_link(MenuLink(name='Google', url='http://www.google.com/', category='Links'))\nadmin.add_link(MenuLink(name='Mozilla', url='http://mozilla.org/', category='Links'))\n\n\ndef build_sample_db():\n \"\"\"\n Populate a small db with some example entries.\n \"\"\"\n\n import random\n import datetime\n\n db.drop_all()\n db.create_all()\n\n # Create sample Users\n first_names = [\n 'Harry', 'Amelia', 'Oliver', 'Jack', 'Isabella', 'Charlie', 'Sophie', 'Mia',\n 'Jacob', 'Thomas', 'Emily', 'Lily', 'Ava', 'Isla', 'Alfie', 'Olivia', 'Jessica',\n 'Riley', 'William', 'James', 'Geoffrey', 'Lisa', 'Benjamin', 'Stacey', 'Lucy'\n ]\n last_names = [\n 'Brown', 'Brown', 'Patel', 'Jones', 'Williams', 'Johnson', 'Taylor', 'Thomas',\n 'Roberts', 'Khan', 'Clarke', 'Clarke', 'Clarke', 'James', 'Phillips', 'Wilson',\n 'Ali', 'Mason', 'Mitchell', 'Rose', 'Davis', 'Davies', 'Rodriguez', 'Cox', 'Alexander'\n ]\n\n user_list = []\n for i in range(len(first_names)):\n user = User()\n user.first_name = first_names[i]\n user.last_name = last_names[i]\n user.email = first_names[i].lower() + \"@example.com\"\n user.info.append(UserInfo(key=\"foo\", value=\"bar\"))\n user_list.append(user)\n db.session.add(user)\n\n # Create sample Tags\n tag_list = []\n for tmp in [\"YELLOW\", \"WHITE\", \"BLUE\", \"GREEN\", \"RED\", \"BLACK\", \"BROWN\", \"PURPLE\", \"ORANGE\"]:\n tag = Tag()\n tag.name = tmp\n tag_list.append(tag)\n 
db.session.add(tag)\n\n # Create sample Posts\n sample_text = [\n {\n 'title': \"de Finibus Bonorum et Malorum - Part I\",\n 'content': \"Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor \\\n incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud \\\n exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure \\\n dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. \\\n Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt \\\n mollit anim id est laborum.\"\n },\n {\n 'title': \"de Finibus Bonorum et Malorum - Part II\",\n 'content': \"Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque \\\n laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto \\\n beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur \\\n aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi \\\n nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, \\\n adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam \\\n aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam \\\n corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum \\\n iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum \\\n qui dolorem eum fugiat quo voluptas nulla pariatur?\"\n },\n {\n 'title': \"de Finibus Bonorum et Malorum - Part III\",\n 'content': \"At vero eos et accusamus et iusto odio dignissimos ducimus qui blanditiis praesentium \\\n voluptatum deleniti atque corrupti quos dolores et quas molestias excepturi sint occaecati \\\n cupiditate non provident, similique sunt in culpa qui officia deserunt mollitia animi, id \\\n est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam \\\n libero tempore, cum soluta nobis est eligendi optio cumque nihil impedit quo minus id quod \\\n maxime placeat facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. \\\n Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet \\\n ut et voluptates repudiandae sint et molestiae non recusandae. 
Itaque earum rerum hic tenetur \\\n a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis \\\n doloribus asperiores repellat.\"\n }\n ]\n\n for user in user_list:\n entry = random.choice(sample_text) # select text at random\n post = Post()\n post.user = user\n post.title = entry['title']\n post.text = entry['content']\n tmp = int(1000*random.random()) # random number between 0 and 1000:\n post.date = datetime.datetime.now() - datetime.timedelta(days=tmp)\n post.tags = random.sample(tag_list, 2) # select a couple of tags at random\n db.session.add(post)\n\n # Create a sample Tree structure\n trunk = Tree(name=\"Trunk\")\n db.session.add(trunk)\n for i in range(5):\n branch = Tree()\n branch.name = \"Branch \" + str(i+1)\n branch.parent = trunk\n db.session.add(branch)\n for j in range(5):\n leaf = Tree()\n leaf.name = \"Leaf \" + str(j+1)\n leaf.parent = branch\n db.session.add(leaf)\n\n db.session.add(Pet(name='Dog', available=True))\n db.session.add(Pet(name='Fish', available=True))\n db.session.add(Pet(name='Cat', available=True))\n db.session.add(Pet(name='Parrot', available=True))\n db.session.add(Pet(name='Ocelot', available=False))\n\n db.session.add(Screen(width=500, height=2000))\n db.session.add(Screen(width=550, height=1900))\n\n db.session.commit()\n return\n\nif __name__ == '__main__':\n # Build a sample db on the fly, if one does not exist yet.\n app_dir = op.realpath(os.path.dirname(__file__))\n database_path = op.join(app_dir, app.config['DATABASE_FILE'])\n if not os.path.exists(database_path):\n build_sample_db()\n\n # Start app\n app.run(debug=True)\n", "path": "examples/sqla/app.py"}]} |
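Editor's note on the row above: the golden diff turns off the bulk delete action on the example's `UserAdmin` view via `action_disallowed_list`. The sketch below is a minimal, self-contained illustration of that Flask-Admin attribute; the model, the in-memory database URI, and the secret key are assumptions made for the example and are not taken from the patch.

```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_admin import Admin
from flask_admin.contrib import sqla

app = Flask(__name__)
app.config['SECRET_KEY'] = 'dev'                      # assumed value for the sketch
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite://'   # in-memory DB for illustration
db = SQLAlchemy(app)


class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64))


class UserAdmin(sqla.ModelView):
    # Remove the built-in bulk "delete" action from this view's
    # "With selected" menu; other batch actions remain available.
    action_disallowed_list = ['delete']


admin = Admin(app, name='example')
admin.add_view(UserAdmin(User, db.session))
```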
gh_patches_debug_88 | rasdani/github-patches | git_diff | spacetelescope__jwql-677 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update Bokeh to latest version
I remember there was some reason that we were holding off on upgrading Bokeh from 1.3.4. However, Bokeh is now up to version 2.2.1 I believe. We should look into upgrading the version used for JWQL in order to take advantage of new features and so that we minimize the number of plots created under 1.3.4 which may need to be tweaked to work under the new version.
For example, one difference I ran into today was that the keyword "legend", which is used in 1.3.4 to denote the string printed in the legend for a particular element, has been changed to "legend_label" in version 2.2.1.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import numpy as np
2 from setuptools import setup
3 from setuptools import find_packages
4
5 VERSION = '0.24.0'
6
7 AUTHORS = 'Matthew Bourque, Lauren Chambers, Misty Cracraft, Mike Engesser, Mees Fix, Joe Filippazzo, Bryan Hilbert, '
8 AUTHORS += 'Graham Kanarek, Teagan King, Catherine Martlin, Maria Pena-Guerrero, Johannes Sahlmann, Ben Sunnquist'
9
10 DESCRIPTION = 'The James Webb Space Telescope Quicklook Project'
11
12 DEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst_reffiles#egg=jwst_reffiles']
13
14 REQUIRES = [
15 'asdf>=2.3.3',
16 'astropy>=3.2.1',
17 'astroquery>=0.3.9',
18 'authlib',
19 'bokeh>=1.0,<1.4',
20 'codecov',
21 'crds',
22 'cryptography',
23 'django',
24 'flake8',
25 'inflection',
26 'ipython',
27 'jinja2',
28 'jsonschema',
29 'jwedb>=0.0.3',
30 'jwst',
31 'matplotlib',
32 'nodejs',
33 'numpy',
34 'numpydoc',
35 'pandas',
36 'psycopg2',
37 'pysiaf',
38 'pytest',
39 'pytest-cov',
40 'scipy',
41 'sphinx',
42 'sqlalchemy',
43 'stsci_rtd_theme',
44 'twine',
45 'wtforms'
46 ]
47
48 setup(
49 name='jwql',
50 version=VERSION,
51 description=DESCRIPTION,
52 url='https://github.com/spacetelescope/jwql.git',
53 author=AUTHORS,
54 author_email='[email protected]',
55 license='BSD',
56 keywords=['astronomy', 'python'],
57 classifiers=['Programming Language :: Python'],
58 packages=find_packages(),
59 install_requires=REQUIRES,
60 dependency_links=DEPENDENCY_LINKS,
61 include_package_data=True,
62 include_dirs=[np.get_include()],
63 )
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,7 +16,7 @@
'astropy>=3.2.1',
'astroquery>=0.3.9',
'authlib',
- 'bokeh>=1.0,<1.4',
+ 'bokeh',
'codecov',
'crds',
'cryptography',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,7 +16,7 @@\n 'astropy>=3.2.1',\n 'astroquery>=0.3.9',\n 'authlib',\n- 'bokeh>=1.0,<1.4',\n+ 'bokeh',\n 'codecov',\n 'crds',\n 'cryptography',\n", "issue": "Update Bokeh to latest version\nI remember there was some reason that we were holding off on upgrading Bokeh from 1.3.4. However, Bokeh is now up to version 2.2.1 I believe. We should look into upgrading the version used for JWQL in order to take advantage of new features and so that we minimize the number of plots created under 1.3.4 which may need to be tweaked to work under the new version.\r\n\r\nFor example, one difference I ran into today was that the keyword \"legend\", which is used in 1.3.4 to denote the string printed in the legend for a particular element, has been changed to \"legend_label\" in version 2.2.1.\n", "before_files": [{"content": "import numpy as np\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nVERSION = '0.24.0'\n\nAUTHORS = 'Matthew Bourque, Lauren Chambers, Misty Cracraft, Mike Engesser, Mees Fix, Joe Filippazzo, Bryan Hilbert, '\nAUTHORS += 'Graham Kanarek, Teagan King, Catherine Martlin, Maria Pena-Guerrero, Johannes Sahlmann, Ben Sunnquist'\n\nDESCRIPTION = 'The James Webb Space Telescope Quicklook Project'\n\nDEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst_reffiles#egg=jwst_reffiles']\n\nREQUIRES = [\n 'asdf>=2.3.3',\n 'astropy>=3.2.1',\n 'astroquery>=0.3.9',\n 'authlib',\n 'bokeh>=1.0,<1.4',\n 'codecov',\n 'crds',\n 'cryptography',\n 'django',\n 'flake8',\n 'inflection',\n 'ipython',\n 'jinja2',\n 'jsonschema',\n 'jwedb>=0.0.3',\n 'jwst',\n 'matplotlib',\n 'nodejs',\n 'numpy',\n 'numpydoc',\n 'pandas',\n 'psycopg2',\n 'pysiaf',\n 'pytest',\n 'pytest-cov',\n 'scipy',\n 'sphinx',\n 'sqlalchemy',\n 'stsci_rtd_theme',\n 'twine',\n 'wtforms'\n]\n\nsetup(\n name='jwql',\n version=VERSION,\n description=DESCRIPTION,\n url='https://github.com/spacetelescope/jwql.git',\n author=AUTHORS,\n author_email='[email protected]',\n license='BSD',\n keywords=['astronomy', 'python'],\n classifiers=['Programming Language :: Python'],\n packages=find_packages(),\n install_requires=REQUIRES,\n dependency_links=DEPENDENCY_LINKS,\n include_package_data=True,\n include_dirs=[np.get_include()],\n)\n", "path": "setup.py"}], "after_files": [{"content": "import numpy as np\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nVERSION = '0.24.0'\n\nAUTHORS = 'Matthew Bourque, Lauren Chambers, Misty Cracraft, Mike Engesser, Mees Fix, Joe Filippazzo, Bryan Hilbert, '\nAUTHORS += 'Graham Kanarek, Teagan King, Catherine Martlin, Maria Pena-Guerrero, Johannes Sahlmann, Ben Sunnquist'\n\nDESCRIPTION = 'The James Webb Space Telescope Quicklook Project'\n\nDEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/jwst_reffiles#egg=jwst_reffiles']\n\nREQUIRES = [\n 'asdf>=2.3.3',\n 'astropy>=3.2.1',\n 'astroquery>=0.3.9',\n 'authlib',\n 'bokeh',\n 'codecov',\n 'crds',\n 'cryptography',\n 'django',\n 'flake8',\n 'inflection',\n 'ipython',\n 'jinja2',\n 'jsonschema',\n 'jwedb>=0.0.3',\n 'jwst',\n 'matplotlib',\n 'nodejs',\n 'numpy',\n 'numpydoc',\n 'pandas',\n 'psycopg2',\n 'pysiaf',\n 'pytest',\n 'pytest-cov',\n 'scipy',\n 'sphinx',\n 'sqlalchemy',\n 'stsci_rtd_theme',\n 'twine',\n 'wtforms'\n]\n\nsetup(\n name='jwql',\n version=VERSION,\n description=DESCRIPTION,\n url='https://github.com/spacetelescope/jwql.git',\n author=AUTHORS,\n author_email='[email protected]',\n license='BSD',\n 
keywords=['astronomy', 'python'],\n classifiers=['Programming Language :: Python'],\n packages=find_packages(),\n install_requires=REQUIRES,\n dependency_links=DEPENDENCY_LINKS,\n include_package_data=True,\n include_dirs=[np.get_include()],\n)\n", "path": "setup.py"}]} |
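Editor's note on the row above: the issue text points out that the glyph keyword `legend` from Bokeh 1.3.x became `legend_label` in later releases. A minimal sketch of that rename, with made-up data:

```python
from bokeh.plotting import figure

p = figure(title="example")

# Bokeh 1.3.x accepted the `legend` keyword on glyph methods:
# p.line([1, 2, 3], [2, 5, 3], legend="flux")

# Bokeh 1.4+/2.x expects `legend_label` (or `legend_field`/`legend_group`):
p.line([1, 2, 3], [2, 5, 3], legend_label="flux")
```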
gh_patches_debug_89 | rasdani/github-patches | git_diff | googleapis__python-spanner-django-652 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Kokoro build is failing for new PRs with no change in code
Kokoro build is failing for new PRs with no change in code https://github.com/googleapis/python-spanner-django/pull/652
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `version.py`
Content:
```
1 # Copyright 2020 Google LLC
2 #
3 # Use of this source code is governed by a BSD-style
4 # license that can be found in the LICENSE file or at
5 # https://developers.google.com/open-source/licenses/bsd
6
7 __version__ = "2.2.1b1"
8
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -4,4 +4,4 @@
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
-__version__ = "2.2.1b1"
+__version__ = "2.2.1b2"
| {"golden_diff": "diff --git a/version.py b/version.py\n--- a/version.py\n+++ b/version.py\n@@ -4,4 +4,4 @@\n # license that can be found in the LICENSE file or at\n # https://developers.google.com/open-source/licenses/bsd\n \n-__version__ = \"2.2.1b1\"\n+__version__ = \"2.2.1b2\"\n", "issue": "Kokoro build is failing for new PRs with no change in code\nKokoro build is failing for new PRs with no change in code https://github.com/googleapis/python-spanner-django/pull/652\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Use of this source code is governed by a BSD-style\n# license that can be found in the LICENSE file or at\n# https://developers.google.com/open-source/licenses/bsd\n\n__version__ = \"2.2.1b1\"\n", "path": "version.py"}], "after_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Use of this source code is governed by a BSD-style\n# license that can be found in the LICENSE file or at\n# https://developers.google.com/open-source/licenses/bsd\n\n__version__ = \"2.2.1b2\"\n", "path": "version.py"}]} |
gh_patches_debug_90 | rasdani/github-patches | git_diff | microsoft__ptvsd-167 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error reading integer
From VS (might not be a ptvsd bug, not sure at this point):
Create new python application
Add new item, python unit test
Set the unit test as startup file
F5
Result:
```
---------------------------
Microsoft Visual Studio
---------------------------
Error reading integer. Unexpected token: Boolean. Path 'exitCode'.
---------------------------
OK
---------------------------
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/debugger.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import sys
6
7
8 __author__ = "Microsoft Corporation <[email protected]>"
9 __version__ = "4.0.0a1"
10
11 DONT_DEBUG = []
12
13
14 def debug(filename, port_num, debug_id, debug_options, run_as):
15 # TODO: docstring
16
17 # import the wrapper first, so that it gets a chance
18 # to detour pydevd socket functionality.
19 import ptvsd.wrapper
20 import pydevd
21
22 args = [
23 '--port', str(port_num),
24 '--client', '127.0.0.1',
25 ]
26 if run_as == 'module':
27 args.append('--module')
28 args.extend(('--file', filename + ":"))
29 else:
30 args.extend(('--file', filename))
31 sys.argv[1:0] = args
32 try:
33 pydevd.main()
34 except SystemExit as ex:
35 ptvsd.wrapper.ptvsd_sys_exit_code = ex.code
36 raise
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py
--- a/ptvsd/debugger.py
+++ b/ptvsd/debugger.py
@@ -32,5 +32,5 @@
try:
pydevd.main()
except SystemExit as ex:
- ptvsd.wrapper.ptvsd_sys_exit_code = ex.code
+ ptvsd.wrapper.ptvsd_sys_exit_code = int(ex.code)
raise
| {"golden_diff": "diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py\n--- a/ptvsd/debugger.py\n+++ b/ptvsd/debugger.py\n@@ -32,5 +32,5 @@\n try:\n pydevd.main()\n except SystemExit as ex:\n- ptvsd.wrapper.ptvsd_sys_exit_code = ex.code\n+ ptvsd.wrapper.ptvsd_sys_exit_code = int(ex.code)\n raise\n", "issue": "Error reading integer\nFrom VS (might not be a ptvsd bug, not sure at this point):\r\nCreate new python application\r\nAdd new item, python unit test\r\nSet the unit test as startup file\r\nF5\r\n\r\nResult:\r\n```\r\n---------------------------\r\nMicrosoft Visual Studio\r\n---------------------------\r\nError reading integer. Unexpected token: Boolean. Path 'exitCode'.\r\n---------------------------\r\nOK \r\n---------------------------\r\n```\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\n\n__author__ = \"Microsoft Corporation <[email protected]>\"\n__version__ = \"4.0.0a1\"\n\nDONT_DEBUG = []\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as):\n # TODO: docstring\n\n # import the wrapper first, so that it gets a chance\n # to detour pydevd socket functionality.\n import ptvsd.wrapper\n import pydevd\n\n args = [\n '--port', str(port_num),\n '--client', '127.0.0.1',\n ]\n if run_as == 'module':\n args.append('--module')\n args.extend(('--file', filename + \":\"))\n else:\n args.extend(('--file', filename))\n sys.argv[1:0] = args\n try:\n pydevd.main()\n except SystemExit as ex:\n ptvsd.wrapper.ptvsd_sys_exit_code = ex.code\n raise\n", "path": "ptvsd/debugger.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\n\n__author__ = \"Microsoft Corporation <[email protected]>\"\n__version__ = \"4.0.0a1\"\n\nDONT_DEBUG = []\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as):\n # TODO: docstring\n\n # import the wrapper first, so that it gets a chance\n # to detour pydevd socket functionality.\n import ptvsd.wrapper\n import pydevd\n\n args = [\n '--port', str(port_num),\n '--client', '127.0.0.1',\n ]\n if run_as == 'module':\n args.append('--module')\n args.extend(('--file', filename + \":\"))\n else:\n args.extend(('--file', filename))\n sys.argv[1:0] = args\n try:\n pydevd.main()\n except SystemExit as ex:\n ptvsd.wrapper.ptvsd_sys_exit_code = int(ex.code)\n raise\n", "path": "ptvsd/debugger.py"}]} |
gh_patches_debug_91 | rasdani/github-patches | git_diff | Pylons__pyramid-3271 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bump Sphinx to >=1.7.2
Would anyone be opposed to bumping Sphinx to >=1.7.2, != 1.7.3 in `setup.py`? I really want our PDFs to have `emphasize-lines` support, at long last, and bring in support for Unicode characters in PDFs via xelatex.
Refs:
* #667
* #2572
* https://github.com/rtfd/readthedocs.org/issues/4015
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 ##############################################################################
2 #
3 # Copyright (c) 2008-2013 Agendaless Consulting and Contributors.
4 # All Rights Reserved.
5 #
6 # This software is subject to the provisions of the BSD-like license at
7 # http://www.repoze.org/LICENSE.txt. A copy of the license should accompany
8 # this distribution. THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL
9 # EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,
10 # THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND
11 # FITNESS FOR A PARTICULAR PURPOSE
12 #
13 ##############################################################################
14 from setuptools import setup, find_packages
15
16 def readfile(name):
17 with open(name) as f:
18 return f.read()
19
20 README = readfile('README.rst')
21 CHANGES = readfile('CHANGES.rst')
22
23 install_requires = [
24 'setuptools',
25 'WebOb >= 1.7.0', # Response.has_body
26 'zope.interface >= 3.8.0', # has zope.interface.registry
27 'zope.deprecation >= 3.5.0', # py3 compat
28 'venusian >= 1.0', # ``ignore``
29 'translationstring >= 0.4', # py3 compat
30 'PasteDeploy >= 1.5.0', # py3 compat
31 'plaster',
32 'plaster_pastedeploy',
33 'hupper',
34 ]
35
36 tests_require = [
37 'WebTest >= 1.3.1', # py3 compat
38 'zope.component >= 4.0', # py3 compat
39 ]
40
41
42 docs_extras = [
43 'Sphinx >= 1.3.5, != 1.7.3',
44 'docutils',
45 'repoze.sphinx.autointerface',
46 'pylons_sphinx_latesturl',
47 'pylons-sphinx-themes',
48 'sphinxcontrib-autoprogram',
49 ]
50
51 testing_extras = tests_require + [
52 'nose',
53 'coverage',
54 'virtualenv', # for scaffolding tests
55 ]
56
57 setup(name='pyramid',
58 version='1.10.dev0',
59 description='The Pyramid Web Framework, a Pylons project',
60 long_description=README + '\n\n' + CHANGES,
61 classifiers=[
62 "Development Status :: 6 - Mature",
63 "Intended Audience :: Developers",
64 "Programming Language :: Python",
65 "Programming Language :: Python :: 2.7",
66 "Programming Language :: Python :: 3",
67 "Programming Language :: Python :: 3.4",
68 "Programming Language :: Python :: 3.5",
69 "Programming Language :: Python :: 3.6",
70 "Programming Language :: Python :: Implementation :: CPython",
71 "Programming Language :: Python :: Implementation :: PyPy",
72 "Framework :: Pyramid",
73 "Topic :: Internet :: WWW/HTTP",
74 "Topic :: Internet :: WWW/HTTP :: WSGI",
75 "License :: Repoze Public License",
76 ],
77 keywords='web wsgi pylons pyramid',
78 author="Chris McDonough, Agendaless Consulting",
79 author_email="[email protected]",
80 url="https://trypyramid.com",
81 license="BSD-derived (http://www.repoze.org/LICENSE.txt)",
82 packages=find_packages(),
83 include_package_data=True,
84 zip_safe=False,
85 python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',
86 install_requires=install_requires,
87 extras_require={
88 ':python_version<"3.2"': ['repoze.lru >= 0.4'],
89 'testing': testing_extras,
90 'docs': docs_extras,
91 },
92 tests_require=tests_require,
93 test_suite="pyramid.tests",
94 entry_points="""\
95 [pyramid.scaffold]
96 starter=pyramid.scaffolds:StarterProjectTemplate
97 zodb=pyramid.scaffolds:ZODBProjectTemplate
98 alchemy=pyramid.scaffolds:AlchemyProjectTemplate
99 [pyramid.pshell_runner]
100 python=pyramid.scripts.pshell:python_shell_runner
101 [console_scripts]
102 pcreate = pyramid.scripts.pcreate:main
103 pserve = pyramid.scripts.pserve:main
104 pshell = pyramid.scripts.pshell:main
105 proutes = pyramid.scripts.proutes:main
106 pviews = pyramid.scripts.pviews:main
107 ptweens = pyramid.scripts.ptweens:main
108 prequest = pyramid.scripts.prequest:main
109 pdistreport = pyramid.scripts.pdistreport:main
110 [paste.server_runner]
111 wsgiref = pyramid.scripts.pserve:wsgiref_server_runner
112 cherrypy = pyramid.scripts.pserve:cherrypy_server_runner
113 """
114 )
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -40,7 +40,7 @@
docs_extras = [
- 'Sphinx >= 1.3.5, != 1.7.3',
+ 'Sphinx >= 1.7.4',
'docutils',
'repoze.sphinx.autointerface',
'pylons_sphinx_latesturl',
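The change above only tightens a dependency pin. A small sketch, assuming the third-party `packaging` library is available (it ships alongside modern pip/setuptools and is not something the patched `setup.py` itself imports), of what the old and new specifiers accept:

```python
# Hypothetical check of the before/after constraint on the "docs" extra.
from packaging.specifiers import SpecifierSet

old_pin = SpecifierSet(">=1.3.5,!=1.7.3")
new_pin = SpecifierSet(">=1.7.4")

for candidate in ("1.5.0", "1.7.3", "1.7.4", "1.8.5"):
    print(candidate, candidate in old_pin, candidate in new_pin)
# 1.5.0 satisfied the old pin but not the new one; 1.7.3 satisfies neither;
# 1.7.4 and later satisfy both, which is what the docs build now requires.
```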
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -40,7 +40,7 @@\n \n \n docs_extras = [\n- 'Sphinx >= 1.3.5, != 1.7.3',\n+ 'Sphinx >= 1.7.4',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n", "issue": "Bump Sphinx to >=1.7.2\nWould anyone be opposed to bumping Sphinx to >=1.7.2, != 1.7.3 in `setup.py`? I really want our PDFs to have `emphasize-lines` support, at long last, and bring in support for Unicode characters in PDFs via xelatex.\r\n\r\nRefs:\r\n* #667\r\n* #2572\r\n* https://github.com/rtfd/readthedocs.org/issues/4015\r\n\n", "before_files": [{"content": "##############################################################################\n#\n# Copyright (c) 2008-2013 Agendaless Consulting and Contributors.\n# All Rights Reserved.\n#\n# This software is subject to the provisions of the BSD-like license at\n# http://www.repoze.org/LICENSE.txt. A copy of the license should accompany\n# this distribution. THIS SOFTWARE IS PROVIDED \"AS IS\" AND ANY AND ALL\n# EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,\n# THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND\n# FITNESS FOR A PARTICULAR PURPOSE\n#\n##############################################################################\nfrom setuptools import setup, find_packages\n\ndef readfile(name):\n with open(name) as f:\n return f.read()\n\nREADME = readfile('README.rst')\nCHANGES = readfile('CHANGES.rst')\n\ninstall_requires = [\n 'setuptools',\n 'WebOb >= 1.7.0', # Response.has_body\n 'zope.interface >= 3.8.0', # has zope.interface.registry\n 'zope.deprecation >= 3.5.0', # py3 compat\n 'venusian >= 1.0', # ``ignore``\n 'translationstring >= 0.4', # py3 compat\n 'PasteDeploy >= 1.5.0', # py3 compat\n 'plaster',\n 'plaster_pastedeploy',\n 'hupper',\n ]\n\ntests_require = [\n 'WebTest >= 1.3.1', # py3 compat\n 'zope.component >= 4.0', # py3 compat\n ]\n\n\ndocs_extras = [\n 'Sphinx >= 1.3.5, != 1.7.3',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n 'pylons-sphinx-themes',\n 'sphinxcontrib-autoprogram',\n ]\n\ntesting_extras = tests_require + [\n 'nose',\n 'coverage',\n 'virtualenv', # for scaffolding tests\n ]\n\nsetup(name='pyramid',\n version='1.10.dev0',\n description='The Pyramid Web Framework, a Pylons project',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Development Status :: 6 - Mature\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI\",\n \"License :: Repoze Public License\",\n ],\n keywords='web wsgi pylons pyramid',\n author=\"Chris McDonough, Agendaless Consulting\",\n author_email=\"[email protected]\",\n url=\"https://trypyramid.com\",\n license=\"BSD-derived (http://www.repoze.org/LICENSE.txt)\",\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',\n install_requires=install_requires,\n extras_require={\n ':python_version<\"3.2\"': ['repoze.lru >= 0.4'],\n 'testing': testing_extras,\n 'docs': 
docs_extras,\n },\n tests_require=tests_require,\n test_suite=\"pyramid.tests\",\n entry_points=\"\"\"\\\n [pyramid.scaffold]\n starter=pyramid.scaffolds:StarterProjectTemplate\n zodb=pyramid.scaffolds:ZODBProjectTemplate\n alchemy=pyramid.scaffolds:AlchemyProjectTemplate\n [pyramid.pshell_runner]\n python=pyramid.scripts.pshell:python_shell_runner\n [console_scripts]\n pcreate = pyramid.scripts.pcreate:main\n pserve = pyramid.scripts.pserve:main\n pshell = pyramid.scripts.pshell:main\n proutes = pyramid.scripts.proutes:main\n pviews = pyramid.scripts.pviews:main\n ptweens = pyramid.scripts.ptweens:main\n prequest = pyramid.scripts.prequest:main\n pdistreport = pyramid.scripts.pdistreport:main\n [paste.server_runner]\n wsgiref = pyramid.scripts.pserve:wsgiref_server_runner\n cherrypy = pyramid.scripts.pserve:cherrypy_server_runner\n \"\"\"\n )\n", "path": "setup.py"}], "after_files": [{"content": "##############################################################################\n#\n# Copyright (c) 2008-2013 Agendaless Consulting and Contributors.\n# All Rights Reserved.\n#\n# This software is subject to the provisions of the BSD-like license at\n# http://www.repoze.org/LICENSE.txt. A copy of the license should accompany\n# this distribution. THIS SOFTWARE IS PROVIDED \"AS IS\" AND ANY AND ALL\n# EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,\n# THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND\n# FITNESS FOR A PARTICULAR PURPOSE\n#\n##############################################################################\nfrom setuptools import setup, find_packages\n\ndef readfile(name):\n with open(name) as f:\n return f.read()\n\nREADME = readfile('README.rst')\nCHANGES = readfile('CHANGES.rst')\n\ninstall_requires = [\n 'setuptools',\n 'WebOb >= 1.7.0', # Response.has_body\n 'zope.interface >= 3.8.0', # has zope.interface.registry\n 'zope.deprecation >= 3.5.0', # py3 compat\n 'venusian >= 1.0', # ``ignore``\n 'translationstring >= 0.4', # py3 compat\n 'PasteDeploy >= 1.5.0', # py3 compat\n 'plaster',\n 'plaster_pastedeploy',\n 'hupper',\n ]\n\ntests_require = [\n 'WebTest >= 1.3.1', # py3 compat\n 'zope.component >= 4.0', # py3 compat\n ]\n\n\ndocs_extras = [\n 'Sphinx >= 1.7.4',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n 'pylons-sphinx-themes',\n 'sphinxcontrib-autoprogram',\n ]\n\ntesting_extras = tests_require + [\n 'nose',\n 'coverage',\n 'virtualenv', # for scaffolding tests\n ]\n\nsetup(name='pyramid',\n version='1.10.dev0',\n description='The Pyramid Web Framework, a Pylons project',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Development Status :: 6 - Mature\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI\",\n \"License :: Repoze Public License\",\n ],\n keywords='web wsgi pylons pyramid',\n author=\"Chris McDonough, Agendaless Consulting\",\n author_email=\"[email protected]\",\n url=\"https://trypyramid.com\",\n license=\"BSD-derived (http://www.repoze.org/LICENSE.txt)\",\n 
packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',\n install_requires=install_requires,\n extras_require={\n ':python_version<\"3.2\"': ['repoze.lru >= 0.4'],\n 'testing': testing_extras,\n 'docs': docs_extras,\n },\n tests_require=tests_require,\n test_suite=\"pyramid.tests\",\n entry_points=\"\"\"\\\n [pyramid.scaffold]\n starter=pyramid.scaffolds:StarterProjectTemplate\n zodb=pyramid.scaffolds:ZODBProjectTemplate\n alchemy=pyramid.scaffolds:AlchemyProjectTemplate\n [pyramid.pshell_runner]\n python=pyramid.scripts.pshell:python_shell_runner\n [console_scripts]\n pcreate = pyramid.scripts.pcreate:main\n pserve = pyramid.scripts.pserve:main\n pshell = pyramid.scripts.pshell:main\n proutes = pyramid.scripts.proutes:main\n pviews = pyramid.scripts.pviews:main\n ptweens = pyramid.scripts.ptweens:main\n prequest = pyramid.scripts.prequest:main\n pdistreport = pyramid.scripts.pdistreport:main\n [paste.server_runner]\n wsgiref = pyramid.scripts.pserve:wsgiref_server_runner\n cherrypy = pyramid.scripts.pserve:cherrypy_server_runner\n \"\"\"\n )\n", "path": "setup.py"}]} |
gh_patches_debug_92 | rasdani/github-patches | git_diff | Kinto__kinto-797 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
init fails on osx at the backend selection step
I followed the instructions given [here](https://kinto.readthedocs.io/en/stable/tutorials/install.html#from-sources), but when asked:
```
$ Select the backend you would like to use: (1 - postgresql, 2 - redis, default - memory)
```
entering `1` and `2` leads to the following error:
```
Traceback (most recent call last):
File ".venv/bin/kinto", line 11, in <module>
load_entry_point('kinto', 'console_scripts', 'kinto')()
File "/work/git/kinto/kinto/__main__.py", line 108, in main
answer = input(prompt).strip()
AttributeError: 'int' object has no attribute 'strip'
```
and entering nothing (just pressing Enter) leads to the following error:
```
Traceback (most recent call last):
File ".venv/bin/kinto", line 11, in <module>
load_entry_point('kinto', 'console_scripts', 'kinto')()
File "/work/git/kinto/kinto/__main__.py", line 108, in main
answer = input(prompt).strip()
File "<string>", line 0
^
SyntaxError: unexpected EOF while parsing
```
It appears that the code expects a `string` but is getting a number or nothing at all, and therefore fails on the `.strip()` call [here](https://github.com/Kinto/kinto/blob/master/kinto/__main__.py#L108).
---
Entering `""`, `"1"` and `"2"` works. I'm assuming that's not the way it's designed to be?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/__main__.py`
Content:
```
1 from __future__ import print_function
2 import argparse
3 import os
4 import sys
5 import logging
6 import logging.config
7
8 from kinto.core import scripts
9 from pyramid.scripts import pserve
10 from pyramid.paster import bootstrap
11 from kinto import __version__
12 from kinto.config import init
13
14 DEFAULT_CONFIG_FILE = 'config/kinto.ini'
15 DEFAULT_PORT = 8888
16 DEFAULT_LOG_LEVEL = logging.INFO
17 DEFAULT_LOG_FORMAT = "%(levelname)-5.5s %(message)s"
18
19
20 def main(args=None):
21 """The main routine."""
22 if args is None:
23 args = sys.argv[1:]
24
25 parser = argparse.ArgumentParser(description="Kinto Command-Line "
26 "Interface")
27 # XXX: deprecate this option, unnatural as first argument.
28 parser.add_argument('--ini',
29 help='Application configuration file',
30 dest='ini_file',
31 required=False,
32 default=DEFAULT_CONFIG_FILE)
33
34 parser.add_argument('-q', '--quiet', action='store_const',
35 const=logging.CRITICAL, dest='verbosity',
36 help='Show only critical errors.')
37
38 parser.add_argument('--debug', action='store_const',
39 const=logging.DEBUG, dest='verbosity',
40 help='Show all messages, including debug messages.')
41
42 commands = ('init', 'start', 'migrate', 'delete-collection', 'version')
43 subparsers = parser.add_subparsers(title='subcommands',
44 description='Main Kinto CLI commands',
45 dest='subcommand',
46 help="Choose and run with --help")
47 subparsers.required = True
48
49 for command in commands:
50 subparser = subparsers.add_parser(command)
51 subparser.set_defaults(which=command)
52
53 if command == 'init':
54 subparser.add_argument('--backend',
55 help='{memory,redis,postgresql}',
56 dest='backend',
57 required=False,
58 default=None)
59 elif command == 'migrate':
60 subparser.add_argument('--dry-run',
61 action='store_true',
62 help='Simulate the migration operations '
63 'and show information',
64 dest='dry_run',
65 required=False,
66 default=False)
67 elif command == 'delete-collection':
68 subparser.add_argument('--bucket',
69 help='The bucket where the collection '
70 'belongs to.',
71 required=True)
72 subparser.add_argument('--collection',
73 help='The collection to remove.',
74 required=True)
75
76 elif command == 'start':
77 subparser.add_argument('--reload',
78 action='store_true',
79 help='Restart when code or config changes',
80 required=False,
81 default=False)
82 subparser.add_argument('--port',
83 type=int,
84 help='Listening port number',
85 required=False,
86 default=DEFAULT_PORT)
87
88 # Parse command-line arguments
89 parsed_args = vars(parser.parse_args(args))
90
91 config_file = parsed_args['ini_file']
92 which_command = parsed_args['which']
93
94 # Initialize logging from
95 level = parsed_args.get('verbosity') or DEFAULT_LOG_LEVEL
96 logging.basicConfig(level=level, format=DEFAULT_LOG_FORMAT)
97
98 if which_command == 'init':
99 if os.path.exists(config_file):
100 print("%s already exists." % config_file, file=sys.stderr)
101 return 1
102
103 backend = parsed_args['backend']
104 if not backend:
105 while True:
106 prompt = ("Select the backend you would like to use: "
107 "(1 - postgresql, 2 - redis, default - memory) ")
108 answer = input(prompt).strip()
109 try:
110 backends = {"1": "postgresql", "2": "redis", "": "memory"}
111 backend = backends[answer]
112 break
113 except KeyError:
114 pass
115
116 init(config_file, backend)
117
118 # Install postgresql libraries if necessary
119 if backend == "postgresql":
120 try:
121 import psycopg2 # NOQA
122 except ImportError:
123 import pip
124 pip.main(['install', "kinto[postgresql]"])
125 elif backend == "redis":
126 try:
127 import kinto_redis # NOQA
128 except ImportError:
129 import pip
130 pip.main(['install', "kinto[redis]"])
131
132 elif which_command == 'migrate':
133 dry_run = parsed_args['dry_run']
134 env = bootstrap(config_file)
135 scripts.migrate(env, dry_run=dry_run)
136
137 elif which_command == 'delete-collection':
138 env = bootstrap(config_file)
139 return scripts.delete_collection(env,
140 parsed_args['bucket'],
141 parsed_args['collection'])
142
143 elif which_command == 'start':
144 pserve_argv = ['pserve', config_file]
145 if parsed_args['reload']:
146 pserve_argv.append('--reload')
147 pserve_argv.append('http_port=%s' % parsed_args['port'])
148 pserve.main(pserve_argv)
149
150 elif which_command == 'version':
151 print(__version__)
152
153 return 0
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/__main__.py b/kinto/__main__.py
--- a/kinto/__main__.py
+++ b/kinto/__main__.py
@@ -4,6 +4,7 @@
import sys
import logging
import logging.config
+from six.moves import input
from kinto.core import scripts
from pyramid.scripts import pserve
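A minimal sketch of why this import fixes both crashes, assuming Python 2.7 with the `six` package installed (the prompt loop is reproduced outside the real CLI): on Python 2 the builtin `input()` passes the typed text through `eval()`, so `1` arrives as an `int` (hence no `.strip()`) and an empty line raises `SyntaxError`, while `six.moves.input` resolves to `raw_input` on Python 2 and to the builtin `input` on Python 3, always returning a plain string.

```python
# Sketch only: mirrors Kinto's backend prompt with six.moves.input, which always
# yields a str, so .strip() and the dict lookup behave the same on Python 2 and 3.
from six.moves import input

def ask_backend():
    backends = {"1": "postgresql", "2": "redis", "": "memory"}
    prompt = ("Select the backend you would like to use: "
              "(1 - postgresql, 2 - redis, default - memory) ")
    while True:
        answer = input(prompt).strip()
        if answer in backends:
            return backends[answer]

if __name__ == "__main__":
    print(ask_backend())
```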
| {"golden_diff": "diff --git a/kinto/__main__.py b/kinto/__main__.py\n--- a/kinto/__main__.py\n+++ b/kinto/__main__.py\n@@ -4,6 +4,7 @@\n import sys\n import logging\n import logging.config\n+from six.moves import input\n \n from kinto.core import scripts\n from pyramid.scripts import pserve\n", "issue": "init fails on osx at the backend selection step\nI followed the instructions given [here](https://kinto.readthedocs.io/en/stable/tutorials/install.html#from-sources), but when asked:\n\n```\n$ Select the backend you would like to use: (1 - postgresql, 2 - redis, default - memory)\n```\n\nentering `1` and `2` leads to the following error:\n\n```\nTraceback (most recent call last):\n File \".venv/bin/kinto\", line 11, in <module>\n load_entry_point('kinto', 'console_scripts', 'kinto')()\n File \"/work/git/kinto/kinto/__main__.py\", line 108, in main\n answer = input(prompt).strip()\nAttributeError: 'int' object has no attribute 'strip'\n```\n\nand entering nothing + enter will lead to the following error.\n\n```\nTraceback (most recent call last):\n File \".venv/bin/kinto\", line 11, in <module>\n load_entry_point('kinto', 'console_scripts', 'kinto')()\n File \"/work/git/kinto/kinto/__main__.py\", line 108, in main\n answer = input(prompt).strip()\n File \"<string>\", line 0\n\n ^\nSyntaxError: unexpected EOF while parsing\n```\n\nIt appears that the code expects a `string` but getting a number and null, therefore failing on the `.strip()` call [here](https://github.com/Kinto/kinto/blob/master/kinto/__main__.py#L108).\n\n---\n\nEntering `\"\"`, `\"1\"` and `\"2\"` works. I'm assuming that's not the way it's designed to be? \n\n", "before_files": [{"content": "from __future__ import print_function\nimport argparse\nimport os\nimport sys\nimport logging\nimport logging.config\n\nfrom kinto.core import scripts\nfrom pyramid.scripts import pserve\nfrom pyramid.paster import bootstrap\nfrom kinto import __version__\nfrom kinto.config import init\n\nDEFAULT_CONFIG_FILE = 'config/kinto.ini'\nDEFAULT_PORT = 8888\nDEFAULT_LOG_LEVEL = logging.INFO\nDEFAULT_LOG_FORMAT = \"%(levelname)-5.5s %(message)s\"\n\n\ndef main(args=None):\n \"\"\"The main routine.\"\"\"\n if args is None:\n args = sys.argv[1:]\n\n parser = argparse.ArgumentParser(description=\"Kinto Command-Line \"\n \"Interface\")\n # XXX: deprecate this option, unnatural as first argument.\n parser.add_argument('--ini',\n help='Application configuration file',\n dest='ini_file',\n required=False,\n default=DEFAULT_CONFIG_FILE)\n\n parser.add_argument('-q', '--quiet', action='store_const',\n const=logging.CRITICAL, dest='verbosity',\n help='Show only critical errors.')\n\n parser.add_argument('--debug', action='store_const',\n const=logging.DEBUG, dest='verbosity',\n help='Show all messages, including debug messages.')\n\n commands = ('init', 'start', 'migrate', 'delete-collection', 'version')\n subparsers = parser.add_subparsers(title='subcommands',\n description='Main Kinto CLI commands',\n dest='subcommand',\n help=\"Choose and run with --help\")\n subparsers.required = True\n\n for command in commands:\n subparser = subparsers.add_parser(command)\n subparser.set_defaults(which=command)\n\n if command == 'init':\n subparser.add_argument('--backend',\n help='{memory,redis,postgresql}',\n dest='backend',\n required=False,\n default=None)\n elif command == 'migrate':\n subparser.add_argument('--dry-run',\n action='store_true',\n help='Simulate the migration operations '\n 'and show information',\n dest='dry_run',\n required=False,\n 
default=False)\n elif command == 'delete-collection':\n subparser.add_argument('--bucket',\n help='The bucket where the collection '\n 'belongs to.',\n required=True)\n subparser.add_argument('--collection',\n help='The collection to remove.',\n required=True)\n\n elif command == 'start':\n subparser.add_argument('--reload',\n action='store_true',\n help='Restart when code or config changes',\n required=False,\n default=False)\n subparser.add_argument('--port',\n type=int,\n help='Listening port number',\n required=False,\n default=DEFAULT_PORT)\n\n # Parse command-line arguments\n parsed_args = vars(parser.parse_args(args))\n\n config_file = parsed_args['ini_file']\n which_command = parsed_args['which']\n\n # Initialize logging from\n level = parsed_args.get('verbosity') or DEFAULT_LOG_LEVEL\n logging.basicConfig(level=level, format=DEFAULT_LOG_FORMAT)\n\n if which_command == 'init':\n if os.path.exists(config_file):\n print(\"%s already exists.\" % config_file, file=sys.stderr)\n return 1\n\n backend = parsed_args['backend']\n if not backend:\n while True:\n prompt = (\"Select the backend you would like to use: \"\n \"(1 - postgresql, 2 - redis, default - memory) \")\n answer = input(prompt).strip()\n try:\n backends = {\"1\": \"postgresql\", \"2\": \"redis\", \"\": \"memory\"}\n backend = backends[answer]\n break\n except KeyError:\n pass\n\n init(config_file, backend)\n\n # Install postgresql libraries if necessary\n if backend == \"postgresql\":\n try:\n import psycopg2 # NOQA\n except ImportError:\n import pip\n pip.main(['install', \"kinto[postgresql]\"])\n elif backend == \"redis\":\n try:\n import kinto_redis # NOQA\n except ImportError:\n import pip\n pip.main(['install', \"kinto[redis]\"])\n\n elif which_command == 'migrate':\n dry_run = parsed_args['dry_run']\n env = bootstrap(config_file)\n scripts.migrate(env, dry_run=dry_run)\n\n elif which_command == 'delete-collection':\n env = bootstrap(config_file)\n return scripts.delete_collection(env,\n parsed_args['bucket'],\n parsed_args['collection'])\n\n elif which_command == 'start':\n pserve_argv = ['pserve', config_file]\n if parsed_args['reload']:\n pserve_argv.append('--reload')\n pserve_argv.append('http_port=%s' % parsed_args['port'])\n pserve.main(pserve_argv)\n\n elif which_command == 'version':\n print(__version__)\n\n return 0\n", "path": "kinto/__main__.py"}], "after_files": [{"content": "from __future__ import print_function\nimport argparse\nimport os\nimport sys\nimport logging\nimport logging.config\nfrom six.moves import input\n\nfrom kinto.core import scripts\nfrom pyramid.scripts import pserve\nfrom pyramid.paster import bootstrap\nfrom kinto import __version__\nfrom kinto.config import init\n\nDEFAULT_CONFIG_FILE = 'config/kinto.ini'\nDEFAULT_PORT = 8888\nDEFAULT_LOG_LEVEL = logging.INFO\nDEFAULT_LOG_FORMAT = \"%(levelname)-5.5s %(message)s\"\n\n\ndef main(args=None):\n \"\"\"The main routine.\"\"\"\n if args is None:\n args = sys.argv[1:]\n\n parser = argparse.ArgumentParser(description=\"Kinto Command-Line \"\n \"Interface\")\n # XXX: deprecate this option, unnatural as first argument.\n parser.add_argument('--ini',\n help='Application configuration file',\n dest='ini_file',\n required=False,\n default=DEFAULT_CONFIG_FILE)\n\n parser.add_argument('-q', '--quiet', action='store_const',\n const=logging.CRITICAL, dest='verbosity',\n help='Show only critical errors.')\n\n parser.add_argument('--debug', action='store_const',\n const=logging.DEBUG, dest='verbosity',\n help='Show all messages, including debug 
messages.')\n\n commands = ('init', 'start', 'migrate', 'delete-collection', 'version')\n subparsers = parser.add_subparsers(title='subcommands',\n description='Main Kinto CLI commands',\n dest='subcommand',\n help=\"Choose and run with --help\")\n subparsers.required = True\n\n for command in commands:\n subparser = subparsers.add_parser(command)\n subparser.set_defaults(which=command)\n\n if command == 'init':\n subparser.add_argument('--backend',\n help='{memory,redis,postgresql}',\n dest='backend',\n required=False,\n default=None)\n elif command == 'migrate':\n subparser.add_argument('--dry-run',\n action='store_true',\n help='Simulate the migration operations '\n 'and show information',\n dest='dry_run',\n required=False,\n default=False)\n elif command == 'delete-collection':\n subparser.add_argument('--bucket',\n help='The bucket where the collection '\n 'belongs to.',\n required=True)\n subparser.add_argument('--collection',\n help='The collection to remove.',\n required=True)\n\n elif command == 'start':\n subparser.add_argument('--reload',\n action='store_true',\n help='Restart when code or config changes',\n required=False,\n default=False)\n subparser.add_argument('--port',\n type=int,\n help='Listening port number',\n required=False,\n default=DEFAULT_PORT)\n\n # Parse command-line arguments\n parsed_args = vars(parser.parse_args(args))\n\n config_file = parsed_args['ini_file']\n which_command = parsed_args['which']\n\n # Initialize logging from\n level = parsed_args.get('verbosity') or DEFAULT_LOG_LEVEL\n logging.basicConfig(level=level, format=DEFAULT_LOG_FORMAT)\n\n if which_command == 'init':\n if os.path.exists(config_file):\n print(\"%s already exists.\" % config_file, file=sys.stderr)\n return 1\n\n backend = parsed_args['backend']\n if not backend:\n while True:\n prompt = (\"Select the backend you would like to use: \"\n \"(1 - postgresql, 2 - redis, default - memory) \")\n answer = input(prompt).strip()\n try:\n backends = {\"1\": \"postgresql\", \"2\": \"redis\", \"\": \"memory\"}\n backend = backends[answer]\n break\n except KeyError:\n pass\n\n init(config_file, backend)\n\n # Install postgresql libraries if necessary\n if backend == \"postgresql\":\n try:\n import psycopg2 # NOQA\n except ImportError:\n import pip\n pip.main(['install', \"kinto[postgresql]\"])\n elif backend == \"redis\":\n try:\n import kinto_redis # NOQA\n except ImportError:\n import pip\n pip.main(['install', \"kinto[redis]\"])\n\n elif which_command == 'migrate':\n dry_run = parsed_args['dry_run']\n env = bootstrap(config_file)\n scripts.migrate(env, dry_run=dry_run)\n\n elif which_command == 'delete-collection':\n env = bootstrap(config_file)\n return scripts.delete_collection(env,\n parsed_args['bucket'],\n parsed_args['collection'])\n\n elif which_command == 'start':\n pserve_argv = ['pserve', config_file]\n if parsed_args['reload']:\n pserve_argv.append('--reload')\n pserve_argv.append('http_port=%s' % parsed_args['port'])\n pserve.main(pserve_argv)\n\n elif which_command == 'version':\n print(__version__)\n\n return 0\n", "path": "kinto/__main__.py"}]} |
gh_patches_debug_93 | rasdani/github-patches | git_diff | ManimCommunity__manim-1635 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
numpy not imported in `manim.mobject.probability`
## Description of bug / unexpected behavior
<!-- Add a clear and concise description of the problem you encountered. -->
When you try to use `BarChart`, it raises an error saying `np is not defined`.
## Expected behavior
<!-- Add a clear and concise description of what you expected to happen. -->
To not get the error and show the bar chart.
## How to reproduce the issue
<!-- Provide a piece of code illustrating the undesired behavior. -->
<details><summary>Code for reproducing the problem</summary>
```py
class Barchart(Scene):
def construct(self):
ls = [12,12,13,15,19,20,21]
bg = BarChart(ls)
self.add(bg)
```
</details>
## Additional media files
<!-- Paste in the files manim produced on rendering the code above. -->
<details><summary>Images/GIFs</summary>
<!-- PASTE MEDIA HERE -->
</details>
## Logs
<details><summary>Terminal output</summary>
<!-- Add "-v DEBUG" when calling manim to generate more detailed logs -->
```
<string> in <module>
<string> in construct(self)
/usr/local/lib/python3.7/dist-packages/manim/mobject/probability.py in add_axes(self, width, height)
197 x_axis = Line(self.tick_width * LEFT / 2, width * RIGHT)
198 y_axis = Line(MED_LARGE_BUFF * DOWN, height * UP)
--> 199 ticks = VGroup()
200 heights = np.linspace(0, height, self.n_ticks + 1)
201 values = np.linspace(0, self.max_value, self.n_ticks + 1)
NameError: name 'np' is not defined
```
<!-- Insert screenshots here (only when absolutely necessary, we prefer copy/pasted output!) -->
</details>
## System specifications
<details><summary>System Details</summary>
- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)):
- RAM:
- Python version (`python/py/python3 --version`):
- Installed modules (provide output from `pip list`):
```
Google Colab
```
</details>
<details><summary>LaTeX details</summary>
+ LaTeX distribution (e.g. TeX Live 2020):
+ Installed LaTeX packages:
<!-- output of `tlmgr list --only-installed` for TeX Live or a screenshot of the Packages page for MikTeX -->
</details>
<details><summary>FFMPEG</summary>
Output of `ffmpeg -version`:
```
PASTE HERE
```
</details>
## Additional comments
<!-- Add further context that you think might be relevant for this issue here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `manim/mobject/probability.py`
Content:
```
1 """Mobjects representing objects from probability theory and statistics."""
2
3 __all__ = ["SampleSpace", "BarChart"]
4
5
6 from ..constants import *
7 from ..mobject.geometry import Line, Rectangle
8 from ..mobject.mobject import Mobject
9 from ..mobject.opengl_mobject import OpenGLMobject
10 from ..mobject.svg.brace import Brace
11 from ..mobject.svg.tex_mobject import MathTex, Tex
12 from ..mobject.types.vectorized_mobject import VGroup
13 from ..utils.color import (
14 BLUE,
15 BLUE_E,
16 DARK_GREY,
17 GREEN_E,
18 LIGHT_GREY,
19 MAROON_B,
20 YELLOW,
21 color_gradient,
22 )
23 from ..utils.iterables import tuplify
24
25 EPSILON = 0.0001
26
27
28 class SampleSpace(Rectangle):
29 def __init__(
30 self,
31 height=3,
32 width=3,
33 fill_color=DARK_GREY,
34 fill_opacity=1,
35 stroke_width=0.5,
36 stroke_color=LIGHT_GREY,
37 default_label_scale_val=1,
38 ):
39 Rectangle.__init__(
40 self,
41 height=height,
42 width=width,
43 fill_color=fill_color,
44 fill_opacity=fill_opacity,
45 stroke_width=stroke_width,
46 stroke_color=stroke_color,
47 )
48 self.default_label_scale_val = default_label_scale_val
49
50 def add_title(self, title="Sample space", buff=MED_SMALL_BUFF):
51 # TODO, should this really exist in SampleSpaceScene
52 title_mob = Tex(title)
53 if title_mob.width > self.width:
54 title_mob.width = self.width
55 title_mob.next_to(self, UP, buff=buff)
56 self.title = title_mob
57 self.add(title_mob)
58
59 def add_label(self, label):
60 self.label = label
61
62 def complete_p_list(self, p_list):
63 new_p_list = list(tuplify(p_list))
64 remainder = 1.0 - sum(new_p_list)
65 if abs(remainder) > EPSILON:
66 new_p_list.append(remainder)
67 return new_p_list
68
69 def get_division_along_dimension(self, p_list, dim, colors, vect):
70 p_list = self.complete_p_list(p_list)
71 colors = color_gradient(colors, len(p_list))
72
73 last_point = self.get_edge_center(-vect)
74 parts = VGroup()
75 for factor, color in zip(p_list, colors):
76 part = SampleSpace()
77 part.set_fill(color, 1)
78 part.replace(self, stretch=True)
79 part.stretch(factor, dim)
80 part.move_to(last_point, -vect)
81 last_point = part.get_edge_center(vect)
82 parts.add(part)
83 return parts
84
85 def get_horizontal_division(self, p_list, colors=[GREEN_E, BLUE_E], vect=DOWN):
86 return self.get_division_along_dimension(p_list, 1, colors, vect)
87
88 def get_vertical_division(self, p_list, colors=[MAROON_B, YELLOW], vect=RIGHT):
89 return self.get_division_along_dimension(p_list, 0, colors, vect)
90
91 def divide_horizontally(self, *args, **kwargs):
92 self.horizontal_parts = self.get_horizontal_division(*args, **kwargs)
93 self.add(self.horizontal_parts)
94
95 def divide_vertically(self, *args, **kwargs):
96 self.vertical_parts = self.get_vertical_division(*args, **kwargs)
97 self.add(self.vertical_parts)
98
99 def get_subdivision_braces_and_labels(
100 self, parts, labels, direction, buff=SMALL_BUFF, min_num_quads=1
101 ):
102 label_mobs = VGroup()
103 braces = VGroup()
104 for label, part in zip(labels, parts):
105 brace = Brace(part, direction, min_num_quads=min_num_quads, buff=buff)
106 if isinstance(label, (Mobject, OpenGLMobject)):
107 label_mob = label
108 else:
109 label_mob = MathTex(label)
110 label_mob.scale(self.default_label_scale_val)
111 label_mob.next_to(brace, direction, buff)
112
113 braces.add(brace)
114 label_mobs.add(label_mob)
115 parts.braces = braces
116 parts.labels = label_mobs
117 parts.label_kwargs = {
118 "labels": label_mobs.copy(),
119 "direction": direction,
120 "buff": buff,
121 }
122 return VGroup(parts.braces, parts.labels)
123
124 def get_side_braces_and_labels(self, labels, direction=LEFT, **kwargs):
125 assert hasattr(self, "horizontal_parts")
126 parts = self.horizontal_parts
127 return self.get_subdivision_braces_and_labels(
128 parts, labels, direction, **kwargs
129 )
130
131 def get_top_braces_and_labels(self, labels, **kwargs):
132 assert hasattr(self, "vertical_parts")
133 parts = self.vertical_parts
134 return self.get_subdivision_braces_and_labels(parts, labels, UP, **kwargs)
135
136 def get_bottom_braces_and_labels(self, labels, **kwargs):
137 assert hasattr(self, "vertical_parts")
138 parts = self.vertical_parts
139 return self.get_subdivision_braces_and_labels(parts, labels, DOWN, **kwargs)
140
141 def add_braces_and_labels(self):
142 for attr in "horizontal_parts", "vertical_parts":
143 if not hasattr(self, attr):
144 continue
145 parts = getattr(self, attr)
146 for subattr in "braces", "labels":
147 if hasattr(parts, subattr):
148 self.add(getattr(parts, subattr))
149
150 def __getitem__(self, index):
151 if hasattr(self, "horizontal_parts"):
152 return self.horizontal_parts[index]
153 elif hasattr(self, "vertical_parts"):
154 return self.vertical_parts[index]
155 return self.split()[index]
156
157
158 class BarChart(VGroup):
159 def __init__(
160 self,
161 values,
162 height=4,
163 width=6,
164 n_ticks=4,
165 tick_width=0.2,
166 label_y_axis=True,
167 y_axis_label_height=0.25,
168 max_value=1,
169 bar_colors=[BLUE, YELLOW],
170 bar_fill_opacity=0.8,
171 bar_stroke_width=3,
172 bar_names=[],
173 bar_label_scale_val=0.75,
174 **kwargs
175 ):
176 VGroup.__init__(self, **kwargs)
177 self.n_ticks = n_ticks
178 self.tick_width = tick_width
179 self.label_y_axis = label_y_axis
180 self.y_axis_label_height = y_axis_label_height
181 self.max_value = max_value
182 self.bar_colors = bar_colors
183 self.bar_fill_opacity = bar_fill_opacity
184 self.bar_stroke_width = bar_stroke_width
185 self.bar_names = bar_names
186 self.bar_label_scale_val = bar_label_scale_val
187
188 if self.max_value is None:
189 self.max_value = max(values)
190
191 self.add_axes(width, height)
192 self.add_bars(values, width, height)
193 self.center()
194
195 def add_axes(self, width, height):
196 x_axis = Line(self.tick_width * LEFT / 2, width * RIGHT)
197 y_axis = Line(MED_LARGE_BUFF * DOWN, height * UP)
198 ticks = VGroup()
199 heights = np.linspace(0, height, self.n_ticks + 1)
200 values = np.linspace(0, self.max_value, self.n_ticks + 1)
201 for y, _value in zip(heights, values):
202 tick = Line(LEFT, RIGHT)
203 tick.width = self.tick_width
204 tick.move_to(y * UP)
205 ticks.add(tick)
206 y_axis.add(ticks)
207
208 self.add(x_axis, y_axis)
209 self.x_axis, self.y_axis = x_axis, y_axis
210
211 if self.label_y_axis:
212 labels = VGroup()
213 for tick, value in zip(ticks, values):
214 label = MathTex(str(np.round(value, 2)))
215 label.height = self.y_axis_label_height
216 label.next_to(tick, LEFT, SMALL_BUFF)
217 labels.add(label)
218 self.y_axis_labels = labels
219 self.add(labels)
220
221 def add_bars(self, values, width, height):
222 buff = float(width) / (2 * len(values) + 1)
223 bars = VGroup()
224 for i, value in enumerate(values):
225 bar = Rectangle(
226 height=(value / self.max_value) * height,
227 width=buff,
228 stroke_width=self.bar_stroke_width,
229 fill_opacity=self.bar_fill_opacity,
230 )
231 bar.move_to((2 * i + 1) * buff * RIGHT, DOWN + LEFT)
232 bars.add(bar)
233 bars.set_color_by_gradient(*self.bar_colors)
234
235 bar_labels = VGroup()
236 for bar, name in zip(bars, self.bar_names):
237 label = MathTex(str(name))
238 label.scale(self.bar_label_scale_val)
239 label.next_to(bar, DOWN, SMALL_BUFF)
240 bar_labels.add(label)
241
242 self.add(bars, bar_labels)
243 self.bars = bars
244 self.bar_labels = bar_labels
245
246 def change_bar_values(self, values):
247 for bar, value in zip(self.bars, values):
248 bar_bottom = bar.get_bottom()
249 bar.stretch_to_fit_height((value / self.max_value) * self.height)
250 bar.move_to(bar_bottom, DOWN)
251
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/manim/mobject/probability.py b/manim/mobject/probability.py
--- a/manim/mobject/probability.py
+++ b/manim/mobject/probability.py
@@ -2,6 +2,7 @@
__all__ = ["SampleSpace", "BarChart"]
+import numpy as np
from ..constants import *
from ..mobject.geometry import Line, Rectangle
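A stripped-down sketch of the failure mode, independent of Manim (the helper name below is hypothetical): the body of `probability.py` calls `np.linspace` and `np.round` at runtime, but nothing ever bound the name `np`, so the first `BarChart` raised `NameError`; binding it with `import numpy as np` is the entire fix.

```python
import numpy as np  # the one-line cure: bind the name the module relies on

def tick_heights(height, n_ticks):
    # mirrors what BarChart.add_axes computes: n_ticks + 1 evenly spaced ticks
    return np.linspace(0, height, n_ticks + 1)

print(tick_heights(4, 4))  # prints [0. 1. 2. 3. 4.]
```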
| {"golden_diff": "diff --git a/manim/mobject/probability.py b/manim/mobject/probability.py\n--- a/manim/mobject/probability.py\n+++ b/manim/mobject/probability.py\n@@ -2,6 +2,7 @@\n \n __all__ = [\"SampleSpace\", \"BarChart\"]\n \n+import numpy as np\n \n from ..constants import *\n from ..mobject.geometry import Line, Rectangle\n", "issue": "numpy not imported in `manim.mobject.probability`\n## Description of bug / unexpected behavior\r\n<!-- Add a clear and concise description of the problem you encountered. -->\r\nWhen you try to use `BarChart` it raises an error saying `np is not defined`\r\n\r\n## Expected behavior\r\n<!-- Add a clear and concise description of what you expected to happen. -->\r\nTo not get the error and show the bar chart.\r\n\r\n## How to reproduce the issue\r\n<!-- Provide a piece of code illustrating the undesired behavior. -->\r\n\r\n<details><summary>Code for reproducing the problem</summary>\r\n\r\n```py\r\nclass Barchart(Scene):\r\n def construct(self):\r\n ls = [12,12,13,15,19,20,21]\r\n bg = BarChart(ls)\r\n self.add(bg)\r\n```\r\n\r\n</details>\r\n\r\n\r\n## Additional media files\r\n<!-- Paste in the files manim produced on rendering the code above. -->\r\n\r\n<details><summary>Images/GIFs</summary>\r\n\r\n<!-- PASTE MEDIA HERE -->\r\n\r\n</details>\r\n\r\n\r\n## Logs\r\n<details><summary>Terminal output</summary>\r\n<!-- Add \"-v DEBUG\" when calling manim to generate more detailed logs -->\r\n\r\n```\r\n<string> in <module>\r\n\r\n<string> in construct(self)\r\n\r\n/usr/local/lib/python3.7/dist-packages/manim/mobject/probability.py in add_axes(self, width, height)\r\n 197 x_axis = Line(self.tick_width * LEFT / 2, width * RIGHT)\r\n 198 y_axis = Line(MED_LARGE_BUFF * DOWN, height * UP)\r\n--> 199 ticks = VGroup()\r\n 200 heights = np.linspace(0, height, self.n_ticks + 1)\r\n 201 values = np.linspace(0, self.max_value, self.n_ticks + 1)\r\n\r\nNameError: name 'np' is not defined\r\n```\r\n\r\n<!-- Insert screenshots here (only when absolutely necessary, we prefer copy/pasted output!) -->\r\n\r\n</details>\r\n\r\n\r\n## System specifications\r\n\r\n<details><summary>System Details</summary>\r\n\r\n- OS (with version, e.g Windows 10 v2004 or macOS 10.15 (Catalina)):\r\n- RAM:\r\n- Python version (`python/py/python3 --version`):\r\n- Installed modules (provide output from `pip list`):\r\n```\r\nGoogle Colab\r\n```\r\n</details>\r\n\r\n<details><summary>LaTeX details</summary>\r\n\r\n+ LaTeX distribution (e.g. TeX Live 2020):\r\n+ Installed LaTeX packages:\r\n<!-- output of `tlmgr list --only-installed` for TeX Live or a screenshot of the Packages page for MikTeX -->\r\n</details>\r\n\r\n<details><summary>FFMPEG</summary>\r\n\r\nOutput of `ffmpeg -version`:\r\n\r\n```\r\nPASTE HERE\r\n```\r\n</details>\r\n\r\n## Additional comments\r\n<!-- Add further context that you think might be relevant for this issue here. 
-->\r\n\n", "before_files": [{"content": "\"\"\"Mobjects representing objects from probability theory and statistics.\"\"\"\n\n__all__ = [\"SampleSpace\", \"BarChart\"]\n\n\nfrom ..constants import *\nfrom ..mobject.geometry import Line, Rectangle\nfrom ..mobject.mobject import Mobject\nfrom ..mobject.opengl_mobject import OpenGLMobject\nfrom ..mobject.svg.brace import Brace\nfrom ..mobject.svg.tex_mobject import MathTex, Tex\nfrom ..mobject.types.vectorized_mobject import VGroup\nfrom ..utils.color import (\n BLUE,\n BLUE_E,\n DARK_GREY,\n GREEN_E,\n LIGHT_GREY,\n MAROON_B,\n YELLOW,\n color_gradient,\n)\nfrom ..utils.iterables import tuplify\n\nEPSILON = 0.0001\n\n\nclass SampleSpace(Rectangle):\n def __init__(\n self,\n height=3,\n width=3,\n fill_color=DARK_GREY,\n fill_opacity=1,\n stroke_width=0.5,\n stroke_color=LIGHT_GREY,\n default_label_scale_val=1,\n ):\n Rectangle.__init__(\n self,\n height=height,\n width=width,\n fill_color=fill_color,\n fill_opacity=fill_opacity,\n stroke_width=stroke_width,\n stroke_color=stroke_color,\n )\n self.default_label_scale_val = default_label_scale_val\n\n def add_title(self, title=\"Sample space\", buff=MED_SMALL_BUFF):\n # TODO, should this really exist in SampleSpaceScene\n title_mob = Tex(title)\n if title_mob.width > self.width:\n title_mob.width = self.width\n title_mob.next_to(self, UP, buff=buff)\n self.title = title_mob\n self.add(title_mob)\n\n def add_label(self, label):\n self.label = label\n\n def complete_p_list(self, p_list):\n new_p_list = list(tuplify(p_list))\n remainder = 1.0 - sum(new_p_list)\n if abs(remainder) > EPSILON:\n new_p_list.append(remainder)\n return new_p_list\n\n def get_division_along_dimension(self, p_list, dim, colors, vect):\n p_list = self.complete_p_list(p_list)\n colors = color_gradient(colors, len(p_list))\n\n last_point = self.get_edge_center(-vect)\n parts = VGroup()\n for factor, color in zip(p_list, colors):\n part = SampleSpace()\n part.set_fill(color, 1)\n part.replace(self, stretch=True)\n part.stretch(factor, dim)\n part.move_to(last_point, -vect)\n last_point = part.get_edge_center(vect)\n parts.add(part)\n return parts\n\n def get_horizontal_division(self, p_list, colors=[GREEN_E, BLUE_E], vect=DOWN):\n return self.get_division_along_dimension(p_list, 1, colors, vect)\n\n def get_vertical_division(self, p_list, colors=[MAROON_B, YELLOW], vect=RIGHT):\n return self.get_division_along_dimension(p_list, 0, colors, vect)\n\n def divide_horizontally(self, *args, **kwargs):\n self.horizontal_parts = self.get_horizontal_division(*args, **kwargs)\n self.add(self.horizontal_parts)\n\n def divide_vertically(self, *args, **kwargs):\n self.vertical_parts = self.get_vertical_division(*args, **kwargs)\n self.add(self.vertical_parts)\n\n def get_subdivision_braces_and_labels(\n self, parts, labels, direction, buff=SMALL_BUFF, min_num_quads=1\n ):\n label_mobs = VGroup()\n braces = VGroup()\n for label, part in zip(labels, parts):\n brace = Brace(part, direction, min_num_quads=min_num_quads, buff=buff)\n if isinstance(label, (Mobject, OpenGLMobject)):\n label_mob = label\n else:\n label_mob = MathTex(label)\n label_mob.scale(self.default_label_scale_val)\n label_mob.next_to(brace, direction, buff)\n\n braces.add(brace)\n label_mobs.add(label_mob)\n parts.braces = braces\n parts.labels = label_mobs\n parts.label_kwargs = {\n \"labels\": label_mobs.copy(),\n \"direction\": direction,\n \"buff\": buff,\n }\n return VGroup(parts.braces, parts.labels)\n\n def get_side_braces_and_labels(self, labels, direction=LEFT, 
**kwargs):\n assert hasattr(self, \"horizontal_parts\")\n parts = self.horizontal_parts\n return self.get_subdivision_braces_and_labels(\n parts, labels, direction, **kwargs\n )\n\n def get_top_braces_and_labels(self, labels, **kwargs):\n assert hasattr(self, \"vertical_parts\")\n parts = self.vertical_parts\n return self.get_subdivision_braces_and_labels(parts, labels, UP, **kwargs)\n\n def get_bottom_braces_and_labels(self, labels, **kwargs):\n assert hasattr(self, \"vertical_parts\")\n parts = self.vertical_parts\n return self.get_subdivision_braces_and_labels(parts, labels, DOWN, **kwargs)\n\n def add_braces_and_labels(self):\n for attr in \"horizontal_parts\", \"vertical_parts\":\n if not hasattr(self, attr):\n continue\n parts = getattr(self, attr)\n for subattr in \"braces\", \"labels\":\n if hasattr(parts, subattr):\n self.add(getattr(parts, subattr))\n\n def __getitem__(self, index):\n if hasattr(self, \"horizontal_parts\"):\n return self.horizontal_parts[index]\n elif hasattr(self, \"vertical_parts\"):\n return self.vertical_parts[index]\n return self.split()[index]\n\n\nclass BarChart(VGroup):\n def __init__(\n self,\n values,\n height=4,\n width=6,\n n_ticks=4,\n tick_width=0.2,\n label_y_axis=True,\n y_axis_label_height=0.25,\n max_value=1,\n bar_colors=[BLUE, YELLOW],\n bar_fill_opacity=0.8,\n bar_stroke_width=3,\n bar_names=[],\n bar_label_scale_val=0.75,\n **kwargs\n ):\n VGroup.__init__(self, **kwargs)\n self.n_ticks = n_ticks\n self.tick_width = tick_width\n self.label_y_axis = label_y_axis\n self.y_axis_label_height = y_axis_label_height\n self.max_value = max_value\n self.bar_colors = bar_colors\n self.bar_fill_opacity = bar_fill_opacity\n self.bar_stroke_width = bar_stroke_width\n self.bar_names = bar_names\n self.bar_label_scale_val = bar_label_scale_val\n\n if self.max_value is None:\n self.max_value = max(values)\n\n self.add_axes(width, height)\n self.add_bars(values, width, height)\n self.center()\n\n def add_axes(self, width, height):\n x_axis = Line(self.tick_width * LEFT / 2, width * RIGHT)\n y_axis = Line(MED_LARGE_BUFF * DOWN, height * UP)\n ticks = VGroup()\n heights = np.linspace(0, height, self.n_ticks + 1)\n values = np.linspace(0, self.max_value, self.n_ticks + 1)\n for y, _value in zip(heights, values):\n tick = Line(LEFT, RIGHT)\n tick.width = self.tick_width\n tick.move_to(y * UP)\n ticks.add(tick)\n y_axis.add(ticks)\n\n self.add(x_axis, y_axis)\n self.x_axis, self.y_axis = x_axis, y_axis\n\n if self.label_y_axis:\n labels = VGroup()\n for tick, value in zip(ticks, values):\n label = MathTex(str(np.round(value, 2)))\n label.height = self.y_axis_label_height\n label.next_to(tick, LEFT, SMALL_BUFF)\n labels.add(label)\n self.y_axis_labels = labels\n self.add(labels)\n\n def add_bars(self, values, width, height):\n buff = float(width) / (2 * len(values) + 1)\n bars = VGroup()\n for i, value in enumerate(values):\n bar = Rectangle(\n height=(value / self.max_value) * height,\n width=buff,\n stroke_width=self.bar_stroke_width,\n fill_opacity=self.bar_fill_opacity,\n )\n bar.move_to((2 * i + 1) * buff * RIGHT, DOWN + LEFT)\n bars.add(bar)\n bars.set_color_by_gradient(*self.bar_colors)\n\n bar_labels = VGroup()\n for bar, name in zip(bars, self.bar_names):\n label = MathTex(str(name))\n label.scale(self.bar_label_scale_val)\n label.next_to(bar, DOWN, SMALL_BUFF)\n bar_labels.add(label)\n\n self.add(bars, bar_labels)\n self.bars = bars\n self.bar_labels = bar_labels\n\n def change_bar_values(self, values):\n for bar, value in zip(self.bars, values):\n 
bar_bottom = bar.get_bottom()\n bar.stretch_to_fit_height((value / self.max_value) * self.height)\n bar.move_to(bar_bottom, DOWN)\n", "path": "manim/mobject/probability.py"}], "after_files": [{"content": "\"\"\"Mobjects representing objects from probability theory and statistics.\"\"\"\n\n__all__ = [\"SampleSpace\", \"BarChart\"]\n\nimport numpy as np\n\nfrom ..constants import *\nfrom ..mobject.geometry import Line, Rectangle\nfrom ..mobject.mobject import Mobject\nfrom ..mobject.opengl_mobject import OpenGLMobject\nfrom ..mobject.svg.brace import Brace\nfrom ..mobject.svg.tex_mobject import MathTex, Tex\nfrom ..mobject.types.vectorized_mobject import VGroup\nfrom ..utils.color import (\n BLUE,\n BLUE_E,\n DARK_GREY,\n GREEN_E,\n LIGHT_GREY,\n MAROON_B,\n YELLOW,\n color_gradient,\n)\nfrom ..utils.iterables import tuplify\n\nEPSILON = 0.0001\n\n\nclass SampleSpace(Rectangle):\n def __init__(\n self,\n height=3,\n width=3,\n fill_color=DARK_GREY,\n fill_opacity=1,\n stroke_width=0.5,\n stroke_color=LIGHT_GREY,\n default_label_scale_val=1,\n ):\n Rectangle.__init__(\n self,\n height=height,\n width=width,\n fill_color=fill_color,\n fill_opacity=fill_opacity,\n stroke_width=stroke_width,\n stroke_color=stroke_color,\n )\n self.default_label_scale_val = default_label_scale_val\n\n def add_title(self, title=\"Sample space\", buff=MED_SMALL_BUFF):\n # TODO, should this really exist in SampleSpaceScene\n title_mob = Tex(title)\n if title_mob.width > self.width:\n title_mob.width = self.width\n title_mob.next_to(self, UP, buff=buff)\n self.title = title_mob\n self.add(title_mob)\n\n def add_label(self, label):\n self.label = label\n\n def complete_p_list(self, p_list):\n new_p_list = list(tuplify(p_list))\n remainder = 1.0 - sum(new_p_list)\n if abs(remainder) > EPSILON:\n new_p_list.append(remainder)\n return new_p_list\n\n def get_division_along_dimension(self, p_list, dim, colors, vect):\n p_list = self.complete_p_list(p_list)\n colors = color_gradient(colors, len(p_list))\n\n last_point = self.get_edge_center(-vect)\n parts = VGroup()\n for factor, color in zip(p_list, colors):\n part = SampleSpace()\n part.set_fill(color, 1)\n part.replace(self, stretch=True)\n part.stretch(factor, dim)\n part.move_to(last_point, -vect)\n last_point = part.get_edge_center(vect)\n parts.add(part)\n return parts\n\n def get_horizontal_division(self, p_list, colors=[GREEN_E, BLUE_E], vect=DOWN):\n return self.get_division_along_dimension(p_list, 1, colors, vect)\n\n def get_vertical_division(self, p_list, colors=[MAROON_B, YELLOW], vect=RIGHT):\n return self.get_division_along_dimension(p_list, 0, colors, vect)\n\n def divide_horizontally(self, *args, **kwargs):\n self.horizontal_parts = self.get_horizontal_division(*args, **kwargs)\n self.add(self.horizontal_parts)\n\n def divide_vertically(self, *args, **kwargs):\n self.vertical_parts = self.get_vertical_division(*args, **kwargs)\n self.add(self.vertical_parts)\n\n def get_subdivision_braces_and_labels(\n self, parts, labels, direction, buff=SMALL_BUFF, min_num_quads=1\n ):\n label_mobs = VGroup()\n braces = VGroup()\n for label, part in zip(labels, parts):\n brace = Brace(part, direction, min_num_quads=min_num_quads, buff=buff)\n if isinstance(label, (Mobject, OpenGLMobject)):\n label_mob = label\n else:\n label_mob = MathTex(label)\n label_mob.scale(self.default_label_scale_val)\n label_mob.next_to(brace, direction, buff)\n\n braces.add(brace)\n label_mobs.add(label_mob)\n parts.braces = braces\n parts.labels = label_mobs\n parts.label_kwargs = {\n 
\"labels\": label_mobs.copy(),\n \"direction\": direction,\n \"buff\": buff,\n }\n return VGroup(parts.braces, parts.labels)\n\n def get_side_braces_and_labels(self, labels, direction=LEFT, **kwargs):\n assert hasattr(self, \"horizontal_parts\")\n parts = self.horizontal_parts\n return self.get_subdivision_braces_and_labels(\n parts, labels, direction, **kwargs\n )\n\n def get_top_braces_and_labels(self, labels, **kwargs):\n assert hasattr(self, \"vertical_parts\")\n parts = self.vertical_parts\n return self.get_subdivision_braces_and_labels(parts, labels, UP, **kwargs)\n\n def get_bottom_braces_and_labels(self, labels, **kwargs):\n assert hasattr(self, \"vertical_parts\")\n parts = self.vertical_parts\n return self.get_subdivision_braces_and_labels(parts, labels, DOWN, **kwargs)\n\n def add_braces_and_labels(self):\n for attr in \"horizontal_parts\", \"vertical_parts\":\n if not hasattr(self, attr):\n continue\n parts = getattr(self, attr)\n for subattr in \"braces\", \"labels\":\n if hasattr(parts, subattr):\n self.add(getattr(parts, subattr))\n\n def __getitem__(self, index):\n if hasattr(self, \"horizontal_parts\"):\n return self.horizontal_parts[index]\n elif hasattr(self, \"vertical_parts\"):\n return self.vertical_parts[index]\n return self.split()[index]\n\n\nclass BarChart(VGroup):\n def __init__(\n self,\n values,\n height=4,\n width=6,\n n_ticks=4,\n tick_width=0.2,\n label_y_axis=True,\n y_axis_label_height=0.25,\n max_value=1,\n bar_colors=[BLUE, YELLOW],\n bar_fill_opacity=0.8,\n bar_stroke_width=3,\n bar_names=[],\n bar_label_scale_val=0.75,\n **kwargs\n ):\n VGroup.__init__(self, **kwargs)\n self.n_ticks = n_ticks\n self.tick_width = tick_width\n self.label_y_axis = label_y_axis\n self.y_axis_label_height = y_axis_label_height\n self.max_value = max_value\n self.bar_colors = bar_colors\n self.bar_fill_opacity = bar_fill_opacity\n self.bar_stroke_width = bar_stroke_width\n self.bar_names = bar_names\n self.bar_label_scale_val = bar_label_scale_val\n\n if self.max_value is None:\n self.max_value = max(values)\n\n self.add_axes(width, height)\n self.add_bars(values, width, height)\n self.center()\n\n def add_axes(self, width, height):\n x_axis = Line(self.tick_width * LEFT / 2, width * RIGHT)\n y_axis = Line(MED_LARGE_BUFF * DOWN, height * UP)\n ticks = VGroup()\n heights = np.linspace(0, height, self.n_ticks + 1)\n values = np.linspace(0, self.max_value, self.n_ticks + 1)\n for y, _value in zip(heights, values):\n tick = Line(LEFT, RIGHT)\n tick.width = self.tick_width\n tick.move_to(y * UP)\n ticks.add(tick)\n y_axis.add(ticks)\n\n self.add(x_axis, y_axis)\n self.x_axis, self.y_axis = x_axis, y_axis\n\n if self.label_y_axis:\n labels = VGroup()\n for tick, value in zip(ticks, values):\n label = MathTex(str(np.round(value, 2)))\n label.height = self.y_axis_label_height\n label.next_to(tick, LEFT, SMALL_BUFF)\n labels.add(label)\n self.y_axis_labels = labels\n self.add(labels)\n\n def add_bars(self, values, width, height):\n buff = float(width) / (2 * len(values) + 1)\n bars = VGroup()\n for i, value in enumerate(values):\n bar = Rectangle(\n height=(value / self.max_value) * height,\n width=buff,\n stroke_width=self.bar_stroke_width,\n fill_opacity=self.bar_fill_opacity,\n )\n bar.move_to((2 * i + 1) * buff * RIGHT, DOWN + LEFT)\n bars.add(bar)\n bars.set_color_by_gradient(*self.bar_colors)\n\n bar_labels = VGroup()\n for bar, name in zip(bars, self.bar_names):\n label = MathTex(str(name))\n label.scale(self.bar_label_scale_val)\n label.next_to(bar, DOWN, SMALL_BUFF)\n 
bar_labels.add(label)\n\n self.add(bars, bar_labels)\n self.bars = bars\n self.bar_labels = bar_labels\n\n def change_bar_values(self, values):\n for bar, value in zip(self.bars, values):\n bar_bottom = bar.get_bottom()\n bar.stretch_to_fit_height((value / self.max_value) * self.height)\n bar.move_to(bar_bottom, DOWN)\n", "path": "manim/mobject/probability.py"}]} |
gh_patches_debug_94 | rasdani/github-patches | git_diff | docker__docker-py-1669 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issue with port option in 2.4.0 version
Hi,
I updated to 2.4 today and I got an issue with docker-compose when I tried to add the following line to my configuration file (docker-compose.yml):
`ports:
- "127.0.0.1:9292:9090"`
I got the following error:
```
ERROR: for ContainerName expected string or buffer
Traceback (most recent call last):
File "/usr/local/bin/docker-compose", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/compose/cli/main.py", line 68, in main
command()
File "/usr/local/lib/python2.7/dist-packages/compose/cli/main.py", line 118, in perform_command
handler(command, command_options)
File "/usr/local/lib/python2.7/dist-packages/compose/cli/main.py", line 926, in up
scale_override=parse_scale_args(options['--scale']),
File "/usr/local/lib/python2.7/dist-packages/compose/project.py", line 424, in up
get_deps
File "/usr/local/lib/python2.7/dist-packages/compose/parallel.py", line 69, in parallel_execute
raise error_to_reraise
TypeError: expected string or buffer
```
I have no issue when I downgrade to version 2.3 of the package.
To reproduce the issue, I use the following configuration (it doesn't seem to depend on the image):
```
version: '2'
services :
ContainerName:
image: bae2d441e03a
ports:
- "127.0.0.1:9292:9090"
```
I run on Ubuntu 14.04.5 LTS with the following package:
```
docker==2.4.0
docker-compose==1.14.0
docker-pycreds==0.2.1
dockerpty==0.4.1
Python 2.7.6
Client:
Version: 17.05.0-ce
API version: 1.29
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:06:06 2017
OS/Arch: linux/amd64
Server:
Version: 17.05.0-ce
API version: 1.29 (minimum version 1.12)
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:06:06 2017
OS/Arch: linux/amd64
Experimental: false
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/utils/ports.py`
Content:
```
1 import re
2
3 PORT_SPEC = re.compile(
4 "^" # Match full string
5 "(" # External part
6 "((?P<host>[a-fA-F\d.:]+):)?" # Address
7 "(?P<ext>[\d]*)(-(?P<ext_end>[\d]+))?:" # External range
8 ")?"
9 "(?P<int>[\d]+)(-(?P<int_end>[\d]+))?" # Internal range
10 "(?P<proto>/(udp|tcp))?" # Protocol
11 "$" # Match full string
12 )
13
14
15 def add_port_mapping(port_bindings, internal_port, external):
16 if internal_port in port_bindings:
17 port_bindings[internal_port].append(external)
18 else:
19 port_bindings[internal_port] = [external]
20
21
22 def add_port(port_bindings, internal_port_range, external_range):
23 if external_range is None:
24 for internal_port in internal_port_range:
25 add_port_mapping(port_bindings, internal_port, None)
26 else:
27 ports = zip(internal_port_range, external_range)
28 for internal_port, external_port in ports:
29 add_port_mapping(port_bindings, internal_port, external_port)
30
31
32 def build_port_bindings(ports):
33 port_bindings = {}
34 for port in ports:
35 internal_port_range, external_range = split_port(port)
36 add_port(port_bindings, internal_port_range, external_range)
37 return port_bindings
38
39
40 def _raise_invalid_port(port):
41 raise ValueError('Invalid port "%s", should be '
42 '[[remote_ip:]remote_port[-remote_port]:]'
43 'port[/protocol]' % port)
44
45
46 def port_range(start, end, proto, randomly_available_port=False):
47 if not start:
48 return start
49 if not end:
50 return [start + proto]
51 if randomly_available_port:
52 return ['{}-{}'.format(start, end) + proto]
53 return [str(port) + proto for port in range(int(start), int(end) + 1)]
54
55
56 def split_port(port):
57 match = PORT_SPEC.match(port)
58 if match is None:
59 _raise_invalid_port(port)
60 parts = match.groupdict()
61
62 host = parts['host']
63 proto = parts['proto'] or ''
64 internal = port_range(parts['int'], parts['int_end'], proto)
65 external = port_range(
66 parts['ext'], parts['ext_end'], '', len(internal) == 1)
67
68 if host is None:
69 if external is not None and len(internal) != len(external):
70 raise ValueError('Port ranges don\'t match in length')
71 return internal, external
72 else:
73 if not external:
74 external = [None] * len(internal)
75 elif len(internal) != len(external):
76 raise ValueError('Port ranges don\'t match in length')
77 return internal, [(host, ext_port) for ext_port in external]
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/utils/ports.py b/docker/utils/ports.py
--- a/docker/utils/ports.py
+++ b/docker/utils/ports.py
@@ -54,6 +54,7 @@
def split_port(port):
+ port = str(port)
match = PORT_SPEC.match(port)
if match is None:
_raise_invalid_port(port)
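
For illustration, the sketch below shows why this one-line patch resolves the TypeError: `re` pattern matching only accepts strings, so a caller that hands `split_port` a non-string value (the traceback suggests docker-compose does exactly that for this mapping) fails with `TypeError: expected string or buffer` on Python 2, while coercing with `str(port)` first lets the value parse. The `PORT_SPEC` used here is a deliberately simplified, hypothetical stand-in for the real pattern (IPv4 host only, no port ranges or protocols):

```python
import re

# Hypothetical, simplified stand-in for docker-py's PORT_SPEC; just enough
# to show the string-vs-int issue, not the full grammar.
PORT_SPEC = re.compile(r"^((?P<host>[\d.]+):)?((?P<ext>\d+):)?(?P<int>\d+)$")


def split_port_buggy(port):
    # 2.4.0 behaviour: re.match() rejects non-strings, so an int such as 9090
    # raises "TypeError: expected string or buffer" (Python 2) / TypeError (Python 3).
    return PORT_SPEC.match(port)


def split_port_patched(port):
    port = str(port)  # the one-line fix from the golden diff above
    return PORT_SPEC.match(port)


print(split_port_patched("127.0.0.1:9292:9090").groupdict())
# {'host': '127.0.0.1', 'ext': '9292', 'int': '9090'}
print(split_port_patched(9090).group("int"))  # '9090', no TypeError
# split_port_buggy(9090) would raise TypeError
```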
| {"golden_diff": "diff --git a/docker/utils/ports.py b/docker/utils/ports.py\n--- a/docker/utils/ports.py\n+++ b/docker/utils/ports.py\n@@ -54,6 +54,7 @@\n \n \n def split_port(port):\n+ port = str(port)\n match = PORT_SPEC.match(port)\n if match is None:\n _raise_invalid_port(port)\n", "issue": "Issue with port option in 2.4.0 version\nHi,\r\nI update to the 2.4 today and i got issue with docker-compose when i try to add the following line to my configuration file (docker-compose.yml) : \r\n`ports:\r\n - \"127.0.0.1:9292:9090\"`\r\n\r\nI got the following error:\r\n\r\n`\r\nERROR: for ContainerName expected string or buffer\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/docker-compose\", line 11, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/cli/main.py\", line 68, in main\r\n command()\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/cli/main.py\", line 118, in perform_command\r\n handler(command, command_options)\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/cli/main.py\", line 926, in up\r\n scale_override=parse_scale_args(options['--scale']),\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/project.py\", line 424, in up\r\n get_deps\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/parallel.py\", line 69, in parallel_execute\r\n raise error_to_reraise\r\nTypeError: expected string or buffer\r\n`\r\n\r\nI have no issue when i downgrade again to the 2.3 version of the package\r\n\r\nTo reproduce the issue, i use the following configuration ( it doesn't seem to depend on the image):\r\n```\r\nversion: '2'\r\n\r\nservices :\r\n ContainerName:\r\n image: bae2d441e03a\r\n ports:\r\n - \"127.0.0.1:9292:9090\"\r\n```\r\n\r\nI run on Ubuntu 14.04.5 LTS with the following package:\r\n```\r\ndocker==2.4.0\r\ndocker-compose==1.14.0\r\ndocker-pycreds==0.2.1\r\ndockerpty==0.4.1\r\nPython 2.7.6\r\nClient:\r\n Version: 17.05.0-ce\r\n API version: 1.29\r\n Go version: go1.7.5\r\n Git commit: 89658be\r\n Built: Thu May 4 22:06:06 2017\r\n OS/Arch: linux/amd64\r\n\r\nServer:\r\n Version: 17.05.0-ce\r\n API version: 1.29 (minimum version 1.12)\r\n Go version: go1.7.5\r\n Git commit: 89658be\r\n Built: Thu May 4 22:06:06 2017\r\n OS/Arch: linux/amd64\r\n Experimental: false\r\n```\n", "before_files": [{"content": "import re\n\nPORT_SPEC = re.compile(\n \"^\" # Match full string\n \"(\" # External part\n \"((?P<host>[a-fA-F\\d.:]+):)?\" # Address\n \"(?P<ext>[\\d]*)(-(?P<ext_end>[\\d]+))?:\" # External range\n \")?\"\n \"(?P<int>[\\d]+)(-(?P<int_end>[\\d]+))?\" # Internal range\n \"(?P<proto>/(udp|tcp))?\" # Protocol\n \"$\" # Match full string\n)\n\n\ndef add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n else:\n port_bindings[internal_port] = [external]\n\n\ndef add_port(port_bindings, internal_port_range, external_range):\n if external_range is None:\n for internal_port in internal_port_range:\n add_port_mapping(port_bindings, internal_port, None)\n else:\n ports = zip(internal_port_range, external_range)\n for internal_port, external_port in ports:\n add_port_mapping(port_bindings, internal_port, external_port)\n\n\ndef build_port_bindings(ports):\n port_bindings = {}\n for port in ports:\n internal_port_range, external_range = split_port(port)\n add_port(port_bindings, internal_port_range, external_range)\n return port_bindings\n\n\ndef _raise_invalid_port(port):\n raise ValueError('Invalid port \"%s\", should 
be '\n '[[remote_ip:]remote_port[-remote_port]:]'\n 'port[/protocol]' % port)\n\n\ndef port_range(start, end, proto, randomly_available_port=False):\n if not start:\n return start\n if not end:\n return [start + proto]\n if randomly_available_port:\n return ['{}-{}'.format(start, end) + proto]\n return [str(port) + proto for port in range(int(start), int(end) + 1)]\n\n\ndef split_port(port):\n match = PORT_SPEC.match(port)\n if match is None:\n _raise_invalid_port(port)\n parts = match.groupdict()\n\n host = parts['host']\n proto = parts['proto'] or ''\n internal = port_range(parts['int'], parts['int_end'], proto)\n external = port_range(\n parts['ext'], parts['ext_end'], '', len(internal) == 1)\n\n if host is None:\n if external is not None and len(internal) != len(external):\n raise ValueError('Port ranges don\\'t match in length')\n return internal, external\n else:\n if not external:\n external = [None] * len(internal)\n elif len(internal) != len(external):\n raise ValueError('Port ranges don\\'t match in length')\n return internal, [(host, ext_port) for ext_port in external]\n", "path": "docker/utils/ports.py"}], "after_files": [{"content": "import re\n\nPORT_SPEC = re.compile(\n \"^\" # Match full string\n \"(\" # External part\n \"((?P<host>[a-fA-F\\d.:]+):)?\" # Address\n \"(?P<ext>[\\d]*)(-(?P<ext_end>[\\d]+))?:\" # External range\n \")?\"\n \"(?P<int>[\\d]+)(-(?P<int_end>[\\d]+))?\" # Internal range\n \"(?P<proto>/(udp|tcp))?\" # Protocol\n \"$\" # Match full string\n)\n\n\ndef add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n else:\n port_bindings[internal_port] = [external]\n\n\ndef add_port(port_bindings, internal_port_range, external_range):\n if external_range is None:\n for internal_port in internal_port_range:\n add_port_mapping(port_bindings, internal_port, None)\n else:\n ports = zip(internal_port_range, external_range)\n for internal_port, external_port in ports:\n add_port_mapping(port_bindings, internal_port, external_port)\n\n\ndef build_port_bindings(ports):\n port_bindings = {}\n for port in ports:\n internal_port_range, external_range = split_port(port)\n add_port(port_bindings, internal_port_range, external_range)\n return port_bindings\n\n\ndef _raise_invalid_port(port):\n raise ValueError('Invalid port \"%s\", should be '\n '[[remote_ip:]remote_port[-remote_port]:]'\n 'port[/protocol]' % port)\n\n\ndef port_range(start, end, proto, randomly_available_port=False):\n if not start:\n return start\n if not end:\n return [start + proto]\n if randomly_available_port:\n return ['{}-{}'.format(start, end) + proto]\n return [str(port) + proto for port in range(int(start), int(end) + 1)]\n\n\ndef split_port(port):\n port = str(port)\n match = PORT_SPEC.match(port)\n if match is None:\n _raise_invalid_port(port)\n parts = match.groupdict()\n\n host = parts['host']\n proto = parts['proto'] or ''\n internal = port_range(parts['int'], parts['int_end'], proto)\n external = port_range(\n parts['ext'], parts['ext_end'], '', len(internal) == 1)\n\n if host is None:\n if external is not None and len(internal) != len(external):\n raise ValueError('Port ranges don\\'t match in length')\n return internal, external\n else:\n if not external:\n external = [None] * len(internal)\n elif len(internal) != len(external):\n raise ValueError('Port ranges don\\'t match in length')\n return internal, [(host, ext_port) for ext_port in external]\n", "path": "docker/utils/ports.py"}]} |
gh_patches_debug_95 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-4130 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'Updater' object has no attribute '_Updater__polling_cleanup_cb' and no __dict__ for setting new attributes
### Steps to Reproduce
1. Created the bot and ran the code below:
```python
import asyncio
import telegram
async def main():
bot = telegram.Bot("TOKEN")
async with bot:
print(await bot.get_me())
if __name__ == '__main__':
asyncio.run(main())
```
2. Added a new file and ran the code below:
```python
import logging
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, CommandHandler
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.INFO
)
async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
await context.bot.send_message(chat_id= update.effective_chat.id, text="Bot Started.")
if __name__=="__main__":
application= ApplicationBuilder().token("6900324258:AAEMo7fMCqGE816sPd30-Jmsiw1P5jgpKHA").build()
start_handler= CommandHandler("start", start)
application.add_handler(start_handler)
application.run_polling()
```
### Expected behaviour
There shouldn't be any errors or problems.
### Actual behaviour
An AttributeError was raised; the full traceback is included in the relevant log output below.
### Operating System
windows 10
### Version of Python, python-telegram-bot & dependencies
```shell
python-telegram-bot 20.8
Bot API 7.0
Python 3.13.0a2 (tags/v3.13.0a2:9c4347e, Nov 22 2023, 18:30:15) [MSC v.1937 64 bit (AMD64)]
```
### Relevant log output
```python
File "f:\Codes\Python\Telegram_Bot\main.py", line 15, in <module>
application= ApplicationBuilder().token(token).build()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Programs\Python\Python313\Lib\site-packages\telegram\ext\_applicationbuilder.py", line 312, in build
updater = Updater(bot=bot, update_queue=update_queue)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\AppData\Local\Programs\Python\Python313\Lib\site-packages\telegram\ext\_updater.py", line 128, in __init__
self.__polling_cleanup_cb: Optional[Callable[[], Coroutine[Any, Any, None]]] = None
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Updater' object has no attribute '_Updater__polling_cleanup_cb' and no __dict__ for setting new attributes
```
### Additional Context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `telegram/ext/_updater.py`
Content:
```
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2024
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains the class Updater, which tries to make creating Telegram bots intuitive."""
20 import asyncio
21 import contextlib
22 import ssl
23 from pathlib import Path
24 from types import TracebackType
25 from typing import (
26 TYPE_CHECKING,
27 Any,
28 AsyncContextManager,
29 Callable,
30 Coroutine,
31 List,
32 Optional,
33 Type,
34 TypeVar,
35 Union,
36 )
37
38 from telegram._utils.defaultvalue import DEFAULT_80, DEFAULT_IP, DEFAULT_NONE, DefaultValue
39 from telegram._utils.logging import get_logger
40 from telegram._utils.repr import build_repr_with_selected_attrs
41 from telegram._utils.types import DVType, ODVInput
42 from telegram.error import InvalidToken, RetryAfter, TelegramError, TimedOut
43
44 try:
45 from telegram.ext._utils.webhookhandler import WebhookAppClass, WebhookServer
46
47 WEBHOOKS_AVAILABLE = True
48 except ImportError:
49 WEBHOOKS_AVAILABLE = False
50
51 if TYPE_CHECKING:
52 from telegram import Bot
53
54
55 _UpdaterType = TypeVar("_UpdaterType", bound="Updater") # pylint: disable=invalid-name
56 _LOGGER = get_logger(__name__)
57
58
59 class Updater(AsyncContextManager["Updater"]):
60 """This class fetches updates for the bot either via long polling or by starting a webhook
61 server. Received updates are enqueued into the :attr:`update_queue` and may be fetched from
62 there to handle them appropriately.
63
64 Instances of this class can be used as asyncio context managers, where
65
66 .. code:: python
67
68 async with updater:
69 # code
70
71 is roughly equivalent to
72
73 .. code:: python
74
75 try:
76 await updater.initialize()
77 # code
78 finally:
79 await updater.shutdown()
80
81 .. seealso:: :meth:`__aenter__` and :meth:`__aexit__`.
82
83 .. seealso:: :wiki:`Architecture Overview <Architecture>`,
84 :wiki:`Builder Pattern <Builder-Pattern>`
85
86 .. versionchanged:: 20.0
87
88 * Removed argument and attribute ``user_sig_handler``
89 * The only arguments and attributes are now :attr:`bot` and :attr:`update_queue` as now
90 the sole purpose of this class is to fetch updates. The entry point to a PTB application
91 is now :class:`telegram.ext.Application`.
92
93 Args:
94 bot (:class:`telegram.Bot`): The bot used with this Updater.
95 update_queue (:class:`asyncio.Queue`): Queue for the updates.
96
97 Attributes:
98 bot (:class:`telegram.Bot`): The bot used with this Updater.
99 update_queue (:class:`asyncio.Queue`): Queue for the updates.
100
101 """
102
103 __slots__ = (
104 "__lock",
105 "__polling_task",
106 "_httpd",
107 "_initialized",
108 "_last_update_id",
109 "_running",
110 "bot",
111 "update_queue",
112 )
113
114 def __init__(
115 self,
116 bot: "Bot",
117 update_queue: "asyncio.Queue[object]",
118 ):
119 self.bot: Bot = bot
120 self.update_queue: asyncio.Queue[object] = update_queue
121
122 self._last_update_id = 0
123 self._running = False
124 self._initialized = False
125 self._httpd: Optional[WebhookServer] = None
126 self.__lock = asyncio.Lock()
127 self.__polling_task: Optional[asyncio.Task] = None
128 self.__polling_cleanup_cb: Optional[Callable[[], Coroutine[Any, Any, None]]] = None
129
130 async def __aenter__(self: _UpdaterType) -> _UpdaterType: # noqa: PYI019
131 """
132 |async_context_manager| :meth:`initializes <initialize>` the Updater.
133
134 Returns:
135 The initialized Updater instance.
136
137 Raises:
138 :exc:`Exception`: If an exception is raised during initialization, :meth:`shutdown`
139 is called in this case.
140 """
141 try:
142 await self.initialize()
143 return self
144 except Exception as exc:
145 await self.shutdown()
146 raise exc
147
148 async def __aexit__(
149 self,
150 exc_type: Optional[Type[BaseException]],
151 exc_val: Optional[BaseException],
152 exc_tb: Optional[TracebackType],
153 ) -> None:
154 """|async_context_manager| :meth:`shuts down <shutdown>` the Updater."""
155 # Make sure not to return `True` so that exceptions are not suppressed
156 # https://docs.python.org/3/reference/datamodel.html?#object.__aexit__
157 await self.shutdown()
158
159 def __repr__(self) -> str:
160 """Give a string representation of the updater in the form ``Updater[bot=...]``.
161
162 As this class doesn't implement :meth:`object.__str__`, the default implementation
163 will be used, which is equivalent to :meth:`__repr__`.
164
165 Returns:
166 :obj:`str`
167 """
168 return build_repr_with_selected_attrs(self, bot=self.bot)
169
170 @property
171 def running(self) -> bool:
172 return self._running
173
174 async def initialize(self) -> None:
175 """Initializes the Updater & the associated :attr:`bot` by calling
176 :meth:`telegram.Bot.initialize`.
177
178 .. seealso::
179 :meth:`shutdown`
180 """
181 if self._initialized:
182 _LOGGER.debug("This Updater is already initialized.")
183 return
184
185 await self.bot.initialize()
186 self._initialized = True
187
188 async def shutdown(self) -> None:
189 """
190 Shutdown the Updater & the associated :attr:`bot` by calling :meth:`telegram.Bot.shutdown`.
191
192 .. seealso::
193 :meth:`initialize`
194
195 Raises:
196 :exc:`RuntimeError`: If the updater is still running.
197 """
198 if self.running:
199 raise RuntimeError("This Updater is still running!")
200
201 if not self._initialized:
202 _LOGGER.debug("This Updater is already shut down. Returning.")
203 return
204
205 await self.bot.shutdown()
206 self._initialized = False
207 _LOGGER.debug("Shut down of Updater complete")
208
209 async def start_polling(
210 self,
211 poll_interval: float = 0.0,
212 timeout: int = 10,
213 bootstrap_retries: int = -1,
214 read_timeout: ODVInput[float] = DEFAULT_NONE,
215 write_timeout: ODVInput[float] = DEFAULT_NONE,
216 connect_timeout: ODVInput[float] = DEFAULT_NONE,
217 pool_timeout: ODVInput[float] = DEFAULT_NONE,
218 allowed_updates: Optional[List[str]] = None,
219 drop_pending_updates: Optional[bool] = None,
220 error_callback: Optional[Callable[[TelegramError], None]] = None,
221 ) -> "asyncio.Queue[object]":
222 """Starts polling updates from Telegram.
223
224 .. versionchanged:: 20.0
225 Removed the ``clean`` argument in favor of :paramref:`drop_pending_updates`.
226
227 Args:
228 poll_interval (:obj:`float`, optional): Time to wait between polling updates from
229 Telegram in seconds. Default is ``0.0``.
230 timeout (:obj:`int`, optional): Passed to
231 :paramref:`telegram.Bot.get_updates.timeout`. Defaults to ``10`` seconds.
232 bootstrap_retries (:obj:`int`, optional): Whether the bootstrapping phase of the
233 :class:`telegram.ext.Updater` will retry on failures on the Telegram server.
234
235 * < 0 - retry indefinitely (default)
236 * 0 - no retries
237 * > 0 - retry up to X times
238 read_timeout (:obj:`float`, optional): Value to pass to
239 :paramref:`telegram.Bot.get_updates.read_timeout`. Defaults to
240 :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.
241
242 .. versionchanged:: 20.7
243 Defaults to :attr:`~telegram.request.BaseRequest.DEFAULT_NONE` instead of
244 ``2``.
245 .. deprecated:: 20.7
246 Deprecated in favor of setting the timeout via
247 :meth:`telegram.ext.ApplicationBuilder.get_updates_read_timeout` or
248 :paramref:`telegram.Bot.get_updates_request`.
249 write_timeout (:obj:`float` | :obj:`None`, optional): Value to pass to
250 :paramref:`telegram.Bot.get_updates.write_timeout`. Defaults to
251 :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.
252
253 .. deprecated:: 20.7
254 Deprecated in favor of setting the timeout via
255 :meth:`telegram.ext.ApplicationBuilder.get_updates_write_timeout` or
256 :paramref:`telegram.Bot.get_updates_request`.
257 connect_timeout (:obj:`float` | :obj:`None`, optional): Value to pass to
258 :paramref:`telegram.Bot.get_updates.connect_timeout`. Defaults to
259 :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.
260
261 .. deprecated:: 20.7
262 Deprecated in favor of setting the timeout via
263 :meth:`telegram.ext.ApplicationBuilder.get_updates_connect_timeout` or
264 :paramref:`telegram.Bot.get_updates_request`.
265 pool_timeout (:obj:`float` | :obj:`None`, optional): Value to pass to
266 :paramref:`telegram.Bot.get_updates.pool_timeout`. Defaults to
267 :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.
268
269 .. deprecated:: 20.7
270 Deprecated in favor of setting the timeout via
271 :meth:`telegram.ext.ApplicationBuilder.get_updates_pool_timeout` or
272 :paramref:`telegram.Bot.get_updates_request`.
273 allowed_updates (List[:obj:`str`], optional): Passed to
274 :meth:`telegram.Bot.get_updates`.
275 drop_pending_updates (:obj:`bool`, optional): Whether to clean any pending updates on
276 Telegram servers before actually starting to poll. Default is :obj:`False`.
277
278 .. versionadded :: 13.4
279 error_callback (Callable[[:exc:`telegram.error.TelegramError`], :obj:`None`], \
280 optional): Callback to handle :exc:`telegram.error.TelegramError` s that occur
281 while calling :meth:`telegram.Bot.get_updates` during polling. Defaults to
282 :obj:`None`, in which case errors will be logged. Callback signature::
283
284 def callback(error: telegram.error.TelegramError)
285
286 Note:
287 The :paramref:`error_callback` must *not* be a :term:`coroutine function`! If
288 asynchronous behavior of the callback is wanted, please schedule a task from
289 within the callback.
290
291 Returns:
292 :class:`asyncio.Queue`: The update queue that can be filled from the main thread.
293
294 Raises:
295 :exc:`RuntimeError`: If the updater is already running or was not initialized.
296
297 """
298 # We refrain from issuing deprecation warnings for the timeout parameters here, as we
299 # already issue them in `Application`. This means that there are no warnings when using
300 # `Updater` without `Application`, but this is a rather special use case.
301
302 if error_callback and asyncio.iscoroutinefunction(error_callback):
303 raise TypeError(
304 "The `error_callback` must not be a coroutine function! Use an ordinary function "
305 "instead. "
306 )
307
308 async with self.__lock:
309 if self.running:
310 raise RuntimeError("This Updater is already running!")
311 if not self._initialized:
312 raise RuntimeError("This Updater was not initialized via `Updater.initialize`!")
313
314 self._running = True
315
316 try:
317 # Create & start tasks
318 polling_ready = asyncio.Event()
319
320 await self._start_polling(
321 poll_interval=poll_interval,
322 timeout=timeout,
323 read_timeout=read_timeout,
324 write_timeout=write_timeout,
325 connect_timeout=connect_timeout,
326 pool_timeout=pool_timeout,
327 bootstrap_retries=bootstrap_retries,
328 drop_pending_updates=drop_pending_updates,
329 allowed_updates=allowed_updates,
330 ready=polling_ready,
331 error_callback=error_callback,
332 )
333
334 _LOGGER.debug("Waiting for polling to start")
335 await polling_ready.wait()
336 _LOGGER.debug("Polling updates from Telegram started")
337
338 return self.update_queue
339 except Exception as exc:
340 self._running = False
341 raise exc
342
343 async def _start_polling(
344 self,
345 poll_interval: float,
346 timeout: int,
347 read_timeout: ODVInput[float],
348 write_timeout: ODVInput[float],
349 connect_timeout: ODVInput[float],
350 pool_timeout: ODVInput[float],
351 bootstrap_retries: int,
352 drop_pending_updates: Optional[bool],
353 allowed_updates: Optional[List[str]],
354 ready: asyncio.Event,
355 error_callback: Optional[Callable[[TelegramError], None]],
356 ) -> None:
357 _LOGGER.debug("Updater started (polling)")
358
359 # the bootstrapping phase does two things:
360 # 1) make sure there is no webhook set
361 # 2) apply drop_pending_updates
362 await self._bootstrap(
363 bootstrap_retries,
364 drop_pending_updates=drop_pending_updates,
365 webhook_url="",
366 allowed_updates=None,
367 )
368
369 _LOGGER.debug("Bootstrap done")
370
371 async def polling_action_cb() -> bool:
372 try:
373 updates = await self.bot.get_updates(
374 offset=self._last_update_id,
375 timeout=timeout,
376 read_timeout=read_timeout,
377 connect_timeout=connect_timeout,
378 write_timeout=write_timeout,
379 pool_timeout=pool_timeout,
380 allowed_updates=allowed_updates,
381 )
382 except TelegramError as exc:
383 # TelegramErrors should be processed by the network retry loop
384 raise exc
385 except Exception as exc:
386 # Other exceptions should not. Let's log them for now.
387 _LOGGER.critical(
388 "Something went wrong processing the data received from Telegram. "
389 "Received data was *not* processed!",
390 exc_info=exc,
391 )
392 return True
393
394 if updates:
395 if not self.running:
396 _LOGGER.critical(
397 "Updater stopped unexpectedly. Pulled updates will be ignored and pulled "
398 "again on restart."
399 )
400 else:
401 for update in updates:
402 await self.update_queue.put(update)
403 self._last_update_id = updates[-1].update_id + 1 # Add one to 'confirm' it
404
405 return True # Keep fetching updates & don't quit. Polls with poll_interval.
406
407 def default_error_callback(exc: TelegramError) -> None:
408 _LOGGER.exception("Exception happened while polling for updates.", exc_info=exc)
409
410 # Start task that runs in background, pulls
411 # updates from Telegram and inserts them in the update queue of the
412 # Application.
413 self.__polling_task = asyncio.create_task(
414 self._network_loop_retry(
415 action_cb=polling_action_cb,
416 on_err_cb=error_callback or default_error_callback,
417 description="getting Updates",
418 interval=poll_interval,
419 ),
420 name="Updater:start_polling:polling_task",
421 )
422
423 # Prepare a cleanup callback to await on _stop_polling
424 # Calling get_updates one more time with the latest `offset` parameter ensures that
425 # all updates that where put into the update queue are also marked as "read" to TG,
426 # so we do not receive them again on the next startup
427 # We define this here so that we can use the same parameters as in the polling task
428 async def _get_updates_cleanup() -> None:
429 _LOGGER.debug(
430 "Calling `get_updates` one more time to mark all fetched updates as read."
431 )
432 try:
433 await self.bot.get_updates(
434 offset=self._last_update_id,
435 # We don't want to do long polling here!
436 timeout=0,
437 read_timeout=read_timeout,
438 connect_timeout=connect_timeout,
439 write_timeout=write_timeout,
440 pool_timeout=pool_timeout,
441 allowed_updates=allowed_updates,
442 )
443 except TelegramError as exc:
444 _LOGGER.error(
445 "Error while calling `get_updates` one more time to mark all fetched updates "
446 "as read: %s. Suppressing error to ensure graceful shutdown. When polling for "
447 "updates is restarted, updates may be fetched again. Please adjust timeouts "
448 "via `ApplicationBuilder` or the parameter `get_updates_request` of `Bot`.",
449 exc_info=exc,
450 )
451
452 self.__polling_cleanup_cb = _get_updates_cleanup
453
454 if ready is not None:
455 ready.set()
456
457 async def start_webhook(
458 self,
459 listen: DVType[str] = DEFAULT_IP,
460 port: DVType[int] = DEFAULT_80,
461 url_path: str = "",
462 cert: Optional[Union[str, Path]] = None,
463 key: Optional[Union[str, Path]] = None,
464 bootstrap_retries: int = 0,
465 webhook_url: Optional[str] = None,
466 allowed_updates: Optional[List[str]] = None,
467 drop_pending_updates: Optional[bool] = None,
468 ip_address: Optional[str] = None,
469 max_connections: int = 40,
470 secret_token: Optional[str] = None,
471 unix: Optional[Union[str, Path]] = None,
472 ) -> "asyncio.Queue[object]":
473 """
474 Starts a small http server to listen for updates via webhook. If :paramref:`cert`
475 and :paramref:`key` are not provided, the webhook will be started directly on
476 ``http://listen:port/url_path``, so SSL can be handled by another
477 application. Else, the webhook will be started on
478 ``https://listen:port/url_path``. Also calls :meth:`telegram.Bot.set_webhook` as required.
479
480 Important:
481 If you want to use this method, you must install PTB with the optional requirement
482 ``webhooks``, i.e.
483
484 .. code-block:: bash
485
486 pip install "python-telegram-bot[webhooks]"
487
488 .. seealso:: :wiki:`Webhooks`
489
490 .. versionchanged:: 13.4
491 :meth:`start_webhook` now *always* calls :meth:`telegram.Bot.set_webhook`, so pass
492 ``webhook_url`` instead of calling ``updater.bot.set_webhook(webhook_url)`` manually.
493 .. versionchanged:: 20.0
494
495 * Removed the ``clean`` argument in favor of :paramref:`drop_pending_updates` and
496 removed the deprecated argument ``force_event_loop``.
497
498 Args:
499 listen (:obj:`str`, optional): IP-Address to listen on. Defaults to
500 `127.0.0.1 <https://en.wikipedia.org/wiki/Localhost>`_.
501 port (:obj:`int`, optional): Port the bot should be listening on. Must be one of
502 :attr:`telegram.constants.SUPPORTED_WEBHOOK_PORTS` unless the bot is running
503 behind a proxy. Defaults to ``80``.
504 url_path (:obj:`str`, optional): Path inside url (http(s)://listen:port/<url_path>).
505 Defaults to ``''``.
506 cert (:class:`pathlib.Path` | :obj:`str`, optional): Path to the SSL certificate file.
507 key (:class:`pathlib.Path` | :obj:`str`, optional): Path to the SSL key file.
508 drop_pending_updates (:obj:`bool`, optional): Whether to clean any pending updates on
509 Telegram servers before actually starting to poll. Default is :obj:`False`.
510
511 .. versionadded :: 13.4
512 bootstrap_retries (:obj:`int`, optional): Whether the bootstrapping phase of the
513 :class:`telegram.ext.Updater` will retry on failures on the Telegram server.
514
515 * < 0 - retry indefinitely
516 * 0 - no retries (default)
517 * > 0 - retry up to X times
518 webhook_url (:obj:`str`, optional): Explicitly specify the webhook url. Useful behind
519 NAT, reverse proxy, etc. Default is derived from :paramref:`listen`,
520 :paramref:`port`, :paramref:`url_path`, :paramref:`cert`, and :paramref:`key`.
521 ip_address (:obj:`str`, optional): Passed to :meth:`telegram.Bot.set_webhook`.
522 Defaults to :obj:`None`.
523
524 .. versionadded :: 13.4
525 allowed_updates (List[:obj:`str`], optional): Passed to
526 :meth:`telegram.Bot.set_webhook`. Defaults to :obj:`None`.
527 max_connections (:obj:`int`, optional): Passed to
528 :meth:`telegram.Bot.set_webhook`. Defaults to ``40``.
529
530 .. versionadded:: 13.6
531 secret_token (:obj:`str`, optional): Passed to :meth:`telegram.Bot.set_webhook`.
532 Defaults to :obj:`None`.
533
534 When added, the web server started by this call will expect the token to be set in
535 the ``X-Telegram-Bot-Api-Secret-Token`` header of an incoming request and will
536 raise a :class:`http.HTTPStatus.FORBIDDEN <http.HTTPStatus>` error if either the
537 header isn't set or it is set to a wrong token.
538
539 .. versionadded:: 20.0
540 unix (:class:`pathlib.Path` | :obj:`str`, optional): Path to the unix socket file. Path
541 does not need to exist, in which case the file will be created.
542
543 Caution:
544 This parameter is a replacement for the default TCP bind. Therefore, it is
545 mutually exclusive with :paramref:`listen` and :paramref:`port`. When using
546 this param, you must also run a reverse proxy to the unix socket and set the
547 appropriate :paramref:`webhook_url`.
548
549 .. versionadded:: 20.8
550 Returns:
551 :class:`queue.Queue`: The update queue that can be filled from the main thread.
552
553 Raises:
554 :exc:`RuntimeError`: If the updater is already running or was not initialized.
555 """
556 if not WEBHOOKS_AVAILABLE:
557 raise RuntimeError(
558 "To use `start_webhook`, PTB must be installed via `pip install "
559 '"python-telegram-bot[webhooks]"`.'
560 )
561 # unix has special requirements what must and mustn't be set when using it
562 if unix:
563 error_msg = (
564 "You can not pass unix and {0}, only use one. Unix if you want to "
565 "initialize a unix socket, or {0} for a standard TCP server."
566 )
567 if not isinstance(listen, DefaultValue):
568 raise RuntimeError(error_msg.format("listen"))
569 if not isinstance(port, DefaultValue):
570 raise RuntimeError(error_msg.format("port"))
571 if not webhook_url:
572 raise RuntimeError(
573 "Since you set unix, you also need to set the URL to the webhook "
574 "of the proxy you run in front of the unix socket."
575 )
576
577 async with self.__lock:
578 if self.running:
579 raise RuntimeError("This Updater is already running!")
580 if not self._initialized:
581 raise RuntimeError("This Updater was not initialized via `Updater.initialize`!")
582
583 self._running = True
584
585 try:
586 # Create & start tasks
587 webhook_ready = asyncio.Event()
588
589 await self._start_webhook(
590 listen=DefaultValue.get_value(listen),
591 port=DefaultValue.get_value(port),
592 url_path=url_path,
593 cert=cert,
594 key=key,
595 bootstrap_retries=bootstrap_retries,
596 drop_pending_updates=drop_pending_updates,
597 webhook_url=webhook_url,
598 allowed_updates=allowed_updates,
599 ready=webhook_ready,
600 ip_address=ip_address,
601 max_connections=max_connections,
602 secret_token=secret_token,
603 unix=unix,
604 )
605
606 _LOGGER.debug("Waiting for webhook server to start")
607 await webhook_ready.wait()
608 _LOGGER.debug("Webhook server started")
609 except Exception as exc:
610 self._running = False
611 raise exc
612
613 # Return the update queue so the main thread can insert updates
614 return self.update_queue
615
616 async def _start_webhook(
617 self,
618 listen: str,
619 port: int,
620 url_path: str,
621 bootstrap_retries: int,
622 allowed_updates: Optional[List[str]],
623 cert: Optional[Union[str, Path]] = None,
624 key: Optional[Union[str, Path]] = None,
625 drop_pending_updates: Optional[bool] = None,
626 webhook_url: Optional[str] = None,
627 ready: Optional[asyncio.Event] = None,
628 ip_address: Optional[str] = None,
629 max_connections: int = 40,
630 secret_token: Optional[str] = None,
631 unix: Optional[Union[str, Path]] = None,
632 ) -> None:
633 _LOGGER.debug("Updater thread started (webhook)")
634
635 if not url_path.startswith("/"):
636 url_path = f"/{url_path}"
637
638 # Create Tornado app instance
639 app = WebhookAppClass(url_path, self.bot, self.update_queue, secret_token)
640
641 # Form SSL Context
642 # An SSLError is raised if the private key does not match with the certificate
643 # Note that we only use the SSL certificate for the WebhookServer, if the key is also
644 # present. This is because the WebhookServer may not actually be in charge of performing
645 # the SSL handshake, e.g. in case a reverse proxy is used
646 if cert is not None and key is not None:
647 try:
648 ssl_ctx: Optional[ssl.SSLContext] = ssl.create_default_context(
649 ssl.Purpose.CLIENT_AUTH
650 )
651 ssl_ctx.load_cert_chain(cert, key) # type: ignore[union-attr]
652 except ssl.SSLError as exc:
653 raise TelegramError("Invalid SSL Certificate") from exc
654 else:
655 ssl_ctx = None
656 # Create and start server
657 self._httpd = WebhookServer(listen, port, app, ssl_ctx, unix)
658
659 if not webhook_url:
660 webhook_url = self._gen_webhook_url(
661 protocol="https" if ssl_ctx else "http",
662 listen=DefaultValue.get_value(listen),
663 port=port,
664 url_path=url_path,
665 )
666
667 # We pass along the cert to the webhook if present.
668 await self._bootstrap(
669 # Passing a Path or string only works if the bot is running against a local bot API
670 # server, so let's read the contents
671 cert=Path(cert).read_bytes() if cert else None,
672 max_retries=bootstrap_retries,
673 drop_pending_updates=drop_pending_updates,
674 webhook_url=webhook_url,
675 allowed_updates=allowed_updates,
676 ip_address=ip_address,
677 max_connections=max_connections,
678 secret_token=secret_token,
679 )
680
681 await self._httpd.serve_forever(ready=ready)
682
683 @staticmethod
684 def _gen_webhook_url(protocol: str, listen: str, port: int, url_path: str) -> str:
685 # TODO: double check if this should be https in any case - the docs of start_webhook
686 # say differently!
687 return f"{protocol}://{listen}:{port}{url_path}"
688
689 async def _network_loop_retry(
690 self,
691 action_cb: Callable[..., Coroutine],
692 on_err_cb: Callable[[TelegramError], None],
693 description: str,
694 interval: float,
695 ) -> None:
696 """Perform a loop calling `action_cb`, retrying after network errors.
697
698 Stop condition for loop: `self.running` evaluates :obj:`False` or return value of
699 `action_cb` evaluates :obj:`False`.
700
701 Args:
702 action_cb (:term:`coroutine function`): Network oriented callback function to call.
703 on_err_cb (:obj:`callable`): Callback to call when TelegramError is caught. Receives
704 the exception object as a parameter.
705 description (:obj:`str`): Description text to use for logs and exception raised.
706 interval (:obj:`float` | :obj:`int`): Interval to sleep between each call to
707 `action_cb`.
708
709 """
710 _LOGGER.debug("Start network loop retry %s", description)
711 cur_interval = interval
712 try:
713 while self.running:
714 try:
715 if not await action_cb():
716 break
717 except RetryAfter as exc:
718 _LOGGER.info("%s", exc)
719 cur_interval = 0.5 + exc.retry_after
720 except TimedOut as toe:
721 _LOGGER.debug("Timed out %s: %s", description, toe)
722 # If failure is due to timeout, we should retry asap.
723 cur_interval = 0
724 except InvalidToken as pex:
725 _LOGGER.error("Invalid token; aborting")
726 raise pex
727 except TelegramError as telegram_exc:
728 _LOGGER.error("Error while %s: %s", description, telegram_exc)
729 on_err_cb(telegram_exc)
730
731 # increase waiting times on subsequent errors up to 30secs
732 cur_interval = 1 if cur_interval == 0 else min(30, 1.5 * cur_interval)
733 else:
734 cur_interval = interval
735
736 if cur_interval:
737 await asyncio.sleep(cur_interval)
738
739 except asyncio.CancelledError:
740 _LOGGER.debug("Network loop retry %s was cancelled", description)
741
742 async def _bootstrap(
743 self,
744 max_retries: int,
745 webhook_url: Optional[str],
746 allowed_updates: Optional[List[str]],
747 drop_pending_updates: Optional[bool] = None,
748 cert: Optional[bytes] = None,
749 bootstrap_interval: float = 1,
750 ip_address: Optional[str] = None,
751 max_connections: int = 40,
752 secret_token: Optional[str] = None,
753 ) -> None:
754 """Prepares the setup for fetching updates: delete or set the webhook and drop pending
755 updates if appropriate. If there are unsuccessful attempts, this will retry as specified by
756 :paramref:`max_retries`.
757 """
758 retries = 0
759
760 async def bootstrap_del_webhook() -> bool:
761 _LOGGER.debug("Deleting webhook")
762 if drop_pending_updates:
763 _LOGGER.debug("Dropping pending updates from Telegram server")
764 await self.bot.delete_webhook(drop_pending_updates=drop_pending_updates)
765 return False
766
767 async def bootstrap_set_webhook() -> bool:
768 _LOGGER.debug("Setting webhook")
769 if drop_pending_updates:
770 _LOGGER.debug("Dropping pending updates from Telegram server")
771 await self.bot.set_webhook(
772 url=webhook_url,
773 certificate=cert,
774 allowed_updates=allowed_updates,
775 ip_address=ip_address,
776 drop_pending_updates=drop_pending_updates,
777 max_connections=max_connections,
778 secret_token=secret_token,
779 )
780 return False
781
782 def bootstrap_on_err_cb(exc: Exception) -> None:
783 # We need this since retries is an immutable object otherwise and the changes
784 # wouldn't propagate outside of thi function
785 nonlocal retries
786
787 if not isinstance(exc, InvalidToken) and (max_retries < 0 or retries < max_retries):
788 retries += 1
789 _LOGGER.warning(
790 "Failed bootstrap phase; try=%s max_retries=%s", retries, max_retries
791 )
792 else:
793 _LOGGER.error("Failed bootstrap phase after %s retries (%s)", retries, exc)
794 raise exc
795
796 # Dropping pending updates from TG can be efficiently done with the drop_pending_updates
797 # parameter of delete/start_webhook, even in the case of polling. Also, we want to make
798 # sure that no webhook is configured in case of polling, so we just always call
799 # delete_webhook for polling
800 if drop_pending_updates or not webhook_url:
801 await self._network_loop_retry(
802 bootstrap_del_webhook,
803 bootstrap_on_err_cb,
804 "bootstrap del webhook",
805 bootstrap_interval,
806 )
807
808 # Reset the retries counter for the next _network_loop_retry call
809 retries = 0
810
811 # Restore/set webhook settings, if needed. Again, we don't know ahead if a webhook is set,
812 # so we set it anyhow.
813 if webhook_url:
814 await self._network_loop_retry(
815 bootstrap_set_webhook,
816 bootstrap_on_err_cb,
817 "bootstrap set webhook",
818 bootstrap_interval,
819 )
820
821 async def stop(self) -> None:
822 """Stops the polling/webhook.
823
824 .. seealso::
825 :meth:`start_polling`, :meth:`start_webhook`
826
827 Raises:
828 :exc:`RuntimeError`: If the updater is not running.
829 """
830 async with self.__lock:
831 if not self.running:
832 raise RuntimeError("This Updater is not running!")
833
834 _LOGGER.debug("Stopping Updater")
835
836 self._running = False
837
838 await self._stop_httpd()
839 await self._stop_polling()
840
841 _LOGGER.debug("Updater.stop() is complete")
842
843 async def _stop_httpd(self) -> None:
844 """Stops the Webhook server by calling ``WebhookServer.shutdown()``"""
845 if self._httpd:
846 _LOGGER.debug("Waiting for current webhook connection to be closed.")
847 await self._httpd.shutdown()
848 self._httpd = None
849
850 async def _stop_polling(self) -> None:
851 """Stops the polling task by awaiting it."""
852 if self.__polling_task:
853 _LOGGER.debug("Waiting background polling task to finish up.")
854 self.__polling_task.cancel()
855
856 with contextlib.suppress(asyncio.CancelledError):
857 await self.__polling_task
858 # It only fails in rare edge-cases, e.g. when `stop()` is called directly
859 # after start_polling(), but lets better be safe than sorry ...
860
861 self.__polling_task = None
862
863 if self.__polling_cleanup_cb:
864 await self.__polling_cleanup_cb()
865 self.__polling_cleanup_cb = None
866 else:
867 _LOGGER.warning(
868 "No polling cleanup callback defined. The last fetched updates may be "
869 "fetched again on the next polling start."
870 )
871
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/telegram/ext/_updater.py b/telegram/ext/_updater.py
--- a/telegram/ext/_updater.py
+++ b/telegram/ext/_updater.py
@@ -102,6 +102,7 @@
__slots__ = (
"__lock",
+ "__polling_cleanup_cb",
"__polling_task",
"_httpd",
"_initialized",
| {"golden_diff": "diff --git a/telegram/ext/_updater.py b/telegram/ext/_updater.py\n--- a/telegram/ext/_updater.py\n+++ b/telegram/ext/_updater.py\n@@ -102,6 +102,7 @@\n \n __slots__ = (\n \"__lock\",\n+ \"__polling_cleanup_cb\",\n \"__polling_task\",\n \"_httpd\",\n \"_initialized\",\n", "issue": "AttributeError: 'Updater' object has no attribute '_Updater__polling_cleanup_cb' and no __dict__ for setting new attributes\n### Steps to Reproduce\n\n1. Created the bot and run the code below:\r\n```python\r\nimport asyncio\r\nimport telegram\r\n\r\n\r\nasync def main():\r\n bot = telegram.Bot(\"TOKEN\")\r\n async with bot:\r\n print(await bot.get_me())\r\n\r\n\r\nif __name__ == '__main__':\r\n asyncio.run(main())\r\n```\r\n2. Added a new file and run the code below:\r\n```python\r\nimport logging\r\nfrom telegram import Update\r\nfrom telegram.ext import ApplicationBuilder, ContextTypes, CommandHandler\r\n\r\nlogging.basicConfig(\r\n format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',\r\n level=logging.INFO\r\n)\r\n\r\nasync def start(update: Update, context: ContextTypes.DEFAULT_TYPE):\r\n await context.bot.send_message(chat_id= update.effective_chat.id, text=\"Bot Started.\")\r\n\r\nif __name__==\"__main__\":\r\n \r\n application= ApplicationBuilder().token(\"6900324258:AAEMo7fMCqGE816sPd30-Jmsiw1P5jgpKHA\").build()\r\n\r\n start_handler= CommandHandler(\"start\", start)\r\n application.add_handler(start_handler)\r\n\r\n application.run_polling()\r\n```\r\n\n\n### Expected behaviour\n\nThere shouldn't be any errors or problems.\n\n### Actual behaviour\n\nRaised attribute_error. Log sent on Log output.\n\n### Operating System\n\nwindows 10\n\n### Version of Python, python-telegram-bot & dependencies\n\n```shell\npython-telegram-bot 20.8\r\nBot API 7.0\r\nPython 3.13.0a2 (tags/v3.13.0a2:9c4347e, Nov 22 2023, 18:30:15) [MSC v.1937 64 bit (AMD64)]\n```\n\n\n### Relevant log output\n\n```python\nFile \"f:\\Codes\\Python\\Telegram_Bot\\main.py\", line 15, in <module>\r\n application= ApplicationBuilder().token(token).build()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python313\\Lib\\site-packages\\telegram\\ext\\_applicationbuilder.py\", line 312, in build\r\n updater = Updater(bot=bot, update_queue=update_queue)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python313\\Lib\\site-packages\\telegram\\ext\\_updater.py\", line 128, in __init__\r\n self.__polling_cleanup_cb: Optional[Callable[[], Coroutine[Any, Any, None]]] = None\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nAttributeError: 'Updater' object has no attribute '_Updater__polling_cleanup_cb' and no __dict__ for setting new attributes\n```\n\n\n### Additional Context\n\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2024\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the class Updater, which tries to make creating Telegram bots intuitive.\"\"\"\nimport asyncio\nimport contextlib\nimport ssl\nfrom pathlib import Path\nfrom types import TracebackType\nfrom typing import (\n TYPE_CHECKING,\n Any,\n AsyncContextManager,\n Callable,\n Coroutine,\n List,\n Optional,\n Type,\n TypeVar,\n Union,\n)\n\nfrom telegram._utils.defaultvalue import DEFAULT_80, DEFAULT_IP, DEFAULT_NONE, DefaultValue\nfrom telegram._utils.logging import get_logger\nfrom telegram._utils.repr import build_repr_with_selected_attrs\nfrom telegram._utils.types import DVType, ODVInput\nfrom telegram.error import InvalidToken, RetryAfter, TelegramError, TimedOut\n\ntry:\n from telegram.ext._utils.webhookhandler import WebhookAppClass, WebhookServer\n\n WEBHOOKS_AVAILABLE = True\nexcept ImportError:\n WEBHOOKS_AVAILABLE = False\n\nif TYPE_CHECKING:\n from telegram import Bot\n\n\n_UpdaterType = TypeVar(\"_UpdaterType\", bound=\"Updater\") # pylint: disable=invalid-name\n_LOGGER = get_logger(__name__)\n\n\nclass Updater(AsyncContextManager[\"Updater\"]):\n \"\"\"This class fetches updates for the bot either via long polling or by starting a webhook\n server. Received updates are enqueued into the :attr:`update_queue` and may be fetched from\n there to handle them appropriately.\n\n Instances of this class can be used as asyncio context managers, where\n\n .. code:: python\n\n async with updater:\n # code\n\n is roughly equivalent to\n\n .. code:: python\n\n try:\n await updater.initialize()\n # code\n finally:\n await updater.shutdown()\n\n .. seealso:: :meth:`__aenter__` and :meth:`__aexit__`.\n\n .. seealso:: :wiki:`Architecture Overview <Architecture>`,\n :wiki:`Builder Pattern <Builder-Pattern>`\n\n .. versionchanged:: 20.0\n\n * Removed argument and attribute ``user_sig_handler``\n * The only arguments and attributes are now :attr:`bot` and :attr:`update_queue` as now\n the sole purpose of this class is to fetch updates. 
The entry point to a PTB application\n is now :class:`telegram.ext.Application`.\n\n Args:\n bot (:class:`telegram.Bot`): The bot used with this Updater.\n update_queue (:class:`asyncio.Queue`): Queue for the updates.\n\n Attributes:\n bot (:class:`telegram.Bot`): The bot used with this Updater.\n update_queue (:class:`asyncio.Queue`): Queue for the updates.\n\n \"\"\"\n\n __slots__ = (\n \"__lock\",\n \"__polling_task\",\n \"_httpd\",\n \"_initialized\",\n \"_last_update_id\",\n \"_running\",\n \"bot\",\n \"update_queue\",\n )\n\n def __init__(\n self,\n bot: \"Bot\",\n update_queue: \"asyncio.Queue[object]\",\n ):\n self.bot: Bot = bot\n self.update_queue: asyncio.Queue[object] = update_queue\n\n self._last_update_id = 0\n self._running = False\n self._initialized = False\n self._httpd: Optional[WebhookServer] = None\n self.__lock = asyncio.Lock()\n self.__polling_task: Optional[asyncio.Task] = None\n self.__polling_cleanup_cb: Optional[Callable[[], Coroutine[Any, Any, None]]] = None\n\n async def __aenter__(self: _UpdaterType) -> _UpdaterType: # noqa: PYI019\n \"\"\"\n |async_context_manager| :meth:`initializes <initialize>` the Updater.\n\n Returns:\n The initialized Updater instance.\n\n Raises:\n :exc:`Exception`: If an exception is raised during initialization, :meth:`shutdown`\n is called in this case.\n \"\"\"\n try:\n await self.initialize()\n return self\n except Exception as exc:\n await self.shutdown()\n raise exc\n\n async def __aexit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n exc_tb: Optional[TracebackType],\n ) -> None:\n \"\"\"|async_context_manager| :meth:`shuts down <shutdown>` the Updater.\"\"\"\n # Make sure not to return `True` so that exceptions are not suppressed\n # https://docs.python.org/3/reference/datamodel.html?#object.__aexit__\n await self.shutdown()\n\n def __repr__(self) -> str:\n \"\"\"Give a string representation of the updater in the form ``Updater[bot=...]``.\n\n As this class doesn't implement :meth:`object.__str__`, the default implementation\n will be used, which is equivalent to :meth:`__repr__`.\n\n Returns:\n :obj:`str`\n \"\"\"\n return build_repr_with_selected_attrs(self, bot=self.bot)\n\n @property\n def running(self) -> bool:\n return self._running\n\n async def initialize(self) -> None:\n \"\"\"Initializes the Updater & the associated :attr:`bot` by calling\n :meth:`telegram.Bot.initialize`.\n\n .. seealso::\n :meth:`shutdown`\n \"\"\"\n if self._initialized:\n _LOGGER.debug(\"This Updater is already initialized.\")\n return\n\n await self.bot.initialize()\n self._initialized = True\n\n async def shutdown(self) -> None:\n \"\"\"\n Shutdown the Updater & the associated :attr:`bot` by calling :meth:`telegram.Bot.shutdown`.\n\n .. seealso::\n :meth:`initialize`\n\n Raises:\n :exc:`RuntimeError`: If the updater is still running.\n \"\"\"\n if self.running:\n raise RuntimeError(\"This Updater is still running!\")\n\n if not self._initialized:\n _LOGGER.debug(\"This Updater is already shut down. 
Returning.\")\n return\n\n await self.bot.shutdown()\n self._initialized = False\n _LOGGER.debug(\"Shut down of Updater complete\")\n\n async def start_polling(\n self,\n poll_interval: float = 0.0,\n timeout: int = 10,\n bootstrap_retries: int = -1,\n read_timeout: ODVInput[float] = DEFAULT_NONE,\n write_timeout: ODVInput[float] = DEFAULT_NONE,\n connect_timeout: ODVInput[float] = DEFAULT_NONE,\n pool_timeout: ODVInput[float] = DEFAULT_NONE,\n allowed_updates: Optional[List[str]] = None,\n drop_pending_updates: Optional[bool] = None,\n error_callback: Optional[Callable[[TelegramError], None]] = None,\n ) -> \"asyncio.Queue[object]\":\n \"\"\"Starts polling updates from Telegram.\n\n .. versionchanged:: 20.0\n Removed the ``clean`` argument in favor of :paramref:`drop_pending_updates`.\n\n Args:\n poll_interval (:obj:`float`, optional): Time to wait between polling updates from\n Telegram in seconds. Default is ``0.0``.\n timeout (:obj:`int`, optional): Passed to\n :paramref:`telegram.Bot.get_updates.timeout`. Defaults to ``10`` seconds.\n bootstrap_retries (:obj:`int`, optional): Whether the bootstrapping phase of the\n :class:`telegram.ext.Updater` will retry on failures on the Telegram server.\n\n * < 0 - retry indefinitely (default)\n * 0 - no retries\n * > 0 - retry up to X times\n read_timeout (:obj:`float`, optional): Value to pass to\n :paramref:`telegram.Bot.get_updates.read_timeout`. Defaults to\n :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.\n\n .. versionchanged:: 20.7\n Defaults to :attr:`~telegram.request.BaseRequest.DEFAULT_NONE` instead of\n ``2``.\n .. deprecated:: 20.7\n Deprecated in favor of setting the timeout via\n :meth:`telegram.ext.ApplicationBuilder.get_updates_read_timeout` or\n :paramref:`telegram.Bot.get_updates_request`.\n write_timeout (:obj:`float` | :obj:`None`, optional): Value to pass to\n :paramref:`telegram.Bot.get_updates.write_timeout`. Defaults to\n :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.\n\n .. deprecated:: 20.7\n Deprecated in favor of setting the timeout via\n :meth:`telegram.ext.ApplicationBuilder.get_updates_write_timeout` or\n :paramref:`telegram.Bot.get_updates_request`.\n connect_timeout (:obj:`float` | :obj:`None`, optional): Value to pass to\n :paramref:`telegram.Bot.get_updates.connect_timeout`. Defaults to\n :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.\n\n .. deprecated:: 20.7\n Deprecated in favor of setting the timeout via\n :meth:`telegram.ext.ApplicationBuilder.get_updates_connect_timeout` or\n :paramref:`telegram.Bot.get_updates_request`.\n pool_timeout (:obj:`float` | :obj:`None`, optional): Value to pass to\n :paramref:`telegram.Bot.get_updates.pool_timeout`. Defaults to\n :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.\n\n .. deprecated:: 20.7\n Deprecated in favor of setting the timeout via\n :meth:`telegram.ext.ApplicationBuilder.get_updates_pool_timeout` or\n :paramref:`telegram.Bot.get_updates_request`.\n allowed_updates (List[:obj:`str`], optional): Passed to\n :meth:`telegram.Bot.get_updates`.\n drop_pending_updates (:obj:`bool`, optional): Whether to clean any pending updates on\n Telegram servers before actually starting to poll. Default is :obj:`False`.\n\n .. versionadded :: 13.4\n error_callback (Callable[[:exc:`telegram.error.TelegramError`], :obj:`None`], \\\n optional): Callback to handle :exc:`telegram.error.TelegramError` s that occur\n while calling :meth:`telegram.Bot.get_updates` during polling. Defaults to\n :obj:`None`, in which case errors will be logged. 
Callback signature::\n\n def callback(error: telegram.error.TelegramError)\n\n Note:\n The :paramref:`error_callback` must *not* be a :term:`coroutine function`! If\n asynchronous behavior of the callback is wanted, please schedule a task from\n within the callback.\n\n Returns:\n :class:`asyncio.Queue`: The update queue that can be filled from the main thread.\n\n Raises:\n :exc:`RuntimeError`: If the updater is already running or was not initialized.\n\n \"\"\"\n # We refrain from issuing deprecation warnings for the timeout parameters here, as we\n # already issue them in `Application`. This means that there are no warnings when using\n # `Updater` without `Application`, but this is a rather special use case.\n\n if error_callback and asyncio.iscoroutinefunction(error_callback):\n raise TypeError(\n \"The `error_callback` must not be a coroutine function! Use an ordinary function \"\n \"instead. \"\n )\n\n async with self.__lock:\n if self.running:\n raise RuntimeError(\"This Updater is already running!\")\n if not self._initialized:\n raise RuntimeError(\"This Updater was not initialized via `Updater.initialize`!\")\n\n self._running = True\n\n try:\n # Create & start tasks\n polling_ready = asyncio.Event()\n\n await self._start_polling(\n poll_interval=poll_interval,\n timeout=timeout,\n read_timeout=read_timeout,\n write_timeout=write_timeout,\n connect_timeout=connect_timeout,\n pool_timeout=pool_timeout,\n bootstrap_retries=bootstrap_retries,\n drop_pending_updates=drop_pending_updates,\n allowed_updates=allowed_updates,\n ready=polling_ready,\n error_callback=error_callback,\n )\n\n _LOGGER.debug(\"Waiting for polling to start\")\n await polling_ready.wait()\n _LOGGER.debug(\"Polling updates from Telegram started\")\n\n return self.update_queue\n except Exception as exc:\n self._running = False\n raise exc\n\n async def _start_polling(\n self,\n poll_interval: float,\n timeout: int,\n read_timeout: ODVInput[float],\n write_timeout: ODVInput[float],\n connect_timeout: ODVInput[float],\n pool_timeout: ODVInput[float],\n bootstrap_retries: int,\n drop_pending_updates: Optional[bool],\n allowed_updates: Optional[List[str]],\n ready: asyncio.Event,\n error_callback: Optional[Callable[[TelegramError], None]],\n ) -> None:\n _LOGGER.debug(\"Updater started (polling)\")\n\n # the bootstrapping phase does two things:\n # 1) make sure there is no webhook set\n # 2) apply drop_pending_updates\n await self._bootstrap(\n bootstrap_retries,\n drop_pending_updates=drop_pending_updates,\n webhook_url=\"\",\n allowed_updates=None,\n )\n\n _LOGGER.debug(\"Bootstrap done\")\n\n async def polling_action_cb() -> bool:\n try:\n updates = await self.bot.get_updates(\n offset=self._last_update_id,\n timeout=timeout,\n read_timeout=read_timeout,\n connect_timeout=connect_timeout,\n write_timeout=write_timeout,\n pool_timeout=pool_timeout,\n allowed_updates=allowed_updates,\n )\n except TelegramError as exc:\n # TelegramErrors should be processed by the network retry loop\n raise exc\n except Exception as exc:\n # Other exceptions should not. Let's log them for now.\n _LOGGER.critical(\n \"Something went wrong processing the data received from Telegram. \"\n \"Received data was *not* processed!\",\n exc_info=exc,\n )\n return True\n\n if updates:\n if not self.running:\n _LOGGER.critical(\n \"Updater stopped unexpectedly. 
Pulled updates will be ignored and pulled \"\n \"again on restart.\"\n )\n else:\n for update in updates:\n await self.update_queue.put(update)\n self._last_update_id = updates[-1].update_id + 1 # Add one to 'confirm' it\n\n return True # Keep fetching updates & don't quit. Polls with poll_interval.\n\n def default_error_callback(exc: TelegramError) -> None:\n _LOGGER.exception(\"Exception happened while polling for updates.\", exc_info=exc)\n\n # Start task that runs in background, pulls\n # updates from Telegram and inserts them in the update queue of the\n # Application.\n self.__polling_task = asyncio.create_task(\n self._network_loop_retry(\n action_cb=polling_action_cb,\n on_err_cb=error_callback or default_error_callback,\n description=\"getting Updates\",\n interval=poll_interval,\n ),\n name=\"Updater:start_polling:polling_task\",\n )\n\n # Prepare a cleanup callback to await on _stop_polling\n # Calling get_updates one more time with the latest `offset` parameter ensures that\n # all updates that where put into the update queue are also marked as \"read\" to TG,\n # so we do not receive them again on the next startup\n # We define this here so that we can use the same parameters as in the polling task\n async def _get_updates_cleanup() -> None:\n _LOGGER.debug(\n \"Calling `get_updates` one more time to mark all fetched updates as read.\"\n )\n try:\n await self.bot.get_updates(\n offset=self._last_update_id,\n # We don't want to do long polling here!\n timeout=0,\n read_timeout=read_timeout,\n connect_timeout=connect_timeout,\n write_timeout=write_timeout,\n pool_timeout=pool_timeout,\n allowed_updates=allowed_updates,\n )\n except TelegramError as exc:\n _LOGGER.error(\n \"Error while calling `get_updates` one more time to mark all fetched updates \"\n \"as read: %s. Suppressing error to ensure graceful shutdown. When polling for \"\n \"updates is restarted, updates may be fetched again. Please adjust timeouts \"\n \"via `ApplicationBuilder` or the parameter `get_updates_request` of `Bot`.\",\n exc_info=exc,\n )\n\n self.__polling_cleanup_cb = _get_updates_cleanup\n\n if ready is not None:\n ready.set()\n\n async def start_webhook(\n self,\n listen: DVType[str] = DEFAULT_IP,\n port: DVType[int] = DEFAULT_80,\n url_path: str = \"\",\n cert: Optional[Union[str, Path]] = None,\n key: Optional[Union[str, Path]] = None,\n bootstrap_retries: int = 0,\n webhook_url: Optional[str] = None,\n allowed_updates: Optional[List[str]] = None,\n drop_pending_updates: Optional[bool] = None,\n ip_address: Optional[str] = None,\n max_connections: int = 40,\n secret_token: Optional[str] = None,\n unix: Optional[Union[str, Path]] = None,\n ) -> \"asyncio.Queue[object]\":\n \"\"\"\n Starts a small http server to listen for updates via webhook. If :paramref:`cert`\n and :paramref:`key` are not provided, the webhook will be started directly on\n ``http://listen:port/url_path``, so SSL can be handled by another\n application. Else, the webhook will be started on\n ``https://listen:port/url_path``. Also calls :meth:`telegram.Bot.set_webhook` as required.\n\n Important:\n If you want to use this method, you must install PTB with the optional requirement\n ``webhooks``, i.e.\n\n .. code-block:: bash\n\n pip install \"python-telegram-bot[webhooks]\"\n\n .. seealso:: :wiki:`Webhooks`\n\n .. versionchanged:: 13.4\n :meth:`start_webhook` now *always* calls :meth:`telegram.Bot.set_webhook`, so pass\n ``webhook_url`` instead of calling ``updater.bot.set_webhook(webhook_url)`` manually.\n .. 
versionchanged:: 20.0\n\n * Removed the ``clean`` argument in favor of :paramref:`drop_pending_updates` and\n removed the deprecated argument ``force_event_loop``.\n\n Args:\n listen (:obj:`str`, optional): IP-Address to listen on. Defaults to\n `127.0.0.1 <https://en.wikipedia.org/wiki/Localhost>`_.\n port (:obj:`int`, optional): Port the bot should be listening on. Must be one of\n :attr:`telegram.constants.SUPPORTED_WEBHOOK_PORTS` unless the bot is running\n behind a proxy. Defaults to ``80``.\n url_path (:obj:`str`, optional): Path inside url (http(s)://listen:port/<url_path>).\n Defaults to ``''``.\n cert (:class:`pathlib.Path` | :obj:`str`, optional): Path to the SSL certificate file.\n key (:class:`pathlib.Path` | :obj:`str`, optional): Path to the SSL key file.\n drop_pending_updates (:obj:`bool`, optional): Whether to clean any pending updates on\n Telegram servers before actually starting to poll. Default is :obj:`False`.\n\n .. versionadded :: 13.4\n bootstrap_retries (:obj:`int`, optional): Whether the bootstrapping phase of the\n :class:`telegram.ext.Updater` will retry on failures on the Telegram server.\n\n * < 0 - retry indefinitely\n * 0 - no retries (default)\n * > 0 - retry up to X times\n webhook_url (:obj:`str`, optional): Explicitly specify the webhook url. Useful behind\n NAT, reverse proxy, etc. Default is derived from :paramref:`listen`,\n :paramref:`port`, :paramref:`url_path`, :paramref:`cert`, and :paramref:`key`.\n ip_address (:obj:`str`, optional): Passed to :meth:`telegram.Bot.set_webhook`.\n Defaults to :obj:`None`.\n\n .. versionadded :: 13.4\n allowed_updates (List[:obj:`str`], optional): Passed to\n :meth:`telegram.Bot.set_webhook`. Defaults to :obj:`None`.\n max_connections (:obj:`int`, optional): Passed to\n :meth:`telegram.Bot.set_webhook`. Defaults to ``40``.\n\n .. versionadded:: 13.6\n secret_token (:obj:`str`, optional): Passed to :meth:`telegram.Bot.set_webhook`.\n Defaults to :obj:`None`.\n\n When added, the web server started by this call will expect the token to be set in\n the ``X-Telegram-Bot-Api-Secret-Token`` header of an incoming request and will\n raise a :class:`http.HTTPStatus.FORBIDDEN <http.HTTPStatus>` error if either the\n header isn't set or it is set to a wrong token.\n\n .. versionadded:: 20.0\n unix (:class:`pathlib.Path` | :obj:`str`, optional): Path to the unix socket file. Path\n does not need to exist, in which case the file will be created.\n\n Caution:\n This parameter is a replacement for the default TCP bind. Therefore, it is\n mutually exclusive with :paramref:`listen` and :paramref:`port`. When using\n this param, you must also run a reverse proxy to the unix socket and set the\n appropriate :paramref:`webhook_url`.\n\n .. versionadded:: 20.8\n Returns:\n :class:`queue.Queue`: The update queue that can be filled from the main thread.\n\n Raises:\n :exc:`RuntimeError`: If the updater is already running or was not initialized.\n \"\"\"\n if not WEBHOOKS_AVAILABLE:\n raise RuntimeError(\n \"To use `start_webhook`, PTB must be installed via `pip install \"\n '\"python-telegram-bot[webhooks]\"`.'\n )\n # unix has special requirements what must and mustn't be set when using it\n if unix:\n error_msg = (\n \"You can not pass unix and {0}, only use one. 
Unix if you want to \"\n \"initialize a unix socket, or {0} for a standard TCP server.\"\n )\n if not isinstance(listen, DefaultValue):\n raise RuntimeError(error_msg.format(\"listen\"))\n if not isinstance(port, DefaultValue):\n raise RuntimeError(error_msg.format(\"port\"))\n if not webhook_url:\n raise RuntimeError(\n \"Since you set unix, you also need to set the URL to the webhook \"\n \"of the proxy you run in front of the unix socket.\"\n )\n\n async with self.__lock:\n if self.running:\n raise RuntimeError(\"This Updater is already running!\")\n if not self._initialized:\n raise RuntimeError(\"This Updater was not initialized via `Updater.initialize`!\")\n\n self._running = True\n\n try:\n # Create & start tasks\n webhook_ready = asyncio.Event()\n\n await self._start_webhook(\n listen=DefaultValue.get_value(listen),\n port=DefaultValue.get_value(port),\n url_path=url_path,\n cert=cert,\n key=key,\n bootstrap_retries=bootstrap_retries,\n drop_pending_updates=drop_pending_updates,\n webhook_url=webhook_url,\n allowed_updates=allowed_updates,\n ready=webhook_ready,\n ip_address=ip_address,\n max_connections=max_connections,\n secret_token=secret_token,\n unix=unix,\n )\n\n _LOGGER.debug(\"Waiting for webhook server to start\")\n await webhook_ready.wait()\n _LOGGER.debug(\"Webhook server started\")\n except Exception as exc:\n self._running = False\n raise exc\n\n # Return the update queue so the main thread can insert updates\n return self.update_queue\n\n async def _start_webhook(\n self,\n listen: str,\n port: int,\n url_path: str,\n bootstrap_retries: int,\n allowed_updates: Optional[List[str]],\n cert: Optional[Union[str, Path]] = None,\n key: Optional[Union[str, Path]] = None,\n drop_pending_updates: Optional[bool] = None,\n webhook_url: Optional[str] = None,\n ready: Optional[asyncio.Event] = None,\n ip_address: Optional[str] = None,\n max_connections: int = 40,\n secret_token: Optional[str] = None,\n unix: Optional[Union[str, Path]] = None,\n ) -> None:\n _LOGGER.debug(\"Updater thread started (webhook)\")\n\n if not url_path.startswith(\"/\"):\n url_path = f\"/{url_path}\"\n\n # Create Tornado app instance\n app = WebhookAppClass(url_path, self.bot, self.update_queue, secret_token)\n\n # Form SSL Context\n # An SSLError is raised if the private key does not match with the certificate\n # Note that we only use the SSL certificate for the WebhookServer, if the key is also\n # present. This is because the WebhookServer may not actually be in charge of performing\n # the SSL handshake, e.g. 
in case a reverse proxy is used\n if cert is not None and key is not None:\n try:\n ssl_ctx: Optional[ssl.SSLContext] = ssl.create_default_context(\n ssl.Purpose.CLIENT_AUTH\n )\n ssl_ctx.load_cert_chain(cert, key) # type: ignore[union-attr]\n except ssl.SSLError as exc:\n raise TelegramError(\"Invalid SSL Certificate\") from exc\n else:\n ssl_ctx = None\n # Create and start server\n self._httpd = WebhookServer(listen, port, app, ssl_ctx, unix)\n\n if not webhook_url:\n webhook_url = self._gen_webhook_url(\n protocol=\"https\" if ssl_ctx else \"http\",\n listen=DefaultValue.get_value(listen),\n port=port,\n url_path=url_path,\n )\n\n # We pass along the cert to the webhook if present.\n await self._bootstrap(\n # Passing a Path or string only works if the bot is running against a local bot API\n # server, so let's read the contents\n cert=Path(cert).read_bytes() if cert else None,\n max_retries=bootstrap_retries,\n drop_pending_updates=drop_pending_updates,\n webhook_url=webhook_url,\n allowed_updates=allowed_updates,\n ip_address=ip_address,\n max_connections=max_connections,\n secret_token=secret_token,\n )\n\n await self._httpd.serve_forever(ready=ready)\n\n @staticmethod\n def _gen_webhook_url(protocol: str, listen: str, port: int, url_path: str) -> str:\n # TODO: double check if this should be https in any case - the docs of start_webhook\n # say differently!\n return f\"{protocol}://{listen}:{port}{url_path}\"\n\n async def _network_loop_retry(\n self,\n action_cb: Callable[..., Coroutine],\n on_err_cb: Callable[[TelegramError], None],\n description: str,\n interval: float,\n ) -> None:\n \"\"\"Perform a loop calling `action_cb`, retrying after network errors.\n\n Stop condition for loop: `self.running` evaluates :obj:`False` or return value of\n `action_cb` evaluates :obj:`False`.\n\n Args:\n action_cb (:term:`coroutine function`): Network oriented callback function to call.\n on_err_cb (:obj:`callable`): Callback to call when TelegramError is caught. 
Receives\n the exception object as a parameter.\n description (:obj:`str`): Description text to use for logs and exception raised.\n interval (:obj:`float` | :obj:`int`): Interval to sleep between each call to\n `action_cb`.\n\n \"\"\"\n _LOGGER.debug(\"Start network loop retry %s\", description)\n cur_interval = interval\n try:\n while self.running:\n try:\n if not await action_cb():\n break\n except RetryAfter as exc:\n _LOGGER.info(\"%s\", exc)\n cur_interval = 0.5 + exc.retry_after\n except TimedOut as toe:\n _LOGGER.debug(\"Timed out %s: %s\", description, toe)\n # If failure is due to timeout, we should retry asap.\n cur_interval = 0\n except InvalidToken as pex:\n _LOGGER.error(\"Invalid token; aborting\")\n raise pex\n except TelegramError as telegram_exc:\n _LOGGER.error(\"Error while %s: %s\", description, telegram_exc)\n on_err_cb(telegram_exc)\n\n # increase waiting times on subsequent errors up to 30secs\n cur_interval = 1 if cur_interval == 0 else min(30, 1.5 * cur_interval)\n else:\n cur_interval = interval\n\n if cur_interval:\n await asyncio.sleep(cur_interval)\n\n except asyncio.CancelledError:\n _LOGGER.debug(\"Network loop retry %s was cancelled\", description)\n\n async def _bootstrap(\n self,\n max_retries: int,\n webhook_url: Optional[str],\n allowed_updates: Optional[List[str]],\n drop_pending_updates: Optional[bool] = None,\n cert: Optional[bytes] = None,\n bootstrap_interval: float = 1,\n ip_address: Optional[str] = None,\n max_connections: int = 40,\n secret_token: Optional[str] = None,\n ) -> None:\n \"\"\"Prepares the setup for fetching updates: delete or set the webhook and drop pending\n updates if appropriate. If there are unsuccessful attempts, this will retry as specified by\n :paramref:`max_retries`.\n \"\"\"\n retries = 0\n\n async def bootstrap_del_webhook() -> bool:\n _LOGGER.debug(\"Deleting webhook\")\n if drop_pending_updates:\n _LOGGER.debug(\"Dropping pending updates from Telegram server\")\n await self.bot.delete_webhook(drop_pending_updates=drop_pending_updates)\n return False\n\n async def bootstrap_set_webhook() -> bool:\n _LOGGER.debug(\"Setting webhook\")\n if drop_pending_updates:\n _LOGGER.debug(\"Dropping pending updates from Telegram server\")\n await self.bot.set_webhook(\n url=webhook_url,\n certificate=cert,\n allowed_updates=allowed_updates,\n ip_address=ip_address,\n drop_pending_updates=drop_pending_updates,\n max_connections=max_connections,\n secret_token=secret_token,\n )\n return False\n\n def bootstrap_on_err_cb(exc: Exception) -> None:\n # We need this since retries is an immutable object otherwise and the changes\n # wouldn't propagate outside of thi function\n nonlocal retries\n\n if not isinstance(exc, InvalidToken) and (max_retries < 0 or retries < max_retries):\n retries += 1\n _LOGGER.warning(\n \"Failed bootstrap phase; try=%s max_retries=%s\", retries, max_retries\n )\n else:\n _LOGGER.error(\"Failed bootstrap phase after %s retries (%s)\", retries, exc)\n raise exc\n\n # Dropping pending updates from TG can be efficiently done with the drop_pending_updates\n # parameter of delete/start_webhook, even in the case of polling. 
Also, we want to make\n # sure that no webhook is configured in case of polling, so we just always call\n # delete_webhook for polling\n if drop_pending_updates or not webhook_url:\n await self._network_loop_retry(\n bootstrap_del_webhook,\n bootstrap_on_err_cb,\n \"bootstrap del webhook\",\n bootstrap_interval,\n )\n\n # Reset the retries counter for the next _network_loop_retry call\n retries = 0\n\n # Restore/set webhook settings, if needed. Again, we don't know ahead if a webhook is set,\n # so we set it anyhow.\n if webhook_url:\n await self._network_loop_retry(\n bootstrap_set_webhook,\n bootstrap_on_err_cb,\n \"bootstrap set webhook\",\n bootstrap_interval,\n )\n\n async def stop(self) -> None:\n \"\"\"Stops the polling/webhook.\n\n .. seealso::\n :meth:`start_polling`, :meth:`start_webhook`\n\n Raises:\n :exc:`RuntimeError`: If the updater is not running.\n \"\"\"\n async with self.__lock:\n if not self.running:\n raise RuntimeError(\"This Updater is not running!\")\n\n _LOGGER.debug(\"Stopping Updater\")\n\n self._running = False\n\n await self._stop_httpd()\n await self._stop_polling()\n\n _LOGGER.debug(\"Updater.stop() is complete\")\n\n async def _stop_httpd(self) -> None:\n \"\"\"Stops the Webhook server by calling ``WebhookServer.shutdown()``\"\"\"\n if self._httpd:\n _LOGGER.debug(\"Waiting for current webhook connection to be closed.\")\n await self._httpd.shutdown()\n self._httpd = None\n\n async def _stop_polling(self) -> None:\n \"\"\"Stops the polling task by awaiting it.\"\"\"\n if self.__polling_task:\n _LOGGER.debug(\"Waiting background polling task to finish up.\")\n self.__polling_task.cancel()\n\n with contextlib.suppress(asyncio.CancelledError):\n await self.__polling_task\n # It only fails in rare edge-cases, e.g. when `stop()` is called directly\n # after start_polling(), but lets better be safe than sorry ...\n\n self.__polling_task = None\n\n if self.__polling_cleanup_cb:\n await self.__polling_cleanup_cb()\n self.__polling_cleanup_cb = None\n else:\n _LOGGER.warning(\n \"No polling cleanup callback defined. The last fetched updates may be \"\n \"fetched again on the next polling start.\"\n )\n", "path": "telegram/ext/_updater.py"}], "after_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2024\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. 
If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the class Updater, which tries to make creating Telegram bots intuitive.\"\"\"\nimport asyncio\nimport contextlib\nimport ssl\nfrom pathlib import Path\nfrom types import TracebackType\nfrom typing import (\n TYPE_CHECKING,\n Any,\n AsyncContextManager,\n Callable,\n Coroutine,\n List,\n Optional,\n Type,\n TypeVar,\n Union,\n)\n\nfrom telegram._utils.defaultvalue import DEFAULT_80, DEFAULT_IP, DEFAULT_NONE, DefaultValue\nfrom telegram._utils.logging import get_logger\nfrom telegram._utils.repr import build_repr_with_selected_attrs\nfrom telegram._utils.types import DVType, ODVInput\nfrom telegram.error import InvalidToken, RetryAfter, TelegramError, TimedOut\n\ntry:\n from telegram.ext._utils.webhookhandler import WebhookAppClass, WebhookServer\n\n WEBHOOKS_AVAILABLE = True\nexcept ImportError:\n WEBHOOKS_AVAILABLE = False\n\nif TYPE_CHECKING:\n from telegram import Bot\n\n\n_UpdaterType = TypeVar(\"_UpdaterType\", bound=\"Updater\") # pylint: disable=invalid-name\n_LOGGER = get_logger(__name__)\n\n\nclass Updater(AsyncContextManager[\"Updater\"]):\n \"\"\"This class fetches updates for the bot either via long polling or by starting a webhook\n server. Received updates are enqueued into the :attr:`update_queue` and may be fetched from\n there to handle them appropriately.\n\n Instances of this class can be used as asyncio context managers, where\n\n .. code:: python\n\n async with updater:\n # code\n\n is roughly equivalent to\n\n .. code:: python\n\n try:\n await updater.initialize()\n # code\n finally:\n await updater.shutdown()\n\n .. seealso:: :meth:`__aenter__` and :meth:`__aexit__`.\n\n .. seealso:: :wiki:`Architecture Overview <Architecture>`,\n :wiki:`Builder Pattern <Builder-Pattern>`\n\n .. versionchanged:: 20.0\n\n * Removed argument and attribute ``user_sig_handler``\n * The only arguments and attributes are now :attr:`bot` and :attr:`update_queue` as now\n the sole purpose of this class is to fetch updates. 
The entry point to a PTB application\n is now :class:`telegram.ext.Application`.\n\n Args:\n bot (:class:`telegram.Bot`): The bot used with this Updater.\n update_queue (:class:`asyncio.Queue`): Queue for the updates.\n\n Attributes:\n bot (:class:`telegram.Bot`): The bot used with this Updater.\n update_queue (:class:`asyncio.Queue`): Queue for the updates.\n\n \"\"\"\n\n __slots__ = (\n \"__lock\",\n \"__polling_cleanup_cb\",\n \"__polling_task\",\n \"_httpd\",\n \"_initialized\",\n \"_last_update_id\",\n \"_running\",\n \"bot\",\n \"update_queue\",\n )\n\n def __init__(\n self,\n bot: \"Bot\",\n update_queue: \"asyncio.Queue[object]\",\n ):\n self.bot: Bot = bot\n self.update_queue: asyncio.Queue[object] = update_queue\n\n self._last_update_id = 0\n self._running = False\n self._initialized = False\n self._httpd: Optional[WebhookServer] = None\n self.__lock = asyncio.Lock()\n self.__polling_task: Optional[asyncio.Task] = None\n self.__polling_cleanup_cb: Optional[Callable[[], Coroutine[Any, Any, None]]] = None\n\n async def __aenter__(self: _UpdaterType) -> _UpdaterType: # noqa: PYI019\n \"\"\"\n |async_context_manager| :meth:`initializes <initialize>` the Updater.\n\n Returns:\n The initialized Updater instance.\n\n Raises:\n :exc:`Exception`: If an exception is raised during initialization, :meth:`shutdown`\n is called in this case.\n \"\"\"\n try:\n await self.initialize()\n return self\n except Exception as exc:\n await self.shutdown()\n raise exc\n\n async def __aexit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n exc_tb: Optional[TracebackType],\n ) -> None:\n \"\"\"|async_context_manager| :meth:`shuts down <shutdown>` the Updater.\"\"\"\n # Make sure not to return `True` so that exceptions are not suppressed\n # https://docs.python.org/3/reference/datamodel.html?#object.__aexit__\n await self.shutdown()\n\n def __repr__(self) -> str:\n \"\"\"Give a string representation of the updater in the form ``Updater[bot=...]``.\n\n As this class doesn't implement :meth:`object.__str__`, the default implementation\n will be used, which is equivalent to :meth:`__repr__`.\n\n Returns:\n :obj:`str`\n \"\"\"\n return build_repr_with_selected_attrs(self, bot=self.bot)\n\n @property\n def running(self) -> bool:\n return self._running\n\n async def initialize(self) -> None:\n \"\"\"Initializes the Updater & the associated :attr:`bot` by calling\n :meth:`telegram.Bot.initialize`.\n\n .. seealso::\n :meth:`shutdown`\n \"\"\"\n if self._initialized:\n _LOGGER.debug(\"This Updater is already initialized.\")\n return\n\n await self.bot.initialize()\n self._initialized = True\n\n async def shutdown(self) -> None:\n \"\"\"\n Shutdown the Updater & the associated :attr:`bot` by calling :meth:`telegram.Bot.shutdown`.\n\n .. seealso::\n :meth:`initialize`\n\n Raises:\n :exc:`RuntimeError`: If the updater is still running.\n \"\"\"\n if self.running:\n raise RuntimeError(\"This Updater is still running!\")\n\n if not self._initialized:\n _LOGGER.debug(\"This Updater is already shut down. 
Returning.\")\n return\n\n await self.bot.shutdown()\n self._initialized = False\n _LOGGER.debug(\"Shut down of Updater complete\")\n\n async def start_polling(\n self,\n poll_interval: float = 0.0,\n timeout: int = 10,\n bootstrap_retries: int = -1,\n read_timeout: ODVInput[float] = DEFAULT_NONE,\n write_timeout: ODVInput[float] = DEFAULT_NONE,\n connect_timeout: ODVInput[float] = DEFAULT_NONE,\n pool_timeout: ODVInput[float] = DEFAULT_NONE,\n allowed_updates: Optional[List[str]] = None,\n drop_pending_updates: Optional[bool] = None,\n error_callback: Optional[Callable[[TelegramError], None]] = None,\n ) -> \"asyncio.Queue[object]\":\n \"\"\"Starts polling updates from Telegram.\n\n .. versionchanged:: 20.0\n Removed the ``clean`` argument in favor of :paramref:`drop_pending_updates`.\n\n Args:\n poll_interval (:obj:`float`, optional): Time to wait between polling updates from\n Telegram in seconds. Default is ``0.0``.\n timeout (:obj:`int`, optional): Passed to\n :paramref:`telegram.Bot.get_updates.timeout`. Defaults to ``10`` seconds.\n bootstrap_retries (:obj:`int`, optional): Whether the bootstrapping phase of the\n :class:`telegram.ext.Updater` will retry on failures on the Telegram server.\n\n * < 0 - retry indefinitely (default)\n * 0 - no retries\n * > 0 - retry up to X times\n read_timeout (:obj:`float`, optional): Value to pass to\n :paramref:`telegram.Bot.get_updates.read_timeout`. Defaults to\n :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.\n\n .. versionchanged:: 20.7\n Defaults to :attr:`~telegram.request.BaseRequest.DEFAULT_NONE` instead of\n ``2``.\n .. deprecated:: 20.7\n Deprecated in favor of setting the timeout via\n :meth:`telegram.ext.ApplicationBuilder.get_updates_read_timeout` or\n :paramref:`telegram.Bot.get_updates_request`.\n write_timeout (:obj:`float` | :obj:`None`, optional): Value to pass to\n :paramref:`telegram.Bot.get_updates.write_timeout`. Defaults to\n :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.\n\n .. deprecated:: 20.7\n Deprecated in favor of setting the timeout via\n :meth:`telegram.ext.ApplicationBuilder.get_updates_write_timeout` or\n :paramref:`telegram.Bot.get_updates_request`.\n connect_timeout (:obj:`float` | :obj:`None`, optional): Value to pass to\n :paramref:`telegram.Bot.get_updates.connect_timeout`. Defaults to\n :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.\n\n .. deprecated:: 20.7\n Deprecated in favor of setting the timeout via\n :meth:`telegram.ext.ApplicationBuilder.get_updates_connect_timeout` or\n :paramref:`telegram.Bot.get_updates_request`.\n pool_timeout (:obj:`float` | :obj:`None`, optional): Value to pass to\n :paramref:`telegram.Bot.get_updates.pool_timeout`. Defaults to\n :attr:`~telegram.request.BaseRequest.DEFAULT_NONE`.\n\n .. deprecated:: 20.7\n Deprecated in favor of setting the timeout via\n :meth:`telegram.ext.ApplicationBuilder.get_updates_pool_timeout` or\n :paramref:`telegram.Bot.get_updates_request`.\n allowed_updates (List[:obj:`str`], optional): Passed to\n :meth:`telegram.Bot.get_updates`.\n drop_pending_updates (:obj:`bool`, optional): Whether to clean any pending updates on\n Telegram servers before actually starting to poll. Default is :obj:`False`.\n\n .. versionadded :: 13.4\n error_callback (Callable[[:exc:`telegram.error.TelegramError`], :obj:`None`], \\\n optional): Callback to handle :exc:`telegram.error.TelegramError` s that occur\n while calling :meth:`telegram.Bot.get_updates` during polling. Defaults to\n :obj:`None`, in which case errors will be logged. 
Callback signature::\n\n def callback(error: telegram.error.TelegramError)\n\n Note:\n The :paramref:`error_callback` must *not* be a :term:`coroutine function`! If\n asynchronous behavior of the callback is wanted, please schedule a task from\n within the callback.\n\n Returns:\n :class:`asyncio.Queue`: The update queue that can be filled from the main thread.\n\n Raises:\n :exc:`RuntimeError`: If the updater is already running or was not initialized.\n\n \"\"\"\n # We refrain from issuing deprecation warnings for the timeout parameters here, as we\n # already issue them in `Application`. This means that there are no warnings when using\n # `Updater` without `Application`, but this is a rather special use case.\n\n if error_callback and asyncio.iscoroutinefunction(error_callback):\n raise TypeError(\n \"The `error_callback` must not be a coroutine function! Use an ordinary function \"\n \"instead. \"\n )\n\n async with self.__lock:\n if self.running:\n raise RuntimeError(\"This Updater is already running!\")\n if not self._initialized:\n raise RuntimeError(\"This Updater was not initialized via `Updater.initialize`!\")\n\n self._running = True\n\n try:\n # Create & start tasks\n polling_ready = asyncio.Event()\n\n await self._start_polling(\n poll_interval=poll_interval,\n timeout=timeout,\n read_timeout=read_timeout,\n write_timeout=write_timeout,\n connect_timeout=connect_timeout,\n pool_timeout=pool_timeout,\n bootstrap_retries=bootstrap_retries,\n drop_pending_updates=drop_pending_updates,\n allowed_updates=allowed_updates,\n ready=polling_ready,\n error_callback=error_callback,\n )\n\n _LOGGER.debug(\"Waiting for polling to start\")\n await polling_ready.wait()\n _LOGGER.debug(\"Polling updates from Telegram started\")\n\n return self.update_queue\n except Exception as exc:\n self._running = False\n raise exc\n\n async def _start_polling(\n self,\n poll_interval: float,\n timeout: int,\n read_timeout: ODVInput[float],\n write_timeout: ODVInput[float],\n connect_timeout: ODVInput[float],\n pool_timeout: ODVInput[float],\n bootstrap_retries: int,\n drop_pending_updates: Optional[bool],\n allowed_updates: Optional[List[str]],\n ready: asyncio.Event,\n error_callback: Optional[Callable[[TelegramError], None]],\n ) -> None:\n _LOGGER.debug(\"Updater started (polling)\")\n\n # the bootstrapping phase does two things:\n # 1) make sure there is no webhook set\n # 2) apply drop_pending_updates\n await self._bootstrap(\n bootstrap_retries,\n drop_pending_updates=drop_pending_updates,\n webhook_url=\"\",\n allowed_updates=None,\n )\n\n _LOGGER.debug(\"Bootstrap done\")\n\n async def polling_action_cb() -> bool:\n try:\n updates = await self.bot.get_updates(\n offset=self._last_update_id,\n timeout=timeout,\n read_timeout=read_timeout,\n connect_timeout=connect_timeout,\n write_timeout=write_timeout,\n pool_timeout=pool_timeout,\n allowed_updates=allowed_updates,\n )\n except TelegramError as exc:\n # TelegramErrors should be processed by the network retry loop\n raise exc\n except Exception as exc:\n # Other exceptions should not. Let's log them for now.\n _LOGGER.critical(\n \"Something went wrong processing the data received from Telegram. \"\n \"Received data was *not* processed!\",\n exc_info=exc,\n )\n return True\n\n if updates:\n if not self.running:\n _LOGGER.critical(\n \"Updater stopped unexpectedly. 
Pulled updates will be ignored and pulled \"\n \"again on restart.\"\n )\n else:\n for update in updates:\n await self.update_queue.put(update)\n self._last_update_id = updates[-1].update_id + 1 # Add one to 'confirm' it\n\n return True # Keep fetching updates & don't quit. Polls with poll_interval.\n\n def default_error_callback(exc: TelegramError) -> None:\n _LOGGER.exception(\"Exception happened while polling for updates.\", exc_info=exc)\n\n # Start task that runs in background, pulls\n # updates from Telegram and inserts them in the update queue of the\n # Application.\n self.__polling_task = asyncio.create_task(\n self._network_loop_retry(\n action_cb=polling_action_cb,\n on_err_cb=error_callback or default_error_callback,\n description=\"getting Updates\",\n interval=poll_interval,\n ),\n name=\"Updater:start_polling:polling_task\",\n )\n\n # Prepare a cleanup callback to await on _stop_polling\n # Calling get_updates one more time with the latest `offset` parameter ensures that\n # all updates that where put into the update queue are also marked as \"read\" to TG,\n # so we do not receive them again on the next startup\n # We define this here so that we can use the same parameters as in the polling task\n async def _get_updates_cleanup() -> None:\n _LOGGER.debug(\n \"Calling `get_updates` one more time to mark all fetched updates as read.\"\n )\n try:\n await self.bot.get_updates(\n offset=self._last_update_id,\n # We don't want to do long polling here!\n timeout=0,\n read_timeout=read_timeout,\n connect_timeout=connect_timeout,\n write_timeout=write_timeout,\n pool_timeout=pool_timeout,\n allowed_updates=allowed_updates,\n )\n except TelegramError as exc:\n _LOGGER.error(\n \"Error while calling `get_updates` one more time to mark all fetched updates \"\n \"as read: %s. Suppressing error to ensure graceful shutdown. When polling for \"\n \"updates is restarted, updates may be fetched again. Please adjust timeouts \"\n \"via `ApplicationBuilder` or the parameter `get_updates_request` of `Bot`.\",\n exc_info=exc,\n )\n\n self.__polling_cleanup_cb = _get_updates_cleanup\n\n if ready is not None:\n ready.set()\n\n async def start_webhook(\n self,\n listen: DVType[str] = DEFAULT_IP,\n port: DVType[int] = DEFAULT_80,\n url_path: str = \"\",\n cert: Optional[Union[str, Path]] = None,\n key: Optional[Union[str, Path]] = None,\n bootstrap_retries: int = 0,\n webhook_url: Optional[str] = None,\n allowed_updates: Optional[List[str]] = None,\n drop_pending_updates: Optional[bool] = None,\n ip_address: Optional[str] = None,\n max_connections: int = 40,\n secret_token: Optional[str] = None,\n unix: Optional[Union[str, Path]] = None,\n ) -> \"asyncio.Queue[object]\":\n \"\"\"\n Starts a small http server to listen for updates via webhook. If :paramref:`cert`\n and :paramref:`key` are not provided, the webhook will be started directly on\n ``http://listen:port/url_path``, so SSL can be handled by another\n application. Else, the webhook will be started on\n ``https://listen:port/url_path``. Also calls :meth:`telegram.Bot.set_webhook` as required.\n\n Important:\n If you want to use this method, you must install PTB with the optional requirement\n ``webhooks``, i.e.\n\n .. code-block:: bash\n\n pip install \"python-telegram-bot[webhooks]\"\n\n .. seealso:: :wiki:`Webhooks`\n\n .. versionchanged:: 13.4\n :meth:`start_webhook` now *always* calls :meth:`telegram.Bot.set_webhook`, so pass\n ``webhook_url`` instead of calling ``updater.bot.set_webhook(webhook_url)`` manually.\n .. 
versionchanged:: 20.0\n\n * Removed the ``clean`` argument in favor of :paramref:`drop_pending_updates` and\n removed the deprecated argument ``force_event_loop``.\n\n Args:\n listen (:obj:`str`, optional): IP-Address to listen on. Defaults to\n `127.0.0.1 <https://en.wikipedia.org/wiki/Localhost>`_.\n port (:obj:`int`, optional): Port the bot should be listening on. Must be one of\n :attr:`telegram.constants.SUPPORTED_WEBHOOK_PORTS` unless the bot is running\n behind a proxy. Defaults to ``80``.\n url_path (:obj:`str`, optional): Path inside url (http(s)://listen:port/<url_path>).\n Defaults to ``''``.\n cert (:class:`pathlib.Path` | :obj:`str`, optional): Path to the SSL certificate file.\n key (:class:`pathlib.Path` | :obj:`str`, optional): Path to the SSL key file.\n drop_pending_updates (:obj:`bool`, optional): Whether to clean any pending updates on\n Telegram servers before actually starting to poll. Default is :obj:`False`.\n\n .. versionadded :: 13.4\n bootstrap_retries (:obj:`int`, optional): Whether the bootstrapping phase of the\n :class:`telegram.ext.Updater` will retry on failures on the Telegram server.\n\n * < 0 - retry indefinitely\n * 0 - no retries (default)\n * > 0 - retry up to X times\n webhook_url (:obj:`str`, optional): Explicitly specify the webhook url. Useful behind\n NAT, reverse proxy, etc. Default is derived from :paramref:`listen`,\n :paramref:`port`, :paramref:`url_path`, :paramref:`cert`, and :paramref:`key`.\n ip_address (:obj:`str`, optional): Passed to :meth:`telegram.Bot.set_webhook`.\n Defaults to :obj:`None`.\n\n .. versionadded :: 13.4\n allowed_updates (List[:obj:`str`], optional): Passed to\n :meth:`telegram.Bot.set_webhook`. Defaults to :obj:`None`.\n max_connections (:obj:`int`, optional): Passed to\n :meth:`telegram.Bot.set_webhook`. Defaults to ``40``.\n\n .. versionadded:: 13.6\n secret_token (:obj:`str`, optional): Passed to :meth:`telegram.Bot.set_webhook`.\n Defaults to :obj:`None`.\n\n When added, the web server started by this call will expect the token to be set in\n the ``X-Telegram-Bot-Api-Secret-Token`` header of an incoming request and will\n raise a :class:`http.HTTPStatus.FORBIDDEN <http.HTTPStatus>` error if either the\n header isn't set or it is set to a wrong token.\n\n .. versionadded:: 20.0\n unix (:class:`pathlib.Path` | :obj:`str`, optional): Path to the unix socket file. Path\n does not need to exist, in which case the file will be created.\n\n Caution:\n This parameter is a replacement for the default TCP bind. Therefore, it is\n mutually exclusive with :paramref:`listen` and :paramref:`port`. When using\n this param, you must also run a reverse proxy to the unix socket and set the\n appropriate :paramref:`webhook_url`.\n\n .. versionadded:: 20.8\n Returns:\n :class:`queue.Queue`: The update queue that can be filled from the main thread.\n\n Raises:\n :exc:`RuntimeError`: If the updater is already running or was not initialized.\n \"\"\"\n if not WEBHOOKS_AVAILABLE:\n raise RuntimeError(\n \"To use `start_webhook`, PTB must be installed via `pip install \"\n '\"python-telegram-bot[webhooks]\"`.'\n )\n # unix has special requirements what must and mustn't be set when using it\n if unix:\n error_msg = (\n \"You can not pass unix and {0}, only use one. 
Unix if you want to \"\n \"initialize a unix socket, or {0} for a standard TCP server.\"\n )\n if not isinstance(listen, DefaultValue):\n raise RuntimeError(error_msg.format(\"listen\"))\n if not isinstance(port, DefaultValue):\n raise RuntimeError(error_msg.format(\"port\"))\n if not webhook_url:\n raise RuntimeError(\n \"Since you set unix, you also need to set the URL to the webhook \"\n \"of the proxy you run in front of the unix socket.\"\n )\n\n async with self.__lock:\n if self.running:\n raise RuntimeError(\"This Updater is already running!\")\n if not self._initialized:\n raise RuntimeError(\"This Updater was not initialized via `Updater.initialize`!\")\n\n self._running = True\n\n try:\n # Create & start tasks\n webhook_ready = asyncio.Event()\n\n await self._start_webhook(\n listen=DefaultValue.get_value(listen),\n port=DefaultValue.get_value(port),\n url_path=url_path,\n cert=cert,\n key=key,\n bootstrap_retries=bootstrap_retries,\n drop_pending_updates=drop_pending_updates,\n webhook_url=webhook_url,\n allowed_updates=allowed_updates,\n ready=webhook_ready,\n ip_address=ip_address,\n max_connections=max_connections,\n secret_token=secret_token,\n unix=unix,\n )\n\n _LOGGER.debug(\"Waiting for webhook server to start\")\n await webhook_ready.wait()\n _LOGGER.debug(\"Webhook server started\")\n except Exception as exc:\n self._running = False\n raise exc\n\n # Return the update queue so the main thread can insert updates\n return self.update_queue\n\n async def _start_webhook(\n self,\n listen: str,\n port: int,\n url_path: str,\n bootstrap_retries: int,\n allowed_updates: Optional[List[str]],\n cert: Optional[Union[str, Path]] = None,\n key: Optional[Union[str, Path]] = None,\n drop_pending_updates: Optional[bool] = None,\n webhook_url: Optional[str] = None,\n ready: Optional[asyncio.Event] = None,\n ip_address: Optional[str] = None,\n max_connections: int = 40,\n secret_token: Optional[str] = None,\n unix: Optional[Union[str, Path]] = None,\n ) -> None:\n _LOGGER.debug(\"Updater thread started (webhook)\")\n\n if not url_path.startswith(\"/\"):\n url_path = f\"/{url_path}\"\n\n # Create Tornado app instance\n app = WebhookAppClass(url_path, self.bot, self.update_queue, secret_token)\n\n # Form SSL Context\n # An SSLError is raised if the private key does not match with the certificate\n # Note that we only use the SSL certificate for the WebhookServer, if the key is also\n # present. This is because the WebhookServer may not actually be in charge of performing\n # the SSL handshake, e.g. 
in case a reverse proxy is used\n if cert is not None and key is not None:\n try:\n ssl_ctx: Optional[ssl.SSLContext] = ssl.create_default_context(\n ssl.Purpose.CLIENT_AUTH\n )\n ssl_ctx.load_cert_chain(cert, key) # type: ignore[union-attr]\n except ssl.SSLError as exc:\n raise TelegramError(\"Invalid SSL Certificate\") from exc\n else:\n ssl_ctx = None\n # Create and start server\n self._httpd = WebhookServer(listen, port, app, ssl_ctx, unix)\n\n if not webhook_url:\n webhook_url = self._gen_webhook_url(\n protocol=\"https\" if ssl_ctx else \"http\",\n listen=DefaultValue.get_value(listen),\n port=port,\n url_path=url_path,\n )\n\n # We pass along the cert to the webhook if present.\n await self._bootstrap(\n # Passing a Path or string only works if the bot is running against a local bot API\n # server, so let's read the contents\n cert=Path(cert).read_bytes() if cert else None,\n max_retries=bootstrap_retries,\n drop_pending_updates=drop_pending_updates,\n webhook_url=webhook_url,\n allowed_updates=allowed_updates,\n ip_address=ip_address,\n max_connections=max_connections,\n secret_token=secret_token,\n )\n\n await self._httpd.serve_forever(ready=ready)\n\n @staticmethod\n def _gen_webhook_url(protocol: str, listen: str, port: int, url_path: str) -> str:\n # TODO: double check if this should be https in any case - the docs of start_webhook\n # say differently!\n return f\"{protocol}://{listen}:{port}{url_path}\"\n\n async def _network_loop_retry(\n self,\n action_cb: Callable[..., Coroutine],\n on_err_cb: Callable[[TelegramError], None],\n description: str,\n interval: float,\n ) -> None:\n \"\"\"Perform a loop calling `action_cb`, retrying after network errors.\n\n Stop condition for loop: `self.running` evaluates :obj:`False` or return value of\n `action_cb` evaluates :obj:`False`.\n\n Args:\n action_cb (:term:`coroutine function`): Network oriented callback function to call.\n on_err_cb (:obj:`callable`): Callback to call when TelegramError is caught. 
Receives\n the exception object as a parameter.\n description (:obj:`str`): Description text to use for logs and exception raised.\n interval (:obj:`float` | :obj:`int`): Interval to sleep between each call to\n `action_cb`.\n\n \"\"\"\n _LOGGER.debug(\"Start network loop retry %s\", description)\n cur_interval = interval\n try:\n while self.running:\n try:\n if not await action_cb():\n break\n except RetryAfter as exc:\n _LOGGER.info(\"%s\", exc)\n cur_interval = 0.5 + exc.retry_after\n except TimedOut as toe:\n _LOGGER.debug(\"Timed out %s: %s\", description, toe)\n # If failure is due to timeout, we should retry asap.\n cur_interval = 0\n except InvalidToken as pex:\n _LOGGER.error(\"Invalid token; aborting\")\n raise pex\n except TelegramError as telegram_exc:\n _LOGGER.error(\"Error while %s: %s\", description, telegram_exc)\n on_err_cb(telegram_exc)\n\n # increase waiting times on subsequent errors up to 30secs\n cur_interval = 1 if cur_interval == 0 else min(30, 1.5 * cur_interval)\n else:\n cur_interval = interval\n\n if cur_interval:\n await asyncio.sleep(cur_interval)\n\n except asyncio.CancelledError:\n _LOGGER.debug(\"Network loop retry %s was cancelled\", description)\n\n async def _bootstrap(\n self,\n max_retries: int,\n webhook_url: Optional[str],\n allowed_updates: Optional[List[str]],\n drop_pending_updates: Optional[bool] = None,\n cert: Optional[bytes] = None,\n bootstrap_interval: float = 1,\n ip_address: Optional[str] = None,\n max_connections: int = 40,\n secret_token: Optional[str] = None,\n ) -> None:\n \"\"\"Prepares the setup for fetching updates: delete or set the webhook and drop pending\n updates if appropriate. If there are unsuccessful attempts, this will retry as specified by\n :paramref:`max_retries`.\n \"\"\"\n retries = 0\n\n async def bootstrap_del_webhook() -> bool:\n _LOGGER.debug(\"Deleting webhook\")\n if drop_pending_updates:\n _LOGGER.debug(\"Dropping pending updates from Telegram server\")\n await self.bot.delete_webhook(drop_pending_updates=drop_pending_updates)\n return False\n\n async def bootstrap_set_webhook() -> bool:\n _LOGGER.debug(\"Setting webhook\")\n if drop_pending_updates:\n _LOGGER.debug(\"Dropping pending updates from Telegram server\")\n await self.bot.set_webhook(\n url=webhook_url,\n certificate=cert,\n allowed_updates=allowed_updates,\n ip_address=ip_address,\n drop_pending_updates=drop_pending_updates,\n max_connections=max_connections,\n secret_token=secret_token,\n )\n return False\n\n def bootstrap_on_err_cb(exc: Exception) -> None:\n # We need this since retries is an immutable object otherwise and the changes\n # wouldn't propagate outside of thi function\n nonlocal retries\n\n if not isinstance(exc, InvalidToken) and (max_retries < 0 or retries < max_retries):\n retries += 1\n _LOGGER.warning(\n \"Failed bootstrap phase; try=%s max_retries=%s\", retries, max_retries\n )\n else:\n _LOGGER.error(\"Failed bootstrap phase after %s retries (%s)\", retries, exc)\n raise exc\n\n # Dropping pending updates from TG can be efficiently done with the drop_pending_updates\n # parameter of delete/start_webhook, even in the case of polling. 
Also, we want to make\n # sure that no webhook is configured in case of polling, so we just always call\n # delete_webhook for polling\n if drop_pending_updates or not webhook_url:\n await self._network_loop_retry(\n bootstrap_del_webhook,\n bootstrap_on_err_cb,\n \"bootstrap del webhook\",\n bootstrap_interval,\n )\n\n # Reset the retries counter for the next _network_loop_retry call\n retries = 0\n\n # Restore/set webhook settings, if needed. Again, we don't know ahead if a webhook is set,\n # so we set it anyhow.\n if webhook_url:\n await self._network_loop_retry(\n bootstrap_set_webhook,\n bootstrap_on_err_cb,\n \"bootstrap set webhook\",\n bootstrap_interval,\n )\n\n async def stop(self) -> None:\n \"\"\"Stops the polling/webhook.\n\n .. seealso::\n :meth:`start_polling`, :meth:`start_webhook`\n\n Raises:\n :exc:`RuntimeError`: If the updater is not running.\n \"\"\"\n async with self.__lock:\n if not self.running:\n raise RuntimeError(\"This Updater is not running!\")\n\n _LOGGER.debug(\"Stopping Updater\")\n\n self._running = False\n\n await self._stop_httpd()\n await self._stop_polling()\n\n _LOGGER.debug(\"Updater.stop() is complete\")\n\n async def _stop_httpd(self) -> None:\n \"\"\"Stops the Webhook server by calling ``WebhookServer.shutdown()``\"\"\"\n if self._httpd:\n _LOGGER.debug(\"Waiting for current webhook connection to be closed.\")\n await self._httpd.shutdown()\n self._httpd = None\n\n async def _stop_polling(self) -> None:\n \"\"\"Stops the polling task by awaiting it.\"\"\"\n if self.__polling_task:\n _LOGGER.debug(\"Waiting background polling task to finish up.\")\n self.__polling_task.cancel()\n\n with contextlib.suppress(asyncio.CancelledError):\n await self.__polling_task\n # It only fails in rare edge-cases, e.g. when `stop()` is called directly\n # after start_polling(), but lets better be safe than sorry ...\n\n self.__polling_task = None\n\n if self.__polling_cleanup_cb:\n await self.__polling_cleanup_cb()\n self.__polling_cleanup_cb = None\n else:\n _LOGGER.warning(\n \"No polling cleanup callback defined. The last fetched updates may be \"\n \"fetched again on the next polling start.\"\n )\n", "path": "telegram/ext/_updater.py"}]} |
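The patched `after_files` listing above adds `"__polling_cleanup_cb"` to the `__slots__` tuple of `Updater`, matching the `self.__polling_cleanup_cb = None` assignment in `__init__`. A minimal sketch of the failure that a missing slot entry causes — illustrative only, using a stripped-down stand-in class rather than the real library code:

```python
# Sketch: a class that declares __slots__ (and therefore has no instance
# __dict__) rejects assignments to attributes that are not declared as slots.
class Updater:
    __slots__ = ("_running",)  # "__polling_cleanup_cb" not declared, as in the buggy file

    def __init__(self):
        self._running = False
        self.__polling_cleanup_cb = None  # raises AttributeError


try:
    Updater()
except AttributeError as exc:
    # e.g. 'Updater' object has no attribute '_Updater__polling_cleanup_cb'
    # (the double-underscore name is mangled with the class name)
    print(exc)
```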
gh_patches_debug_96 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-599 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Source grosswangen_ch
python3 test_sources.py -s grosswangen_ch -i -l
Testing source grosswangen_ch ...
found 58 entries for TEST
2023-01-12: Grüngutabfuhr [mdi:leaf]
2023-01-26: Kehricht-Aussentour [mdi:trash-can-outline]
2023-02-02: Kartonsammlung [mdi:recycle]
2023-02-16: Altpapiersammlung [newspaper-variant-multiple-outline]
2023-02-16: Grüngutabfuhr [mdi:leaf]
2023-02-23: Kehricht-Aussentour [mdi:trash-can-outline]
2023-03-02: Kartonsammlung [mdi:recycle]
2023-03-09: Häckselservice [mdi:leaf-off]
2023-03-09: Grüngutabfuhr [mdi:leaf]
2023-03-23: Kehricht-Aussentour [mdi:trash-can-outline]
2023-03-30: Grüngutabfuhr [mdi:leaf]
2023-04-01: Alteisensammlung und Sammlung elektronischer Geräte [desktop-classic]
2023-04-06: Kartonsammlung [mdi:recycle]
2023-04-13: Grüngutabfuhr [mdi:leaf]
2023-04-20: Häckselservice [mdi:leaf-off]
2023-04-27: Grüngutabfuhr [mdi:leaf]
2023-04-27: Kehricht-Aussentour [mdi:trash-can-outline]
2023-05-04: Kartonsammlung [mdi:recycle]
2023-05-11: Grüngutabfuhr [mdi:leaf]
2023-05-11: Altpapiersammlung [newspaper-variant-multiple-outline]
2023-05-25: Kehricht-Aussentour [mdi:trash-can-outline]
2023-05-25: Grüngutabfuhr [mdi:leaf]
2023-06-01: Kartonsammlung [mdi:recycle]
2023-06-15: Grüngutabfuhr [mdi:leaf]
2023-06-22: Kehricht-Aussentour [mdi:trash-can-outline]
2023-06-29: Grüngutabfuhr [mdi:leaf]
2023-07-06: Kartonsammlung [mdi:recycle]
2023-07-13: Grüngutabfuhr [mdi:leaf]
2023-07-27: Grüngutabfuhr [mdi:leaf]
2023-07-27: Kehricht-Aussentour [mdi:trash-can-outline]
2023-08-03: Kartonsammlung [mdi:recycle]
2023-08-10: Altpapiersammlung [newspaper-variant-multiple-outline]
2023-08-10: Grüngutabfuhr [mdi:leaf]
2023-08-24: Grüngutabfuhr [mdi:leaf]
2023-08-24: Kehricht-Aussentour [mdi:trash-can-outline]
2023-09-07: Grüngutabfuhr [mdi:leaf]
2023-09-07: Kartonsammlung [mdi:recycle]
2023-09-14: Häckselservice [mdi:leaf-off]
2023-09-21: Grüngutabfuhr [mdi:leaf]
2023-09-28: Kehricht-Aussentour [mdi:trash-can-outline]
2023-10-05: Kartonsammlung [mdi:recycle]
2023-10-12: Grüngutabfuhr [mdi:leaf]
2023-10-19: Häckselservice [mdi:leaf-off]
2023-10-26: Kehricht-Aussentour [mdi:trash-can-outline]
2023-10-26: Zusätzliche Gratis-Laubabfuhr [mdi:leaf]
2023-10-26: Grüngutabfuhr [mdi:leaf]
2023-11-02: Kartonsammlung [mdi:recycle]
2023-11-04: Alteisensammlung und Sammlung elektronischer Geräte [desktop-classic]
2023-11-09: Grüngutabfuhr [mdi:leaf]
2023-11-16: Häckselservice [mdi:leaf-off]
2023-11-16: Altpapiersammlung [newspaper-variant-multiple-outline]
2023-11-23: Kehricht-Aussentour [mdi:trash-can-outline]
2023-11-23: Grüngutabfuhr [mdi:leaf]
2023-11-30: Grüngutabfuhr [mdi:leaf]
2023-11-30: Zusätzliche Gratis-Laubabfuhr [mdi:leaf]
2023-12-07: Kartonsammlung [mdi:recycle]
2023-12-14: Grüngutabfuhr [mdi:leaf]
2023-12-21: Kehricht-Aussentour [mdi:trash-can-outline]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/grosswangen_ch.py`
Content:
```
1 import logging
2 from datetime import datetime
3
4 import requests
5 from bs4 import BeautifulSoup
6 from waste_collection_schedule import Collection
7
8 TITLE = "Grosswangen"
9 DESCRIPTION = " Source for 'Grosswangen, CH'"
10 URL = "https://www.grosswangen.ch"
11 TEST_CASES = {"TEST": {}}
12
13 ICON_MAP = {
14 "Grüngutabfuhr": "mdi:leaf",
15 "Kehricht-Aussentour": "mdi:trash-can-outline",
16 "Kartonsammlung": "mdi:recycle",
17 "Altpapiersammlung": "newspaper-variant-multiple-outline",
18 "Häckselservice": "mdi:leaf-off",
19 "Alteisensammlung und Sammlung elektronischer Geräte": "desktop-classic",
20 "Zusätzliche Gratis-Laubabfuhr": "mdi:leaf",
21 }
22
23 _LOGGER = logging.getLogger(__name__)
24
25
26 class Source:
27 def __init__(self, args=None):
28 self = None
29
30 def fetch(self):
31
32 r = requests.get(
33 "https://www.grosswangen.ch/institution/details/abfallsammlungen"
34 )
35
36 r.raise_for_status()
37
38 soup = BeautifulSoup(r.text, "html.parser")
39
40 entries = []
41
42 for tag in soup.find_all(class_="InstList-institution InstDetail-termin"):
43 for typ in tag.find_all("strong"):
44 # print(typ.string)
45 waste_type = typ.string
46 for date in tag.find_all("span", class_="mobile"):
47 # print(date.string[-8:])
48 waste_date = datetime.strptime(date.string[-8:], "%d.%m.%y").date()
49
50 entries.append(Collection(waste_date, waste_type, ICON_MAP.get(waste_type)))
51
52 return entries
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/grosswangen_ch.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/grosswangen_ch.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/grosswangen_ch.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/grosswangen_ch.py
@@ -24,7 +24,7 @@
class Source:
- def __init__(self, args=None):
+ def __init__(self):
self = None
def fetch(self):
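For context on the patch above: it only removes the unused `args=None` parameter, so `Source` is constructed with no arguments — consistent with the empty `TEST_CASES = {"TEST": {}}` entry in the file. A minimal usage sketch of the patched source, illustrative only: the import path is assumed from the file's location, the website must be reachable when `fetch()` runs, and the `Collection` attribute names (`date`, `type`) are assumptions rather than something stated in the issue.

```python
# Sketch: exercising the patched Source class outside Home Assistant.
from waste_collection_schedule.source.grosswangen_ch import Source  # assumed import path

source = Source()          # after the patch the constructor takes no arguments
entries = source.fetch()   # scrapes the Grosswangen page and returns Collection objects

for entry in entries:
    print(entry.date, entry.type)  # attribute names assumed on Collection
```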
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/grosswangen_ch.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/grosswangen_ch.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/grosswangen_ch.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/grosswangen_ch.py\n@@ -24,7 +24,7 @@\n \n \n class Source:\n- def __init__(self, args=None):\n+ def __init__(self):\n self = None\n \n def fetch(self):\n", "issue": "Add Source grosswangen_ch\n python3 test_sources.py -s grosswangen_ch -i -l\r\nTesting source grosswangen_ch ...\r\n found 58 entries for TEST\r\n 2023-01-12: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-01-26: Kehricht-Aussentour [mdi:trash-can-outline]\r\n 2023-02-02: Kartonsammlung [mdi:recycle]\r\n 2023-02-16: Altpapiersammlung [newspaper-variant-multiple-outline]\r\n 2023-02-16: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-02-23: Kehricht-Aussentour [mdi:trash-can-outline]\r\n 2023-03-02: Kartonsammlung [mdi:recycle]\r\n 2023-03-09: H\u00e4ckselservice [mdi:leaf-off]\r\n 2023-03-09: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-03-23: Kehricht-Aussentour [mdi:trash-can-outline]\r\n 2023-03-30: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-04-01: Alteisensammlung und Sammlung elektronischer Ger\u00e4te [desktop-classic]\r\n 2023-04-06: Kartonsammlung [mdi:recycle]\r\n 2023-04-13: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-04-20: H\u00e4ckselservice [mdi:leaf-off]\r\n 2023-04-27: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-04-27: Kehricht-Aussentour [mdi:trash-can-outline]\r\n 2023-05-04: Kartonsammlung [mdi:recycle]\r\n 2023-05-11: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-05-11: Altpapiersammlung [newspaper-variant-multiple-outline]\r\n 2023-05-25: Kehricht-Aussentour [mdi:trash-can-outline]\r\n 2023-05-25: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-06-01: Kartonsammlung [mdi:recycle]\r\n 2023-06-15: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-06-22: Kehricht-Aussentour [mdi:trash-can-outline]\r\n 2023-06-29: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-07-06: Kartonsammlung [mdi:recycle]\r\n 2023-07-13: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-07-27: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-07-27: Kehricht-Aussentour [mdi:trash-can-outline]\r\n 2023-08-03: Kartonsammlung [mdi:recycle]\r\n 2023-08-10: Altpapiersammlung [newspaper-variant-multiple-outline]\r\n 2023-08-10: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-08-24: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-08-24: Kehricht-Aussentour [mdi:trash-can-outline]\r\n 2023-09-07: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-09-07: Kartonsammlung [mdi:recycle]\r\n 2023-09-14: H\u00e4ckselservice [mdi:leaf-off]\r\n 2023-09-21: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-09-28: Kehricht-Aussentour [mdi:trash-can-outline]\r\n 2023-10-05: Kartonsammlung [mdi:recycle]\r\n 2023-10-12: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-10-19: H\u00e4ckselservice [mdi:leaf-off]\r\n 2023-10-26: Kehricht-Aussentour [mdi:trash-can-outline]\r\n 2023-10-26: Zus\u00e4tzliche Gratis-Laubabfuhr [mdi:leaf]\r\n 2023-10-26: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-11-02: Kartonsammlung [mdi:recycle]\r\n 2023-11-04: Alteisensammlung und Sammlung elektronischer Ger\u00e4te [desktop-classic]\r\n 2023-11-09: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-11-16: H\u00e4ckselservice [mdi:leaf-off]\r\n 2023-11-16: Altpapiersammlung [newspaper-variant-multiple-outline]\r\n 2023-11-23: Kehricht-Aussentour [mdi:trash-can-outline]\r\n 2023-11-23: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-11-30: 
Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-11-30: Zus\u00e4tzliche Gratis-Laubabfuhr [mdi:leaf]\r\n 2023-12-07: Kartonsammlung [mdi:recycle]\r\n 2023-12-14: Gr\u00fcngutabfuhr [mdi:leaf]\r\n 2023-12-21: Kehricht-Aussentour [mdi:trash-can-outline]\n", "before_files": [{"content": "import logging\nfrom datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Grosswangen\"\nDESCRIPTION = \" Source for 'Grosswangen, CH'\"\nURL = \"https://www.grosswangen.ch\"\nTEST_CASES = {\"TEST\": {}}\n\nICON_MAP = {\n \"Gr\u00fcngutabfuhr\": \"mdi:leaf\",\n \"Kehricht-Aussentour\": \"mdi:trash-can-outline\",\n \"Kartonsammlung\": \"mdi:recycle\",\n \"Altpapiersammlung\": \"newspaper-variant-multiple-outline\",\n \"H\u00e4ckselservice\": \"mdi:leaf-off\",\n \"Alteisensammlung und Sammlung elektronischer Ger\u00e4te\": \"desktop-classic\",\n \"Zus\u00e4tzliche Gratis-Laubabfuhr\": \"mdi:leaf\",\n}\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(self, args=None):\n self = None\n\n def fetch(self):\n\n r = requests.get(\n \"https://www.grosswangen.ch/institution/details/abfallsammlungen\"\n )\n\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n\n entries = []\n\n for tag in soup.find_all(class_=\"InstList-institution InstDetail-termin\"):\n for typ in tag.find_all(\"strong\"):\n # print(typ.string)\n waste_type = typ.string\n for date in tag.find_all(\"span\", class_=\"mobile\"):\n # print(date.string[-8:])\n waste_date = datetime.strptime(date.string[-8:], \"%d.%m.%y\").date()\n\n entries.append(Collection(waste_date, waste_type, ICON_MAP.get(waste_type)))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/grosswangen_ch.py"}], "after_files": [{"content": "import logging\nfrom datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Grosswangen\"\nDESCRIPTION = \" Source for 'Grosswangen, CH'\"\nURL = \"https://www.grosswangen.ch\"\nTEST_CASES = {\"TEST\": {}}\n\nICON_MAP = {\n \"Gr\u00fcngutabfuhr\": \"mdi:leaf\",\n \"Kehricht-Aussentour\": \"mdi:trash-can-outline\",\n \"Kartonsammlung\": \"mdi:recycle\",\n \"Altpapiersammlung\": \"newspaper-variant-multiple-outline\",\n \"H\u00e4ckselservice\": \"mdi:leaf-off\",\n \"Alteisensammlung und Sammlung elektronischer Ger\u00e4te\": \"desktop-classic\",\n \"Zus\u00e4tzliche Gratis-Laubabfuhr\": \"mdi:leaf\",\n}\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(self):\n self = None\n\n def fetch(self):\n\n r = requests.get(\n \"https://www.grosswangen.ch/institution/details/abfallsammlungen\"\n )\n\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n\n entries = []\n\n for tag in soup.find_all(class_=\"InstList-institution InstDetail-termin\"):\n for typ in tag.find_all(\"strong\"):\n # print(typ.string)\n waste_type = typ.string\n for date in tag.find_all(\"span\", class_=\"mobile\"):\n # print(date.string[-8:])\n waste_date = datetime.strptime(date.string[-8:], \"%d.%m.%y\").date()\n\n entries.append(Collection(waste_date, waste_type, ICON_MAP.get(waste_type)))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/grosswangen_ch.py"}]} |
gh_patches_debug_97 | rasdani/github-patches | git_diff | voxel51__fiftyone-2441 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Use same default expansion logic when sidebar groups are defined
As of `fiftyone==0.18`, the sidebar has some nice default logic, such as automatically collapsing the `OTHER` group for the dataset below, since it contains all unsupported field types:
```py
import fiftyone as fo
import fiftyone.zoo as foz
dataset = foz.load_zoo_dataset("quickstart")
dataset.set_values("dict_field", [{}] * len(dataset))
dataset.add_sample_field("list_field", fo.ListField)
dataset.set_values("list_field", dataset.values("tags"))
session = fo.launch_app(dataset)
```
Or collapsing the sample/label tags sections by default in fast mode.
However, this default expansion logic only applies when the dataset does not have an `app_config` explicitly defined. Once an app config is defined, the collapsed-by-default logic no longer works.
To see this, make a trivial edit to the sidebar groups in the App and then refresh the page.
```py
# Edit sidebar groups in the App
dataset.reload()
print(dataset.app_config)
```
```
<DatasetAppConfig: {
'media_fields': ['filepath'],
'grid_media_field': 'filepath',
'modal_media_field': 'filepath',
'sidebar_mode': None,
'sidebar_groups': [
<SidebarGroupDocument: {'name': 'tags', 'paths': [], 'expanded': None}>,
<SidebarGroupDocument: {'name': 'label tags', 'paths': [], 'expanded': None}>,
<SidebarGroupDocument: {
'name': 'metadata',
'paths': [
'metadata.size_bytes',
'metadata.mime_type',
'metadata.width',
'metadata.height',
'metadata.num_channels',
],
'expanded': None,
}>,
<SidebarGroupDocument: {'name': 'labels', 'paths': ['predictions', 'ground_truth'], 'expanded': None}>,
<SidebarGroupDocument: {'name': 'primitives', 'paths': ['id', 'uniqueness', 'filepath'], 'expanded': None}>,
<SidebarGroupDocument: {'name': 'other', 'paths': ['dict_field', 'list_field'], 'expanded': None}>,
],
'plugins': {},
}>
```
In the above `sidebar_groups`, all `expanded` states are `None`, so the default logic should be applied to determine whether they are collapsed or not.
--- END ISSUE ---
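A quick way to confirm the report above interactively is to inspect the stored flags directly. This is only a sketch, assuming the `quickstart` dataset and the App edit described in the issue; it simply prints what the repr above already shows, i.e. that every `expanded` value is `None`:

```py
# Minimal check; assumes sidebar groups were saved via the App as in the report
dataset.reload()
for group in dataset.app_config.sidebar_groups:
    # None means "no explicit preference", so the App's default collapse logic should apply
    print(group.name, group.paths, group.expanded)
```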
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fiftyone/server/query.py`
Content:
```
1 """
2 FiftyOne Server queries
3
4 | Copyright 2017-2022, Voxel51, Inc.
5 | `voxel51.com <https://voxel51.com/>`_
6 |
7 """
8 import typing as t
9 from dataclasses import asdict
10 from datetime import date, datetime
11 from enum import Enum
12 import os
13
14 import asyncio
15 import eta.core.serial as etas
16 import eta.core.utils as etau
17 import strawberry as gql
18 from bson import ObjectId, json_util
19 from dacite import Config, from_dict
20
21 import fiftyone as fo
22 import fiftyone.constants as foc
23 import fiftyone.core.context as focx
24 import fiftyone.core.media as fom
25 from fiftyone.core.odm import SavedViewDocument
26 from fiftyone.core.state import SampleField, serialize_fields
27 import fiftyone.core.uid as fou
28 import fiftyone.core.view as fov
29
30 import fiftyone.server.aggregate as fosa
31 from fiftyone.server.aggregations import aggregate_resolver
32 from fiftyone.server.data import Info
33 from fiftyone.server.dataloader import get_dataloader_resolver
34 import fiftyone.server.events as fose
35 from fiftyone.server.metadata import MediaType
36 from fiftyone.server.paginator import Connection, get_paginator_resolver
37 from fiftyone.server.samples import (
38 SampleFilter,
39 SampleItem,
40 paginate_samples,
41 )
42
43 from fiftyone.server.scalars import BSONArray, JSON
44
45 ID = gql.scalar(
46 t.NewType("ID", str),
47 serialize=lambda v: str(v),
48 parse_value=lambda v: ObjectId(v),
49 )
50 DATASET_FILTER = [{"sample_collection_name": {"$regex": "^samples\\."}}]
51 DATASET_FILTER_STAGE = [{"$match": DATASET_FILTER[0]}]
52
53
54 @gql.type
55 class Group:
56 name: str
57 media_type: MediaType
58
59
60 @gql.type
61 class Target:
62 target: int
63 value: str
64
65
66 @gql.type
67 class NamedTargets:
68 name: str
69 targets: t.List[Target]
70
71
72 @gql.interface
73 class RunConfig:
74 cls: str
75
76
77 @gql.interface
78 class Run:
79 key: str
80 version: t.Optional[str]
81 timestamp: t.Optional[datetime]
82 config: t.Optional[RunConfig]
83 view_stages: t.Optional[t.List[str]]
84
85
86 @gql.type
87 class BrainRunConfig(RunConfig):
88 embeddings_field: t.Optional[str]
89 method: t.Optional[str]
90 patches_field: t.Optional[str]
91
92
93 @gql.type
94 class BrainRun(Run):
95 config: t.Optional[BrainRunConfig]
96
97
98 @gql.type
99 class EvaluationRunConfig(RunConfig):
100 gt_field: t.Optional[str]
101 pred_field: t.Optional[str]
102 method: t.Optional[str]
103
104
105 @gql.type
106 class EvaluationRun(Run):
107 config: t.Optional[EvaluationRunConfig]
108
109
110 @gql.type
111 class SavedView:
112 id: t.Optional[str]
113 dataset_id: t.Optional[str]
114 name: t.Optional[str]
115 slug: t.Optional[str]
116 description: t.Optional[str]
117 color: t.Optional[str]
118 view_stages: t.Optional[t.List[str]]
119 created_at: t.Optional[datetime]
120 last_modified_at: t.Optional[datetime]
121 last_loaded_at: t.Optional[datetime]
122
123 @gql.field
124 def view_name(self) -> t.Optional[str]:
125 if isinstance(self, ObjectId):
126 return None
127 return self.name
128
129 @gql.field
130 def stage_dicts(self) -> t.Optional[BSONArray]:
131 return [json_util.loads(x) for x in self.view_stages]
132
133 @classmethod
134 def from_doc(cls, doc: SavedViewDocument):
135 stage_dicts = [json_util.loads(x) for x in doc.view_stages]
136 saved_view = from_dict(data_class=cls, data=doc.to_dict())
137 saved_view.stage_dicts = stage_dicts
138 return saved_view
139
140
141 @gql.type
142 class SidebarGroup:
143 name: str
144 paths: t.Optional[t.List[str]]
145 expanded: t.Optional[bool] = True
146
147
148 @gql.type
149 class KeypointSkeleton:
150 labels: t.Optional[t.List[str]]
151 edges: t.List[t.List[int]]
152
153
154 @gql.type
155 class NamedKeypointSkeleton(KeypointSkeleton):
156 name: str
157
158
159 @gql.enum
160 class SidebarMode(Enum):
161 all = "all"
162 best = "best"
163 fast = "fast"
164
165
166 @gql.type
167 class DatasetAppConfig:
168 media_fields: t.Optional[t.List[str]]
169 plugins: t.Optional[JSON]
170 sidebar_groups: t.Optional[t.List[SidebarGroup]]
171 sidebar_mode: t.Optional[SidebarMode]
172 modal_media_field: t.Optional[str] = gql.field(default="filepath")
173 grid_media_field: t.Optional[str] = "filepath"
174
175
176 @gql.type
177 class Dataset:
178 id: gql.ID
179 name: str
180 created_at: t.Optional[date]
181 last_loaded_at: t.Optional[datetime]
182 persistent: bool
183 group_media_types: t.Optional[t.List[Group]]
184 group_field: t.Optional[str]
185 group_slice: t.Optional[str]
186 default_group_slice: t.Optional[str]
187 media_type: t.Optional[MediaType]
188 mask_targets: t.List[NamedTargets]
189 default_mask_targets: t.Optional[t.List[Target]]
190 sample_fields: t.List[SampleField]
191 frame_fields: t.Optional[t.List[SampleField]]
192 brain_methods: t.Optional[t.List[BrainRun]]
193 evaluations: t.Optional[t.List[EvaluationRun]]
194 saved_views: t.Optional[t.List[SavedView]]
195 saved_view_ids: gql.Private[t.Optional[t.List[gql.ID]]]
196 version: t.Optional[str]
197 view_cls: t.Optional[str]
198 view_name: t.Optional[str]
199 default_skeleton: t.Optional[KeypointSkeleton]
200 skeletons: t.List[NamedKeypointSkeleton]
201 app_config: t.Optional[DatasetAppConfig]
202 info: t.Optional[JSON]
203
204 @staticmethod
205 def modifier(doc: dict) -> dict:
206 doc["id"] = doc.pop("_id")
207 doc["default_mask_targets"] = _convert_targets(
208 doc.get("default_mask_targets", {})
209 )
210 doc["mask_targets"] = [
211 NamedTargets(name=name, targets=_convert_targets(targets))
212 for name, targets in doc.get("mask_targets", {}).items()
213 ]
214 doc["sample_fields"] = _flatten_fields(
215 [], doc.get("sample_fields", [])
216 )
217 doc["frame_fields"] = _flatten_fields([], doc.get("frame_fields", []))
218 doc["brain_methods"] = list(doc.get("brain_methods", {}).values())
219 doc["evaluations"] = list(doc.get("evaluations", {}).values())
220 doc["saved_views"] = doc.get("saved_views", [])
221 doc["skeletons"] = list(
222 dict(name=name, **data)
223 for name, data in doc.get("skeletons", {}).items()
224 )
225 doc["group_media_types"] = [
226 Group(name=name, media_type=media_type)
227 for name, media_type in doc.get("group_media_types", {}).items()
228 ]
229 doc["default_skeletons"] = doc.get("default_skeletons", None)
230 return doc
231
232 @classmethod
233 async def resolver(
234 cls,
235 name: str,
236 view: t.Optional[BSONArray],
237 info: Info,
238 view_name: t.Optional[str] = gql.UNSET,
239 ) -> t.Optional["Dataset"]:
240 return await serialize_dataset(
241 dataset_name=name, serialized_view=view, view_name=view_name
242 )
243
244
245 dataset_dataloader = get_dataloader_resolver(
246 Dataset, "datasets", "name", DATASET_FILTER
247 )
248
249
250 @gql.enum
251 class ColorBy(Enum):
252 field = "field"
253 instance = "instance"
254 label = "label"
255
256
257 @gql.enum
258 class Theme(Enum):
259 browser = "browser"
260 dark = "dark"
261 light = "light"
262
263
264 @gql.type
265 class AppConfig:
266 color_by: ColorBy
267 color_pool: t.List[str]
268 colorscale: str
269 grid_zoom: int
270 loop_videos: bool
271 notebook_height: int
272 plugins: t.Optional[JSON]
273 show_confidence: bool
274 show_index: bool
275 show_label: bool
276 show_skeletons: bool
277 show_tooltip: bool
278 sidebar_mode: SidebarMode
279 theme: Theme
280 timezone: t.Optional[str]
281 use_frame_number: bool
282
283
284 @gql.type
285 class Query(fosa.AggregateQuery):
286
287 aggregations = gql.field(resolver=aggregate_resolver)
288
289 @gql.field
290 def colorscale(self) -> t.Optional[t.List[t.List[int]]]:
291 if fo.app_config.colorscale:
292 return fo.app_config.get_colormap()
293
294 return None
295
296 @gql.field
297 def config(self) -> AppConfig:
298 config = fose.get_state().config
299 d = config.serialize()
300 d["timezone"] = fo.config.timezone
301 return from_dict(AppConfig, d, config=Config(check_types=False))
302
303 @gql.field
304 def context(self) -> str:
305 return focx._get_context()
306
307 @gql.field
308 def dev(self) -> bool:
309 return foc.DEV_INSTALL or foc.RC_INSTALL
310
311 @gql.field
312 def do_not_track(self) -> bool:
313 return fo.config.do_not_track
314
315 dataset: Dataset = gql.field(resolver=Dataset.resolver)
316 datasets: Connection[Dataset, str] = gql.field(
317 resolver=get_paginator_resolver(
318 Dataset, "created_at", DATASET_FILTER_STAGE, "datasets"
319 )
320 )
321
322 @gql.field
323 async def samples(
324 self,
325 dataset: str,
326 view: BSONArray,
327 first: t.Optional[int] = 20,
328 after: t.Optional[str] = None,
329 filter: t.Optional[SampleFilter] = None,
330 ) -> Connection[SampleItem, str]:
331 return await paginate_samples(
332 dataset, view, None, first, after, sample_filter=filter
333 )
334
335 @gql.field
336 async def sample(
337 self, dataset: str, view: BSONArray, filter: SampleFilter
338 ) -> t.Optional[SampleItem]:
339 samples = await paginate_samples(
340 dataset, view, None, 1, sample_filter=filter
341 )
342 if samples.edges:
343 return samples.edges[0].node
344
345 return None
346
347 @gql.field
348 def teams_submission(self) -> bool:
349 isfile = os.path.isfile(foc.TEAMS_PATH)
350 if isfile:
351 submitted = etas.load_json(foc.TEAMS_PATH)["submitted"]
352 else:
353 submitted = False
354
355 return submitted
356
357 @gql.field
358 def uid(self) -> str:
359 uid, _ = fou.get_user_id()
360 return uid
361
362 @gql.field
363 def version(self) -> str:
364 return foc.VERSION
365
366 @gql.field
367 def saved_views(self, dataset_name: str) -> t.Optional[t.List[SavedView]]:
368 ds = fo.load_dataset(dataset_name)
369 return [
370 SavedView.from_doc(view_doc) for view_doc in ds._doc.saved_views
371 ]
372
373
374 def _flatten_fields(
375 path: t.List[str], fields: t.List[t.Dict]
376 ) -> t.List[t.Dict]:
377 result = []
378 for field in fields:
379 key = field.pop("name")
380 field_path = path + [key]
381 field["path"] = ".".join(field_path)
382 result.append(field)
383
384 fields = field.pop("fields", None)
385 if fields:
386 result = result + _flatten_fields(field_path, fields)
387
388 return result
389
390
391 def _convert_targets(targets: t.Dict[str, str]) -> t.List[Target]:
392 return [Target(target=int(k), value=v) for k, v in targets.items()]
393
394
395 async def serialize_dataset(
396 dataset_name: str, serialized_view: BSONArray, view_name: t.Optional[str]
397 ) -> Dataset:
398 def run():
399 dataset = fo.load_dataset(dataset_name)
400 dataset.reload()
401
402 if view_name is not None and dataset.has_saved_view(view_name):
403 view = dataset.load_saved_view(view_name)
404 else:
405 view = fov.DatasetView._build(dataset, serialized_view or [])
406
407 doc = dataset._doc.to_dict(no_dereference=True)
408 Dataset.modifier(doc)
409 data = from_dict(Dataset, doc, config=Config(check_types=False))
410 data.view_cls = None
411
412 collection = dataset.view()
413 if view is not None:
414 if view._dataset != dataset:
415 d = view._dataset._serialize()
416 data.media_type = d["media_type"]
417
418 data.id = view._dataset._doc.id
419
420 data.view_cls = etau.get_class_name(view)
421
422 if view.media_type != data.media_type:
423 data.id = ObjectId()
424 data.media_type = view.media_type
425
426 collection = view
427
428 data.sample_fields = serialize_fields(
429 collection.get_field_schema(flat=True)
430 )
431 data.frame_fields = serialize_fields(
432 collection.get_frame_field_schema(flat=True)
433 )
434
435 if dataset.media_type == fom.GROUP:
436 data.group_slice = collection.group_slice
437
438 return data
439
440 loop = asyncio.get_running_loop()
441
442 return await loop.run_in_executor(None, run)
443
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/fiftyone/server/query.py b/fiftyone/server/query.py
--- a/fiftyone/server/query.py
+++ b/fiftyone/server/query.py
@@ -142,7 +142,7 @@
class SidebarGroup:
name: str
paths: t.Optional[t.List[str]]
- expanded: t.Optional[bool] = True
+ expanded: t.Optional[bool] = None
@gql.type
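One plausible way the old default masked "no preference", sketched with a plain dataclass standing in for the strawberry type (assumption: the stored document omits the `expanded` field rather than carrying an explicit value), is that `dacite.from_dict` falls back to the field default, so `True` silently replaces the unset state:

```py
from dataclasses import dataclass
from typing import Optional

from dacite import from_dict

@dataclass
class SidebarGroup:
    name: str
    expanded: Optional[bool] = True  # old server-side default

group = from_dict(data_class=SidebarGroup, data={"name": "other"})
print(group.expanded)  # True, even though the user never chose an expansion state
```

With the default changed to `None`, the client can distinguish "no preference" from "explicitly expanded" and apply its own default expansion logic.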
| {"golden_diff": "diff --git a/fiftyone/server/query.py b/fiftyone/server/query.py\n--- a/fiftyone/server/query.py\n+++ b/fiftyone/server/query.py\n@@ -142,7 +142,7 @@\n class SidebarGroup:\n name: str\n paths: t.Optional[t.List[str]]\n- expanded: t.Optional[bool] = True\n+ expanded: t.Optional[bool] = None\n \n \n @gql.type\n", "issue": "[BUG] Use same default expansion logic when sidebar groups are defined\nAs of `fiftyone==0.18`, the sidebar has some nice default logic, such as automatically collapsing the `OTHER` group for the dataset below, since it contains all unsupported field types:\r\n\r\n```py\r\nimport fiftyone as fo\r\nimport fiftyone.zoo as foz\r\n\r\ndataset = foz.load_zoo_dataset(\"quickstart\")\r\n\r\ndataset.set_values(\"dict_field\", [{}] * len(dataset))\r\n\r\ndataset.add_sample_field(\"list_field\", fo.ListField)\r\ndataset.set_values(\"list_field\", dataset.values(\"tags\"))\r\n\r\nsession = fo.launch_app(dataset)\r\n```\r\n\r\nOr collapsing the sample/label tags sections by default in fast mode.\r\n\r\nHowever, this default expansion logic only applies when the dataset does not have an `app_config` explicitly defined. Once an app config is defined, the collapsed-by-default logic no longer works.\r\n\r\nTo see this, make a trivial edit to the sidebar groups in the App and then refresh the page.\r\n\r\n```py\r\n# Edit sidebar groups in the App\r\n\r\ndataset.reload()\r\nprint(dataset.app_config)\r\n```\r\n\r\n```\r\n<DatasetAppConfig: {\r\n 'media_fields': ['filepath'],\r\n 'grid_media_field': 'filepath',\r\n 'modal_media_field': 'filepath',\r\n 'sidebar_mode': None,\r\n 'sidebar_groups': [\r\n <SidebarGroupDocument: {'name': 'tags', 'paths': [], 'expanded': None}>,\r\n <SidebarGroupDocument: {'name': 'label tags', 'paths': [], 'expanded': None}>,\r\n <SidebarGroupDocument: {\r\n 'name': 'metadata',\r\n 'paths': [\r\n 'metadata.size_bytes',\r\n 'metadata.mime_type',\r\n 'metadata.width',\r\n 'metadata.height',\r\n 'metadata.num_channels',\r\n ],\r\n 'expanded': None,\r\n }>,\r\n <SidebarGroupDocument: {'name': 'labels', 'paths': ['predictions', 'ground_truth'], 'expanded': None}>,\r\n <SidebarGroupDocument: {'name': 'primitives', 'paths': ['id', 'uniqueness', 'filepath'], 'expanded': None}>,\r\n <SidebarGroupDocument: {'name': 'other', 'paths': ['dict_field', 'list_field'], 'expanded': None}>,\r\n ],\r\n 'plugins': {},\r\n}>\r\n```\r\n\r\nIn the above `sidebar_groups`, all `expanded` states are `None`, so the default logic should be applied to determine whether they are collapsed or not.\n", "before_files": [{"content": "\"\"\"\nFiftyOne Server queries\n\n| Copyright 2017-2022, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\nimport typing as t\nfrom dataclasses import asdict\nfrom datetime import date, datetime\nfrom enum import Enum\nimport os\n\nimport asyncio\nimport eta.core.serial as etas\nimport eta.core.utils as etau\nimport strawberry as gql\nfrom bson import ObjectId, json_util\nfrom dacite import Config, from_dict\n\nimport fiftyone as fo\nimport fiftyone.constants as foc\nimport fiftyone.core.context as focx\nimport fiftyone.core.media as fom\nfrom fiftyone.core.odm import SavedViewDocument\nfrom fiftyone.core.state import SampleField, serialize_fields\nimport fiftyone.core.uid as fou\nimport fiftyone.core.view as fov\n\nimport fiftyone.server.aggregate as fosa\nfrom fiftyone.server.aggregations import aggregate_resolver\nfrom fiftyone.server.data import Info\nfrom fiftyone.server.dataloader import get_dataloader_resolver\nimport 
fiftyone.server.events as fose\nfrom fiftyone.server.metadata import MediaType\nfrom fiftyone.server.paginator import Connection, get_paginator_resolver\nfrom fiftyone.server.samples import (\n SampleFilter,\n SampleItem,\n paginate_samples,\n)\n\nfrom fiftyone.server.scalars import BSONArray, JSON\n\nID = gql.scalar(\n t.NewType(\"ID\", str),\n serialize=lambda v: str(v),\n parse_value=lambda v: ObjectId(v),\n)\nDATASET_FILTER = [{\"sample_collection_name\": {\"$regex\": \"^samples\\\\.\"}}]\nDATASET_FILTER_STAGE = [{\"$match\": DATASET_FILTER[0]}]\n\n\[email protected]\nclass Group:\n name: str\n media_type: MediaType\n\n\[email protected]\nclass Target:\n target: int\n value: str\n\n\[email protected]\nclass NamedTargets:\n name: str\n targets: t.List[Target]\n\n\[email protected]\nclass RunConfig:\n cls: str\n\n\[email protected]\nclass Run:\n key: str\n version: t.Optional[str]\n timestamp: t.Optional[datetime]\n config: t.Optional[RunConfig]\n view_stages: t.Optional[t.List[str]]\n\n\[email protected]\nclass BrainRunConfig(RunConfig):\n embeddings_field: t.Optional[str]\n method: t.Optional[str]\n patches_field: t.Optional[str]\n\n\[email protected]\nclass BrainRun(Run):\n config: t.Optional[BrainRunConfig]\n\n\[email protected]\nclass EvaluationRunConfig(RunConfig):\n gt_field: t.Optional[str]\n pred_field: t.Optional[str]\n method: t.Optional[str]\n\n\[email protected]\nclass EvaluationRun(Run):\n config: t.Optional[EvaluationRunConfig]\n\n\[email protected]\nclass SavedView:\n id: t.Optional[str]\n dataset_id: t.Optional[str]\n name: t.Optional[str]\n slug: t.Optional[str]\n description: t.Optional[str]\n color: t.Optional[str]\n view_stages: t.Optional[t.List[str]]\n created_at: t.Optional[datetime]\n last_modified_at: t.Optional[datetime]\n last_loaded_at: t.Optional[datetime]\n\n @gql.field\n def view_name(self) -> t.Optional[str]:\n if isinstance(self, ObjectId):\n return None\n return self.name\n\n @gql.field\n def stage_dicts(self) -> t.Optional[BSONArray]:\n return [json_util.loads(x) for x in self.view_stages]\n\n @classmethod\n def from_doc(cls, doc: SavedViewDocument):\n stage_dicts = [json_util.loads(x) for x in doc.view_stages]\n saved_view = from_dict(data_class=cls, data=doc.to_dict())\n saved_view.stage_dicts = stage_dicts\n return saved_view\n\n\[email protected]\nclass SidebarGroup:\n name: str\n paths: t.Optional[t.List[str]]\n expanded: t.Optional[bool] = True\n\n\[email protected]\nclass KeypointSkeleton:\n labels: t.Optional[t.List[str]]\n edges: t.List[t.List[int]]\n\n\[email protected]\nclass NamedKeypointSkeleton(KeypointSkeleton):\n name: str\n\n\[email protected]\nclass SidebarMode(Enum):\n all = \"all\"\n best = \"best\"\n fast = \"fast\"\n\n\[email protected]\nclass DatasetAppConfig:\n media_fields: t.Optional[t.List[str]]\n plugins: t.Optional[JSON]\n sidebar_groups: t.Optional[t.List[SidebarGroup]]\n sidebar_mode: t.Optional[SidebarMode]\n modal_media_field: t.Optional[str] = gql.field(default=\"filepath\")\n grid_media_field: t.Optional[str] = \"filepath\"\n\n\[email protected]\nclass Dataset:\n id: gql.ID\n name: str\n created_at: t.Optional[date]\n last_loaded_at: t.Optional[datetime]\n persistent: bool\n group_media_types: t.Optional[t.List[Group]]\n group_field: t.Optional[str]\n group_slice: t.Optional[str]\n default_group_slice: t.Optional[str]\n media_type: t.Optional[MediaType]\n mask_targets: t.List[NamedTargets]\n default_mask_targets: t.Optional[t.List[Target]]\n sample_fields: t.List[SampleField]\n frame_fields: 
t.Optional[t.List[SampleField]]\n brain_methods: t.Optional[t.List[BrainRun]]\n evaluations: t.Optional[t.List[EvaluationRun]]\n saved_views: t.Optional[t.List[SavedView]]\n saved_view_ids: gql.Private[t.Optional[t.List[gql.ID]]]\n version: t.Optional[str]\n view_cls: t.Optional[str]\n view_name: t.Optional[str]\n default_skeleton: t.Optional[KeypointSkeleton]\n skeletons: t.List[NamedKeypointSkeleton]\n app_config: t.Optional[DatasetAppConfig]\n info: t.Optional[JSON]\n\n @staticmethod\n def modifier(doc: dict) -> dict:\n doc[\"id\"] = doc.pop(\"_id\")\n doc[\"default_mask_targets\"] = _convert_targets(\n doc.get(\"default_mask_targets\", {})\n )\n doc[\"mask_targets\"] = [\n NamedTargets(name=name, targets=_convert_targets(targets))\n for name, targets in doc.get(\"mask_targets\", {}).items()\n ]\n doc[\"sample_fields\"] = _flatten_fields(\n [], doc.get(\"sample_fields\", [])\n )\n doc[\"frame_fields\"] = _flatten_fields([], doc.get(\"frame_fields\", []))\n doc[\"brain_methods\"] = list(doc.get(\"brain_methods\", {}).values())\n doc[\"evaluations\"] = list(doc.get(\"evaluations\", {}).values())\n doc[\"saved_views\"] = doc.get(\"saved_views\", [])\n doc[\"skeletons\"] = list(\n dict(name=name, **data)\n for name, data in doc.get(\"skeletons\", {}).items()\n )\n doc[\"group_media_types\"] = [\n Group(name=name, media_type=media_type)\n for name, media_type in doc.get(\"group_media_types\", {}).items()\n ]\n doc[\"default_skeletons\"] = doc.get(\"default_skeletons\", None)\n return doc\n\n @classmethod\n async def resolver(\n cls,\n name: str,\n view: t.Optional[BSONArray],\n info: Info,\n view_name: t.Optional[str] = gql.UNSET,\n ) -> t.Optional[\"Dataset\"]:\n return await serialize_dataset(\n dataset_name=name, serialized_view=view, view_name=view_name\n )\n\n\ndataset_dataloader = get_dataloader_resolver(\n Dataset, \"datasets\", \"name\", DATASET_FILTER\n)\n\n\[email protected]\nclass ColorBy(Enum):\n field = \"field\"\n instance = \"instance\"\n label = \"label\"\n\n\[email protected]\nclass Theme(Enum):\n browser = \"browser\"\n dark = \"dark\"\n light = \"light\"\n\n\[email protected]\nclass AppConfig:\n color_by: ColorBy\n color_pool: t.List[str]\n colorscale: str\n grid_zoom: int\n loop_videos: bool\n notebook_height: int\n plugins: t.Optional[JSON]\n show_confidence: bool\n show_index: bool\n show_label: bool\n show_skeletons: bool\n show_tooltip: bool\n sidebar_mode: SidebarMode\n theme: Theme\n timezone: t.Optional[str]\n use_frame_number: bool\n\n\[email protected]\nclass Query(fosa.AggregateQuery):\n\n aggregations = gql.field(resolver=aggregate_resolver)\n\n @gql.field\n def colorscale(self) -> t.Optional[t.List[t.List[int]]]:\n if fo.app_config.colorscale:\n return fo.app_config.get_colormap()\n\n return None\n\n @gql.field\n def config(self) -> AppConfig:\n config = fose.get_state().config\n d = config.serialize()\n d[\"timezone\"] = fo.config.timezone\n return from_dict(AppConfig, d, config=Config(check_types=False))\n\n @gql.field\n def context(self) -> str:\n return focx._get_context()\n\n @gql.field\n def dev(self) -> bool:\n return foc.DEV_INSTALL or foc.RC_INSTALL\n\n @gql.field\n def do_not_track(self) -> bool:\n return fo.config.do_not_track\n\n dataset: Dataset = gql.field(resolver=Dataset.resolver)\n datasets: Connection[Dataset, str] = gql.field(\n resolver=get_paginator_resolver(\n Dataset, \"created_at\", DATASET_FILTER_STAGE, \"datasets\"\n )\n )\n\n @gql.field\n async def samples(\n self,\n dataset: str,\n view: BSONArray,\n first: t.Optional[int] = 20,\n 
after: t.Optional[str] = None,\n filter: t.Optional[SampleFilter] = None,\n ) -> Connection[SampleItem, str]:\n return await paginate_samples(\n dataset, view, None, first, after, sample_filter=filter\n )\n\n @gql.field\n async def sample(\n self, dataset: str, view: BSONArray, filter: SampleFilter\n ) -> t.Optional[SampleItem]:\n samples = await paginate_samples(\n dataset, view, None, 1, sample_filter=filter\n )\n if samples.edges:\n return samples.edges[0].node\n\n return None\n\n @gql.field\n def teams_submission(self) -> bool:\n isfile = os.path.isfile(foc.TEAMS_PATH)\n if isfile:\n submitted = etas.load_json(foc.TEAMS_PATH)[\"submitted\"]\n else:\n submitted = False\n\n return submitted\n\n @gql.field\n def uid(self) -> str:\n uid, _ = fou.get_user_id()\n return uid\n\n @gql.field\n def version(self) -> str:\n return foc.VERSION\n\n @gql.field\n def saved_views(self, dataset_name: str) -> t.Optional[t.List[SavedView]]:\n ds = fo.load_dataset(dataset_name)\n return [\n SavedView.from_doc(view_doc) for view_doc in ds._doc.saved_views\n ]\n\n\ndef _flatten_fields(\n path: t.List[str], fields: t.List[t.Dict]\n) -> t.List[t.Dict]:\n result = []\n for field in fields:\n key = field.pop(\"name\")\n field_path = path + [key]\n field[\"path\"] = \".\".join(field_path)\n result.append(field)\n\n fields = field.pop(\"fields\", None)\n if fields:\n result = result + _flatten_fields(field_path, fields)\n\n return result\n\n\ndef _convert_targets(targets: t.Dict[str, str]) -> t.List[Target]:\n return [Target(target=int(k), value=v) for k, v in targets.items()]\n\n\nasync def serialize_dataset(\n dataset_name: str, serialized_view: BSONArray, view_name: t.Optional[str]\n) -> Dataset:\n def run():\n dataset = fo.load_dataset(dataset_name)\n dataset.reload()\n\n if view_name is not None and dataset.has_saved_view(view_name):\n view = dataset.load_saved_view(view_name)\n else:\n view = fov.DatasetView._build(dataset, serialized_view or [])\n\n doc = dataset._doc.to_dict(no_dereference=True)\n Dataset.modifier(doc)\n data = from_dict(Dataset, doc, config=Config(check_types=False))\n data.view_cls = None\n\n collection = dataset.view()\n if view is not None:\n if view._dataset != dataset:\n d = view._dataset._serialize()\n data.media_type = d[\"media_type\"]\n\n data.id = view._dataset._doc.id\n\n data.view_cls = etau.get_class_name(view)\n\n if view.media_type != data.media_type:\n data.id = ObjectId()\n data.media_type = view.media_type\n\n collection = view\n\n data.sample_fields = serialize_fields(\n collection.get_field_schema(flat=True)\n )\n data.frame_fields = serialize_fields(\n collection.get_frame_field_schema(flat=True)\n )\n\n if dataset.media_type == fom.GROUP:\n data.group_slice = collection.group_slice\n\n return data\n\n loop = asyncio.get_running_loop()\n\n return await loop.run_in_executor(None, run)\n", "path": "fiftyone/server/query.py"}], "after_files": [{"content": "\"\"\"\nFiftyOne Server queries\n\n| Copyright 2017-2022, Voxel51, Inc.\n| `voxel51.com <https://voxel51.com/>`_\n|\n\"\"\"\nimport typing as t\nfrom dataclasses import asdict\nfrom datetime import date, datetime\nfrom enum import Enum\nimport os\n\nimport asyncio\nimport eta.core.serial as etas\nimport eta.core.utils as etau\nimport strawberry as gql\nfrom bson import ObjectId, json_util\nfrom dacite import Config, from_dict\n\nimport fiftyone as fo\nimport fiftyone.constants as foc\nimport fiftyone.core.context as focx\nimport fiftyone.core.media as fom\nfrom fiftyone.core.odm import SavedViewDocument\nfrom 
fiftyone.core.state import SampleField, serialize_fields\nimport fiftyone.core.uid as fou\nimport fiftyone.core.view as fov\n\nimport fiftyone.server.aggregate as fosa\nfrom fiftyone.server.aggregations import aggregate_resolver\nfrom fiftyone.server.data import Info\nfrom fiftyone.server.dataloader import get_dataloader_resolver\nimport fiftyone.server.events as fose\nfrom fiftyone.server.metadata import MediaType\nfrom fiftyone.server.paginator import Connection, get_paginator_resolver\nfrom fiftyone.server.samples import (\n SampleFilter,\n SampleItem,\n paginate_samples,\n)\n\nfrom fiftyone.server.scalars import BSONArray, JSON\n\nID = gql.scalar(\n t.NewType(\"ID\", str),\n serialize=lambda v: str(v),\n parse_value=lambda v: ObjectId(v),\n)\nDATASET_FILTER = [{\"sample_collection_name\": {\"$regex\": \"^samples\\\\.\"}}]\nDATASET_FILTER_STAGE = [{\"$match\": DATASET_FILTER[0]}]\n\n\[email protected]\nclass Group:\n name: str\n media_type: MediaType\n\n\[email protected]\nclass Target:\n target: int\n value: str\n\n\[email protected]\nclass NamedTargets:\n name: str\n targets: t.List[Target]\n\n\[email protected]\nclass RunConfig:\n cls: str\n\n\[email protected]\nclass Run:\n key: str\n version: t.Optional[str]\n timestamp: t.Optional[datetime]\n config: t.Optional[RunConfig]\n view_stages: t.Optional[t.List[str]]\n\n\[email protected]\nclass BrainRunConfig(RunConfig):\n embeddings_field: t.Optional[str]\n method: t.Optional[str]\n patches_field: t.Optional[str]\n\n\[email protected]\nclass BrainRun(Run):\n config: t.Optional[BrainRunConfig]\n\n\[email protected]\nclass EvaluationRunConfig(RunConfig):\n gt_field: t.Optional[str]\n pred_field: t.Optional[str]\n method: t.Optional[str]\n\n\[email protected]\nclass EvaluationRun(Run):\n config: t.Optional[EvaluationRunConfig]\n\n\[email protected]\nclass SavedView:\n id: t.Optional[str]\n dataset_id: t.Optional[str]\n name: t.Optional[str]\n slug: t.Optional[str]\n description: t.Optional[str]\n color: t.Optional[str]\n view_stages: t.Optional[t.List[str]]\n created_at: t.Optional[datetime]\n last_modified_at: t.Optional[datetime]\n last_loaded_at: t.Optional[datetime]\n\n @gql.field\n def view_name(self) -> t.Optional[str]:\n if isinstance(self, ObjectId):\n return None\n return self.name\n\n @gql.field\n def stage_dicts(self) -> t.Optional[BSONArray]:\n return [json_util.loads(x) for x in self.view_stages]\n\n @classmethod\n def from_doc(cls, doc: SavedViewDocument):\n stage_dicts = [json_util.loads(x) for x in doc.view_stages]\n saved_view = from_dict(data_class=cls, data=doc.to_dict())\n saved_view.stage_dicts = stage_dicts\n return saved_view\n\n\[email protected]\nclass SidebarGroup:\n name: str\n paths: t.Optional[t.List[str]]\n expanded: t.Optional[bool] = None\n\n\[email protected]\nclass KeypointSkeleton:\n labels: t.Optional[t.List[str]]\n edges: t.List[t.List[int]]\n\n\[email protected]\nclass NamedKeypointSkeleton(KeypointSkeleton):\n name: str\n\n\[email protected]\nclass SidebarMode(Enum):\n all = \"all\"\n best = \"best\"\n fast = \"fast\"\n\n\[email protected]\nclass DatasetAppConfig:\n media_fields: t.Optional[t.List[str]]\n plugins: t.Optional[JSON]\n sidebar_groups: t.Optional[t.List[SidebarGroup]]\n sidebar_mode: t.Optional[SidebarMode]\n modal_media_field: t.Optional[str] = gql.field(default=\"filepath\")\n grid_media_field: t.Optional[str] = \"filepath\"\n\n\[email protected]\nclass Dataset:\n id: gql.ID\n name: str\n created_at: t.Optional[date]\n last_loaded_at: t.Optional[datetime]\n persistent: bool\n 
group_media_types: t.Optional[t.List[Group]]\n group_field: t.Optional[str]\n group_slice: t.Optional[str]\n default_group_slice: t.Optional[str]\n media_type: t.Optional[MediaType]\n mask_targets: t.List[NamedTargets]\n default_mask_targets: t.Optional[t.List[Target]]\n sample_fields: t.List[SampleField]\n frame_fields: t.Optional[t.List[SampleField]]\n brain_methods: t.Optional[t.List[BrainRun]]\n evaluations: t.Optional[t.List[EvaluationRun]]\n saved_views: t.Optional[t.List[SavedView]]\n saved_view_ids: gql.Private[t.Optional[t.List[gql.ID]]]\n version: t.Optional[str]\n view_cls: t.Optional[str]\n view_name: t.Optional[str]\n default_skeleton: t.Optional[KeypointSkeleton]\n skeletons: t.List[NamedKeypointSkeleton]\n app_config: t.Optional[DatasetAppConfig]\n info: t.Optional[JSON]\n\n @staticmethod\n def modifier(doc: dict) -> dict:\n doc[\"id\"] = doc.pop(\"_id\")\n doc[\"default_mask_targets\"] = _convert_targets(\n doc.get(\"default_mask_targets\", {})\n )\n doc[\"mask_targets\"] = [\n NamedTargets(name=name, targets=_convert_targets(targets))\n for name, targets in doc.get(\"mask_targets\", {}).items()\n ]\n doc[\"sample_fields\"] = _flatten_fields(\n [], doc.get(\"sample_fields\", [])\n )\n doc[\"frame_fields\"] = _flatten_fields([], doc.get(\"frame_fields\", []))\n doc[\"brain_methods\"] = list(doc.get(\"brain_methods\", {}).values())\n doc[\"evaluations\"] = list(doc.get(\"evaluations\", {}).values())\n doc[\"saved_views\"] = doc.get(\"saved_views\", [])\n doc[\"skeletons\"] = list(\n dict(name=name, **data)\n for name, data in doc.get(\"skeletons\", {}).items()\n )\n doc[\"group_media_types\"] = [\n Group(name=name, media_type=media_type)\n for name, media_type in doc.get(\"group_media_types\", {}).items()\n ]\n doc[\"default_skeletons\"] = doc.get(\"default_skeletons\", None)\n return doc\n\n @classmethod\n async def resolver(\n cls,\n name: str,\n view: t.Optional[BSONArray],\n info: Info,\n view_name: t.Optional[str] = gql.UNSET,\n ) -> t.Optional[\"Dataset\"]:\n return await serialize_dataset(\n dataset_name=name, serialized_view=view, view_name=view_name\n )\n\n\ndataset_dataloader = get_dataloader_resolver(\n Dataset, \"datasets\", \"name\", DATASET_FILTER\n)\n\n\[email protected]\nclass ColorBy(Enum):\n field = \"field\"\n instance = \"instance\"\n label = \"label\"\n\n\[email protected]\nclass Theme(Enum):\n browser = \"browser\"\n dark = \"dark\"\n light = \"light\"\n\n\[email protected]\nclass AppConfig:\n color_by: ColorBy\n color_pool: t.List[str]\n colorscale: str\n grid_zoom: int\n loop_videos: bool\n notebook_height: int\n plugins: t.Optional[JSON]\n show_confidence: bool\n show_index: bool\n show_label: bool\n show_skeletons: bool\n show_tooltip: bool\n sidebar_mode: SidebarMode\n theme: Theme\n timezone: t.Optional[str]\n use_frame_number: bool\n\n\[email protected]\nclass Query(fosa.AggregateQuery):\n\n aggregations = gql.field(resolver=aggregate_resolver)\n\n @gql.field\n def colorscale(self) -> t.Optional[t.List[t.List[int]]]:\n if fo.app_config.colorscale:\n return fo.app_config.get_colormap()\n\n return None\n\n @gql.field\n def config(self) -> AppConfig:\n config = fose.get_state().config\n d = config.serialize()\n d[\"timezone\"] = fo.config.timezone\n return from_dict(AppConfig, d, config=Config(check_types=False))\n\n @gql.field\n def context(self) -> str:\n return focx._get_context()\n\n @gql.field\n def dev(self) -> bool:\n return foc.DEV_INSTALL or foc.RC_INSTALL\n\n @gql.field\n def do_not_track(self) -> bool:\n return fo.config.do_not_track\n\n 
dataset: Dataset = gql.field(resolver=Dataset.resolver)\n datasets: Connection[Dataset, str] = gql.field(\n resolver=get_paginator_resolver(\n Dataset, \"created_at\", DATASET_FILTER_STAGE, \"datasets\"\n )\n )\n\n @gql.field\n async def samples(\n self,\n dataset: str,\n view: BSONArray,\n first: t.Optional[int] = 20,\n after: t.Optional[str] = None,\n filter: t.Optional[SampleFilter] = None,\n ) -> Connection[SampleItem, str]:\n return await paginate_samples(\n dataset, view, None, first, after, sample_filter=filter\n )\n\n @gql.field\n async def sample(\n self, dataset: str, view: BSONArray, filter: SampleFilter\n ) -> t.Optional[SampleItem]:\n samples = await paginate_samples(\n dataset, view, None, 1, sample_filter=filter\n )\n if samples.edges:\n return samples.edges[0].node\n\n return None\n\n @gql.field\n def teams_submission(self) -> bool:\n isfile = os.path.isfile(foc.TEAMS_PATH)\n if isfile:\n submitted = etas.load_json(foc.TEAMS_PATH)[\"submitted\"]\n else:\n submitted = False\n\n return submitted\n\n @gql.field\n def uid(self) -> str:\n uid, _ = fou.get_user_id()\n return uid\n\n @gql.field\n def version(self) -> str:\n return foc.VERSION\n\n @gql.field\n def saved_views(self, dataset_name: str) -> t.Optional[t.List[SavedView]]:\n ds = fo.load_dataset(dataset_name)\n return [\n SavedView.from_doc(view_doc) for view_doc in ds._doc.saved_views\n ]\n\n\ndef _flatten_fields(\n path: t.List[str], fields: t.List[t.Dict]\n) -> t.List[t.Dict]:\n result = []\n for field in fields:\n key = field.pop(\"name\")\n field_path = path + [key]\n field[\"path\"] = \".\".join(field_path)\n result.append(field)\n\n fields = field.pop(\"fields\", None)\n if fields:\n result = result + _flatten_fields(field_path, fields)\n\n return result\n\n\ndef _convert_targets(targets: t.Dict[str, str]) -> t.List[Target]:\n return [Target(target=int(k), value=v) for k, v in targets.items()]\n\n\nasync def serialize_dataset(\n dataset_name: str, serialized_view: BSONArray, view_name: t.Optional[str]\n) -> Dataset:\n def run():\n dataset = fo.load_dataset(dataset_name)\n dataset.reload()\n\n if view_name is not None and dataset.has_saved_view(view_name):\n view = dataset.load_saved_view(view_name)\n else:\n view = fov.DatasetView._build(dataset, serialized_view or [])\n\n doc = dataset._doc.to_dict(no_dereference=True)\n Dataset.modifier(doc)\n data = from_dict(Dataset, doc, config=Config(check_types=False))\n data.view_cls = None\n\n collection = dataset.view()\n if view is not None:\n if view._dataset != dataset:\n d = view._dataset._serialize()\n data.media_type = d[\"media_type\"]\n\n data.id = view._dataset._doc.id\n\n data.view_cls = etau.get_class_name(view)\n\n if view.media_type != data.media_type:\n data.id = ObjectId()\n data.media_type = view.media_type\n\n collection = view\n\n data.sample_fields = serialize_fields(\n collection.get_field_schema(flat=True)\n )\n data.frame_fields = serialize_fields(\n collection.get_frame_field_schema(flat=True)\n )\n\n if dataset.media_type == fom.GROUP:\n data.group_slice = collection.group_slice\n\n return data\n\n loop = asyncio.get_running_loop()\n\n return await loop.run_in_executor(None, run)\n", "path": "fiftyone/server/query.py"}]} |
gh_patches_debug_98 | rasdani/github-patches | git_diff | Flexget__Flexget-3491 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugin medusa authentication issue
Good evening.
I'm running into an authentication issue using the medusa plugin.
Expected behavior is seen below: the list of shows is retrieved properly using the API.
```
root@flexget2:~# flexget -V
3.3.18
You are on the latest release
```
From machine "flexget2":
using cURL:
```
curl --user medusa:medusa http://IP.ADDR:8081/api/v2/series?limit=1000
```
Returns data successfully.
```
root@flexget2:~# curl --user medusa:medusa http://IP.ADDR:8081/api/v2/series?limit=1000
[{"id": {"tvdb": 350667, "slug": "tvdb350667", "trakt": null}, "externals": {"imdb": 7608248, "tvmaze": 34601, "tmdb": 81499}, "title": "A Million Little Things", "name": "A Million Little Things", "indexer": "tvdb", "network": "ABC (US)", "type": "Scripted", "status": "Continuing", "airs": "Wednesday 10:00 PM", "airsFormatValid": true, "language": "en", "showType": "series", "imdbInfo": {"imdbInfoId": 1, "indexer": 1, "indexerId": 350667, "imdbId": "tt7608248", "title": "A Million Little Things", "year": 2018, "akas": "", "runtimes": 43, "genres": "Comedy|Drama|Romance", "countries": "UNITED STATES", "countryCodes": "us", "certificates": "", "rating": "7.8", "votes": 13278, "lastUpdate": 738332, "plot": "A group of friends becomes motivated to living fuller lives after the unexpected death of a close friend."}, "year": {"start": 2018}, "prevAirDate": "2022-05-19T04:00:00+02:00", "nextAirDate": null, "lastUpdate": "2022-06-26", "runtime": 43, "genres": ["Romance", "Comedy", "Drama"], "rating": {"imdb": {"rating": "7.8", "votes": 13278}}, "classification": "", "cache": {"poster": "/config/cache/images/tvdb/350667.poster.jpg", "banner": "/config/cache/images/tvdb/350667.banner.jpg"}, "countries": ["UNITED STATES"], "countryCodes": ["us"], "plot": "They say friendship isn\u2019t one big thing\u2026 it\u2019s a million little things. When Jon Dixon \u2014 a man perfect on paper \u2014 took his own life, his family and friends are left to pick up the pieces. Each of these friends is not living the version of life they thought they\u2019d be living, and their friend\u2019s death forces them to take a look at the choices they\u2019ve made and to solve the unanswerable mystery of a man they thought they knew.", "config": {"location": "/tmp/A Million Little Things", "rootDir": "/tmp", "locationValid": false, "qualities": {"allowed": [8, 32, 64, 128], "preferred": [32, 128]}, "paused": false, "airByDate": false, "subtitlesEnabled": false, "dvdOrder": false, "seasonFolders": true, "anime": false, "scene": false, "sports": false, "templates": false, "defaultEpisodeStatus": "Wanted", "aliases": [], "release": {"ignoredWords": [], "requiredWords": [], "ignoredWordsExclude": false, "requiredWordsExclude": false}, "airdateOffset": 0, "showLists": ["series"]}, "xemNumbering": [], "sceneAbsoluteNumbering": [], "xemAbsoluteNumbering": [], "sceneNumbering": []}, {"id": {"tvdb": 153021, "slug": "tvdb153021", "trakt": null}, "externals": {"imdb": 1520211, "tvrage": 25056, "tvmaze": 73, "tmdb": 1402}, "title": "The Walking Dead", "name": "The Walking Dead", "indexer": "tvdb", "network": "AMC", "type": "Scripted", "status": "Continuing", "airs": "Sunday 9:00 PM", "airsFormatValid": true, "language": "en", "showType": "series", "imdbInfo": {"imdbInfoId": 2, "indexer": 1, "indexerId": 153021, "imdbId": "tt1520211", "title": "The Walking Dead", "year": 2010, "akas": "", "runtimes": 44, "genres": "Drama|Horror|Thriller", "countries": "UNITED STATES", "countryCodes": "us", "certificates": "", "rating": "8.2", "votes": 951642, "lastUpdate": 738332, "plot": "Sheriff Deputy Rick Grimes wakes up from a coma to learn the world is in ruins and must lead a group of survivors to stay alive."}, "year": {"start": 2010}, "prevAirDate": "2022-04-11T03:00:00+02:00", "nextAirDate": null, "lastUpdate": "2022-06-26", "runtime": 44, "genres": ["Horror", "Adventure", "Thriller", "Drama"], "rating": {"imdb": {"rating": "8.2", "votes": 951642}}, "classification": "", "cache": {"poster": "/config/cache/images/tvdb/153021.poster.jpg", 
"banner": "/config/cache/images/tvdb/153021.banner.jpg"}, "countries": ["UNITED STATES"], "countryCodes": ["us"], "plot": "The world we knew is gone. An epidemic of apocalyptic proportions has swept the globe causing the dead to rise and feed on the living. In a matter of months society has crumbled. In a world ruled by the dead, we are forced to finally start living.", "config": {"location": "/tmp/The Walking Dead", "rootDir": "/tmp", "locationValid": false, "qualities": {"allowed": [8, 32, 64, 128], "preferred": [32, 128]}, "paused": false, "airByDate": false, "subtitlesEnabled": false, "dvdOrder": false, "seasonFolders": true, "anime": false, "scene": false, "sports": false, "templates": false, "defaultEpisodeStatus": "Wanted", "aliases": [], "release": {"ignoredWords": [], "requiredWords
```
However, using the generated token taken from the log does not work:
```
curl -H "authorization': 'Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJNZWR1c2EgMS4wLjMiLCJpYXQiOjE2NTY3MDQzNDUsImp0aSI6IkRJU2VPUjN5UnhxZm96UlRYaG9YIiwiZXhwIjoxNjU2NzkwNzQ1LCJ1c2VybmFtZSI6Im1lZHVzYSIsImFwaUtleSI6IjgwZjhjNDJiNTM0YjNhYjFkMzAzMmEwN2U4YjJmYzNiIn0.37trJnleOVZxvklAVdFnH4Nr200vMp6QPKMoakPiKvI'" http://IP.ADDR:8081/api/v2/series?limit=1000
```
Results:
```
{"error": "No authorization token."}
```
This is my first time using the medusa plugin; I was using the sickbeard plugin before and could not get that to work now either.
Configuration is the following:
```
> tv:
configure_series:
from:
medusa:
base_url: 'http://{? medusa.ip ?}'
port: '{? medusa.port ?}'
#api_key: '{? medusa.api_key ?}'
username: medusa
password: medusa
include_ended: false
only_monitored: true
#include_data: true
```
logs:
```
2022-07-01 19:39:06 DEBUG task get_entry_tv executing get_entry_tv
2022-07-01 19:39:06 DEBUG template get_entry_tv Merging template tv into task get_entry_tv
2022-07-01 19:39:06 DEBUG template get_entry_tv Merging template torrents into task get_entry_tv
2022-07-01 19:39:06 DEBUG utils.requests get_entry_tv POSTing URL http://IP.ADDR:8081/api/v2/authenticate with args () and kwargs {'data': None, 'json': {'username': 'medusa', 'password': 'medusa'}, 'timeout': 30}
2022-07-01 19:39:06 DEBUG utils.requests get_entry_tv GETing URL http://IP.ADDR:8081/api/v2/series with args () and kwargs {'params': {'limit': 1000}, 'headers': {'authorization': 'Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJNZWR1c2EgMS4wLjMiLCJpYXQiOjE2NTY3MDQzNDUsImp0aSI6IkRJU2VPUjN5UnhxZm96UlRYaG9YIiwiZXhwIjoxNjU2NzkwNzQ1LCJ1c2VybmFtZSI6Im1lZHVzYSIsImFwaUtleSI6IjgwZjhjNDJiNTM0YjNhYjFkMzAzMmEwN2U4YjJmYzNiIn0.37trJnleOVZxvklAVdFnH4Nr200vMp6QPKMoakPiKvI'}, 'allow_redirects': True, 'timeout': 30}
2022-07-01 19:39:06 CRITICAL task get_entry_tv BUG: Unhandled error in plugin configure_series: 401 Client Error: Unauthorized for url: http://IP.ADDR:8081/api/v2/series?limit=1000
Traceback (most recent call last):
File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
self._bootstrap_inner()
    │ └ <function Thread._bootstrap_inner at 0x7fbacc7513a0>
    └ <Thread(task_queue, started daemon 140440167495424)>
File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
self.run()
    │ └ <function Thread.run at 0x7fbacc7510d0>
    └ <Thread(task_queue, started daemon 140440167495424)>
File "/usr/lib/python3.9/threading.py", line 892, in run
self._target(*self._args, **self._kwargs)
    │ │ │ │ │ └ {}
    │ │ │ │ └ <Thread(task_queue, started daemon 140440167495424)>
    │ │ │ └ ()
    │ │ └ <Thread(task_queue, started daemon 140440167495424)>
    │ └ <bound method TaskQueue.run of <flexget.task_queue.TaskQueue object at 0x7fbac679afa0>>
    └ <Thread(task_queue, started daemon 140440167495424)>
File "/usr/local/lib/python3.9/dist-packages/flexget/task_queue.py", line 47, in run
self.current_task.execute()
    │ │ └ <function Task.execute at 0x7fbac95a7e50>
    │ └ <flexget.task.Task object at 0x7fbac64ccb20>
    └ <flexget.task_queue.TaskQueue object at 0x7fbac679afa0>
File "/usr/local/lib/python3.9/dist-packages/flexget/task.py", line 87, in wrapper
return func(self, *args, **kw)
    │ │ │ └ {}
    │ │ └ ()
    │ └ <flexget.task.Task object at 0x7fbac64ccb20>
    └ <function Task.execute at 0x7fbac95a7dc0>
File "/usr/local/lib/python3.9/dist-packages/flexget/task.py", line 727, in execute
self._execute()
    │ └ <function Task._execute at 0x7fbac95a7d30>
    └ <flexget.task.Task object at 0x7fbac64ccb20>
File "/usr/local/lib/python3.9/dist-packages/flexget/task.py", line 696, in _execute
self.__run_task_phase(phase)
    │ └ 'prepare'
    └ <flexget.task.Task object at 0x7fbac64ccb20>
File "/usr/local/lib/python3.9/dist-packages/flexget/task.py", line 514, in __run_task_phase
response = self.__run_plugin(plugin, phase, args)
    │ │ │ └ (<flexget.task.Task object at 0x7fbac64ccb20>, {'from': {'medusa': {'base_url': 'http://IP.ADDR', 'port': 8081, 'username':...
    │ │ └ 'prepare'
    │ └ <PluginInfo(name=configure_series)>
    └ <flexget.task.Task object at 0x7fbac64ccb20>
> File "/usr/local/lib/python3.9/dist-packages/flexget/task.py", line 547, in __run_plugin
result = method(*args, **kwargs)
    │ │ └ {}
    │ └ (<flexget.task.Task object at 0x7fbac64ccb20>, {'from': {'medusa': {'base_url': 'http://IP.ADDR', 'port': 8081, 'username':...
    └ <Event(name=plugin.configure_series.prepare,func=on_task_prepare,priority=128)>
File "/usr/local/lib/python3.9/dist-packages/flexget/event.py", line 20, in __call__
return self.func(*args, **kwargs)
    │ │ │ └ {}
    │ │ └ (<flexget.task.Task object at 0x7fbac64ccb20>, {'from': {'medusa': {'base_url': 'http://IP.ADDR', 'port': 8081, 'username':...
    │ └ <bound method ConfigureSeries.on_task_prepare of <flexget.components.series.configure_series.ConfigureSeries object at 0x7fba...
    └ <Event(name=plugin.configure_series.prepare,func=on_task_prepare,priority=128)>
File "/usr/local/lib/python3.9/dist-packages/flexget/components/series/configure_series.py", line 53, in on_task_prepare
result = method(task, input_config)
    │ │ └ {'base_url': 'http://IP.ADDR', 'port': 8081, 'username': 'medusa', 'password': 'medusa', 'include_ended': False, 'only_moni...
    │ └ <flexget.task.Task object at 0x7fbac64ccb20>
    └ <Event(name=plugin.medusa.input,func=on_task_input,priority=128)>
File "/usr/local/lib/python3.9/dist-packages/flexget/event.py", line 20, in __call__
return self.func(*args, **kwargs)
    │ │ │ └ {}
    │ │ └ (<flexget.task.Task object at 0x7fbac64ccb20>, {'base_url': 'http://IP.ADDR', 'port': 8081, 'username': 'medusa', 'password...
    │ └ <bound method Medusa.on_task_input of <flexget.plugins.input.medusa.Medusa object at 0x7fbac68670a0>>
    └ <Event(name=plugin.medusa.input,func=on_task_input,priority=128)>
File "/usr/local/lib/python3.9/dist-packages/flexget/plugins/input/medusa.py", line 80, in on_task_input
series = task.requests.get(
    │ │ └ <function Session.get at 0x7fbacb4800d0>
    │ └ <flexget.utils.requests.Session object at 0x7fbac64cca30>
    └ <flexget.task.Task object at 0x7fbac64ccb20>
File "/usr/local/lib/python3.9/dist-packages/requests/sessions.py", line 542, in get
return self.request('GET', url, **kwargs)
    │ │ │ └ {'params': {'limit': 1000}, 'headers': {'authorization': 'Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJNZWR1c2EgMS4...
    │ │ └ 'http://IP.ADDR:8081/api/v2/series'
    │ └ <function Session.request at 0x7fbac95f9820>
    └ <flexget.utils.requests.Session object at 0x7fbac64cca30>
File "/usr/local/lib/python3.9/dist-packages/flexget/utils/requests.py", line 271, in request
result.raise_for_status()
    │ └ <function Response.raise_for_status at 0x7fbacb46c700>
    └ <Response [401]>
File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
    │ │ └ <Response [401]>
    │ └ '401 Client Error: Unauthorized for url: http://IP.ADDR:8081/api/v2/series?limit=1000'
    └ <class 'requests.exceptions.HTTPError'>
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: http://IP.ADDR:8081/api/v2/series?limit=1000
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/flexget/task.py", line 547, in __run_plugin
result = method(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/flexget/event.py", line 20, in __call__
return self.func(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/flexget/components/series/configure_series.py", line 53, in on_task_prepare
result = method(task, input_config)
File "/usr/local/lib/python3.9/dist-packages/flexget/event.py", line 20, in __call__
return self.func(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/flexget/plugins/input/medusa.py", line 80, in on_task_input
series = task.requests.get(
File "/usr/local/lib/python3.9/dist-packages/requests/sessions.py", line 542, in get
return self.request('GET', url, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/flexget/utils/requests.py", line 271, in request
result.raise_for_status()
File "/usr/local/lib/python3.9/dist-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: http://IP.ADDR:8081/api/v2/series?limit=1000
root@flexget2:~#
```
_Originally posted by @hideYourPretzels in https://github.com/Flexget/Flexget/discussions/3420#discussioncomment-3066652_
--- END ISSUE ---
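As a side note, the failing curl above mixes double and single quotes in the `-H` argument, so that particular test may never have sent the intended header. A standalone sketch of the plugin's own request flow (same endpoints and credentials as in the report, with `IP.ADDR` kept as a placeholder) reproduces the 401 independently of FlexGet:

```py
import requests

BASE = "http://IP.ADDR:8081/api/v2"  # placeholder host from the report

token = requests.post(
    f"{BASE}/authenticate",
    json={"username": "medusa", "password": "medusa"},
).json()["token"]

# Mirrors what the plugin sends; this is the request that returns 401 in the log above
r = requests.get(
    f"{BASE}/series",
    params={"limit": 1000},
    headers={"authorization": f"Bearer {token}"},
)
print(r.status_code, r.text)
```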
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/plugins/input/medusa.py`
Content:
```
1 from urllib.parse import urlparse
2
3 from loguru import logger
4
5 from flexget import plugin
6 from flexget.entry import Entry
7 from flexget.event import event
8
9 logger = logger.bind(name='medusa')
10
11
12 class Medusa:
13 schema = {
14 'type': 'object',
15 'properties': {
16 'base_url': {'type': 'string', 'format': 'uri'},
17 'port': {'type': 'number', 'default': 8081},
18 'username': {'type': 'string'},
19 'password': {'type': 'string'},
20 'only_monitored': {'type': 'boolean', 'default': False},
21 'include_ended': {'type': 'boolean', 'default': False},
22 },
23 'required': ['username', 'password', 'base_url'],
24 'additionalProperties': False,
25 }
26
27 def on_task_input(self, task, config):
28 """
29 This plugin returns ALL of the shows monitored by Medusa.
30 This includes both ongoing and ended.
31 Syntax:
32
33 medusa:
34 base_url=<value>
35 port=<value>
36 username=<value>
37 password=<value>
38
39 Options base_url, username and password are required.
40
41 Use with input plugin like discover and/or configure_series.
42 Example:
43
44 download-tv-task:
45 configure_series:
46 from:
47 medusa:
48 base_url: http://localhost
49 port: 8531
50 username: USERNAME
51 password: PASSWORD
52 discover:
53 what:
54 - next_series_episodes: yes
55 from:
56 torrentz: any
57 download:
58 /download/tv
59
60 Note that when using the configure_series plugin with Medusa
61 you are basically synced to it, so removing a show in Medusa will
62 remove it in flexget as well, which could be positive or negative,
63 depending on your usage.
64 """
65 parsed_url = urlparse(config.get('base_url'))
66 base_url = '{scheme}://{url}:{port}/api/v2'.format(
67 scheme=parsed_url.scheme, url=parsed_url.netloc, port=config.get('port')
68 )
69
70 body_auth = dict(username=config.get('username'), password=config.get('password'))
71
72 api_key = task.requests.post('{}/authenticate'.format(base_url), json=body_auth).json()[
73 'token'
74 ]
75
76 headers = {'authorization': 'Bearer ' + api_key}
77
78 params = {'limit': 1000}
79
80 series = task.requests.get(
81 '{}/series'.format(base_url), params=params, headers=headers
82 ).json()
83
84 entries = []
85 for show in series:
86 logger.debug('processing show: {}', show)
87 if (
88 (show['config']['paused'] and config.get('only_monitored'))
89 or show['status'] == 'Ended'
90 and not config.get('include_ended')
91 ):
92 logger.debug('discarted show: {}', show)
93
94 entry = Entry(title=show['title'], url='', series_name=show['title'])
95
96 if entry.isvalid():
97 entries.append(entry)
98 else:
99 logger.error('Invalid entry created? {}'.format(entry))
100
101 return entries
102
103
104 @event('plugin.register')
105 def register_plugin():
106 plugin.register(Medusa, 'medusa', api_ver=2)
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flexget/plugins/input/medusa.py b/flexget/plugins/input/medusa.py
--- a/flexget/plugins/input/medusa.py
+++ b/flexget/plugins/input/medusa.py
@@ -73,7 +73,7 @@
'token'
]
- headers = {'authorization': 'Bearer ' + api_key}
+ headers = {'x-auth': 'Bearer ' + api_key}
params = {'limit': 1000}
| {"golden_diff": "diff --git a/flexget/plugins/input/medusa.py b/flexget/plugins/input/medusa.py\n--- a/flexget/plugins/input/medusa.py\n+++ b/flexget/plugins/input/medusa.py\n@@ -73,7 +73,7 @@\n 'token'\n ]\n \n- headers = {'authorization': 'Bearer ' + api_key}\n+ headers = {'x-auth': 'Bearer ' + api_key}\n \n params = {'limit': 1000}\n", "issue": "plugin medusa authentication issue\nGood evening.\r\nI'm running into an authentication issue using plugin medusa.\r\nExpected behavior is seen below, list of show is retreived properly using the api.\r\n```\r\nroot@flexget2:~# flexget -V\r\n3.3.18\r\nYou are on the latest release\r\n```\r\n\r\nFrom machine \"flexget2\":\r\nusing cURL:\r\n```\r\ncurl --user medusa:medusa http://IP.ADDR:8081/api/v2/series?limit=1000\r\n```\r\nReturns data successfully.\r\n```\r\nroot@flexget2:~# curl --user medusa:medusa http://IP.ADDR:8081/api/v2/series?limit=1000\r\n[{\"id\": {\"tvdb\": 350667, \"slug\": \"tvdb350667\", \"trakt\": null}, \"externals\": {\"imdb\": 7608248, \"tvmaze\": 34601, \"tmdb\": 81499}, \"title\": \"A Million Little Things\", \"name\": \"A Million Little Things\", \"indexer\": \"tvdb\", \"network\": \"ABC (US)\", \"type\": \"Scripted\", \"status\": \"Continuing\", \"airs\": \"Wednesday 10:00 PM\", \"airsFormatValid\": true, \"language\": \"en\", \"showType\": \"series\", \"imdbInfo\": {\"imdbInfoId\": 1, \"indexer\": 1, \"indexerId\": 350667, \"imdbId\": \"tt7608248\", \"title\": \"A Million Little Things\", \"year\": 2018, \"akas\": \"\", \"runtimes\": 43, \"genres\": \"Comedy|Drama|Romance\", \"countries\": \"UNITED STATES\", \"countryCodes\": \"us\", \"certificates\": \"\", \"rating\": \"7.8\", \"votes\": 13278, \"lastUpdate\": 738332, \"plot\": \"A group of friends becomes motivated to living fuller lives after the unexpected death of a close friend.\"}, \"year\": {\"start\": 2018}, \"prevAirDate\": \"2022-05-19T04:00:00+02:00\", \"nextAirDate\": null, \"lastUpdate\": \"2022-06-26\", \"runtime\": 43, \"genres\": [\"Romance\", \"Comedy\", \"Drama\"], \"rating\": {\"imdb\": {\"rating\": \"7.8\", \"votes\": 13278}}, \"classification\": \"\", \"cache\": {\"poster\": \"/config/cache/images/tvdb/350667.poster.jpg\", \"banner\": \"/config/cache/images/tvdb/350667.banner.jpg\"}, \"countries\": [\"UNITED STATES\"], \"countryCodes\": [\"us\"], \"plot\": \"They say friendship isn\\u2019t one big thing\\u2026 it\\u2019s a million little things. When Jon Dixon \\u2014 a man perfect on paper \\u2014 took his own life, his family and friends are left to pick up the pieces. 
Each of these friends is not living the version of life they thought they\\u2019d be living, and their friend\\u2019s death forces them to take a look at the choices they\\u2019ve made and to solve the unanswerable mystery of a man they thought they knew.\", \"config\": {\"location\": \"/tmp/A Million Little Things\", \"rootDir\": \"/tmp\", \"locationValid\": false, \"qualities\": {\"allowed\": [8, 32, 64, 128], \"preferred\": [32, 128]}, \"paused\": false, \"airByDate\": false, \"subtitlesEnabled\": false, \"dvdOrder\": false, \"seasonFolders\": true, \"anime\": false, \"scene\": false, \"sports\": false, \"templates\": false, \"defaultEpisodeStatus\": \"Wanted\", \"aliases\": [], \"release\": {\"ignoredWords\": [], \"requiredWords\": [], \"ignoredWordsExclude\": false, \"requiredWordsExclude\": false}, \"airdateOffset\": 0, \"showLists\": [\"series\"]}, \"xemNumbering\": [], \"sceneAbsoluteNumbering\": [], \"xemAbsoluteNumbering\": [], \"sceneNumbering\": []}, {\"id\": {\"tvdb\": 153021, \"slug\": \"tvdb153021\", \"trakt\": null}, \"externals\": {\"imdb\": 1520211, \"tvrage\": 25056, \"tvmaze\": 73, \"tmdb\": 1402}, \"title\": \"The Walking Dead\", \"name\": \"The Walking Dead\", \"indexer\": \"tvdb\", \"network\": \"AMC\", \"type\": \"Scripted\", \"status\": \"Continuing\", \"airs\": \"Sunday 9:00 PM\", \"airsFormatValid\": true, \"language\": \"en\", \"showType\": \"series\", \"imdbInfo\": {\"imdbInfoId\": 2, \"indexer\": 1, \"indexerId\": 153021, \"imdbId\": \"tt1520211\", \"title\": \"The Walking Dead\", \"year\": 2010, \"akas\": \"\", \"runtimes\": 44, \"genres\": \"Drama|Horror|Thriller\", \"countries\": \"UNITED STATES\", \"countryCodes\": \"us\", \"certificates\": \"\", \"rating\": \"8.2\", \"votes\": 951642, \"lastUpdate\": 738332, \"plot\": \"Sheriff Deputy Rick Grimes wakes up from a coma to learn the world is in ruins and must lead a group of survivors to stay alive.\"}, \"year\": {\"start\": 2010}, \"prevAirDate\": \"2022-04-11T03:00:00+02:00\", \"nextAirDate\": null, \"lastUpdate\": \"2022-06-26\", \"runtime\": 44, \"genres\": [\"Horror\", \"Adventure\", \"Thriller\", \"Drama\"], \"rating\": {\"imdb\": {\"rating\": \"8.2\", \"votes\": 951642}}, \"classification\": \"\", \"cache\": {\"poster\": \"/config/cache/images/tvdb/153021.poster.jpg\", \"banner\": \"/config/cache/images/tvdb/153021.banner.jpg\"}, \"countries\": [\"UNITED STATES\"], \"countryCodes\": [\"us\"], \"plot\": \"The world we knew is gone. An epidemic of apocalyptic proportions has swept the globe causing the dead to rise and feed on the living. In a matter of months society has crumbled. 
In a world ruled by the dead, we are forced to finally start living.\", \"config\": {\"location\": \"/tmp/The Walking Dead\", \"rootDir\": \"/tmp\", \"locationValid\": false, \"qualities\": {\"allowed\": [8, 32, 64, 128], \"preferred\": [32, 128]}, \"paused\": false, \"airByDate\": false, \"subtitlesEnabled\": false, \"dvdOrder\": false, \"seasonFolders\": true, \"anime\": false, \"scene\": false, \"sports\": false, \"templates\": false, \"defaultEpisodeStatus\": \"Wanted\", \"aliases\": [], \"release\": {\"ignoredWords\": [], \"requiredWords\r\n```\r\n\r\nHowever, using the generated token taken from the log does not work:\r\n```\r\ncurl -H \"authorization': 'Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJNZWR1c2EgMS4wLjMiLCJpYXQiOjE2NTY3MDQzNDUsImp0aSI6IkRJU2VPUjN5UnhxZm96UlRYaG9YIiwiZXhwIjoxNjU2NzkwNzQ1LCJ1c2VybmFtZSI6Im1lZHVzYSIsImFwaUtleSI6IjgwZjhjNDJiNTM0YjNhYjFkMzAzMmEwN2U4YjJmYzNiIn0.37trJnleOVZxvklAVdFnH4Nr200vMp6QPKMoakPiKvI'\" http://IP.ADDR:8081/api/v2/series?limit=1000\r\n```\r\n\r\nResults:\r\n```\r\n{\"error\": \"No authorization token.\"}\r\n```\r\n\r\nThis is my first time using the medusa plugin, as I was using the sickbeard before and could not get it to work now either.\r\n\r\nConfiguration is the following:\r\n```\r\n> tv: \r\n configure_series: \r\n from: \r\n medusa: \r\n base_url: 'http://{? medusa.ip ?}' \r\n port: '{? medusa.port ?}' \r\n #api_key: '{? medusa.api_key ?}' \r\n username: medusa \r\n password: medusa \r\n include_ended: false \r\n only_monitored: true \r\n #include_data: true\r\n```\r\nlogs:\r\n```\r\n\r\n2022-07-01 19:39:06 DEBUG task get_entry_tv executing get_entry_tv\r\n2022-07-01 19:39:06 DEBUG template get_entry_tv Merging template tv into task get_entry_tv\r\n2022-07-01 19:39:06 DEBUG template get_entry_tv Merging template torrents into task get_entry_tv\r\n2022-07-01 19:39:06 DEBUG utils.requests get_entry_tv POSTing URL http://IP.ADDR:8081/api/v2/authenticate with args () and kwargs {'data': None, 'json': {'username': 'medusa', 'password': 'medusa'}, 'timeout': 30}\r\n2022-07-01 19:39:06 DEBUG utils.requests get_entry_tv GETing URL http://IP.ADDR:8081/api/v2/series with args () and kwargs {'params': {'limit': 1000}, 'headers': {'authorization': 'Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJNZWR1c2EgMS4wLjMiLCJpYXQiOjE2NTY3MDQzNDUsImp0aSI6IkRJU2VPUjN5UnhxZm96UlRYaG9YIiwiZXhwIjoxNjU2NzkwNzQ1LCJ1c2VybmFtZSI6Im1lZHVzYSIsImFwaUtleSI6IjgwZjhjNDJiNTM0YjNhYjFkMzAzMmEwN2U4YjJmYzNiIn0.37trJnleOVZxvklAVdFnH4Nr200vMp6QPKMoakPiKvI'}, 'allow_redirects': True, 'timeout': 30}\r\n2022-07-01 19:39:06 CRITICAL task get_entry_tv BUG: Unhandled error in plugin configure_series: 401 Client Error: Unauthorized for url: http://IP.ADDR:8081/api/v2/series?limit=1000\r\nTraceback (most recent call last):\r\n\r\n File \"/usr/lib/python3.9/threading.py\", line 912, in _bootstrap\r\n self._bootstrap_inner()\r\n \u2502 \u2514 <function Thread._bootstrap_inner at 0x7fbacc7513a0>\r\n \u2514 <Thread(task_queue, started daemon 140440167495424)>\r\n File \"/usr/lib/python3.9/threading.py\", line 954, in _bootstrap_inner\r\n self.run()\r\n \u2502 \u2514 <function Thread.run at 0x7fbacc7510d0>\r\n \u2514 <Thread(task_queue, started daemon 140440167495424)>\r\n File \"/usr/lib/python3.9/threading.py\", line 892, in run\r\n self._target(*self._args, **self._kwargs)\r\n \u2502 \u2502 \u2502 \u2502 \u2502 \u2514 {}\r\n \u2502 \u2502 \u2502 \u2502 \u2514 <Thread(task_queue, started daemon 140440167495424)>\r\n \u2502 \u2502 \u2502 \u2514 ()\r\n \u2502 \u2502 \u2514 
<Thread(task_queue, started daemon 140440167495424)>\r\n \u2502 \u2514 <bound method TaskQueue.run of <flexget.task_queue.TaskQueue object at 0x7fbac679afa0>>\r\n \u2514 <Thread(task_queue, started daemon 140440167495424)>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/task_queue.py\", line 47, in run\r\n self.current_task.execute()\r\n \u2502 \u2502 \u2514 <function Task.execute at 0x7fbac95a7e50>\r\n \u2502 \u2514 <flexget.task.Task object at 0x7fbac64ccb20>\r\n \u2514 <flexget.task_queue.TaskQueue object at 0x7fbac679afa0>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/task.py\", line 87, in wrapper\r\n return func(self, *args, **kw)\r\n \u2502 \u2502 \u2502 \u2514 {}\r\n \u2502 \u2502 \u2514 ()\r\n \u2502 \u2514 <flexget.task.Task object at 0x7fbac64ccb20>\r\n \u2514 <function Task.execute at 0x7fbac95a7dc0>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/task.py\", line 727, in execute\r\n self._execute()\r\n \u2502 \u2514 <function Task._execute at 0x7fbac95a7d30>\r\n \u2514 <flexget.task.Task object at 0x7fbac64ccb20>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/task.py\", line 696, in _execute\r\n self.__run_task_phase(phase)\r\n \u2502 \u2514 'prepare'\r\n \u2514 <flexget.task.Task object at 0x7fbac64ccb20>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/task.py\", line 514, in __run_task_phase\r\n response = self.__run_plugin(plugin, phase, args)\r\n \u2502 \u2502 \u2502 \u2514 (<flexget.task.Task object at 0x7fbac64ccb20>, {'from': {'medusa': {'base_url': 'http://IP.ADDR', 'port': 8081, 'username':...\r\n \u2502 \u2502 \u2514 'prepare'\r\n \u2502 \u2514 <PluginInfo(name=configure_series)>\r\n \u2514 <flexget.task.Task object at 0x7fbac64ccb20>\r\n\r\n> File \"/usr/local/lib/python3.9/dist-packages/flexget/task.py\", line 547, in __run_plugin\r\n result = method(*args, **kwargs)\r\n \u2502 \u2502 \u2514 {}\r\n \u2502 \u2514 (<flexget.task.Task object at 0x7fbac64ccb20>, {'from': {'medusa': {'base_url': 'http://IP.ADDR', 'port': 8081, 'username':...\r\n \u2514 <Event(name=plugin.configure_series.prepare,func=on_task_prepare,priority=128)>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/event.py\", line 20, in __call__\r\n return self.func(*args, **kwargs)\r\n \u2502 \u2502 \u2502 \u2514 {}\r\n \u2502 \u2502 \u2514 (<flexget.task.Task object at 0x7fbac64ccb20>, {'from': {'medusa': {'base_url': 'http://IP.ADDR', 'port': 8081, 'username':...\r\n \u2502 \u2514 <bound method ConfigureSeries.on_task_prepare of <flexget.components.series.configure_series.ConfigureSeries object at 0x7fba...\r\n \u2514 <Event(name=plugin.configure_series.prepare,func=on_task_prepare,priority=128)>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/components/series/configure_series.py\", line 53, in on_task_prepare\r\n result = method(task, input_config)\r\n \u2502 \u2502 \u2514 {'base_url': 'http://IP.ADDR', 'port': 8081, 'username': 'medusa', 'password': 'medusa', 'include_ended': False, 'only_moni...\r\n \u2502 \u2514 <flexget.task.Task object at 0x7fbac64ccb20>\r\n \u2514 <Event(name=plugin.medusa.input,func=on_task_input,priority=128)>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/event.py\", line 20, in __call__\r\n return self.func(*args, **kwargs)\r\n \u2502 \u2502 \u2502 \u2514 {}\r\n \u2502 \u2502 \u2514 (<flexget.task.Task object at 0x7fbac64ccb20>, {'base_url': 'http://IP.ADDR', 'port': 8081, 'username': 'medusa', 'password...\r\n \u2502 \u2514 <bound method Medusa.on_task_input 
of <flexget.plugins.input.medusa.Medusa object at 0x7fbac68670a0>>\r\n \u2514 <Event(name=plugin.medusa.input,func=on_task_input,priority=128)>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/plugins/input/medusa.py\", line 80, in on_task_input\r\n series = task.requests.get(\r\n \u2502 \u2502 \u2514 <function Session.get at 0x7fbacb4800d0>\r\n \u2502 \u2514 <flexget.utils.requests.Session object at 0x7fbac64cca30>\r\n \u2514 <flexget.task.Task object at 0x7fbac64ccb20>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/requests/sessions.py\", line 542, in get\r\n return self.request('GET', url, **kwargs)\r\n \u2502 \u2502 \u2502 \u2514 {'params': {'limit': 1000}, 'headers': {'authorization': 'Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJNZWR1c2EgMS4...\r\n \u2502 \u2502 \u2514 'http://IP.ADDR:8081/api/v2/series'\r\n \u2502 \u2514 <function Session.request at 0x7fbac95f9820>\r\n \u2514 <flexget.utils.requests.Session object at 0x7fbac64cca30>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/utils/requests.py\", line 271, in request\r\n result.raise_for_status()\r\n \u2502 \u2514 <function Response.raise_for_status at 0x7fbacb46c700>\r\n \u2514 <Response [401]>\r\n\r\n File \"/usr/local/lib/python3.9/dist-packages/requests/models.py\", line 960, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\n \u2502 \u2502 \u2514 <Response [401]>\r\n \u2502 \u2514 '401 Client Error: Unauthorized for url: http://IP.ADDR:8081/api/v2/series?limit=1000'\r\n \u2514 <class 'requests.exceptions.HTTPError'>\r\n\r\nrequests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: http://IP.ADDR:8081/api/v2/series?limit=1000\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/task.py\", line 547, in __run_plugin\r\n result = method(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/event.py\", line 20, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/components/series/configure_series.py\", line 53, in on_task_prepare\r\n result = method(task, input_config)\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/event.py\", line 20, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/plugins/input/medusa.py\", line 80, in on_task_input\r\n series = task.requests.get(\r\n File \"/usr/local/lib/python3.9/dist-packages/requests/sessions.py\", line 542, in get\r\n return self.request('GET', url, **kwargs)\r\n File \"/usr/local/lib/python3.9/dist-packages/flexget/utils/requests.py\", line 271, in request\r\n result.raise_for_status()\r\n File \"/usr/local/lib/python3.9/dist-packages/requests/models.py\", line 960, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: http://IP.ADDR:8081/api/v2/series?limit=1000\r\nroot@flexget2:~# \r\n```\r\n\r\n\r\n_Originally posted by @hideYourPretzels in https://github.com/Flexget/Flexget/discussions/3420#discussioncomment-3066652_\n", "before_files": [{"content": "from urllib.parse import urlparse\n\nfrom loguru import logger\n\nfrom flexget import plugin\nfrom flexget.entry import Entry\nfrom flexget.event import event\n\nlogger = logger.bind(name='medusa')\n\n\nclass Medusa:\n schema = {\n 'type': 'object',\n 'properties': {\n 'base_url': {'type': 'string', 'format': 'uri'},\n 'port': {'type': 'number', 'default': 8081},\n 'username': 
{'type': 'string'},\n 'password': {'type': 'string'},\n 'only_monitored': {'type': 'boolean', 'default': False},\n 'include_ended': {'type': 'boolean', 'default': False},\n },\n 'required': ['username', 'password', 'base_url'],\n 'additionalProperties': False,\n }\n\n def on_task_input(self, task, config):\n \"\"\"\n This plugin returns ALL of the shows monitored by Medusa.\n This includes both ongoing and ended.\n Syntax:\n\n medusa:\n base_url=<value>\n port=<value>\n username=<value>\n password=<value>\n\n Options base_url, username and password are required.\n\n Use with input plugin like discover and/or configure_series.\n Example:\n\n download-tv-task:\n configure_series:\n from:\n medusa:\n base_url: http://localhost\n port: 8531\n username: USERNAME\n password: PASSWORD\n discover:\n what:\n - next_series_episodes: yes\n from:\n torrentz: any\n download:\n /download/tv\n\n Note that when using the configure_series plugin with Medusa\n you are basically synced to it, so removing a show in Medusa will\n remove it in flexget as well, which could be positive or negative,\n depending on your usage.\n \"\"\"\n parsed_url = urlparse(config.get('base_url'))\n base_url = '{scheme}://{url}:{port}/api/v2'.format(\n scheme=parsed_url.scheme, url=parsed_url.netloc, port=config.get('port')\n )\n\n body_auth = dict(username=config.get('username'), password=config.get('password'))\n\n api_key = task.requests.post('{}/authenticate'.format(base_url), json=body_auth).json()[\n 'token'\n ]\n\n headers = {'authorization': 'Bearer ' + api_key}\n\n params = {'limit': 1000}\n\n series = task.requests.get(\n '{}/series'.format(base_url), params=params, headers=headers\n ).json()\n\n entries = []\n for show in series:\n logger.debug('processing show: {}', show)\n if (\n (show['config']['paused'] and config.get('only_monitored'))\n or show['status'] == 'Ended'\n and not config.get('include_ended')\n ):\n logger.debug('discarted show: {}', show)\n\n entry = Entry(title=show['title'], url='', series_name=show['title'])\n\n if entry.isvalid():\n entries.append(entry)\n else:\n logger.error('Invalid entry created? 
{}'.format(entry))\n\n return entries\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(Medusa, 'medusa', api_ver=2)\n", "path": "flexget/plugins/input/medusa.py"}], "after_files": [{"content": "from urllib.parse import urlparse\n\nfrom loguru import logger\n\nfrom flexget import plugin\nfrom flexget.entry import Entry\nfrom flexget.event import event\n\nlogger = logger.bind(name='medusa')\n\n\nclass Medusa:\n schema = {\n 'type': 'object',\n 'properties': {\n 'base_url': {'type': 'string', 'format': 'uri'},\n 'port': {'type': 'number', 'default': 8081},\n 'username': {'type': 'string'},\n 'password': {'type': 'string'},\n 'only_monitored': {'type': 'boolean', 'default': False},\n 'include_ended': {'type': 'boolean', 'default': False},\n },\n 'required': ['username', 'password', 'base_url'],\n 'additionalProperties': False,\n }\n\n def on_task_input(self, task, config):\n \"\"\"\n This plugin returns ALL of the shows monitored by Medusa.\n This includes both ongoing and ended.\n Syntax:\n\n medusa:\n base_url=<value>\n port=<value>\n username=<value>\n password=<value>\n\n Options base_url, username and password are required.\n\n Use with input plugin like discover and/or configure_series.\n Example:\n\n download-tv-task:\n configure_series:\n from:\n medusa:\n base_url: http://localhost\n port: 8531\n username: USERNAME\n password: PASSWORD\n discover:\n what:\n - next_series_episodes: yes\n from:\n torrentz: any\n download:\n /download/tv\n\n Note that when using the configure_series plugin with Medusa\n you are basically synced to it, so removing a show in Medusa will\n remove it in flexget as well, which could be positive or negative,\n depending on your usage.\n \"\"\"\n parsed_url = urlparse(config.get('base_url'))\n base_url = '{scheme}://{url}:{port}/api/v2'.format(\n scheme=parsed_url.scheme, url=parsed_url.netloc, port=config.get('port')\n )\n\n body_auth = dict(username=config.get('username'), password=config.get('password'))\n\n api_key = task.requests.post('{}/authenticate'.format(base_url), json=body_auth).json()[\n 'token'\n ]\n\n headers = {'x-auth': 'Bearer ' + api_key}\n\n params = {'limit': 1000}\n\n series = task.requests.get(\n '{}/series'.format(base_url), params=params, headers=headers\n ).json()\n\n entries = []\n for show in series:\n logger.debug('processing show: {}', show)\n if (\n (show['config']['paused'] and config.get('only_monitored'))\n or show['status'] == 'Ended'\n and not config.get('include_ended')\n ):\n logger.debug('discarted show: {}', show)\n\n entry = Entry(title=show['title'], url='', series_name=show['title'])\n\n if entry.isvalid():\n entries.append(entry)\n else:\n logger.error('Invalid entry created? {}'.format(entry))\n\n return entries\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(Medusa, 'medusa', api_ver=2)\n", "path": "flexget/plugins/input/medusa.py"}]} |
gh_patches_debug_99 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-907 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
locale.Error: unsupported locale setting exception when glibc locale files are not present
**Information**
- Solaar version: 1.0.3
- Distribution: Fedora
- Kernel version (ex. `uname -srmo`): `Linux 5.7.11-200.fc32.x86_64 x86_64 GNU/Linux`
- Output of `solaar show`: N/A
**Describe the bug**
Any solaar invocation is failing with a traceback when locale.setlocale() call fails, e.g. due to missing glibc locale files for the currently set locale.
**To Reproduce**
Steps to reproduce the behavior:
```
$ sudo dnf remove glibc-langpack-de
$ export LC_ALL=de_CH.UTF-8
$ export LANG=de_CH.UTF-8
$ solaar --help
Traceback (most recent call last):
File "/usr/bin/solaar", line 59, in <module>
import solaar.gtk
File "/usr/lib/python3.8/site-packages/solaar/gtk.py", line 29, in <module>
import solaar.i18n as _i18n
File "/usr/lib/python3.8/site-packages/solaar/i18n.py", line 50, in <module>
locale.setlocale(locale.LC_ALL, '')
File "/usr/lib64/python3.8/locale.py", line 608, in setlocale
return _setlocale(category, locale)
locale.Error: unsupported locale setting
$
```
**Additional context**
Looks like #190 is still unfixed. Downstream bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1811313 .
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/solaar/i18n.py`
Content:
```
1 # -*- python-mode -*-
2 # -*- coding: UTF-8 -*-
3
4 ## Copyright (C) 2012-2013 Daniel Pavel
5 ##
6 ## This program is free software; you can redistribute it and/or modify
7 ## it under the terms of the GNU General Public License as published by
8 ## the Free Software Foundation; either version 2 of the License, or
9 ## (at your option) any later version.
10 ##
11 ## This program is distributed in the hope that it will be useful,
12 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
13 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 ## GNU General Public License for more details.
15 ##
16 ## You should have received a copy of the GNU General Public License along
17 ## with this program; if not, write to the Free Software Foundation, Inc.,
18 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
19
20 from __future__ import absolute_import, division, print_function, unicode_literals
21
22 import gettext as _gettext
23 import locale
24
25 from solaar import NAME as _NAME
26
27 #
28 #
29 #
30
31
32 def _find_locale_path(lc_domain):
33 import os.path as _path
34
35 import sys as _sys
36 prefix_share = _path.normpath(_path.join(_path.realpath(_sys.path[0]), '..'))
37 src_share = _path.normpath(_path.join(_path.realpath(_sys.path[0]), '..', 'share'))
38 del _sys
39
40 from glob import glob as _glob
41
42 for location in prefix_share, src_share:
43 mo_files = _glob(_path.join(location, 'locale', '*', 'LC_MESSAGES', lc_domain + '.mo'))
44 if mo_files:
45 return _path.join(location, 'locale')
46
47 # del _path
48
49
50 locale.setlocale(locale.LC_ALL, '')
51 language, encoding = locale.getlocale()
52 del locale
53
54 _LOCALE_DOMAIN = _NAME.lower()
55 path = _find_locale_path(_LOCALE_DOMAIN)
56
57 _gettext.bindtextdomain(_LOCALE_DOMAIN, path)
58 _gettext.textdomain(_LOCALE_DOMAIN)
59 _gettext.install(_LOCALE_DOMAIN)
60
61 try:
62 unicode # noqa: F821
63 _ = lambda x: _gettext.gettext(x).decode('UTF-8')
64 ngettext = lambda *x: _gettext.ngettext(*x).decode('UTF-8')
65 except Exception:
66 _ = _gettext.gettext
67 ngettext = _gettext.ngettext
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/solaar/i18n.py b/lib/solaar/i18n.py
--- a/lib/solaar/i18n.py
+++ b/lib/solaar/i18n.py
@@ -47,7 +47,11 @@
# del _path
-locale.setlocale(locale.LC_ALL, '')
+try:
+ locale.setlocale(locale.LC_ALL, '')
+except Exception:
+ pass
+
language, encoding = locale.getlocale()
del locale
| {"golden_diff": "diff --git a/lib/solaar/i18n.py b/lib/solaar/i18n.py\n--- a/lib/solaar/i18n.py\n+++ b/lib/solaar/i18n.py\n@@ -47,7 +47,11 @@\n # del _path\n \n \n-locale.setlocale(locale.LC_ALL, '')\n+try:\n+ locale.setlocale(locale.LC_ALL, '')\n+except Exception:\n+ pass\n+\n language, encoding = locale.getlocale()\n del locale\n", "issue": "locale.Error: unsupported locale setting exception when glibc locale files are not present\n**Information**\r\n- Solaar version: 1.0.3\r\n- Distribution: Fedora\r\n- Kernel version (ex. `uname -srmo`): `Linux 5.7.11-200.fc32.x86_64 x86_64 GNU/Linux`\r\n- Output of `solaar show`: N/A\r\n\r\n**Describe the bug**\r\nAny solaar invocation is failing with a traceback when locale.setlocale() call fails, e.g. due to missing glibc locale files for the currently set locale.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n```\r\n$ sudo dnf remove glibc-langpack-de\r\n$ export LC_ALL=de_CH.UTF-8\r\n$ export LANG=de_CH.UTF-8\r\n$ solaar --help\r\nTraceback (most recent call last):\r\n File \"/usr/bin/solaar\", line 59, in <module>\r\n import solaar.gtk\r\n File \"/usr/lib/python3.8/site-packages/solaar/gtk.py\", line 29, in <module>\r\n import solaar.i18n as _i18n\r\n File \"/usr/lib/python3.8/site-packages/solaar/i18n.py\", line 50, in <module>\r\n locale.setlocale(locale.LC_ALL, '')\r\n File \"/usr/lib64/python3.8/locale.py\", line 608, in setlocale\r\n return _setlocale(category, locale)\r\nlocale.Error: unsupported locale setting\r\n$ \r\n```\r\n\r\n**Additional context**\r\nLooks like #190 is still unfixed. Downstream bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1811313 .\n", "before_files": [{"content": "# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport gettext as _gettext\nimport locale\n\nfrom solaar import NAME as _NAME\n\n#\n#\n#\n\n\ndef _find_locale_path(lc_domain):\n import os.path as _path\n\n import sys as _sys\n prefix_share = _path.normpath(_path.join(_path.realpath(_sys.path[0]), '..'))\n src_share = _path.normpath(_path.join(_path.realpath(_sys.path[0]), '..', 'share'))\n del _sys\n\n from glob import glob as _glob\n\n for location in prefix_share, src_share:\n mo_files = _glob(_path.join(location, 'locale', '*', 'LC_MESSAGES', lc_domain + '.mo'))\n if mo_files:\n return _path.join(location, 'locale')\n\n # del _path\n\n\nlocale.setlocale(locale.LC_ALL, '')\nlanguage, encoding = locale.getlocale()\ndel locale\n\n_LOCALE_DOMAIN = _NAME.lower()\npath = _find_locale_path(_LOCALE_DOMAIN)\n\n_gettext.bindtextdomain(_LOCALE_DOMAIN, path)\n_gettext.textdomain(_LOCALE_DOMAIN)\n_gettext.install(_LOCALE_DOMAIN)\n\ntry:\n unicode # noqa: F821\n _ = lambda x: _gettext.gettext(x).decode('UTF-8')\n ngettext = lambda *x: _gettext.ngettext(*x).decode('UTF-8')\nexcept Exception:\n _ = _gettext.gettext\n ngettext = _gettext.ngettext\n", "path": "lib/solaar/i18n.py"}], "after_files": [{"content": "# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport gettext as _gettext\nimport locale\n\nfrom solaar import NAME as _NAME\n\n#\n#\n#\n\n\ndef _find_locale_path(lc_domain):\n import os.path as _path\n\n import sys as _sys\n prefix_share = _path.normpath(_path.join(_path.realpath(_sys.path[0]), '..'))\n src_share = _path.normpath(_path.join(_path.realpath(_sys.path[0]), '..', 'share'))\n del _sys\n\n from glob import glob as _glob\n\n for location in prefix_share, src_share:\n mo_files = _glob(_path.join(location, 'locale', '*', 'LC_MESSAGES', lc_domain + '.mo'))\n if mo_files:\n return _path.join(location, 'locale')\n\n # del _path\n\n\ntry:\n locale.setlocale(locale.LC_ALL, '')\nexcept Exception:\n pass\n\nlanguage, encoding = locale.getlocale()\ndel locale\n\n_LOCALE_DOMAIN = _NAME.lower()\npath = _find_locale_path(_LOCALE_DOMAIN)\n\n_gettext.bindtextdomain(_LOCALE_DOMAIN, path)\n_gettext.textdomain(_LOCALE_DOMAIN)\n_gettext.install(_LOCALE_DOMAIN)\n\ntry:\n unicode # noqa: F821\n _ = lambda x: _gettext.gettext(x).decode('UTF-8')\n ngettext = lambda *x: _gettext.ngettext(*x).decode('UTF-8')\nexcept Exception:\n _ = _gettext.gettext\n ngettext = _gettext.ngettext\n", "path": "lib/solaar/i18n.py"}]} |