| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.1k-10.2k | stringlengths 151-4.94k | stringlengths 582-21k | int64 271-2.05k | int64 47-1.02k |
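
Each record below is one row of this table, printed with its multi-line cells expanded. As a rough orientation, here is a minimal sketch of how a dump with this schema could be loaded and inspected with the Hugging Face `datasets` library; the dataset ID (taken from the `source` column) and the `train` split name are assumptions, not something this dump confirms.

```python
import json

from datasets import load_dataset

# Dataset ID guessed from the `source` column; the split name is also a guess.
ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"], row["num_tokens"])

# `verification_info` is a JSON string bundling what a checker needs: the
# golden diff, the issue text, and the before/after file contents.
info = json.loads(row["verification_info"])
print(sorted(info))  # ['after_files', 'before_files', 'golden_diff', 'issue']
```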
gh_patches_debug_5289 | rasdani/github-patches | git_diff | freedomofpress__securedrop-7074 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release SecureDrop 2.7.0
This is a tracking issue for the release of SecureDrop 2.7.0
Tentatively scheduled as follows:
**Pre-release announcement:** ~10-19-2023~10-26-2023
**Release date:** ~10-26-2023~11-07-2023
**Release manager:** @zenmonkeykstop
**Deputy release manager:** @cfm
**Localization manager:** @cfm
**Communications manager:** @nathandyer
_SecureDrop maintainers and testers:_ As you QA 2.7.0, please report back your testing results as comments on this ticket. File GitHub issues for any problems found, tag them "QA: Release".
Test debian packages will be posted on https://apt-test.freedom.press signed with [the test key](https://gist.githubusercontent.com/conorsch/ec4008b111bc3142fca522693f3cce7e/raw/2968621e8ad92db4505a31fcc5776422d7d26729/apt-test%2520apt%2520pubkey).
# [QA Matrix for 2.7.0](https://docs.google.com/spreadsheets/d/1Q_dOST_87ezW3L1LO-u7Nl18h6bQsYUbQHYPGQbG448/edit#gid=361662860)
# [Test Plan for 2.7.0](https://github.com/freedomofpress/securedrop/wiki/2.7.0-Test-Plan)
# Prepare release candidate (2.7.0~rc1)
- [ ] Link to latest version of Tails, including release candidates, to test against during QA
- [x] Prepare 2.7.0~rc1 release changelog
- [x] Branch off release/2.7.0 from develop
- [x] Prepare 2.7.0
- [ ] Build debs, preserving build log, and put up `2.7.0~rc1` on test apt server
- [ ] Commit build log.
# Prepare release candidate (2.7.0~rc2)
- [x] Prepare 2.7.0~rc2 release changelog
- [x] Branch off release/2.7.0 from develop
- [x] Prepare 2.7.0
- [x] Build debs, preserving build log, and put up `2.7.0~rc2` on test apt server
- [x] Commit build log.
# Prepare release candidate (2.7.0~rc3)
- [x] Prepare 2.7.0~rc3 release changelog
- [x] Branch off release/2.7.0 from develop
- [x] Prepare 2.7.0
- [x] Build debs, preserving build log, and put up `2.7.0~rc3` on test apt server
- [x] Commit build log.
# Prepare release candidate (2.7.0~rc4)
- [x] Prepare 2.7.0~rc4 release changelog
- [x] Branch off release/2.7.0 from develop
- [x] Prepare 2.7.0-rc4
- [x] Build debs, preserving build log, and put up `2.7.0~rc4` on test apt server
- [x] Commit build log.
After each test, please update the QA matrix and post details for Basic Server Testing, Application Acceptance Testing and release-specific testing below in comments to this ticket.
# Final release
- [x] Ensure builder in release branch is updated and/or update builder image
- [x] Push signed tag
- [x] Pre-Flight: Test updater logic in Tails (apt-qa tracks the `release` branch in the LFS repo)
- [x] Build final Debian packages(and preserve build log)
- [x] Commit package build log to https://github.com/freedomofpress/build-logs
- [x] Pre-Flight: Test that install and upgrade from 2.6.1 to 2.7.0 works w/ prod repo debs (apt-qa.freedom.press polls the `release` branch in the LFS repo for the debs)
- [x] Flip apt QA server to prod status (merge to `main` in the LFS repo)
- [ ] Merge Docs branch changes to ``main`` and verify new docs build in securedrop-docs repo
- [x] Prepare release messaging
# Post release
- [x] Create GitHub release object
- [x] Once release object is created, update versions in `securedrop-docs` and Wagtail
- [x] Verify new docs show up on https://docs.securedrop.org
- [x] Publish announcements
- [ ] Merge changelog back to `develop`
- [ ] Update roadmap wiki page: https://github.com/freedomofpress/securedrop/wiki/Development-Roadmap
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/version.py`
Content:
```
1 __version__ = "2.7.0~rc1"
2
```
Path: `securedrop/setup.py`
Content:
```
1 import setuptools
2
3 long_description = "The SecureDrop whistleblower platform."
4
5 setuptools.setup(
6 name="securedrop-app-code",
7 version="2.7.0~rc1",
8 author="Freedom of the Press Foundation",
9 author_email="[email protected]",
10 description="SecureDrop Server",
11 long_description=long_description,
12 long_description_content_type="text/markdown",
13 license="AGPLv3+",
14 python_requires=">=3.8",
15 url="https://github.com/freedomofpress/securedrop",
16 classifiers=[
17 "Development Status :: 5 - Stable",
18 "Programming Language :: Python :: 3",
19 "Topic :: Software Development :: Libraries :: Python Modules",
20 "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
21 "Intended Audience :: Developers",
22 "Operating System :: OS Independent",
23 ],
24 )
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/setup.py b/securedrop/setup.py
--- a/securedrop/setup.py
+++ b/securedrop/setup.py
@@ -4,7 +4,7 @@
setuptools.setup(
name="securedrop-app-code",
- version="2.7.0~rc1",
+ version="2.8.0~rc1",
author="Freedom of the Press Foundation",
author_email="[email protected]",
description="SecureDrop Server",
diff --git a/securedrop/version.py b/securedrop/version.py
--- a/securedrop/version.py
+++ b/securedrop/version.py
@@ -1 +1 @@
-__version__ = "2.7.0~rc1"
+__version__ = "2.8.0~rc1"
| {"golden_diff": "diff --git a/securedrop/setup.py b/securedrop/setup.py\n--- a/securedrop/setup.py\n+++ b/securedrop/setup.py\n@@ -4,7 +4,7 @@\n \n setuptools.setup(\n name=\"securedrop-app-code\",\n- version=\"2.7.0~rc1\",\n+ version=\"2.8.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\ndiff --git a/securedrop/version.py b/securedrop/version.py\n--- a/securedrop/version.py\n+++ b/securedrop/version.py\n@@ -1 +1 @@\n-__version__ = \"2.7.0~rc1\"\n+__version__ = \"2.8.0~rc1\"\n", "issue": "Release SecureDrop 2.7.0\nThis is a tracking issue for the release of SecureDrop 2.7.0\r\n\r\nTentatively scheduled as follows:\r\n\r\n**Pre-release announcement:** ~10-19-2023~10-26-2023\r\n**Release date:** ~10-26-2023~11-07-2023\r\n\r\n**Release manager:** @zenmonkeykstop \r\n**Deputy release manager:** @cfm\r\n**Localization manager:** @cfm\r\n**Communications manager:** @nathandyer \r\n\r\n_SecureDrop maintainers and testers:_ As you QA 2.7.0, please report back your testing results as comments on this ticket. File GitHub issues for any problems found, tag them \"QA: Release\".\r\n\r\nTest debian packages will be posted on https://apt-test.freedom.press signed with [the test key](https://gist.githubusercontent.com/conorsch/ec4008b111bc3142fca522693f3cce7e/raw/2968621e8ad92db4505a31fcc5776422d7d26729/apt-test%2520apt%2520pubkey).\r\n\r\n# [QA Matrix for 2.7.0](https://docs.google.com/spreadsheets/d/1Q_dOST_87ezW3L1LO-u7Nl18h6bQsYUbQHYPGQbG448/edit#gid=361662860)\r\n# [Test Plan for 2.7.0](https://github.com/freedomofpress/securedrop/wiki/2.7.0-Test-Plan)\r\n\r\n# Prepare release candidate (2.7.0~rc1)\r\n- [ ] Link to latest version of Tails, including release candidates, to test against during QA\r\n- [x] Prepare 2.7.0~rc1 release changelog\r\n- [x] Branch off release/2.7.0 from develop\r\n- [x] Prepare 2.7.0\r\n- [ ] Build debs, preserving build log, and put up `2.7.0~rc1` on test apt server\r\n- [ ] Commit build log.\r\n\r\n# Prepare release candidate (2.7.0~rc2)\r\n- [x] Prepare 2.7.0~rc2 release changelog\r\n- [x] Branch off release/2.7.0 from develop\r\n- [x] Prepare 2.7.0\r\n- [x] Build debs, preserving build log, and put up `2.7.0~rc2` on test apt server\r\n- [x] Commit build log.\r\n\r\n\r\n# Prepare release candidate (2.7.0~rc3)\r\n- [x] Prepare 2.7.0~rc3 release changelog\r\n- [x] Branch off release/2.7.0 from develop\r\n- [x] Prepare 2.7.0\r\n- [x] Build debs, preserving build log, and put up `2.7.0~rc3` on test apt server\r\n- [x] Commit build log.\r\n\r\n# Prepare release candidate (2.7.0~rc4)\r\n- [x] Prepare 2.7.0~rc4 release changelog\r\n- [x] Branch off release/2.7.0 from develop\r\n- [x] Prepare 2.7.0-rc4\r\n- [x] Build debs, preserving build log, and put up `2.7.0~rc4` on test apt server\r\n- [x] Commit build log.\r\n\r\n\r\nAfter each test, please update the QA matrix and post details for Basic Server Testing, Application Acceptance Testing and release-specific testing below in comments to this ticket.\r\n\r\n# Final release\r\n- [x] Ensure builder in release branch is updated and/or update builder image\r\n- [x] Push signed tag \r\n- [x] Pre-Flight: Test updater logic in Tails (apt-qa tracks the `release` branch in the LFS repo)\r\n- [x] Build final Debian packages(and preserve build log)\r\n- [x] Commit package build log to https://github.com/freedomofpress/build-logs\r\n- [x] Pre-Flight: Test that install and upgrade from 2.6.1 to 2.7.0 works w/ prod repo debs (apt-qa.freedom.press polls the 
`release` branch in the LFS repo for the debs)\r\n- [x] Flip apt QA server to prod status (merge to `main` in the LFS repo)\r\n- [ ] Merge Docs branch changes to ``main`` and verify new docs build in securedrop-docs repo\r\n- [x] Prepare release messaging\r\n\r\n# Post release\r\n- [x] Create GitHub release object \r\n- [x] Once release object is created, update versions in `securedrop-docs` and Wagtail\r\n- [x] Verify new docs show up on https://docs.securedrop.org\r\n- [x] Publish announcements\r\n- [ ] Merge changelog back to `develop`\r\n- [ ] Update roadmap wiki page: https://github.com/freedomofpress/securedrop/wiki/Development-Roadmap\n", "before_files": [{"content": "__version__ = \"2.7.0~rc1\"\n", "path": "securedrop/version.py"}, {"content": "import setuptools\n\nlong_description = \"The SecureDrop whistleblower platform.\"\n\nsetuptools.setup(\n name=\"securedrop-app-code\",\n version=\"2.7.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license=\"AGPLv3+\",\n python_requires=\">=3.8\",\n url=\"https://github.com/freedomofpress/securedrop\",\n classifiers=[\n \"Development Status :: 5 - Stable\",\n \"Programming Language :: Python :: 3\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Operating System :: OS Independent\",\n ],\n)\n", "path": "securedrop/setup.py"}], "after_files": [{"content": "__version__ = \"2.8.0~rc1\"\n", "path": "securedrop/version.py"}, {"content": "import setuptools\n\nlong_description = \"The SecureDrop whistleblower platform.\"\n\nsetuptools.setup(\n name=\"securedrop-app-code\",\n version=\"2.8.0~rc1\",\n author=\"Freedom of the Press Foundation\",\n author_email=\"[email protected]\",\n description=\"SecureDrop Server\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n license=\"AGPLv3+\",\n python_requires=\">=3.8\",\n url=\"https://github.com/freedomofpress/securedrop\",\n classifiers=[\n \"Development Status :: 5 - Stable\",\n \"Programming Language :: Python :: 3\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Operating System :: OS Independent\",\n ],\n)\n", "path": "securedrop/setup.py"}]} | 1,668 | 175 |
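
The `verification_info` column repeats the issue and the `before_files`/`after_files` snapshots in machine-readable form, so a candidate patch can be checked without re-parsing the prompt text. Below is a hypothetical verification sketch; the function name, the temporary-directory workflow, and the use of `git apply` are illustrative assumptions, not part of the dataset.

```python
import json
import pathlib
import subprocess
import tempfile

def patch_matches(verification_info: str, candidate_diff: str) -> bool:
    """Apply a candidate diff to `before_files` and compare with `after_files`."""
    info = json.loads(verification_info)
    with tempfile.TemporaryDirectory() as tmp:
        root = pathlib.Path(tmp)
        subprocess.run(["git", "init", "-q"], cwd=root, check=True)
        # Materialize the pre-patch snapshot of the repository.
        for f in info["before_files"]:
            target = root / f["path"]
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(f["content"])
        # A diff that does not apply cleanly cannot match the golden state.
        proc = subprocess.run(["git", "apply", "-"], cwd=root,
                              input=candidate_diff, text=True)
        if proc.returncode != 0:
            return False
        return all((root / f["path"]).read_text() == f["content"]
                   for f in info["after_files"])
```

Whether the original harness compares files byte-for-byte or tolerates whitespace differences is not visible in this dump; exact comparison is the simplest assumption.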
gh_patches_debug_26784 | rasdani/github-patches | git_diff | mindee__doctr-1280 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
listing the detection and recognition models not working in pytorch
### Bug description
Hi,
This is the list of models not working in inference:
- detection models:
* db_resnet_34
* linknet_resnet18
* linknet_resnet34
* linknet_resnet50
for all these detection models in pytorch, the code inference works but it get to much segmentations/detected boxes.
- recognition models:
* sar_resnet31
* master
* vitstr_small
* vitstr_base
For all of these models in pytorch, the code inference seems to work, but it gets random character recognition. I think they are not correct loaded or trained.
### Code snippet to reproduce the bug
in doctr/demo/backend/pytorch.py file, changing this lines
```
DET_ARCHS = ["db_resnet50", "db_mobilenet_v3_large", "linknet_resnet50_rotation"]
RECO_ARCHS = ["crnn_vgg16_bn", "crnn_mobilenet_v3_small", "master", "sar_resnet31"]
```
by this lines
```
#DET_ARCHS = ["db_resnet50", "db_mobilenet_v3_large", "linknet_resnet50_rotation"]
DET_ARCHS = [ "db_resnet34",
"db_resnet50",
"db_mobilenet_v3_large",
"linknet_resnet18",
"linknet_resnet34",
"linknet_resnet50",
"db_resnet50_rotation"]
#RECO_ARCHS = ["crnn_vgg16_bn", "crnn_mobilenet_v3_small", "master", "sar_resnet31"]
RECO_ARCHS=[ "crnn_vgg16_bn",
"crnn_mobilenet_v3_small",
"crnn_mobilenet_v3_large",
"sar_resnet31",
"master",
"vitstr_small",
"vitstr_base"]
```
and running this code to try all the pytorch models
```
USE_TF=0 streamlit run demo/app.py
```
### Error traceback
not errors but bugs on models
### Environment
wget https://raw.githubusercontent.com/mindee/doctr/main/scripts/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
### Deep Learning backend
from doctr.file_utils import is_tf_available, is_torch_available
print(f"is_tf_available: {is_tf_available()}")
print(f"is_torch_available: {is_torch_available()}")
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `demo/backend/tensorflow.py`
Content:
```
1 # Copyright (C) 2021-2023, Mindee.
2
3 # This program is licensed under the Apache License 2.0.
4 # See LICENSE or go to <https://opensource.org/licenses/Apache-2.0> for full license details.
5
6 import numpy as np
7 import tensorflow as tf
8
9 from doctr.models import ocr_predictor
10 from doctr.models.predictor import OCRPredictor
11
12 DET_ARCHS = ["db_resnet50", "db_mobilenet_v3_large", "linknet_resnet18_rotation"]
13 RECO_ARCHS = ["crnn_vgg16_bn", "crnn_mobilenet_v3_small", "master", "sar_resnet31"]
14
15
16 def load_predictor(det_arch: str, reco_arch: str, device) -> OCRPredictor:
17 """
18 Args:
19 device is tf.device
20 """
21 with device:
22 predictor = ocr_predictor(
23 det_arch, reco_arch, pretrained=True, assume_straight_pages=("rotation" not in det_arch)
24 )
25 return predictor
26
27
28 def forward_image(predictor: OCRPredictor, image: np.ndarray, device) -> np.ndarray:
29 """
30 Args:
31 device is tf.device
32 """
33 with device:
34 processed_batches = predictor.det_predictor.pre_processor([image])
35 out = predictor.det_predictor.model(processed_batches[0], return_model_output=True)
36 seg_map = out["out_map"]
37
38 with tf.device("/cpu:0"):
39 seg_map = tf.identity(seg_map).numpy()
40
41 return seg_map
42
```
Path: `demo/backend/pytorch.py`
Content:
```
1 # Copyright (C) 2021-2023, Mindee.
2
3 # This program is licensed under the Apache License 2.0.
4 # See LICENSE or go to <https://opensource.org/licenses/Apache-2.0> for full license details.
5
6 import numpy as np
7 import torch
8
9 from doctr.models import ocr_predictor
10 from doctr.models.predictor import OCRPredictor
11
12 DET_ARCHS = ["db_resnet50", "db_mobilenet_v3_large", "linknet_resnet50_rotation"]
13 RECO_ARCHS = ["crnn_vgg16_bn", "crnn_mobilenet_v3_small", "master", "sar_resnet31"]
14
15
16 def load_predictor(det_arch: str, reco_arch: str, device) -> OCRPredictor:
17 """
18 Args:
19 device is torch.device
20 """
21 predictor = ocr_predictor(
22 det_arch, reco_arch, pretrained=True, assume_straight_pages=("rotation" not in det_arch)
23 ).to(device)
24 return predictor
25
26
27 def forward_image(predictor: OCRPredictor, image: np.ndarray, device) -> np.ndarray:
28 """
29 Args:
30 device is torch.device
31 """
32 with torch.no_grad():
33 processed_batches = predictor.det_predictor.pre_processor([image])
34 out = predictor.det_predictor.model(processed_batches[0].to(device), return_model_output=True)
35 seg_map = out["out_map"].to("cpu").numpy()
36
37 return seg_map
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/demo/backend/pytorch.py b/demo/backend/pytorch.py
--- a/demo/backend/pytorch.py
+++ b/demo/backend/pytorch.py
@@ -9,8 +9,25 @@
from doctr.models import ocr_predictor
from doctr.models.predictor import OCRPredictor
-DET_ARCHS = ["db_resnet50", "db_mobilenet_v3_large", "linknet_resnet50_rotation"]
-RECO_ARCHS = ["crnn_vgg16_bn", "crnn_mobilenet_v3_small", "master", "sar_resnet31"]
+DET_ARCHS = [
+ "db_resnet50",
+ "db_resnet34",
+ "db_mobilenet_v3_large",
+ "db_resnet50_rotation",
+ "linknet_resnet18",
+ "linknet_resnet34",
+ "linknet_resnet50",
+]
+RECO_ARCHS = [
+ "crnn_vgg16_bn",
+ "crnn_mobilenet_v3_small",
+ "crnn_mobilenet_v3_large",
+ "master",
+ "sar_resnet31",
+ "vitstr_small",
+ "vitstr_base",
+ "parseq",
+]
def load_predictor(det_arch: str, reco_arch: str, device) -> OCRPredictor:
diff --git a/demo/backend/tensorflow.py b/demo/backend/tensorflow.py
--- a/demo/backend/tensorflow.py
+++ b/demo/backend/tensorflow.py
@@ -9,8 +9,24 @@
from doctr.models import ocr_predictor
from doctr.models.predictor import OCRPredictor
-DET_ARCHS = ["db_resnet50", "db_mobilenet_v3_large", "linknet_resnet18_rotation"]
-RECO_ARCHS = ["crnn_vgg16_bn", "crnn_mobilenet_v3_small", "master", "sar_resnet31"]
+DET_ARCHS = [
+ "db_resnet50",
+ "db_mobilenet_v3_large",
+ "linknet_resnet18",
+ "linknet_resnet18_rotation",
+ "linknet_resnet34",
+ "linknet_resnet50",
+]
+RECO_ARCHS = [
+ "crnn_vgg16_bn",
+ "crnn_mobilenet_v3_small",
+ "crnn_mobilenet_v3_large",
+ "master",
+ "sar_resnet31",
+ "vitstr_small",
+ "vitstr_base",
+ "parseq",
+]
def load_predictor(det_arch: str, reco_arch: str, device) -> OCRPredictor:
| {"golden_diff": "diff --git a/demo/backend/pytorch.py b/demo/backend/pytorch.py\n--- a/demo/backend/pytorch.py\n+++ b/demo/backend/pytorch.py\n@@ -9,8 +9,25 @@\n from doctr.models import ocr_predictor\n from doctr.models.predictor import OCRPredictor\n \n-DET_ARCHS = [\"db_resnet50\", \"db_mobilenet_v3_large\", \"linknet_resnet50_rotation\"]\n-RECO_ARCHS = [\"crnn_vgg16_bn\", \"crnn_mobilenet_v3_small\", \"master\", \"sar_resnet31\"]\n+DET_ARCHS = [\n+ \"db_resnet50\",\n+ \"db_resnet34\",\n+ \"db_mobilenet_v3_large\",\n+ \"db_resnet50_rotation\",\n+ \"linknet_resnet18\",\n+ \"linknet_resnet34\",\n+ \"linknet_resnet50\",\n+]\n+RECO_ARCHS = [\n+ \"crnn_vgg16_bn\",\n+ \"crnn_mobilenet_v3_small\",\n+ \"crnn_mobilenet_v3_large\",\n+ \"master\",\n+ \"sar_resnet31\",\n+ \"vitstr_small\",\n+ \"vitstr_base\",\n+ \"parseq\",\n+]\n \n \n def load_predictor(det_arch: str, reco_arch: str, device) -> OCRPredictor:\ndiff --git a/demo/backend/tensorflow.py b/demo/backend/tensorflow.py\n--- a/demo/backend/tensorflow.py\n+++ b/demo/backend/tensorflow.py\n@@ -9,8 +9,24 @@\n from doctr.models import ocr_predictor\n from doctr.models.predictor import OCRPredictor\n \n-DET_ARCHS = [\"db_resnet50\", \"db_mobilenet_v3_large\", \"linknet_resnet18_rotation\"]\n-RECO_ARCHS = [\"crnn_vgg16_bn\", \"crnn_mobilenet_v3_small\", \"master\", \"sar_resnet31\"]\n+DET_ARCHS = [\n+ \"db_resnet50\",\n+ \"db_mobilenet_v3_large\",\n+ \"linknet_resnet18\",\n+ \"linknet_resnet18_rotation\",\n+ \"linknet_resnet34\",\n+ \"linknet_resnet50\",\n+]\n+RECO_ARCHS = [\n+ \"crnn_vgg16_bn\",\n+ \"crnn_mobilenet_v3_small\",\n+ \"crnn_mobilenet_v3_large\",\n+ \"master\",\n+ \"sar_resnet31\",\n+ \"vitstr_small\",\n+ \"vitstr_base\",\n+ \"parseq\",\n+]\n \n \n def load_predictor(det_arch: str, reco_arch: str, device) -> OCRPredictor:\n", "issue": "listing the detection and recognition models not working in pytorch\n### Bug description\r\n\r\nHi,\r\n\r\nThis is the list of models not working in inference:\r\n\r\n- detection models:\r\n * db_resnet_34\r\n * linknet_resnet18\r\n * linknet_resnet34\r\n * linknet_resnet50\r\nfor all these detection models in pytorch, the code inference works but it get to much segmentations/detected boxes. \r\n\r\n\r\n- recognition models:\r\n * sar_resnet31\r\n * master\r\n * vitstr_small\r\n * vitstr_base\r\nFor all of these models in pytorch, the code inference seems to work, but it gets random character recognition. 
I think they are not correct loaded or trained.\r\n\r\n\r\n### Code snippet to reproduce the bug\r\n\r\nin doctr/demo/backend/pytorch.py file, changing this lines\r\n```\r\nDET_ARCHS = [\"db_resnet50\", \"db_mobilenet_v3_large\", \"linknet_resnet50_rotation\"]\r\nRECO_ARCHS = [\"crnn_vgg16_bn\", \"crnn_mobilenet_v3_small\", \"master\", \"sar_resnet31\"]\r\n```\r\n\r\nby this lines \r\n```\r\n#DET_ARCHS = [\"db_resnet50\", \"db_mobilenet_v3_large\", \"linknet_resnet50_rotation\"]\r\nDET_ARCHS = [ \"db_resnet34\",\r\n \"db_resnet50\",\r\n \"db_mobilenet_v3_large\",\r\n \"linknet_resnet18\",\r\n \"linknet_resnet34\",\r\n \"linknet_resnet50\",\r\n \"db_resnet50_rotation\"]\r\n#RECO_ARCHS = [\"crnn_vgg16_bn\", \"crnn_mobilenet_v3_small\", \"master\", \"sar_resnet31\"]\r\nRECO_ARCHS=[ \"crnn_vgg16_bn\",\r\n \"crnn_mobilenet_v3_small\",\r\n \"crnn_mobilenet_v3_large\",\r\n \"sar_resnet31\",\r\n \"master\",\r\n \"vitstr_small\",\r\n \"vitstr_base\"]\r\n```\r\n\r\nand running this code to try all the pytorch models\r\n```\r\nUSE_TF=0 streamlit run demo/app.py\r\n```\r\n\r\n\r\n### Error traceback\r\n\r\nnot errors but bugs on models\r\n\r\n### Environment\r\n\r\nwget https://raw.githubusercontent.com/mindee/doctr/main/scripts/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n\r\n### Deep Learning backend\r\n\r\n from doctr.file_utils import is_tf_available, is_torch_available\r\n\r\nprint(f\"is_tf_available: {is_tf_available()}\")\r\nprint(f\"is_torch_available: {is_torch_available()}\")\n", "before_files": [{"content": "# Copyright (C) 2021-2023, Mindee.\n\n# This program is licensed under the Apache License 2.0.\n# See LICENSE or go to <https://opensource.org/licenses/Apache-2.0> for full license details.\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom doctr.models import ocr_predictor\nfrom doctr.models.predictor import OCRPredictor\n\nDET_ARCHS = [\"db_resnet50\", \"db_mobilenet_v3_large\", \"linknet_resnet18_rotation\"]\nRECO_ARCHS = [\"crnn_vgg16_bn\", \"crnn_mobilenet_v3_small\", \"master\", \"sar_resnet31\"]\n\n\ndef load_predictor(det_arch: str, reco_arch: str, device) -> OCRPredictor:\n \"\"\"\n Args:\n device is tf.device\n \"\"\"\n with device:\n predictor = ocr_predictor(\n det_arch, reco_arch, pretrained=True, assume_straight_pages=(\"rotation\" not in det_arch)\n )\n return predictor\n\n\ndef forward_image(predictor: OCRPredictor, image: np.ndarray, device) -> np.ndarray:\n \"\"\"\n Args:\n device is tf.device\n \"\"\"\n with device:\n processed_batches = predictor.det_predictor.pre_processor([image])\n out = predictor.det_predictor.model(processed_batches[0], return_model_output=True)\n seg_map = out[\"out_map\"]\n\n with tf.device(\"/cpu:0\"):\n seg_map = tf.identity(seg_map).numpy()\n\n return seg_map\n", "path": "demo/backend/tensorflow.py"}, {"content": "# Copyright (C) 2021-2023, Mindee.\n\n# This program is licensed under the Apache License 2.0.\n# See LICENSE or go to <https://opensource.org/licenses/Apache-2.0> for full license details.\n\nimport numpy as np\nimport torch\n\nfrom doctr.models import ocr_predictor\nfrom doctr.models.predictor import OCRPredictor\n\nDET_ARCHS = [\"db_resnet50\", \"db_mobilenet_v3_large\", \"linknet_resnet50_rotation\"]\nRECO_ARCHS = [\"crnn_vgg16_bn\", \"crnn_mobilenet_v3_small\", \"master\", \"sar_resnet31\"]\n\n\ndef load_predictor(det_arch: str, reco_arch: str, device) -> OCRPredictor:\n \"\"\"\n Args:\n device is torch.device\n \"\"\"\n predictor = 
ocr_predictor(\n det_arch, reco_arch, pretrained=True, assume_straight_pages=(\"rotation\" not in det_arch)\n ).to(device)\n return predictor\n\n\ndef forward_image(predictor: OCRPredictor, image: np.ndarray, device) -> np.ndarray:\n \"\"\"\n Args:\n device is torch.device\n \"\"\"\n with torch.no_grad():\n processed_batches = predictor.det_predictor.pre_processor([image])\n out = predictor.det_predictor.model(processed_batches[0].to(device), return_model_output=True)\n seg_map = out[\"out_map\"].to(\"cpu\").numpy()\n\n return seg_map\n", "path": "demo/backend/pytorch.py"}], "after_files": [{"content": "# Copyright (C) 2021-2023, Mindee.\n\n# This program is licensed under the Apache License 2.0.\n# See LICENSE or go to <https://opensource.org/licenses/Apache-2.0> for full license details.\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom doctr.models import ocr_predictor\nfrom doctr.models.predictor import OCRPredictor\n\nDET_ARCHS = [\n \"db_resnet50\",\n \"db_mobilenet_v3_large\",\n \"linknet_resnet18\",\n \"linknet_resnet18_rotation\",\n \"linknet_resnet34\",\n \"linknet_resnet50\",\n]\nRECO_ARCHS = [\n \"crnn_vgg16_bn\",\n \"crnn_mobilenet_v3_small\",\n \"crnn_mobilenet_v3_large\",\n \"master\",\n \"sar_resnet31\",\n \"vitstr_small\",\n \"vitstr_base\",\n \"parseq\",\n]\n\n\ndef load_predictor(det_arch: str, reco_arch: str, device) -> OCRPredictor:\n \"\"\"\n Args:\n device is tf.device\n \"\"\"\n with device:\n predictor = ocr_predictor(\n det_arch, reco_arch, pretrained=True, assume_straight_pages=(\"rotation\" not in det_arch)\n )\n return predictor\n\n\ndef forward_image(predictor: OCRPredictor, image: np.ndarray, device) -> np.ndarray:\n \"\"\"\n Args:\n device is tf.device\n \"\"\"\n with device:\n processed_batches = predictor.det_predictor.pre_processor([image])\n out = predictor.det_predictor.model(processed_batches[0], return_model_output=True)\n seg_map = out[\"out_map\"]\n\n with tf.device(\"/cpu:0\"):\n seg_map = tf.identity(seg_map).numpy()\n\n return seg_map\n", "path": "demo/backend/tensorflow.py"}, {"content": "# Copyright (C) 2021-2023, Mindee.\n\n# This program is licensed under the Apache License 2.0.\n# See LICENSE or go to <https://opensource.org/licenses/Apache-2.0> for full license details.\n\nimport numpy as np\nimport torch\n\nfrom doctr.models import ocr_predictor\nfrom doctr.models.predictor import OCRPredictor\n\nDET_ARCHS = [\n \"db_resnet50\",\n \"db_resnet34\",\n \"db_mobilenet_v3_large\",\n \"db_resnet50_rotation\",\n \"linknet_resnet18\",\n \"linknet_resnet34\",\n \"linknet_resnet50\",\n]\nRECO_ARCHS = [\n \"crnn_vgg16_bn\",\n \"crnn_mobilenet_v3_small\",\n \"crnn_mobilenet_v3_large\",\n \"master\",\n \"sar_resnet31\",\n \"vitstr_small\",\n \"vitstr_base\",\n \"parseq\",\n]\n\n\ndef load_predictor(det_arch: str, reco_arch: str, device) -> OCRPredictor:\n \"\"\"\n Args:\n device is torch.device\n \"\"\"\n predictor = ocr_predictor(\n det_arch, reco_arch, pretrained=True, assume_straight_pages=(\"rotation\" not in det_arch)\n ).to(device)\n return predictor\n\n\ndef forward_image(predictor: OCRPredictor, image: np.ndarray, device) -> np.ndarray:\n \"\"\"\n Args:\n device is torch.device\n \"\"\"\n with torch.no_grad():\n processed_batches = predictor.det_predictor.pre_processor([image])\n out = predictor.det_predictor.model(processed_batches[0].to(device), return_model_output=True)\n seg_map = out[\"out_map\"].to(\"cpu\").numpy()\n\n return seg_map\n", "path": "demo/backend/pytorch.py"}]} | 1,653 | 604 |
gh_patches_debug_26097 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-3709 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
eks update action
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/resources/eks.py`
Content:
```
1 # Copyright 2018 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 from c7n.actions import Action
17 from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter, VpcFilter
18 from c7n.manager import resources
19 from c7n.query import QueryResourceManager
20 from c7n.utils import local_session, type_schema
21
22
23 @resources.register('eks')
24 class EKS(QueryResourceManager):
25
26 class resource_type(object):
27 service = 'eks'
28 enum_spec = ('list_clusters', 'clusters', None)
29 arn = 'arn'
30 detail_spec = ('describe_cluster', 'name', None, 'cluster')
31 id = name = 'name'
32 date = 'createdAt'
33 dimension = None
34 filter_name = None
35
36
37 @EKS.filter_registry.register('subnet')
38 class EKSSubnetFilter(SubnetFilter):
39
40 RelatedIdsExpression = "resourcesVpcConfig.subnetIds[]"
41
42
43 @EKS.filter_registry.register('security-group')
44 class EKSSGFilter(SecurityGroupFilter):
45
46 RelatedIdsExpression = "resourcesVpcConfig.securityGroupIds[]"
47
48
49 @EKS.filter_registry.register('vpc')
50 class EKSVpcFilter(VpcFilter):
51
52 RelatedIdsExpression = 'resourcesVpcConfig.vpcId'
53
54
55 @EKS.action_registry.register('delete')
56 class Delete(Action):
57
58 schema = type_schema('delete')
59 permissions = ('eks:DeleteCluster',)
60
61 def process(self, resources):
62 client = local_session(self.manager.session_factory).client('eks')
63 for r in resources:
64 try:
65 client.delete_cluster(name=r['name'])
66 except client.exceptions.ResourceNotFoundException:
67 continue
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/c7n/resources/eks.py b/c7n/resources/eks.py
--- a/c7n/resources/eks.py
+++ b/c7n/resources/eks.py
@@ -19,6 +19,8 @@
from c7n.query import QueryResourceManager
from c7n.utils import local_session, type_schema
+from .aws import shape_validate
+
@resources.register('eks')
class EKS(QueryResourceManager):
@@ -52,6 +54,37 @@
RelatedIdsExpression = 'resourcesVpcConfig.vpcId'
[email protected]_registry.register('update-config')
+class UpdateConfig(Action):
+
+ schema = type_schema(
+ 'update-config', resourcesVpcConfig={'type': 'object'},
+ required=('resourcesVpcConfig',))
+ permissions = ('eks:UpdateClusterConfig',)
+ shape = 'UpdateClusterConfigRequest'
+
+ def validate(self):
+ cfg = dict(self.data)
+ cfg['name'] = 'validate'
+ cfg.pop('type')
+ return shape_validate(
+ cfg, self.shape, self.manager.resource_type.service)
+
+ def process(self, resources):
+ client = local_session(self.manager.session_factory).client('eks')
+ state_filtered = 0
+ for r in resources:
+ if r['status'] != 'ACTIVE':
+ state_filtered += 1
+ continue
+ client.update_cluster_config(
+ name=r['name'],
+ resourcesVpcConfig=self.data['resourcesVpcConfig'])
+ if state_filtered:
+ self.log.warning(
+ "Filtered %d of %d clusters due to state", state_filtered, len(resources))
+
+
@EKS.action_registry.register('delete')
class Delete(Action):
| {"golden_diff": "diff --git a/c7n/resources/eks.py b/c7n/resources/eks.py\n--- a/c7n/resources/eks.py\n+++ b/c7n/resources/eks.py\n@@ -19,6 +19,8 @@\n from c7n.query import QueryResourceManager\n from c7n.utils import local_session, type_schema\n \n+from .aws import shape_validate\n+\n \n @resources.register('eks')\n class EKS(QueryResourceManager):\n@@ -52,6 +54,37 @@\n RelatedIdsExpression = 'resourcesVpcConfig.vpcId'\n \n \[email protected]_registry.register('update-config')\n+class UpdateConfig(Action):\n+\n+ schema = type_schema(\n+ 'update-config', resourcesVpcConfig={'type': 'object'},\n+ required=('resourcesVpcConfig',))\n+ permissions = ('eks:UpdateClusterConfig',)\n+ shape = 'UpdateClusterConfigRequest'\n+\n+ def validate(self):\n+ cfg = dict(self.data)\n+ cfg['name'] = 'validate'\n+ cfg.pop('type')\n+ return shape_validate(\n+ cfg, self.shape, self.manager.resource_type.service)\n+\n+ def process(self, resources):\n+ client = local_session(self.manager.session_factory).client('eks')\n+ state_filtered = 0\n+ for r in resources:\n+ if r['status'] != 'ACTIVE':\n+ state_filtered += 1\n+ continue\n+ client.update_cluster_config(\n+ name=r['name'],\n+ resourcesVpcConfig=self.data['resourcesVpcConfig'])\n+ if state_filtered:\n+ self.log.warning(\n+ \"Filtered %d of %d clusters due to state\", state_filtered, len(resources))\n+\n+\n @EKS.action_registry.register('delete')\n class Delete(Action):\n", "issue": "eks update action\nhttps://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html\n", "before_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.actions import Action\nfrom c7n.filters.vpc import SecurityGroupFilter, SubnetFilter, VpcFilter\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager\nfrom c7n.utils import local_session, type_schema\n\n\[email protected]('eks')\nclass EKS(QueryResourceManager):\n\n class resource_type(object):\n service = 'eks'\n enum_spec = ('list_clusters', 'clusters', None)\n arn = 'arn'\n detail_spec = ('describe_cluster', 'name', None, 'cluster')\n id = name = 'name'\n date = 'createdAt'\n dimension = None\n filter_name = None\n\n\[email protected]_registry.register('subnet')\nclass EKSSubnetFilter(SubnetFilter):\n\n RelatedIdsExpression = \"resourcesVpcConfig.subnetIds[]\"\n\n\[email protected]_registry.register('security-group')\nclass EKSSGFilter(SecurityGroupFilter):\n\n RelatedIdsExpression = \"resourcesVpcConfig.securityGroupIds[]\"\n\n\[email protected]_registry.register('vpc')\nclass EKSVpcFilter(VpcFilter):\n\n RelatedIdsExpression = 'resourcesVpcConfig.vpcId'\n\n\[email protected]_registry.register('delete')\nclass Delete(Action):\n\n schema = type_schema('delete')\n permissions = ('eks:DeleteCluster',)\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('eks')\n for r in resources:\n try:\n 
client.delete_cluster(name=r['name'])\n except client.exceptions.ResourceNotFoundException:\n continue\n", "path": "c7n/resources/eks.py"}], "after_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.actions import Action\nfrom c7n.filters.vpc import SecurityGroupFilter, SubnetFilter, VpcFilter\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager\nfrom c7n.utils import local_session, type_schema\n\nfrom .aws import shape_validate\n\n\[email protected]('eks')\nclass EKS(QueryResourceManager):\n\n class resource_type(object):\n service = 'eks'\n enum_spec = ('list_clusters', 'clusters', None)\n arn = 'arn'\n detail_spec = ('describe_cluster', 'name', None, 'cluster')\n id = name = 'name'\n date = 'createdAt'\n dimension = None\n filter_name = None\n\n\[email protected]_registry.register('subnet')\nclass EKSSubnetFilter(SubnetFilter):\n\n RelatedIdsExpression = \"resourcesVpcConfig.subnetIds[]\"\n\n\[email protected]_registry.register('security-group')\nclass EKSSGFilter(SecurityGroupFilter):\n\n RelatedIdsExpression = \"resourcesVpcConfig.securityGroupIds[]\"\n\n\[email protected]_registry.register('vpc')\nclass EKSVpcFilter(VpcFilter):\n\n RelatedIdsExpression = 'resourcesVpcConfig.vpcId'\n\n\[email protected]_registry.register('update-config')\nclass UpdateConfig(Action):\n\n schema = type_schema(\n 'update-config', resourcesVpcConfig={'type': 'object'},\n required=('resourcesVpcConfig',))\n permissions = ('eks:UpdateClusterConfig',)\n shape = 'UpdateClusterConfigRequest'\n\n def validate(self):\n cfg = dict(self.data)\n cfg['name'] = 'validate'\n cfg.pop('type')\n return shape_validate(\n cfg, self.shape, self.manager.resource_type.service)\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('eks')\n state_filtered = 0\n for r in resources:\n if r['status'] != 'ACTIVE':\n state_filtered += 1\n continue\n client.update_cluster_config(\n name=r['name'],\n resourcesVpcConfig=self.data['resourcesVpcConfig'])\n if state_filtered:\n self.log.warning(\n \"Filtered %d of %d clusters due to state\", state_filtered, len(resources))\n\n\[email protected]_registry.register('delete')\nclass Delete(Action):\n\n schema = type_schema('delete')\n permissions = ('eks:DeleteCluster',)\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('eks')\n for r in resources:\n try:\n client.delete_cluster(name=r['name'])\n except client.exceptions.ResourceNotFoundException:\n continue\n", "path": "c7n/resources/eks.py"}]} | 900 | 383 |
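
The two trailing integers on each record are the schema's `num_tokens` (prompt length) and `num_tokens_diff` (golden-diff length) columns. The dump does not say which tokenizer produced these counts, so any reproduction is approximate; the GPT-2 tokenizer below is only a stand-in for illustration.

```python
from transformers import AutoTokenizer

# Placeholder tokenizer: the one actually used for the dataset is unknown.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def token_counts(row: dict) -> tuple[int, int]:
    return (
        len(tokenizer.encode(row["prompt"])),
        len(tokenizer.encode(row["golden_diff"])),
    )
```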
gh_patches_debug_36103 | rasdani/github-patches | git_diff | bokeh__bokeh-6504 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add synthetic data mode to spectrogram
The pyaudio package is not available on all platforms. An option mode to generate synthetic dat would allow the app to be run everywhere in some capacity.
@philippjfr I think you might already have some code that could be re-used?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/app/spectrogram/audio.py`
Content:
```
1 from __future__ import print_function
2
3 try:
4 import pyaudio
5 except:
6 print("This demo requires pyaudio installed to function")
7 import sys
8 sys.exit(1)
9
10 import numpy as np
11 import scipy as sp
12 from scipy.integrate import simps
13
14 NUM_SAMPLES = 1024
15 SAMPLING_RATE = 44100
16 MAX_FREQ = SAMPLING_RATE / 2
17 FREQ_SAMPLES = NUM_SAMPLES / 8
18 TIMESLICE = 100 # ms
19 NUM_BINS = 16
20
21 data = {'values': None}
22
23 def _get_audio_data():
24 pa = pyaudio.PyAudio()
25 stream = pa.open(
26 format=pyaudio.paInt16,
27 channels=1,
28 rate=SAMPLING_RATE,
29 input=True,
30 frames_per_buffer=NUM_SAMPLES
31 )
32
33 while True:
34 try:
35 raw_data = np.fromstring(stream.read(NUM_SAMPLES), dtype=np.int16)
36 signal = raw_data / 32768.0
37 fft = sp.fft(signal)
38 spectrum = abs(fft)[:NUM_SAMPLES/2]
39 power = spectrum**2
40 bins = simps(np.split(power, NUM_BINS))
41 data['values'] = signal, spectrum, bins
42 except:
43 continue
44
```
Path: `examples/app/spectrogram/server_lifecycle.py`
Content:
```
1 from threading import Thread
2
3 import audio
4
5 def on_server_loaded(server_context):
6 t = Thread(target=audio._get_audio_data, args=())
7 t.setDaemon(True)
8 t.start()
9
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/app/spectrogram/audio.py b/examples/app/spectrogram/audio.py
--- a/examples/app/spectrogram/audio.py
+++ b/examples/app/spectrogram/audio.py
@@ -1,11 +1,6 @@
from __future__ import print_function
-try:
- import pyaudio
-except:
- print("This demo requires pyaudio installed to function")
- import sys
- sys.exit(1)
+from time import sleep
import numpy as np
import scipy as sp
@@ -20,24 +15,51 @@
data = {'values': None}
-def _get_audio_data():
- pa = pyaudio.PyAudio()
- stream = pa.open(
- format=pyaudio.paInt16,
- channels=1,
- rate=SAMPLING_RATE,
- input=True,
- frames_per_buffer=NUM_SAMPLES
- )
-
- while True:
- try:
- raw_data = np.fromstring(stream.read(NUM_SAMPLES), dtype=np.int16)
- signal = raw_data / 32768.0
+try:
+ import pyaudio
+
+ def update_audio_data():
+ pa = pyaudio.PyAudio()
+ stream = pa.open(
+ format=pyaudio.paInt16,
+ channels=1,
+ rate=SAMPLING_RATE,
+ input=True,
+ frames_per_buffer=NUM_SAMPLES
+ )
+
+ while True:
+ try:
+ raw_data = np.fromstring(stream.read(NUM_SAMPLES), dtype=np.int16)
+ signal = raw_data / 32768.0
+ fft = sp.fft(signal)
+ spectrum = abs(fft)[:int(NUM_SAMPLES/2)]
+ power = spectrum**2
+ bins = simps(np.split(power, NUM_BINS))
+ data['values'] = signal, spectrum, bins
+ except:
+ continue
+
+except:
+ print()
+ print(" *** Pyaudio package not installed, using synthesized audio data ***")
+ print()
+
+ # These are basically picked out of a hat to show something vaguely interesting
+ _t = np.arange(0, NUM_SAMPLES/SAMPLING_RATE, 1.0/SAMPLING_RATE)
+ _f = 2000 + 3000*(2+np.sin(4*np.linspace(0, 2*np.pi, 500)))
+ _i = 0
+
+ def update_audio_data():
+ while True:
+ global _i
+ A = 0.3 + 0.05 * np.random.random()
+ signal = A*np.sin(2*np.pi*_f[_i]*_t + np.sin(2*np.pi*200*_t))
+
fft = sp.fft(signal)
- spectrum = abs(fft)[:NUM_SAMPLES/2]
+ spectrum = abs(fft)[:int(NUM_SAMPLES/2)]
power = spectrum**2
bins = simps(np.split(power, NUM_BINS))
data['values'] = signal, spectrum, bins
- except:
- continue
+ _i = (_i + 1) % len(_f)
+ sleep(1.0/12)
diff --git a/examples/app/spectrogram/server_lifecycle.py b/examples/app/spectrogram/server_lifecycle.py
--- a/examples/app/spectrogram/server_lifecycle.py
+++ b/examples/app/spectrogram/server_lifecycle.py
@@ -3,6 +3,6 @@
import audio
def on_server_loaded(server_context):
- t = Thread(target=audio._get_audio_data, args=())
+ t = Thread(target=audio.update_audio_data, args=())
t.setDaemon(True)
t.start()
| {"golden_diff": "diff --git a/examples/app/spectrogram/audio.py b/examples/app/spectrogram/audio.py\n--- a/examples/app/spectrogram/audio.py\n+++ b/examples/app/spectrogram/audio.py\n@@ -1,11 +1,6 @@\n from __future__ import print_function\n \n-try:\n- import pyaudio\n-except:\n- print(\"This demo requires pyaudio installed to function\")\n- import sys\n- sys.exit(1)\n+from time import sleep\n \n import numpy as np\n import scipy as sp\n@@ -20,24 +15,51 @@\n \n data = {'values': None}\n \n-def _get_audio_data():\n- pa = pyaudio.PyAudio()\n- stream = pa.open(\n- format=pyaudio.paInt16,\n- channels=1,\n- rate=SAMPLING_RATE,\n- input=True,\n- frames_per_buffer=NUM_SAMPLES\n- )\n-\n- while True:\n- try:\n- raw_data = np.fromstring(stream.read(NUM_SAMPLES), dtype=np.int16)\n- signal = raw_data / 32768.0\n+try:\n+ import pyaudio\n+\n+ def update_audio_data():\n+ pa = pyaudio.PyAudio()\n+ stream = pa.open(\n+ format=pyaudio.paInt16,\n+ channels=1,\n+ rate=SAMPLING_RATE,\n+ input=True,\n+ frames_per_buffer=NUM_SAMPLES\n+ )\n+\n+ while True:\n+ try:\n+ raw_data = np.fromstring(stream.read(NUM_SAMPLES), dtype=np.int16)\n+ signal = raw_data / 32768.0\n+ fft = sp.fft(signal)\n+ spectrum = abs(fft)[:int(NUM_SAMPLES/2)]\n+ power = spectrum**2\n+ bins = simps(np.split(power, NUM_BINS))\n+ data['values'] = signal, spectrum, bins\n+ except:\n+ continue\n+\n+except:\n+ print()\n+ print(\" *** Pyaudio package not installed, using synthesized audio data ***\")\n+ print()\n+\n+ # These are basically picked out of a hat to show something vaguely interesting\n+ _t = np.arange(0, NUM_SAMPLES/SAMPLING_RATE, 1.0/SAMPLING_RATE)\n+ _f = 2000 + 3000*(2+np.sin(4*np.linspace(0, 2*np.pi, 500)))\n+ _i = 0\n+\n+ def update_audio_data():\n+ while True:\n+ global _i\n+ A = 0.3 + 0.05 * np.random.random()\n+ signal = A*np.sin(2*np.pi*_f[_i]*_t + np.sin(2*np.pi*200*_t))\n+\n fft = sp.fft(signal)\n- spectrum = abs(fft)[:NUM_SAMPLES/2]\n+ spectrum = abs(fft)[:int(NUM_SAMPLES/2)]\n power = spectrum**2\n bins = simps(np.split(power, NUM_BINS))\n data['values'] = signal, spectrum, bins\n- except:\n- continue\n+ _i = (_i + 1) % len(_f)\n+ sleep(1.0/12)\ndiff --git a/examples/app/spectrogram/server_lifecycle.py b/examples/app/spectrogram/server_lifecycle.py\n--- a/examples/app/spectrogram/server_lifecycle.py\n+++ b/examples/app/spectrogram/server_lifecycle.py\n@@ -3,6 +3,6 @@\n import audio\n \n def on_server_loaded(server_context):\n- t = Thread(target=audio._get_audio_data, args=())\n+ t = Thread(target=audio.update_audio_data, args=())\n t.setDaemon(True)\n t.start()\n", "issue": "Add synthetic data mode to spectrogram\nThe pyaudio package is not available on all platforms. An option mode to generate synthetic dat would allow the app to be run everywhere in some capacity. 
\r\n\r\n@philippjfr I think you might already have some code that could be re-used?\n", "before_files": [{"content": "from __future__ import print_function\n\ntry:\n import pyaudio\nexcept:\n print(\"This demo requires pyaudio installed to function\")\n import sys\n sys.exit(1)\n\nimport numpy as np\nimport scipy as sp\nfrom scipy.integrate import simps\n\nNUM_SAMPLES = 1024\nSAMPLING_RATE = 44100\nMAX_FREQ = SAMPLING_RATE / 2\nFREQ_SAMPLES = NUM_SAMPLES / 8\nTIMESLICE = 100 # ms\nNUM_BINS = 16\n\ndata = {'values': None}\n\ndef _get_audio_data():\n pa = pyaudio.PyAudio()\n stream = pa.open(\n format=pyaudio.paInt16,\n channels=1,\n rate=SAMPLING_RATE,\n input=True,\n frames_per_buffer=NUM_SAMPLES\n )\n\n while True:\n try:\n raw_data = np.fromstring(stream.read(NUM_SAMPLES), dtype=np.int16)\n signal = raw_data / 32768.0\n fft = sp.fft(signal)\n spectrum = abs(fft)[:NUM_SAMPLES/2]\n power = spectrum**2\n bins = simps(np.split(power, NUM_BINS))\n data['values'] = signal, spectrum, bins\n except:\n continue\n", "path": "examples/app/spectrogram/audio.py"}, {"content": "from threading import Thread\n\nimport audio\n\ndef on_server_loaded(server_context):\n t = Thread(target=audio._get_audio_data, args=())\n t.setDaemon(True)\n t.start()\n", "path": "examples/app/spectrogram/server_lifecycle.py"}], "after_files": [{"content": "from __future__ import print_function\n\nfrom time import sleep\n\nimport numpy as np\nimport scipy as sp\nfrom scipy.integrate import simps\n\nNUM_SAMPLES = 1024\nSAMPLING_RATE = 44100\nMAX_FREQ = SAMPLING_RATE / 2\nFREQ_SAMPLES = NUM_SAMPLES / 8\nTIMESLICE = 100 # ms\nNUM_BINS = 16\n\ndata = {'values': None}\n\ntry:\n import pyaudio\n\n def update_audio_data():\n pa = pyaudio.PyAudio()\n stream = pa.open(\n format=pyaudio.paInt16,\n channels=1,\n rate=SAMPLING_RATE,\n input=True,\n frames_per_buffer=NUM_SAMPLES\n )\n\n while True:\n try:\n raw_data = np.fromstring(stream.read(NUM_SAMPLES), dtype=np.int16)\n signal = raw_data / 32768.0\n fft = sp.fft(signal)\n spectrum = abs(fft)[:int(NUM_SAMPLES/2)]\n power = spectrum**2\n bins = simps(np.split(power, NUM_BINS))\n data['values'] = signal, spectrum, bins\n except:\n continue\n\nexcept:\n print()\n print(\" *** Pyaudio package not installed, using synthesized audio data ***\")\n print()\n\n # These are basically picked out of a hat to show something vaguely interesting\n _t = np.arange(0, NUM_SAMPLES/SAMPLING_RATE, 1.0/SAMPLING_RATE)\n _f = 2000 + 3000*(2+np.sin(4*np.linspace(0, 2*np.pi, 500)))\n _i = 0\n\n def update_audio_data():\n while True:\n global _i\n A = 0.3 + 0.05 * np.random.random()\n signal = A*np.sin(2*np.pi*_f[_i]*_t + np.sin(2*np.pi*200*_t))\n\n fft = sp.fft(signal)\n spectrum = abs(fft)[:int(NUM_SAMPLES/2)]\n power = spectrum**2\n bins = simps(np.split(power, NUM_BINS))\n data['values'] = signal, spectrum, bins\n _i = (_i + 1) % len(_f)\n sleep(1.0/12)\n", "path": "examples/app/spectrogram/audio.py"}, {"content": "from threading import Thread\n\nimport audio\n\ndef on_server_loaded(server_context):\n t = Thread(target=audio.update_audio_data, args=())\n t.setDaemon(True)\n t.start()\n", "path": "examples/app/spectrogram/server_lifecycle.py"}]} | 756 | 833 |
gh_patches_debug_17638 | rasdani/github-patches | git_diff | joke2k__faker-283 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ReadTheDocs spacing is off for providers
http://fake-factory.readthedocs.org/en/latest/providers/internet.html
The spacing makes this pretty unreadable without constantly scrolling back and forth, which is annoying.
Since these are the official extended docs linked to in the project ReadMe, it'd certainly be aesthetically nicer if they used native RTD formatting, for example see the formatting for the reference page of the factory boy project: http://factoryboy.readthedocs.org/en/latest/reference.html
I dug around a little trying to make a pull request to fix this, but it looks like Faker's docs are auto-generated somehow and I didn't have time to figure it all out. If you point me in the right direction, I'd be happy to submit a PR.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/build_docs.py`
Content:
```
1 # coding=utf-8
2
3 from __future__ import print_function
4 from __future__ import unicode_literals
5
6 import os
7 import sys
8
9
10 if sys.version < '3':
11 text_type = unicode
12 binary_type = str
13 else:
14 text_type = str
15 binary_type = bytes
16
17
18 DOCS_ROOT = os.path.abspath(os.path.join('..', 'docs'))
19
20
21 def write(fh, s):
22 return fh.write(s.encode('utf-8'))
23
24
25 def write_provider(fh, doc, provider, formatters, excludes=None):
26
27 if excludes is None:
28 excludes = []
29
30 write(fh, '\n')
31 title = "``{0}``".format(doc.get_provider_name(provider))
32 write(fh, '%s\n' % title)
33 write(fh, "-" * len(title))
34 write(fh, '\n\n::\n')
35
36 for signature, example in formatters.items():
37 if signature in excludes:
38 continue
39 try:
40 lines = text_type(example).expandtabs().splitlines()
41 except UnicodeEncodeError:
42 msg = 'error on "{0}" with value "{1}"'.format(signature, example)
43 raise Exception(msg)
44 margin = max(30, doc.max_name_len+1)
45 remains = 150 - margin
46 separator = '#'
47 write(fh, '\n')
48 for line in lines:
49 for i in range(0, (len(line) // remains) + 1):
50 write(fh, "\t{fake:<{margin}}{separator} {example}".format(
51 fake=signature,
52 separator=separator,
53 example=line[i*remains:(i+1)*remains],
54 margin=margin
55 ))
56 signature = separator = ' '
57 write(fh, '\n')
58
59
60 def write_docs(*args, **kwargs):
61 from faker import Faker, documentor
62 from faker.config import DEFAULT_LOCALE, AVAILABLE_LOCALES
63
64 fake = Faker(locale=DEFAULT_LOCALE)
65
66 from faker.providers import BaseProvider
67 base_provider_formatters = [f for f in dir(BaseProvider)]
68
69 doc = documentor.Documentor(fake)
70
71 formatters = doc.get_formatters(with_args=True, with_defaults=True)
72
73 for provider, fakers in formatters:
74 provider_name = doc.get_provider_name(provider)
75 fname = os.path.join(DOCS_ROOT, 'providers', '%s.rst' % provider_name)
76 with open(fname, 'wb') as fh:
77 write_provider(fh, doc, provider, fakers)
78
79 with open(os.path.join(DOCS_ROOT, 'providers.rst'), 'wb') as fh:
80 write(fh, 'Providers\n')
81 write(fh, '=========\n')
82 write(fh, '.. toctree::\n')
83 write(fh, ' :maxdepth: 2\n\n')
84 [write(fh, ' providers/%s\n' % doc.get_provider_name(provider))
85 for provider, fakers in formatters]
86
87 AVAILABLE_LOCALES = list(AVAILABLE_LOCALES)
88 AVAILABLE_LOCALES.sort()
89 for lang in AVAILABLE_LOCALES:
90 fname = os.path.join(DOCS_ROOT, 'locales', '%s.rst' % lang)
91 with open(fname, 'wb') as fh:
92 write(fh, '\n')
93 title = 'Language {0}\n'.format(lang)
94 write(fh, title)
95 write(fh, '=' * len(title))
96 write(fh, '\n')
97 fake = Faker(locale=lang)
98 d = documentor.Documentor(fake)
99
100 for p, fs in d.get_formatters(with_args=True, with_defaults=True,
101 locale=lang,
102 excludes=base_provider_formatters):
103 write_provider(fh, d, p, fs)
104
105 with open(os.path.join(DOCS_ROOT, 'locales.rst'), 'wb') as fh:
106 write(fh, 'Locales\n')
107 write(fh, '=======\n')
108 write(fh, '.. toctree::\n')
109 write(fh, ' :maxdepth: 2\n\n')
110 [write(fh, ' locales/%s\n' % lang) for lang in AVAILABLE_LOCALES]
111
112
113 # wrappers for sphinx
114 def _main(app, *args, **kwargs):
115 return write_docs(*args, **kwargs)
116
117
118 def setup(app):
119 app.connect(str('builder-inited'), _main)
120
121
122 if __name__ == "__main__":
123 write_docs(*sys.argv[1:])
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/build_docs.py b/faker/build_docs.py
--- a/faker/build_docs.py
+++ b/faker/build_docs.py
@@ -41,20 +41,11 @@
except UnicodeEncodeError:
msg = 'error on "{0}" with value "{1}"'.format(signature, example)
raise Exception(msg)
- margin = max(30, doc.max_name_len+1)
- remains = 150 - margin
- separator = '#'
write(fh, '\n')
- for line in lines:
- for i in range(0, (len(line) // remains) + 1):
- write(fh, "\t{fake:<{margin}}{separator} {example}".format(
- fake=signature,
- separator=separator,
- example=line[i*remains:(i+1)*remains],
- margin=margin
- ))
- signature = separator = ' '
- write(fh, '\n')
+ write(fh, "\t{fake}\n{example}\n".format(
+ fake=signature,
+ example='\n'.join(['\t# ' + line for line in lines]),
+ ))
def write_docs(*args, **kwargs):
| {"golden_diff": "diff --git a/faker/build_docs.py b/faker/build_docs.py\n--- a/faker/build_docs.py\n+++ b/faker/build_docs.py\n@@ -41,20 +41,11 @@\n except UnicodeEncodeError:\n msg = 'error on \"{0}\" with value \"{1}\"'.format(signature, example)\n raise Exception(msg)\n- margin = max(30, doc.max_name_len+1)\n- remains = 150 - margin\n- separator = '#'\n write(fh, '\\n')\n- for line in lines:\n- for i in range(0, (len(line) // remains) + 1):\n- write(fh, \"\\t{fake:<{margin}}{separator} {example}\".format(\n- fake=signature,\n- separator=separator,\n- example=line[i*remains:(i+1)*remains],\n- margin=margin\n- ))\n- signature = separator = ' '\n- write(fh, '\\n')\n+ write(fh, \"\\t{fake}\\n{example}\\n\".format(\n+ fake=signature,\n+ example='\\n'.join(['\\t# ' + line for line in lines]),\n+ ))\n \n \n def write_docs(*args, **kwargs):\n", "issue": "ReadTheDocs spacing is off for providers\nhttp://fake-factory.readthedocs.org/en/latest/providers/internet.html\n\nThe spacing makes this pretty unreadable without constantly scrolling back and forth, which is annoying. \n\nSince these are the official extended docs linked to in the project ReadMe, it'd certainly be aesthetically nicer if they used native RTD formatting, for example see the formatting for the reference page of the factory boy project: http://factoryboy.readthedocs.org/en/latest/reference.html\n\nI dug around a little trying to make a pull request to fix this, but it looks like Faker's docs are auto-generated somehow and I didn't have time to figure it all out. If you point me in the right direction, I'd be happy to submit a PR. \n\n", "before_files": [{"content": "# coding=utf-8\n\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport os\nimport sys\n\n\nif sys.version < '3':\n text_type = unicode\n binary_type = str\nelse:\n text_type = str\n binary_type = bytes\n\n\nDOCS_ROOT = os.path.abspath(os.path.join('..', 'docs'))\n\n\ndef write(fh, s):\n return fh.write(s.encode('utf-8'))\n\n\ndef write_provider(fh, doc, provider, formatters, excludes=None):\n\n if excludes is None:\n excludes = []\n\n write(fh, '\\n')\n title = \"``{0}``\".format(doc.get_provider_name(provider))\n write(fh, '%s\\n' % title)\n write(fh, \"-\" * len(title))\n write(fh, '\\n\\n::\\n')\n\n for signature, example in formatters.items():\n if signature in excludes:\n continue\n try:\n lines = text_type(example).expandtabs().splitlines()\n except UnicodeEncodeError:\n msg = 'error on \"{0}\" with value \"{1}\"'.format(signature, example)\n raise Exception(msg)\n margin = max(30, doc.max_name_len+1)\n remains = 150 - margin\n separator = '#'\n write(fh, '\\n')\n for line in lines:\n for i in range(0, (len(line) // remains) + 1):\n write(fh, \"\\t{fake:<{margin}}{separator} {example}\".format(\n fake=signature,\n separator=separator,\n example=line[i*remains:(i+1)*remains],\n margin=margin\n ))\n signature = separator = ' '\n write(fh, '\\n')\n\n\ndef write_docs(*args, **kwargs):\n from faker import Faker, documentor\n from faker.config import DEFAULT_LOCALE, AVAILABLE_LOCALES\n\n fake = Faker(locale=DEFAULT_LOCALE)\n\n from faker.providers import BaseProvider\n base_provider_formatters = [f for f in dir(BaseProvider)]\n\n doc = documentor.Documentor(fake)\n\n formatters = doc.get_formatters(with_args=True, with_defaults=True)\n\n for provider, fakers in formatters:\n provider_name = doc.get_provider_name(provider)\n fname = os.path.join(DOCS_ROOT, 'providers', '%s.rst' % provider_name)\n with open(fname, 'wb') as fh:\n 
write_provider(fh, doc, provider, fakers)\n\n with open(os.path.join(DOCS_ROOT, 'providers.rst'), 'wb') as fh:\n write(fh, 'Providers\\n')\n write(fh, '=========\\n')\n write(fh, '.. toctree::\\n')\n write(fh, ' :maxdepth: 2\\n\\n')\n [write(fh, ' providers/%s\\n' % doc.get_provider_name(provider))\n for provider, fakers in formatters]\n\n AVAILABLE_LOCALES = list(AVAILABLE_LOCALES)\n AVAILABLE_LOCALES.sort()\n for lang in AVAILABLE_LOCALES:\n fname = os.path.join(DOCS_ROOT, 'locales', '%s.rst' % lang)\n with open(fname, 'wb') as fh:\n write(fh, '\\n')\n title = 'Language {0}\\n'.format(lang)\n write(fh, title)\n write(fh, '=' * len(title))\n write(fh, '\\n')\n fake = Faker(locale=lang)\n d = documentor.Documentor(fake)\n\n for p, fs in d.get_formatters(with_args=True, with_defaults=True,\n locale=lang,\n excludes=base_provider_formatters):\n write_provider(fh, d, p, fs)\n\n with open(os.path.join(DOCS_ROOT, 'locales.rst'), 'wb') as fh:\n write(fh, 'Locales\\n')\n write(fh, '=======\\n')\n write(fh, '.. toctree::\\n')\n write(fh, ' :maxdepth: 2\\n\\n')\n [write(fh, ' locales/%s\\n' % lang) for lang in AVAILABLE_LOCALES]\n\n\n# wrappers for sphinx\ndef _main(app, *args, **kwargs):\n return write_docs(*args, **kwargs)\n\n\ndef setup(app):\n app.connect(str('builder-inited'), _main)\n\n\nif __name__ == \"__main__\":\n write_docs(*sys.argv[1:])\n", "path": "faker/build_docs.py"}], "after_files": [{"content": "# coding=utf-8\n\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport os\nimport sys\n\n\nif sys.version < '3':\n text_type = unicode\n binary_type = str\nelse:\n text_type = str\n binary_type = bytes\n\n\nDOCS_ROOT = os.path.abspath(os.path.join('..', 'docs'))\n\n\ndef write(fh, s):\n return fh.write(s.encode('utf-8'))\n\n\ndef write_provider(fh, doc, provider, formatters, excludes=None):\n\n if excludes is None:\n excludes = []\n\n write(fh, '\\n')\n title = \"``{0}``\".format(doc.get_provider_name(provider))\n write(fh, '%s\\n' % title)\n write(fh, \"-\" * len(title))\n write(fh, '\\n\\n::\\n')\n\n for signature, example in formatters.items():\n if signature in excludes:\n continue\n try:\n lines = text_type(example).expandtabs().splitlines()\n except UnicodeEncodeError:\n msg = 'error on \"{0}\" with value \"{1}\"'.format(signature, example)\n raise Exception(msg)\n write(fh, '\\n')\n write(fh, \"\\t{fake}\\n{example}\\n\".format(\n fake=signature,\n example='\\n'.join(['\\t# ' + line for line in lines]),\n ))\n\n\ndef write_docs(*args, **kwargs):\n from faker import Faker, documentor\n from faker.config import DEFAULT_LOCALE, AVAILABLE_LOCALES\n\n fake = Faker(locale=DEFAULT_LOCALE)\n\n from faker.providers import BaseProvider\n base_provider_formatters = [f for f in dir(BaseProvider)]\n\n doc = documentor.Documentor(fake)\n\n formatters = doc.get_formatters(with_args=True, with_defaults=True)\n\n for provider, fakers in formatters:\n provider_name = doc.get_provider_name(provider)\n fname = os.path.join(DOCS_ROOT, 'providers', '%s.rst' % provider_name)\n with open(fname, 'wb') as fh:\n write_provider(fh, doc, provider, fakers)\n\n with open(os.path.join(DOCS_ROOT, 'providers.rst'), 'wb') as fh:\n write(fh, 'Providers\\n')\n write(fh, '=========\\n')\n write(fh, '.. 
toctree::\\n')\n write(fh, ' :maxdepth: 2\\n\\n')\n [write(fh, ' providers/%s\\n' % doc.get_provider_name(provider))\n for provider, fakers in formatters]\n\n AVAILABLE_LOCALES = list(AVAILABLE_LOCALES)\n AVAILABLE_LOCALES.sort()\n for lang in AVAILABLE_LOCALES:\n fname = os.path.join(DOCS_ROOT, 'locales', '%s.rst' % lang)\n with open(fname, 'wb') as fh:\n write(fh, '\\n')\n title = 'Language {0}\\n'.format(lang)\n write(fh, title)\n write(fh, '=' * len(title))\n write(fh, '\\n')\n fake = Faker(locale=lang)\n d = documentor.Documentor(fake)\n\n for p, fs in d.get_formatters(with_args=True, with_defaults=True,\n locale=lang,\n excludes=base_provider_formatters):\n write_provider(fh, d, p, fs)\n\n with open(os.path.join(DOCS_ROOT, 'locales.rst'), 'wb') as fh:\n write(fh, 'Locales\\n')\n write(fh, '=======\\n')\n write(fh, '.. toctree::\\n')\n write(fh, ' :maxdepth: 2\\n\\n')\n [write(fh, ' locales/%s\\n' % lang) for lang in AVAILABLE_LOCALES]\n\n\n# wrappers for sphinx\ndef _main(app, *args, **kwargs):\n return write_docs(*args, **kwargs)\n\n\ndef setup(app):\n app.connect(str('builder-inited'), _main)\n\n\nif __name__ == \"__main__\":\n write_docs(*sys.argv[1:])\n", "path": "faker/build_docs.py"}]} | 1,645 | 274 |
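Editor's note on the record above (illustrative, not part of the dataset): the patch replaces the column-aligned doc layout, which forced horizontal scrolling on ReadTheDocs, with one formatter per line followed by `#`-commented example lines. A minimal sketch of the patched rendering, using hypothetical names:

```python
# Hypothetical helper mirroring the patched write_provider body; the
# render_formatter name and sample values are assumptions, not faker API.
def render_formatter(signature, example):
    lines = str(example).expandtabs().splitlines()
    return "\t{fake}\n{example}\n".format(
        fake=signature,
        example='\n'.join('\t# ' + line for line in lines),
    )

print(render_formatter('name()', 'Lucy Cechtelar'))
# Prints the signature on its own (tab-indented) line, then each example line
# as an RST comment:
#     name()
#     # Lucy Cechtelar
```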
gh_patches_debug_42117 | rasdani/github-patches | git_diff | 3cn-ecn__nantralPlatform-23 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Separate the different group-editing actions into several pages
The actions on the edit pages should be split into several sub-pages:
- [ ] Convert the pages to use the update base template
- [ ] Put the members on a new page
--- END ISSUE ---
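One way the checklist above could land (an editorial sketch, not part of the issue): move member editing out of `UpdateClubView` into its own admin-guarded page. The view, template path, and form names below follow the app's existing conventions but are assumptions here:

```python
# Hypothetical standalone "edit members" page, split out of the club update view.
from django.contrib.auth.mixins import UserPassesTestMixin
from django.shortcuts import render
from django.views.generic import View

from .forms import NamedMembershipClubFormset
from .models import Group, NamedMembership


class UpdateGroupMembersView(UserPassesTestMixin, View):
    template_name = 'group/club_members_update.html'  # assumed template name

    def test_func(self):
        # Only group admins may edit the member list.
        group = Group.get_group_by_slug(self.kwargs['group_slug'])
        return group.is_admin(self.request.user)

    def get(self, request, group_slug):
        group = Group.get_group_by_slug(group_slug)
        memberships = NamedMembership.objects.filter(group=group)
        form = NamedMembershipClubFormset(queryset=memberships)
        return render(request, self.template_name,
                      {'object': group, 'members': form})
```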
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `server/apps/group/views.py`
Content:
```
1 from django.shortcuts import redirect, render
2 from django.views.generic import DetailView, UpdateView, ListView, View
3 from .models import Club, Group, NamedMembership
4 from .forms import NamedMembershipClubFormset
5 from django.contrib import messages
6 from django.contrib.auth.decorators import login_required
7 from django.views.decorators.http import require_http_methods
8 from django.contrib.auth.mixins import UserPassesTestMixin
9
10 from apps.student.models import Student
11 from apps.event.models import Event
12
13 from apps.event.forms import EventGroupFormSet
14
15 class ListClubView(ListView):
16 model = Club
17 template_name = 'group/club_list.html'
18
19
20 class UpdateClubView(UpdateView):
21 model = Club
22 template_name = 'group/club_update.html'
23 fields = ['description', 'admins', 'logo']
24 def get_context_data(self, **kwargs):
25 context = super().get_context_data(**kwargs)
26 memberships = NamedMembership.objects.filter(group=self.object)
27 membersForm = NamedMembershipClubFormset(queryset=memberships)
28 context['members'] = membersForm
29 return context
30
31 class UpdateClubEventsView(UserPassesTestMixin, View):
32 template_name = 'group/club_events_update.html'
33 def test_func(self):
34 group = Group.get_group_by_slug(self.kwargs['group_slug'])
35 return group.is_admin(self.request.user)
36 def get_context_data(self, **kwargs):
37 context = {}
38 context['object'] = Group.get_group_by_slug(kwargs['group_slug'])
39 context['events'] = Event.objects.filter(group=kwargs['group_slug'])
40 context['form'] = EventGroupFormSet(queryset=context['events'])
41 return context
42
43 def get(self, request, group_slug):
44 return render(request, self.template_name, context=self.get_context_data(group_slug=group_slug))
45
46 def post(self, request, group_slug):
47 return edit_events(request, group_slug)
48
49 def delete(self, request, group_slug, event_id):
50 print('Hello')
51 event = Event.objects.delete(group=group_slug, id=event_id)
52 return redirect('group:update-events')
53
54 class DetailClubView(DetailView):
55 model = Club
56 template_name = 'group/club_detail.html'
57 def get_context_data(self, **kwargs):
58 context = super().get_context_data(**kwargs)
59 members = NamedMembership.objects.filter(group=self.object)
60 context['members'] = members
61 return context
62
63 @login_required
64 def add_member(request, group_slug, student_id):
65 """Add a user to a club"""
66 group = Group.get_group_by_slug(group_slug)
67 student = Student.objects.get(id=student_id)
68 if isinstance(Club, group):
69 NamedMembership.objects.create(student=student, group=group)
70
71
72 @require_http_methods(['POST'])
73 @login_required
74 def edit_named_memberships(request, pk):
75 club = Club.objects.get(pk=pk)
76 form = NamedMembershipClubFormset(request.POST)
77 if form.is_valid():
78 members = form.save(commit=False)
79 for member in members:
80 member.group = club
81 member.save()
82 for member in form.deleted_objects:
83 member.delete()
84 messages.success(request, 'Membres modifies')
85 return redirect('group:update', pk)
86 else:
87 messages.warning(request, form.errors)
88 return redirect('group:update', pk)
89
90
91 @login_required
92 def edit_events(request, group_slug):
93 group = Group.get_group_by_slug(group_slug)
94 form = EventGroupFormSet(request.POST)
95 if form.is_valid():
96 events = form.save(commit=False)
97 # Link each event to the group
98 for event in events:
99 event.group = group.slug
100 event.save()
101 # Delete missing events
102 for event in form.deleted_objects:
103 event.delete()
104 messages.success(request, 'Events modifies')
105 return redirect('group:update-events', group_slug)
106 else:
107 messages.warning(request, form.errors)
108 return redirect('group:update-events', group_slug)
109
```
Path: `server/apps/group/urls.py`
Content:
```
1 from django.conf.urls import url
2 from django.urls import path
3
4 from .views import *
5
6 app_name = 'group'
7
8 urlpatterns = [
9 path('<slug:pk>/', DetailClubView.as_view(), name='detail'),
10 path('<slug:pk>/edit', UpdateClubView.as_view(), name='update'),
11 path('<slug:group_slug>/member/add/<slug:user_id>', add_member, name='add-member'),
12 path('<slug:pk>/members/edit', edit_named_memberships, name='editNamedMemberships'),
13 path('', ListClubView.as_view(), name='list'),
14 path('<slug:group_slug>/events/edit', UpdateClubEventsView.as_view(), name='update-events')
15 ]
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/server/apps/group/urls.py b/server/apps/group/urls.py
--- a/server/apps/group/urls.py
+++ b/server/apps/group/urls.py
@@ -9,7 +9,7 @@
path('<slug:pk>/', DetailClubView.as_view(), name='detail'),
path('<slug:pk>/edit', UpdateClubView.as_view(), name='update'),
path('<slug:group_slug>/member/add/<slug:user_id>', add_member, name='add-member'),
- path('<slug:pk>/members/edit', edit_named_memberships, name='editNamedMemberships'),
path('', ListClubView.as_view(), name='list'),
- path('<slug:group_slug>/events/edit', UpdateClubEventsView.as_view(), name='update-events')
+ path('<slug:group_slug>/events/edit', UpdateGroupEventsView.as_view(), name='update-events'),
+ path('<slug:group_slug>/members/edit', UpdateGroupMembersView.as_view(), name='update-members')
]
\ No newline at end of file
diff --git a/server/apps/group/views.py b/server/apps/group/views.py
--- a/server/apps/group/views.py
+++ b/server/apps/group/views.py
@@ -21,14 +21,8 @@
model = Club
template_name = 'group/club_update.html'
fields = ['description', 'admins', 'logo']
- def get_context_data(self, **kwargs):
- context = super().get_context_data(**kwargs)
- memberships = NamedMembership.objects.filter(group=self.object)
- membersForm = NamedMembershipClubFormset(queryset=memberships)
- context['members'] = membersForm
- return context
-class UpdateClubEventsView(UserPassesTestMixin, View):
+class UpdateGroupEventsView(UserPassesTestMixin, View):
template_name = 'group/club_events_update.html'
def test_func(self):
group = Group.get_group_by_slug(self.kwargs['group_slug'])
@@ -46,10 +40,29 @@
def post(self, request, group_slug):
return edit_events(request, group_slug)
- def delete(self, request, group_slug, event_id):
- print('Hello')
- event = Event.objects.delete(group=group_slug, id=event_id)
- return redirect('group:update-events')
+
+class UpdateGroupMembersView(UserPassesTestMixin, View):
+ template_name = 'group/club_members_update.html'
+ def test_func(self):
+ group = Group.get_group_by_slug(self.kwargs['group_slug'])
+ return group.is_admin(self.request.user)
+
+ def get_context_data(self, **kwargs):
+ context = {}
+ context['object'] = Group.get_group_by_slug(kwargs['group_slug'])
+ if isinstance(context['object'], Club):
+ memberships = NamedMembership.objects.filter(group=context['object'])
+ membersForm = NamedMembershipClubFormset(queryset=memberships)
+ context['members'] = membersForm
+ return context
+
+ def get(self, request, group_slug):
+ return render(request, self.template_name, context=self.get_context_data(group_slug=group_slug))
+
+ def post(self, request, group_slug):
+ return edit_named_memberships(request, group_slug)
+
+
class DetailClubView(DetailView):
model = Club
@@ -71,8 +84,8 @@
@require_http_methods(['POST'])
@login_required
-def edit_named_memberships(request, pk):
- club = Club.objects.get(pk=pk)
+def edit_named_memberships(request, group_slug):
+ club = Club.objects.filter(slug=group_slug).first()
form = NamedMembershipClubFormset(request.POST)
if form.is_valid():
members = form.save(commit=False)
@@ -82,10 +95,10 @@
for member in form.deleted_objects:
member.delete()
messages.success(request, 'Membres modifies')
- return redirect('group:update', pk)
+ return redirect('group:update', club.id)
else:
messages.warning(request, form.errors)
- return redirect('group:update', pk)
+ return redirect('group:update', club.id)
@login_required
| {"golden_diff": "diff --git a/server/apps/group/urls.py b/server/apps/group/urls.py\n--- a/server/apps/group/urls.py\n+++ b/server/apps/group/urls.py\n@@ -9,7 +9,7 @@\n path('<slug:pk>/', DetailClubView.as_view(), name='detail'),\n path('<slug:pk>/edit', UpdateClubView.as_view(), name='update'),\n path('<slug:group_slug>/member/add/<slug:user_id>', add_member, name='add-member'),\n- path('<slug:pk>/members/edit', edit_named_memberships, name='editNamedMemberships'),\n path('', ListClubView.as_view(), name='list'),\n- path('<slug:group_slug>/events/edit', UpdateClubEventsView.as_view(), name='update-events')\n+ path('<slug:group_slug>/events/edit', UpdateGroupEventsView.as_view(), name='update-events'),\n+ path('<slug:group_slug>/members/edit', UpdateGroupMembersView.as_view(), name='update-members')\n ]\n\\ No newline at end of file\ndiff --git a/server/apps/group/views.py b/server/apps/group/views.py\n--- a/server/apps/group/views.py\n+++ b/server/apps/group/views.py\n@@ -21,14 +21,8 @@\n model = Club\n template_name = 'group/club_update.html'\n fields = ['description', 'admins', 'logo']\n- def get_context_data(self, **kwargs):\n- context = super().get_context_data(**kwargs)\n- memberships = NamedMembership.objects.filter(group=self.object)\n- membersForm = NamedMembershipClubFormset(queryset=memberships)\n- context['members'] = membersForm\n- return context\n \n-class UpdateClubEventsView(UserPassesTestMixin, View):\n+class UpdateGroupEventsView(UserPassesTestMixin, View):\n template_name = 'group/club_events_update.html'\n def test_func(self):\n group = Group.get_group_by_slug(self.kwargs['group_slug'])\n@@ -46,10 +40,29 @@\n def post(self, request, group_slug):\n return edit_events(request, group_slug)\n \n- def delete(self, request, group_slug, event_id):\n- print('Hello')\n- event = Event.objects.delete(group=group_slug, id=event_id)\n- return redirect('group:update-events')\n+\n+class UpdateGroupMembersView(UserPassesTestMixin, View):\n+ template_name = 'group/club_members_update.html'\n+ def test_func(self):\n+ group = Group.get_group_by_slug(self.kwargs['group_slug'])\n+ return group.is_admin(self.request.user)\n+\n+ def get_context_data(self, **kwargs):\n+ context = {}\n+ context['object'] = Group.get_group_by_slug(kwargs['group_slug'])\n+ if isinstance(context['object'], Club):\n+ memberships = NamedMembership.objects.filter(group=context['object'])\n+ membersForm = NamedMembershipClubFormset(queryset=memberships)\n+ context['members'] = membersForm\n+ return context\n+\n+ def get(self, request, group_slug):\n+ return render(request, self.template_name, context=self.get_context_data(group_slug=group_slug))\n+ \n+ def post(self, request, group_slug):\n+ return edit_named_memberships(request, group_slug)\n+\n+\n \n class DetailClubView(DetailView):\n model = Club\n@@ -71,8 +84,8 @@\n \n @require_http_methods(['POST'])\n @login_required\n-def edit_named_memberships(request, pk):\n- club = Club.objects.get(pk=pk)\n+def edit_named_memberships(request, group_slug):\n+ club = Club.objects.filter(slug=group_slug).first()\n form = NamedMembershipClubFormset(request.POST)\n if form.is_valid():\n members = form.save(commit=False)\n@@ -82,10 +95,10 @@\n for member in form.deleted_objects:\n member.delete()\n messages.success(request, 'Membres modifies')\n- return redirect('group:update', pk)\n+ return redirect('group:update', club.id)\n else:\n messages.warning(request, form.errors)\n- return redirect('group:update', pk)\n+ return redirect('group:update', club.id)\n \n \n 
@login_required\n", "issue": "Separer les differentes actions d'edition d'un groupe en plusieurs pages\nIl faudrait diviser les actions des pages d'edition en plusieurs sous pages\r\n\r\n- [ ] Convertir les pages pour utiliser le template update base\r\n- [ ] Mettre les membres dans une nouvelle page\n", "before_files": [{"content": "from django.shortcuts import redirect, render\nfrom django.views.generic import DetailView, UpdateView, ListView, View\nfrom .models import Club, Group, NamedMembership\nfrom .forms import NamedMembershipClubFormset\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.views.decorators.http import require_http_methods\nfrom django.contrib.auth.mixins import UserPassesTestMixin\n\nfrom apps.student.models import Student\nfrom apps.event.models import Event\n\nfrom apps.event.forms import EventGroupFormSet\n\nclass ListClubView(ListView):\n model = Club\n template_name = 'group/club_list.html'\n\n\nclass UpdateClubView(UpdateView):\n model = Club\n template_name = 'group/club_update.html'\n fields = ['description', 'admins', 'logo']\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n memberships = NamedMembership.objects.filter(group=self.object)\n membersForm = NamedMembershipClubFormset(queryset=memberships)\n context['members'] = membersForm\n return context\n\nclass UpdateClubEventsView(UserPassesTestMixin, View):\n template_name = 'group/club_events_update.html'\n def test_func(self):\n group = Group.get_group_by_slug(self.kwargs['group_slug'])\n return group.is_admin(self.request.user)\n def get_context_data(self, **kwargs):\n context = {}\n context['object'] = Group.get_group_by_slug(kwargs['group_slug'])\n context['events'] = Event.objects.filter(group=kwargs['group_slug'])\n context['form'] = EventGroupFormSet(queryset=context['events'])\n return context\n\n def get(self, request, group_slug):\n return render(request, self.template_name, context=self.get_context_data(group_slug=group_slug))\n\n def post(self, request, group_slug):\n return edit_events(request, group_slug)\n\n def delete(self, request, group_slug, event_id):\n print('Hello')\n event = Event.objects.delete(group=group_slug, id=event_id)\n return redirect('group:update-events')\n\nclass DetailClubView(DetailView):\n model = Club\n template_name = 'group/club_detail.html'\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n members = NamedMembership.objects.filter(group=self.object)\n context['members'] = members\n return context\n\n@login_required\ndef add_member(request, group_slug, student_id):\n \"\"\"Add a user to a club\"\"\"\n group = Group.get_group_by_slug(group_slug)\n student = Student.objects.get(id=student_id)\n if isinstance(Club, group):\n NamedMembership.objects.create(student=student, group=group)\n\n\n@require_http_methods(['POST'])\n@login_required\ndef edit_named_memberships(request, pk):\n club = Club.objects.get(pk=pk)\n form = NamedMembershipClubFormset(request.POST)\n if form.is_valid():\n members = form.save(commit=False)\n for member in members:\n member.group = club\n member.save()\n for member in form.deleted_objects:\n member.delete()\n messages.success(request, 'Membres modifies')\n return redirect('group:update', pk)\n else:\n messages.warning(request, form.errors)\n return redirect('group:update', pk)\n\n\n@login_required\ndef edit_events(request, group_slug):\n group = Group.get_group_by_slug(group_slug)\n form = 
EventGroupFormSet(request.POST)\n if form.is_valid():\n events = form.save(commit=False)\n # Link each event to the group\n for event in events:\n event.group = group.slug\n event.save()\n # Delete missing events\n for event in form.deleted_objects:\n event.delete()\n messages.success(request, 'Events modifies')\n return redirect('group:update-events', group_slug)\n else:\n messages.warning(request, form.errors)\n return redirect('group:update-events', group_slug)\n", "path": "server/apps/group/views.py"}, {"content": "from django.conf.urls import url\nfrom django.urls import path\n\nfrom .views import *\n\napp_name = 'group'\n\nurlpatterns = [\n path('<slug:pk>/', DetailClubView.as_view(), name='detail'),\n path('<slug:pk>/edit', UpdateClubView.as_view(), name='update'),\n path('<slug:group_slug>/member/add/<slug:user_id>', add_member, name='add-member'),\n path('<slug:pk>/members/edit', edit_named_memberships, name='editNamedMemberships'),\n path('', ListClubView.as_view(), name='list'),\n path('<slug:group_slug>/events/edit', UpdateClubEventsView.as_view(), name='update-events')\n]", "path": "server/apps/group/urls.py"}], "after_files": [{"content": "from django.shortcuts import redirect, render\nfrom django.views.generic import DetailView, UpdateView, ListView, View\nfrom .models import Club, Group, NamedMembership\nfrom .forms import NamedMembershipClubFormset\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.views.decorators.http import require_http_methods\nfrom django.contrib.auth.mixins import UserPassesTestMixin\n\nfrom apps.student.models import Student\nfrom apps.event.models import Event\n\nfrom apps.event.forms import EventGroupFormSet\n\nclass ListClubView(ListView):\n model = Club\n template_name = 'group/club_list.html'\n\n\nclass UpdateClubView(UpdateView):\n model = Club\n template_name = 'group/club_update.html'\n fields = ['description', 'admins', 'logo']\n\nclass UpdateGroupEventsView(UserPassesTestMixin, View):\n template_name = 'group/club_events_update.html'\n def test_func(self):\n group = Group.get_group_by_slug(self.kwargs['group_slug'])\n return group.is_admin(self.request.user)\n def get_context_data(self, **kwargs):\n context = {}\n context['object'] = Group.get_group_by_slug(kwargs['group_slug'])\n context['events'] = Event.objects.filter(group=kwargs['group_slug'])\n context['form'] = EventGroupFormSet(queryset=context['events'])\n return context\n\n def get(self, request, group_slug):\n return render(request, self.template_name, context=self.get_context_data(group_slug=group_slug))\n\n def post(self, request, group_slug):\n return edit_events(request, group_slug)\n\n\nclass UpdateGroupMembersView(UserPassesTestMixin, View):\n template_name = 'group/club_members_update.html'\n def test_func(self):\n group = Group.get_group_by_slug(self.kwargs['group_slug'])\n return group.is_admin(self.request.user)\n\n def get_context_data(self, **kwargs):\n context = {}\n context['object'] = Group.get_group_by_slug(kwargs['group_slug'])\n if isinstance(context['object'], Club):\n memberships = NamedMembership.objects.filter(group=context['object'])\n membersForm = NamedMembershipClubFormset(queryset=memberships)\n context['members'] = membersForm\n return context\n\n def get(self, request, group_slug):\n return render(request, self.template_name, context=self.get_context_data(group_slug=group_slug))\n \n def post(self, request, group_slug):\n return edit_named_memberships(request, group_slug)\n\n\n\nclass 
DetailClubView(DetailView):\n model = Club\n template_name = 'group/club_detail.html'\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n members = NamedMembership.objects.filter(group=self.object)\n context['members'] = members\n return context\n\n@login_required\ndef add_member(request, group_slug, student_id):\n \"\"\"Add a user to a club\"\"\"\n group = Group.get_group_by_slug(group_slug)\n student = Student.objects.get(id=student_id)\n if isinstance(Club, group):\n NamedMembership.objects.create(student=student, group=group)\n\n\n@require_http_methods(['POST'])\n@login_required\ndef edit_named_memberships(request, group_slug):\n club = Club.objects.filter(slug=group_slug).first()\n form = NamedMembershipClubFormset(request.POST)\n if form.is_valid():\n members = form.save(commit=False)\n for member in members:\n member.group = club\n member.save()\n for member in form.deleted_objects:\n member.delete()\n messages.success(request, 'Membres modifies')\n return redirect('group:update', club.id)\n else:\n messages.warning(request, form.errors)\n return redirect('group:update', club.id)\n\n\n@login_required\ndef edit_events(request, group_slug):\n group = Group.get_group_by_slug(group_slug)\n form = EventGroupFormSet(request.POST)\n if form.is_valid():\n events = form.save(commit=False)\n # Link each event to the group\n for event in events:\n event.group = group.slug\n event.save()\n # Delete missing events\n for event in form.deleted_objects:\n event.delete()\n messages.success(request, 'Events modifies')\n return redirect('group:update-events', group_slug)\n else:\n messages.warning(request, form.errors)\n return redirect('group:update-events', group_slug)\n", "path": "server/apps/group/views.py"}, {"content": "from django.conf.urls import url\nfrom django.urls import path\n\nfrom .views import *\n\napp_name = 'group'\n\nurlpatterns = [\n path('<slug:pk>/', DetailClubView.as_view(), name='detail'),\n path('<slug:pk>/edit', UpdateClubView.as_view(), name='update'),\n path('<slug:group_slug>/member/add/<slug:user_id>', add_member, name='add-member'),\n path('', ListClubView.as_view(), name='list'),\n path('<slug:group_slug>/events/edit', UpdateGroupEventsView.as_view(), name='update-events'),\n path('<slug:group_slug>/members/edit', UpdateGroupMembersView.as_view(), name='update-members')\n]", "path": "server/apps/group/urls.py"}]} | 1,565 | 909 |
gh_patches_debug_3326 | rasdani/github-patches | git_diff | conda-forge__conda-smithy-144 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Linter: handle environ
Same issue as https://github.com/conda-forge/conda-forge.github.io/issues/93.
Same fix needed: https://github.com/conda-forge/conda-forge.github.io/pull/107/files
--- END ISSUE ---
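For context (an editorial sketch, not part of the issue): recipes that reference `environ` hand the linter's `NullUndefined` an object whose attributes and items get touched during rendering, and stock `jinja2.Undefined` raises at that point. The fix referenced above returns placeholder text instead; a Python 3 sketch:

```python
import jinja2


class NullUndefined(jinja2.Undefined):
    # Render placeholders instead of raising UndefinedError, so meta.yaml
    # templates that poke at environ still produce lintable text.
    def __str__(self):
        return str(self._undefined_name)

    def __getattr__(self, name):
        return '{}.{}'.format(self, name)

    def __getitem__(self, name):
        return '{}["{}"]'.format(self, name)


env = jinja2.Environment(undefined=NullUndefined)
print(env.from_string('{{ environ["CC"] }} / {{ environ.USER }}').render())
# -> environ["CC"] / environ.USER
```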
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_smithy/lint_recipe.py`
Content:
```
1 import os
2 import re
3
4 import jinja2
5 import ruamel.yaml
6
7
8 EXPECTED_SECTION_ORDER = ['package', 'source', 'build', 'requirements',
9 'test', 'app', 'about', 'extra']
10
11 REQUIREMENTS_ORDER = ['build', 'run']
12
13
14 class NullUndefined(jinja2.Undefined):
15 def __unicode__(self):
16 return unicode(self._undefined_name)
17
18
19 def get_section(parent, name, lints):
20 section = parent.get(name, {})
21 if not isinstance(section, dict):
22 lints.append('The "{}" section was expected to be a dictionary, but '
23 'got a {}.'.format(name, type(section).__name__))
24 section = {}
25 return section
26
27
28 def lintify(meta, recipe_dir=None):
29 lints = []
30 major_sections = list(meta.keys())
31
32 # If the recipe_dir exists (no guarantee within this function) , we can
33 # find the meta.yaml within it.
34 meta_fname = os.path.join(recipe_dir or '', 'meta.yaml')
35
36 source_section = get_section(meta, 'source', lints)
37 build_section = get_section(meta, 'build', lints)
38 requirements_section = get_section(meta, 'requirements', lints)
39 about_section = get_section(meta, 'about', lints)
40 extra_section = get_section(meta, 'extra', lints)
41
42 # 1: Top level meta.yaml keys should have a specific order.
43 section_order_sorted = sorted(major_sections,
44 key=EXPECTED_SECTION_ORDER.index)
45 if major_sections != section_order_sorted:
46 lints.append('The top level meta keys are in an unexpected order. '
47 'Expecting {}.'.format(section_order_sorted))
48
49 # 2: The about section should have a home, license and summary.
50 for about_item in ['home', 'license', 'summary']:
51 # if the section doesn't exist, or is just empty, lint it.
52 if not about_section.get(about_item, ''):
53 lints.append('The {} item is expected in the about section.'
54 ''.format(about_item))
55
56 # 3: The recipe should have some maintainers.
57 if not extra_section.get('recipe-maintainers', []):
58 lints.append('The recipe could do with some maintainers listed in '
59 'the "extra/recipe-maintainers" section.')
60
61 # 4: The recipe should have some tests.
62 if 'test' not in major_sections:
63 test_files = ['run_test.py', 'run_test.sh', 'run_test.bat',
64 'run_test.pl']
65 a_test_file_exists = (recipe_dir is not None and
66 any(os.path.exists(os.path.join(recipe_dir,
67 test_file))
68 for test_file in test_files))
69 if not a_test_file_exists:
70 lints.append('The recipe must have some tests.')
71
72 # 5: License cannot be 'unknown.'
73 license = about_section.get('license', '').lower()
74 if 'unknown' == license.strip():
75 lints.append('The recipe license cannot be unknown.')
76
77 # 6: Selectors should be in a tidy form.
78 if recipe_dir is not None and os.path.exists(meta_fname):
79 bad_selectors = []
80 # Good selectors look like ".*\s\s#\s[...]"
81 good_selectors_pat = re.compile(r'(.+?)\s{2,}#\s\[(.+)\](?(2).*)$')
82 with open(meta_fname, 'r') as fh:
83 for selector_line in selector_lines(fh):
84 if not good_selectors_pat.match(selector_line):
85 bad_selectors.append(selector_line)
86 if bad_selectors:
87 lints.append('Selectors are suggested to take a '
88 '``<two spaces>#<one space>[<expression>]`` form.')
89
90 # 7: The build section should have a build number.
91 if build_section.get('number', None) is None:
92 lints.append('The recipe must have a `build/number` section.')
93
94 # 8: The build section should be before the run section in requirements.
95 requirements_order_sorted = sorted(requirements_section,
96 key=REQUIREMENTS_ORDER.index)
97 if requirements_section.keys() != requirements_order_sorted:
98 lints.append('The `requirements/build` section should be defined '
99 'before the `requirements/run` section.')
100
101 # 9: Files downloaded should have a hash.
102 if ('url' in source_section and
103 not ({'sha1', 'sha256', 'md5'} & set(source_section.keys()))):
104 lints.append('When defining a source/url please add a sha256, sha1 '
105 'or md5 checksum (sha256 preferably).')
106
107 return lints
108
109
110 def selector_lines(lines):
111 # Using the same pattern defined in conda-build (metadata.py),
112 # we identify selectors.
113 sel_pat = re.compile(r'(.+?)\s*(#.*)?\[(.+)\](?(2).*)$')
114
115 for line in lines:
116 line = line.rstrip()
117 if line.lstrip().startswith('#'):
118 # Don't bother with comment only lines
119 continue
120 m = sel_pat.match(line)
121 if m:
122 m.group(3)
123 yield line
124
125
126 def main(recipe_dir):
127 recipe_dir = os.path.abspath(recipe_dir)
128 recipe_meta = os.path.join(recipe_dir, 'meta.yaml')
129 if not os.path.exists(recipe_dir):
130 raise IOError('Feedstock has no recipe/meta.yaml.')
131
132 env = jinja2.Environment(undefined=NullUndefined)
133
134 with open(recipe_meta, 'r') as fh:
135 content = env.from_string(''.join(fh)).render()
136 meta = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)
137 results = lintify(meta, recipe_dir)
138 return results
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_smithy/lint_recipe.py b/conda_smithy/lint_recipe.py
--- a/conda_smithy/lint_recipe.py
+++ b/conda_smithy/lint_recipe.py
@@ -15,6 +15,11 @@
def __unicode__(self):
return unicode(self._undefined_name)
+ def __getattr__(self, name):
+ return unicode('{}.{}'.format(self, name))
+
+ def __getitem__(self, name):
+ return '{}["{}"]'.format(self, name)
def get_section(parent, name, lints):
section = parent.get(name, {})
| {"golden_diff": "diff --git a/conda_smithy/lint_recipe.py b/conda_smithy/lint_recipe.py\n--- a/conda_smithy/lint_recipe.py\n+++ b/conda_smithy/lint_recipe.py\n@@ -15,6 +15,11 @@\n def __unicode__(self):\n return unicode(self._undefined_name)\n \n+ def __getattr__(self, name):\n+ return unicode('{}.{}'.format(self, name))\n+\n+ def __getitem__(self, name):\n+ return '{}[\"{}\"]'.format(self, name)\n \n def get_section(parent, name, lints):\n section = parent.get(name, {})\n", "issue": "Linter: handle environ\nSame issue as https://github.com/conda-forge/conda-forge.github.io/issues/93.\nSame fix needed: https://github.com/conda-forge/conda-forge.github.io/pull/107/files\n\nLinter: handle environ\nSame issue as https://github.com/conda-forge/conda-forge.github.io/issues/93.\nSame fix needed: https://github.com/conda-forge/conda-forge.github.io/pull/107/files\n\n", "before_files": [{"content": "import os\nimport re\n\nimport jinja2\nimport ruamel.yaml\n\n\nEXPECTED_SECTION_ORDER = ['package', 'source', 'build', 'requirements',\n 'test', 'app', 'about', 'extra']\n\nREQUIREMENTS_ORDER = ['build', 'run']\n\n\nclass NullUndefined(jinja2.Undefined):\n def __unicode__(self):\n return unicode(self._undefined_name)\n\n\ndef get_section(parent, name, lints):\n section = parent.get(name, {})\n if not isinstance(section, dict):\n lints.append('The \"{}\" section was expected to be a dictionary, but '\n 'got a {}.'.format(name, type(section).__name__))\n section = {}\n return section\n\n\ndef lintify(meta, recipe_dir=None):\n lints = []\n major_sections = list(meta.keys())\n\n # If the recipe_dir exists (no guarantee within this function) , we can\n # find the meta.yaml within it.\n meta_fname = os.path.join(recipe_dir or '', 'meta.yaml')\n\n source_section = get_section(meta, 'source', lints)\n build_section = get_section(meta, 'build', lints)\n requirements_section = get_section(meta, 'requirements', lints)\n about_section = get_section(meta, 'about', lints)\n extra_section = get_section(meta, 'extra', lints)\n\n # 1: Top level meta.yaml keys should have a specific order.\n section_order_sorted = sorted(major_sections,\n key=EXPECTED_SECTION_ORDER.index)\n if major_sections != section_order_sorted:\n lints.append('The top level meta keys are in an unexpected order. 
'\n 'Expecting {}.'.format(section_order_sorted))\n\n # 2: The about section should have a home, license and summary.\n for about_item in ['home', 'license', 'summary']:\n # if the section doesn't exist, or is just empty, lint it.\n if not about_section.get(about_item, ''):\n lints.append('The {} item is expected in the about section.'\n ''.format(about_item))\n\n # 3: The recipe should have some maintainers.\n if not extra_section.get('recipe-maintainers', []):\n lints.append('The recipe could do with some maintainers listed in '\n 'the \"extra/recipe-maintainers\" section.')\n\n # 4: The recipe should have some tests.\n if 'test' not in major_sections:\n test_files = ['run_test.py', 'run_test.sh', 'run_test.bat',\n 'run_test.pl']\n a_test_file_exists = (recipe_dir is not None and\n any(os.path.exists(os.path.join(recipe_dir,\n test_file))\n for test_file in test_files))\n if not a_test_file_exists:\n lints.append('The recipe must have some tests.')\n\n # 5: License cannot be 'unknown.'\n license = about_section.get('license', '').lower()\n if 'unknown' == license.strip():\n lints.append('The recipe license cannot be unknown.')\n\n # 6: Selectors should be in a tidy form.\n if recipe_dir is not None and os.path.exists(meta_fname):\n bad_selectors = []\n # Good selectors look like \".*\\s\\s#\\s[...]\"\n good_selectors_pat = re.compile(r'(.+?)\\s{2,}#\\s\\[(.+)\\](?(2).*)$')\n with open(meta_fname, 'r') as fh:\n for selector_line in selector_lines(fh):\n if not good_selectors_pat.match(selector_line):\n bad_selectors.append(selector_line)\n if bad_selectors:\n lints.append('Selectors are suggested to take a '\n '``<two spaces>#<one space>[<expression>]`` form.')\n\n # 7: The build section should have a build number.\n if build_section.get('number', None) is None:\n lints.append('The recipe must have a `build/number` section.')\n\n # 8: The build section should be before the run section in requirements.\n requirements_order_sorted = sorted(requirements_section,\n key=REQUIREMENTS_ORDER.index)\n if requirements_section.keys() != requirements_order_sorted:\n lints.append('The `requirements/build` section should be defined '\n 'before the `requirements/run` section.')\n\n # 9: Files downloaded should have a hash.\n if ('url' in source_section and\n not ({'sha1', 'sha256', 'md5'} & set(source_section.keys()))):\n lints.append('When defining a source/url please add a sha256, sha1 '\n 'or md5 checksum (sha256 preferably).')\n\n return lints\n\n\ndef selector_lines(lines):\n # Using the same pattern defined in conda-build (metadata.py),\n # we identify selectors.\n sel_pat = re.compile(r'(.+?)\\s*(#.*)?\\[(.+)\\](?(2).*)$')\n\n for line in lines:\n line = line.rstrip()\n if line.lstrip().startswith('#'):\n # Don't bother with comment only lines\n continue\n m = sel_pat.match(line)\n if m:\n m.group(3)\n yield line\n\n\ndef main(recipe_dir):\n recipe_dir = os.path.abspath(recipe_dir)\n recipe_meta = os.path.join(recipe_dir, 'meta.yaml')\n if not os.path.exists(recipe_dir):\n raise IOError('Feedstock has no recipe/meta.yaml.')\n\n env = jinja2.Environment(undefined=NullUndefined)\n\n with open(recipe_meta, 'r') as fh:\n content = env.from_string(''.join(fh)).render()\n meta = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)\n results = lintify(meta, recipe_dir)\n return results\n", "path": "conda_smithy/lint_recipe.py"}], "after_files": [{"content": "import os\nimport re\n\nimport jinja2\nimport ruamel.yaml\n\n\nEXPECTED_SECTION_ORDER = ['package', 'source', 'build', 'requirements',\n 'test', 
'app', 'about', 'extra']\n\nREQUIREMENTS_ORDER = ['build', 'run']\n\n\nclass NullUndefined(jinja2.Undefined):\n def __unicode__(self):\n return unicode(self._undefined_name)\n\n def __getattr__(self, name):\n return unicode('{}.{}'.format(self, name))\n\n def __getitem__(self, name):\n return '{}[\"{}\"]'.format(self, name)\n\ndef get_section(parent, name, lints):\n section = parent.get(name, {})\n if not isinstance(section, dict):\n lints.append('The \"{}\" section was expected to be a dictionary, but '\n 'got a {}.'.format(name, type(section).__name__))\n section = {}\n return section\n\n\ndef lintify(meta, recipe_dir=None):\n lints = []\n major_sections = list(meta.keys())\n\n # If the recipe_dir exists (no guarantee within this function) , we can\n # find the meta.yaml within it.\n meta_fname = os.path.join(recipe_dir or '', 'meta.yaml')\n\n source_section = get_section(meta, 'source', lints)\n build_section = get_section(meta, 'build', lints)\n requirements_section = get_section(meta, 'requirements', lints)\n about_section = get_section(meta, 'about', lints)\n extra_section = get_section(meta, 'extra', lints)\n\n # 1: Top level meta.yaml keys should have a specific order.\n section_order_sorted = sorted(major_sections,\n key=EXPECTED_SECTION_ORDER.index)\n if major_sections != section_order_sorted:\n lints.append('The top level meta keys are in an unexpected order. '\n 'Expecting {}.'.format(section_order_sorted))\n\n # 2: The about section should have a home, license and summary.\n for about_item in ['home', 'license', 'summary']:\n # if the section doesn't exist, or is just empty, lint it.\n if not about_section.get(about_item, ''):\n lints.append('The {} item is expected in the about section.'\n ''.format(about_item))\n\n # 3: The recipe should have some maintainers.\n if not extra_section.get('recipe-maintainers', []):\n lints.append('The recipe could do with some maintainers listed in '\n 'the \"extra/recipe-maintainers\" section.')\n\n # 4: The recipe should have some tests.\n if 'test' not in major_sections:\n test_files = ['run_test.py', 'run_test.sh', 'run_test.bat',\n 'run_test.pl']\n a_test_file_exists = (recipe_dir is not None and\n any(os.path.exists(os.path.join(recipe_dir,\n test_file))\n for test_file in test_files))\n if not a_test_file_exists:\n lints.append('The recipe must have some tests.')\n\n # 5: License cannot be 'unknown.'\n license = about_section.get('license', '').lower()\n if 'unknown' == license.strip():\n lints.append('The recipe license cannot be unknown.')\n\n # 6: Selectors should be in a tidy form.\n if recipe_dir is not None and os.path.exists(meta_fname):\n bad_selectors = []\n # Good selectors look like \".*\\s\\s#\\s[...]\"\n good_selectors_pat = re.compile(r'(.+?)\\s{2,}#\\s\\[(.+)\\](?(2).*)$')\n with open(meta_fname, 'r') as fh:\n for selector_line in selector_lines(fh):\n if not good_selectors_pat.match(selector_line):\n bad_selectors.append(selector_line)\n if bad_selectors:\n lints.append('Selectors are suggested to take a '\n '``<two spaces>#<one space>[<expression>]`` form.')\n\n # 7: The build section should have a build number.\n if build_section.get('number', None) is None:\n lints.append('The recipe must have a `build/number` section.')\n\n # 8: The build section should be before the run section in requirements.\n requirements_order_sorted = sorted(requirements_section,\n key=REQUIREMENTS_ORDER.index)\n if requirements_section.keys() != requirements_order_sorted:\n lints.append('The `requirements/build` section should be defined '\n 
'before the `requirements/run` section.')\n\n # 9: Files downloaded should have a hash.\n if ('url' in source_section and\n not ({'sha1', 'sha256', 'md5'} & set(source_section.keys()))):\n lints.append('When defining a source/url please add a sha256, sha1 '\n 'or md5 checksum (sha256 preferably).')\n\n return lints\n\n\ndef selector_lines(lines):\n # Using the same pattern defined in conda-build (metadata.py),\n # we identify selectors.\n sel_pat = re.compile(r'(.+?)\\s*(#.*)?\\[(.+)\\](?(2).*)$')\n\n for line in lines:\n line = line.rstrip()\n if line.lstrip().startswith('#'):\n # Don't bother with comment only lines\n continue\n m = sel_pat.match(line)\n if m:\n m.group(3)\n yield line\n\n\ndef main(recipe_dir):\n recipe_dir = os.path.abspath(recipe_dir)\n recipe_meta = os.path.join(recipe_dir, 'meta.yaml')\n if not os.path.exists(recipe_dir):\n raise IOError('Feedstock has no recipe/meta.yaml.')\n\n env = jinja2.Environment(undefined=NullUndefined)\n\n with open(recipe_meta, 'r') as fh:\n content = env.from_string(''.join(fh)).render()\n meta = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader)\n results = lintify(meta, recipe_dir)\n return results\n", "path": "conda_smithy/lint_recipe.py"}]} | 1,931 | 142 |
gh_patches_debug_29489 | rasdani/github-patches | git_diff | hylang__hy-112 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unifying division
Since we are effectively creating a new language, we might as well fix one of the nastiest Python portability problems between 2 and 3: division. I propose to do an implicit "from __future__ import division" at the top of any Hy module, as well as before starting the REPL, when running under Python 2.x.
--- END ISSUE ---
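For context (an editorial sketch, not part of the issue): CPython exposes this as a compiler flag, so a host compiler can opt generated modules into true division without injecting a source-level import. On Python 2, where plain `1 / 2` truncates to `0`:

```python
import __future__

# Passing CO_FUTURE_DIVISION to compile() is the programmatic equivalent of
# "from __future__ import division" at the top of the compiled module.
code = compile("print(1 / 2)", "<demo>", "exec", __future__.CO_FUTURE_DIVISION)
exec(code)  # prints 0.5 even under Python 2.x
```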
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hy/importer.py`
Content:
```
1 # Copyright (c) 2013 Paul Tagliamonte <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the "Software"),
5 # to deal in the Software without restriction, including without limitation
6 # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 # and/or sell copies of the Software, and to permit persons to whom the
8 # Software is furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
16 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 from py_compile import wr_long, MAGIC
22 from hy.compiler import hy_compile
23 from hy.models import HyObject
24 from hy.core import process
25 from hy.lex import tokenize
26
27
28 from io import open
29 import marshal
30 import imp
31 import sys
32 import ast
33 import os
34
35
36 if sys.version_info[0] >= 3:
37 from io import StringIO
38 else:
39 from StringIO import StringIO # NOQA
40
41
42 def import_buffer_to_hst(fd):
43 tree = tokenize(fd.read() + "\n")
44 tree = process(tree)
45 return tree
46
47
48 def import_file_to_hst(fpath):
49 return import_buffer_to_hst(open(fpath, 'r', encoding='utf-8'))
50
51
52 def import_file_to_ast(fpath):
53 tree = import_file_to_hst(fpath)
54 _ast = hy_compile(tree)
55 return _ast
56
57
58 def import_string_to_ast(buff):
59 tree = import_buffer_to_hst(StringIO(buff))
60 _ast = hy_compile(tree)
61 return _ast
62
63
64 def import_file_to_module(name, fpath):
65 _ast = import_file_to_ast(fpath)
66 mod = imp.new_module(name)
67 mod.__file__ = fpath
68 eval(compile(_ast, fpath, "exec"), mod.__dict__)
69 return mod
70
71
72 def hy_eval(hytree, namespace):
73 foo = HyObject()
74 foo.start_line = 0
75 foo.end_line = 0
76 foo.start_column = 0
77 foo.end_column = 0
78 hytree.replace(foo)
79 _ast = hy_compile(hytree, root=ast.Expression)
80 return eval(compile(_ast, "<eval>", "eval"), namespace)
81
82
83 def write_hy_as_pyc(fname):
84 with open(fname, 'U') as f:
85 try:
86 timestamp = long(os.fstat(f.fileno()).st_mtime)
87 except AttributeError:
88 timestamp = long(os.stat(fname).st_mtime)
89
90 _ast = import_file_to_ast(fname)
91 code = compile(_ast, fname, "exec")
92 cfile = "%s.pyc" % fname[:-len(".hy")]
93
94 with open(cfile, 'wb') as fc:
95 fc.write('\0\0\0\0')
96 wr_long(fc, timestamp)
97 marshal.dump(code, fc)
98 fc.flush()
99 fc.seek(0, 0)
100 fc.write(MAGIC)
101
102
103 class HyFinder(object):
104 def is_package(self, fullname):
105 dirpath = "/".join(fullname.split("."))
106 for pth in sys.path:
107 pth = os.path.abspath(pth)
108 composed_path = "%s/%s/__init__.hy" % (pth, dirpath)
109 if os.path.exists(composed_path):
110 return True
111 return False
112
113 def find_on_path(self, fullname):
114 fls = ["%s/__init__.hy", "%s.hy"]
115 dirpath = "/".join(fullname.split("."))
116
117 for pth in sys.path:
118 pth = os.path.abspath(pth)
119 for fp in fls:
120 composed_path = fp % ("%s/%s" % (pth, dirpath))
121 if os.path.exists(composed_path):
122 return composed_path
123
124
125 class MetaLoader(HyFinder):
126 def load_module(self, fullname):
127 if fullname in sys.modules:
128 return sys.modules[fullname]
129
130 pth = self.find_on_path(fullname)
131 if pth is None:
132 return
133
134 sys.modules[fullname] = None
135 mod = import_file_to_module(fullname, pth)
136
137 ispkg = self.is_package(fullname)
138
139 mod.__file__ = pth
140 mod.__loader__ = self
141 mod.__name__ = fullname
142
143 if ispkg:
144 mod.__path__ = []
145 mod.__package__ = fullname
146 else:
147 mod.__package__ = fullname.rpartition('.')[0]
148
149 sys.modules[fullname] = mod
150 return mod
151
152
153 class MetaImporter(HyFinder):
154 def find_module(self, fullname, path=None):
155 pth = self.find_on_path(fullname)
156 if pth is None:
157 return
158 return MetaLoader()
159
160
161 sys.meta_path.append(MetaImporter())
162 sys.path.insert(0, "")
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hy/importer.py b/hy/importer.py
--- a/hy/importer.py
+++ b/hy/importer.py
@@ -31,7 +31,7 @@
import sys
import ast
import os
-
+import __future__
if sys.version_info[0] >= 3:
from io import StringIO
@@ -39,6 +39,10 @@
from StringIO import StringIO # NOQA
+def compile_(ast, filename, mode):
+ return compile(ast, filename, mode, __future__.CO_FUTURE_DIVISION)
+
+
def import_buffer_to_hst(fd):
tree = tokenize(fd.read() + "\n")
tree = process(tree)
@@ -65,7 +69,7 @@
_ast = import_file_to_ast(fpath)
mod = imp.new_module(name)
mod.__file__ = fpath
- eval(compile(_ast, fpath, "exec"), mod.__dict__)
+ eval(compile_(_ast, fpath, "exec"), mod.__dict__)
return mod
@@ -77,7 +81,7 @@
foo.end_column = 0
hytree.replace(foo)
_ast = hy_compile(hytree, root=ast.Expression)
- return eval(compile(_ast, "<eval>", "eval"), namespace)
+ return eval(compile_(_ast, "<eval>", "eval"), namespace)
def write_hy_as_pyc(fname):
@@ -88,7 +92,7 @@
timestamp = long(os.stat(fname).st_mtime)
_ast = import_file_to_ast(fname)
- code = compile(_ast, fname, "exec")
+ code = compile_(_ast, fname, "exec")
cfile = "%s.pyc" % fname[:-len(".hy")]
with open(cfile, 'wb') as fc:
| {"golden_diff": "diff --git a/hy/importer.py b/hy/importer.py\n--- a/hy/importer.py\n+++ b/hy/importer.py\n@@ -31,7 +31,7 @@\n import sys\n import ast\n import os\n-\n+import __future__\n \n if sys.version_info[0] >= 3:\n from io import StringIO\n@@ -39,6 +39,10 @@\n from StringIO import StringIO # NOQA\n \n \n+def compile_(ast, filename, mode):\n+ return compile(ast, filename, mode, __future__.CO_FUTURE_DIVISION)\n+\n+\n def import_buffer_to_hst(fd):\n tree = tokenize(fd.read() + \"\\n\")\n tree = process(tree)\n@@ -65,7 +69,7 @@\n _ast = import_file_to_ast(fpath)\n mod = imp.new_module(name)\n mod.__file__ = fpath\n- eval(compile(_ast, fpath, \"exec\"), mod.__dict__)\n+ eval(compile_(_ast, fpath, \"exec\"), mod.__dict__)\n return mod\n \n \n@@ -77,7 +81,7 @@\n foo.end_column = 0\n hytree.replace(foo)\n _ast = hy_compile(hytree, root=ast.Expression)\n- return eval(compile(_ast, \"<eval>\", \"eval\"), namespace)\n+ return eval(compile_(_ast, \"<eval>\", \"eval\"), namespace)\n \n \n def write_hy_as_pyc(fname):\n@@ -88,7 +92,7 @@\n timestamp = long(os.stat(fname).st_mtime)\n \n _ast = import_file_to_ast(fname)\n- code = compile(_ast, fname, \"exec\")\n+ code = compile_(_ast, fname, \"exec\")\n cfile = \"%s.pyc\" % fname[:-len(\".hy\")]\n \n with open(cfile, 'wb') as fc:\n", "issue": "Unifying division\nSince we are effectively creating a new language, we might as well fix one of the nastiest Python portability problems between 2 and 3: division. I propose to do an implicit \"from **future** import division\" at the top of any Hy module, as well as before starting the REPL, when running under Python 2.x.\n\n", "before_files": [{"content": "# Copyright (c) 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom py_compile import wr_long, MAGIC\nfrom hy.compiler import hy_compile\nfrom hy.models import HyObject\nfrom hy.core import process\nfrom hy.lex import tokenize\n\n\nfrom io import open\nimport marshal\nimport imp\nimport sys\nimport ast\nimport os\n\n\nif sys.version_info[0] >= 3:\n from io import StringIO\nelse:\n from StringIO import StringIO # NOQA\n\n\ndef import_buffer_to_hst(fd):\n tree = tokenize(fd.read() + \"\\n\")\n tree = process(tree)\n return tree\n\n\ndef import_file_to_hst(fpath):\n return import_buffer_to_hst(open(fpath, 'r', encoding='utf-8'))\n\n\ndef import_file_to_ast(fpath):\n tree = import_file_to_hst(fpath)\n _ast = hy_compile(tree)\n return _ast\n\n\ndef import_string_to_ast(buff):\n tree = import_buffer_to_hst(StringIO(buff))\n _ast = hy_compile(tree)\n return _ast\n\n\ndef import_file_to_module(name, fpath):\n _ast = import_file_to_ast(fpath)\n mod = imp.new_module(name)\n mod.__file__ = fpath\n eval(compile(_ast, fpath, \"exec\"), mod.__dict__)\n return mod\n\n\ndef hy_eval(hytree, namespace):\n foo = HyObject()\n foo.start_line = 0\n foo.end_line = 0\n foo.start_column = 0\n foo.end_column = 0\n hytree.replace(foo)\n _ast = hy_compile(hytree, root=ast.Expression)\n return eval(compile(_ast, \"<eval>\", \"eval\"), namespace)\n\n\ndef write_hy_as_pyc(fname):\n with open(fname, 'U') as f:\n try:\n timestamp = long(os.fstat(f.fileno()).st_mtime)\n except AttributeError:\n timestamp = long(os.stat(fname).st_mtime)\n\n _ast = import_file_to_ast(fname)\n code = compile(_ast, fname, \"exec\")\n cfile = \"%s.pyc\" % fname[:-len(\".hy\")]\n\n with open(cfile, 'wb') as fc:\n fc.write('\\0\\0\\0\\0')\n wr_long(fc, timestamp)\n marshal.dump(code, fc)\n fc.flush()\n fc.seek(0, 0)\n fc.write(MAGIC)\n\n\nclass HyFinder(object):\n def is_package(self, fullname):\n dirpath = \"/\".join(fullname.split(\".\"))\n for pth in sys.path:\n pth = os.path.abspath(pth)\n composed_path = \"%s/%s/__init__.hy\" % (pth, dirpath)\n if os.path.exists(composed_path):\n return True\n return False\n\n def find_on_path(self, fullname):\n fls = [\"%s/__init__.hy\", \"%s.hy\"]\n dirpath = \"/\".join(fullname.split(\".\"))\n\n for pth in sys.path:\n pth = os.path.abspath(pth)\n for fp in fls:\n composed_path = fp % (\"%s/%s\" % (pth, dirpath))\n if os.path.exists(composed_path):\n return composed_path\n\n\nclass MetaLoader(HyFinder):\n def load_module(self, fullname):\n if fullname in sys.modules:\n return sys.modules[fullname]\n\n pth = self.find_on_path(fullname)\n if pth is None:\n return\n\n sys.modules[fullname] = None\n mod = import_file_to_module(fullname, pth)\n\n ispkg = self.is_package(fullname)\n\n mod.__file__ = pth\n mod.__loader__ = self\n mod.__name__ = fullname\n\n if ispkg:\n mod.__path__ = []\n mod.__package__ = fullname\n else:\n mod.__package__ = fullname.rpartition('.')[0]\n\n sys.modules[fullname] = mod\n return mod\n\n\nclass MetaImporter(HyFinder):\n def find_module(self, fullname, path=None):\n pth = self.find_on_path(fullname)\n if pth is None:\n return\n return MetaLoader()\n\n\nsys.meta_path.append(MetaImporter())\nsys.path.insert(0, \"\")\n", "path": "hy/importer.py"}], "after_files": [{"content": "# Copyright (c) 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of 
charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom py_compile import wr_long, MAGIC\nfrom hy.compiler import hy_compile\nfrom hy.models import HyObject\nfrom hy.core import process\nfrom hy.lex import tokenize\n\n\nfrom io import open\nimport marshal\nimport imp\nimport sys\nimport ast\nimport os\nimport __future__\n\nif sys.version_info[0] >= 3:\n from io import StringIO\nelse:\n from StringIO import StringIO # NOQA\n\n\ndef compile_(ast, filename, mode):\n return compile(ast, filename, mode, __future__.CO_FUTURE_DIVISION)\n\n\ndef import_buffer_to_hst(fd):\n tree = tokenize(fd.read() + \"\\n\")\n tree = process(tree)\n return tree\n\n\ndef import_file_to_hst(fpath):\n return import_buffer_to_hst(open(fpath, 'r', encoding='utf-8'))\n\n\ndef import_file_to_ast(fpath):\n tree = import_file_to_hst(fpath)\n _ast = hy_compile(tree)\n return _ast\n\n\ndef import_string_to_ast(buff):\n tree = import_buffer_to_hst(StringIO(buff))\n _ast = hy_compile(tree)\n return _ast\n\n\ndef import_file_to_module(name, fpath):\n _ast = import_file_to_ast(fpath)\n mod = imp.new_module(name)\n mod.__file__ = fpath\n eval(compile_(_ast, fpath, \"exec\"), mod.__dict__)\n return mod\n\n\ndef hy_eval(hytree, namespace):\n foo = HyObject()\n foo.start_line = 0\n foo.end_line = 0\n foo.start_column = 0\n foo.end_column = 0\n hytree.replace(foo)\n _ast = hy_compile(hytree, root=ast.Expression)\n return eval(compile_(_ast, \"<eval>\", \"eval\"), namespace)\n\n\ndef write_hy_as_pyc(fname):\n with open(fname, 'U') as f:\n try:\n timestamp = long(os.fstat(f.fileno()).st_mtime)\n except AttributeError:\n timestamp = long(os.stat(fname).st_mtime)\n\n _ast = import_file_to_ast(fname)\n code = compile_(_ast, fname, \"exec\")\n cfile = \"%s.pyc\" % fname[:-len(\".hy\")]\n\n with open(cfile, 'wb') as fc:\n fc.write('\\0\\0\\0\\0')\n wr_long(fc, timestamp)\n marshal.dump(code, fc)\n fc.flush()\n fc.seek(0, 0)\n fc.write(MAGIC)\n\n\nclass HyFinder(object):\n def is_package(self, fullname):\n dirpath = \"/\".join(fullname.split(\".\"))\n for pth in sys.path:\n pth = os.path.abspath(pth)\n composed_path = \"%s/%s/__init__.hy\" % (pth, dirpath)\n if os.path.exists(composed_path):\n return True\n return False\n\n def find_on_path(self, fullname):\n fls = [\"%s/__init__.hy\", \"%s.hy\"]\n dirpath = \"/\".join(fullname.split(\".\"))\n\n for pth in sys.path:\n pth = os.path.abspath(pth)\n for fp in fls:\n composed_path = fp % (\"%s/%s\" % (pth, dirpath))\n if os.path.exists(composed_path):\n return composed_path\n\n\nclass MetaLoader(HyFinder):\n def 
load_module(self, fullname):\n if fullname in sys.modules:\n return sys.modules[fullname]\n\n pth = self.find_on_path(fullname)\n if pth is None:\n return\n\n sys.modules[fullname] = None\n mod = import_file_to_module(fullname, pth)\n\n ispkg = self.is_package(fullname)\n\n mod.__file__ = pth\n mod.__loader__ = self\n mod.__name__ = fullname\n\n if ispkg:\n mod.__path__ = []\n mod.__package__ = fullname\n else:\n mod.__package__ = fullname.rpartition('.')[0]\n\n sys.modules[fullname] = mod\n return mod\n\n\nclass MetaImporter(HyFinder):\n def find_module(self, fullname, path=None):\n pth = self.find_on_path(fullname)\n if pth is None:\n return\n return MetaLoader()\n\n\nsys.meta_path.append(MetaImporter())\nsys.path.insert(0, \"\")\n", "path": "hy/importer.py"}]} | 1,877 | 415 |
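The fix above funnels every `compile()` call through `compile_`, which passes `__future__.CO_FUTURE_DIVISION` so that code compiled by the Hy importer always uses true division, even on Python 2. A minimal standalone sketch of what that flag does (illustrative only, not part of the hy codebase):

```python
import __future__

# Compile the same source with and without the true-division flag.
# On Python 2, plain compile() yields classic floor division for
# integers; adding CO_FUTURE_DIVISION makes `/` behave as in Python 3.
src = "3 / 2"

plain = eval(compile(src, "<demo>", "eval"))
future = eval(compile(src, "<demo>", "eval", __future__.CO_FUTURE_DIVISION))

print(plain)   # 1 on Python 2, 1.5 on Python 3
print(future)  # 1.5 on both
```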
gh_patches_debug_63306 | rasdani/github-patches | git_diff | scikit-hep__pyhf-363 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
consolidation: add simplemodels to __all__
# Description
It would be nice if the snippet in the README could be shorter.
Right now this is needed:
```
import pyhf
import pyhf.simplemodels
pdf = pyhf.simplemodels.hepdata_like(signal_data=[12.0], bkg_data=[50.0], bkg_uncerts=[3.0])
CLs_obs = pyhf.utils.hypotest(1.0, [51] + pdf.config.auxdata, pdf)
```
whereas if we pre-import `simplemodels` it could be:
```
import pyhf
pdf = pyhf.simplemodels.hepdata_like(signal_data=[12.0], bkg_data=[50.0], bkg_uncerts=[3.0])
CLs_obs = pyhf.utils.hypotest(1.0, [51] + pdf.config.auxdata, pdf)
```
Since `simplemodels.py` doesn't add much code, I don't think pre-importing it would slow things down a lot.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyhf/__init__.py`
Content:
```
1 from .tensor import BackendRetriever as tensor
2 from .optimize import OptimizerRetriever as optimize
3 from .version import __version__
4 from . import events
5
6 tensorlib = tensor.numpy_backend()
7 default_backend = tensorlib
8 optimizer = optimize.scipy_optimizer()
9 default_optimizer = optimizer
10
11
12 def get_backend():
13 """
14 Get the current backend and the associated optimizer
15
16 Example:
17 >>> import pyhf
18 >>> pyhf.get_backend()
19 (<pyhf.tensor.numpy_backend.numpy_backend object at 0x...>, <pyhf.optimize.opt_scipy.scipy_optimizer object at 0x...>)
20
21 Returns:
22 backend, optimizer
23 """
24 global tensorlib
25 global optimizer
26 return tensorlib, optimizer
27
28
29 @events.register('change_backend')
30 def set_backend(backend, custom_optimizer=None):
31 """
32 Set the backend and the associated optimizer
33
34 Example:
35 >>> import pyhf
36 >>> import tensorflow as tf
37 >>> pyhf.set_backend(pyhf.tensor.tensorflow_backend(session=tf.Session()))
38
39 Args:
40 backend: One of the supported pyhf backends: NumPy,
41 TensorFlow, PyTorch, and MXNet
42
43 Returns:
44 None
45 """
46 global tensorlib
47 global optimizer
48
49 # need to determine if the tensorlib changed or the optimizer changed for events
50 tensorlib_changed = bool(backend.name != tensorlib.name)
51 optimizer_changed = False
52
53 if backend.name == 'tensorflow':
54 new_optimizer = (
55 custom_optimizer if custom_optimizer else optimize.tflow_optimizer(backend)
56 )
57 if tensorlib.name == 'tensorflow':
58 tensorlib_changed |= bool(backend.session != tensorlib.session)
59 elif backend.name == 'pytorch':
60 new_optimizer = (
61 custom_optimizer
62 if custom_optimizer
63 else optimize.pytorch_optimizer(tensorlib=backend)
64 )
65 # TODO: Add support for mxnet_optimizer()
66 # elif tensorlib.name == 'mxnet':
67 # new_optimizer = custom_optimizer if custom_optimizer else mxnet_optimizer()
68 else:
69 new_optimizer = (
70 custom_optimizer if custom_optimizer else optimize.scipy_optimizer()
71 )
72
73 optimizer_changed = bool(optimizer != new_optimizer)
74 # set new backend
75 tensorlib = backend
76 optimizer = new_optimizer
77 # trigger events
78 if tensorlib_changed:
79 events.trigger("tensorlib_changed")()
80 if optimizer_changed:
81 events.trigger("optimizer_changed")()
82
83
84 from .pdf import Model
85
86 __all__ = ['Model', 'utils', 'modifiers', '__version__']
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyhf/__init__.py b/pyhf/__init__.py
--- a/pyhf/__init__.py
+++ b/pyhf/__init__.py
@@ -82,5 +82,6 @@
from .pdf import Model
+from . import simplemodels
-__all__ = ['Model', 'utils', 'modifiers', '__version__']
+__all__ = ['Model', 'utils', 'modifiers', 'simplemodels', '__version__']
| {"golden_diff": "diff --git a/pyhf/__init__.py b/pyhf/__init__.py\n--- a/pyhf/__init__.py\n+++ b/pyhf/__init__.py\n@@ -82,5 +82,6 @@\n \n \n from .pdf import Model\n+from . import simplemodels\n \n-__all__ = ['Model', 'utils', 'modifiers', '__version__']\n+__all__ = ['Model', 'utils', 'modifiers', 'simplemodels', '__version__']\n", "issue": "consolidation: add simplemodls to __all__\n# Description\r\n\r\nIt would be nice if the snippet in the README could be shorter:\r\n\r\nright now this is needed\r\n```\r\nimport pyhf\r\nimport pyhf.simplemodels\r\npdf = pyhf.simplemodels.hepdata_like(signal_data=[12.0], bkg_data=[50.0], bkg_uncerts=[3.0])\r\nCLs_obs = pyhf.utils.hypotest(1.0, [51] + pdf.config.auxdata, pdf)\r\n```\r\n\r\nwhereas if we pre-import `simplemodels` it could be \r\n```\r\nimport pyhf\r\npdf = pyhf.simplemodels.hepdata_like(signal_data=[12.0], bkg_data=[50.0], bkg_uncerts=[3.0])\r\nCLs_obs = pyhf.utils.hypotest(1.0, [51] + pdf.config.auxdata, pdf)\r\n```\r\n\r\nsince `simplemodels.py` doesn't add much code, i don't think it would slow down things a lot\n", "before_files": [{"content": "from .tensor import BackendRetriever as tensor\nfrom .optimize import OptimizerRetriever as optimize\nfrom .version import __version__\nfrom . import events\n\ntensorlib = tensor.numpy_backend()\ndefault_backend = tensorlib\noptimizer = optimize.scipy_optimizer()\ndefault_optimizer = optimizer\n\n\ndef get_backend():\n \"\"\"\n Get the current backend and the associated optimizer\n\n Example:\n >>> import pyhf\n >>> pyhf.get_backend()\n (<pyhf.tensor.numpy_backend.numpy_backend object at 0x...>, <pyhf.optimize.opt_scipy.scipy_optimizer object at 0x...>)\n\n Returns:\n backend, optimizer\n \"\"\"\n global tensorlib\n global optimizer\n return tensorlib, optimizer\n\n\[email protected]('change_backend')\ndef set_backend(backend, custom_optimizer=None):\n \"\"\"\n Set the backend and the associated optimizer\n\n Example:\n >>> import pyhf\n >>> import tensorflow as tf\n >>> pyhf.set_backend(pyhf.tensor.tensorflow_backend(session=tf.Session()))\n\n Args:\n backend: One of the supported pyhf backends: NumPy,\n TensorFlow, PyTorch, and MXNet\n\n Returns:\n None\n \"\"\"\n global tensorlib\n global optimizer\n\n # need to determine if the tensorlib changed or the optimizer changed for events\n tensorlib_changed = bool(backend.name != tensorlib.name)\n optimizer_changed = False\n\n if backend.name == 'tensorflow':\n new_optimizer = (\n custom_optimizer if custom_optimizer else optimize.tflow_optimizer(backend)\n )\n if tensorlib.name == 'tensorflow':\n tensorlib_changed |= bool(backend.session != tensorlib.session)\n elif backend.name == 'pytorch':\n new_optimizer = (\n custom_optimizer\n if custom_optimizer\n else optimize.pytorch_optimizer(tensorlib=backend)\n )\n # TODO: Add support for mxnet_optimizer()\n # elif tensorlib.name == 'mxnet':\n # new_optimizer = custom_optimizer if custom_optimizer else mxnet_optimizer()\n else:\n new_optimizer = (\n custom_optimizer if custom_optimizer else optimize.scipy_optimizer()\n )\n\n optimizer_changed = bool(optimizer != new_optimizer)\n # set new backend\n tensorlib = backend\n optimizer = new_optimizer\n # trigger events\n if tensorlib_changed:\n events.trigger(\"tensorlib_changed\")()\n if optimizer_changed:\n events.trigger(\"optimizer_changed\")()\n\n\nfrom .pdf import Model\n\n__all__ = ['Model', 'utils', 'modifiers', '__version__']\n", "path": "pyhf/__init__.py"}], "after_files": [{"content": "from .tensor import BackendRetriever as tensor\nfrom 
.optimize import OptimizerRetriever as optimize\nfrom .version import __version__\nfrom . import events\n\ntensorlib = tensor.numpy_backend()\ndefault_backend = tensorlib\noptimizer = optimize.scipy_optimizer()\ndefault_optimizer = optimizer\n\n\ndef get_backend():\n \"\"\"\n Get the current backend and the associated optimizer\n\n Example:\n >>> import pyhf\n >>> pyhf.get_backend()\n (<pyhf.tensor.numpy_backend.numpy_backend object at 0x...>, <pyhf.optimize.opt_scipy.scipy_optimizer object at 0x...>)\n\n Returns:\n backend, optimizer\n \"\"\"\n global tensorlib\n global optimizer\n return tensorlib, optimizer\n\n\[email protected]('change_backend')\ndef set_backend(backend, custom_optimizer=None):\n \"\"\"\n Set the backend and the associated optimizer\n\n Example:\n >>> import pyhf\n >>> import tensorflow as tf\n >>> pyhf.set_backend(pyhf.tensor.tensorflow_backend(session=tf.Session()))\n\n Args:\n backend: One of the supported pyhf backends: NumPy,\n TensorFlow, PyTorch, and MXNet\n\n Returns:\n None\n \"\"\"\n global tensorlib\n global optimizer\n\n # need to determine if the tensorlib changed or the optimizer changed for events\n tensorlib_changed = bool(backend.name != tensorlib.name)\n optimizer_changed = False\n\n if backend.name == 'tensorflow':\n new_optimizer = (\n custom_optimizer if custom_optimizer else optimize.tflow_optimizer(backend)\n )\n if tensorlib.name == 'tensorflow':\n tensorlib_changed |= bool(backend.session != tensorlib.session)\n elif backend.name == 'pytorch':\n new_optimizer = (\n custom_optimizer\n if custom_optimizer\n else optimize.pytorch_optimizer(tensorlib=backend)\n )\n # TODO: Add support for mxnet_optimizer()\n # elif tensorlib.name == 'mxnet':\n # new_optimizer = custom_optimizer if custom_optimizer else mxnet_optimizer()\n else:\n new_optimizer = (\n custom_optimizer if custom_optimizer else optimize.scipy_optimizer()\n )\n\n optimizer_changed = bool(optimizer != new_optimizer)\n # set new backend\n tensorlib = backend\n optimizer = new_optimizer\n # trigger events\n if tensorlib_changed:\n events.trigger(\"tensorlib_changed\")()\n if optimizer_changed:\n events.trigger(\"optimizer_changed\")()\n\n\nfrom .pdf import Model\nfrom . import simplemodels\n\n__all__ = ['Model', 'utils', 'modifiers', 'simplemodels', '__version__']\n", "path": "pyhf/__init__.py"}]} | 1,190 | 102 |
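The patch exposes the submodule by importing it eagerly in `pyhf/__init__.py`, which is the simplest route and matches the issue's note that `simplemodels.py` is cheap to import. For comparison only, a lazy alternative (hypothetical, not what the patch does) could use a module-level `__getattr__` per PEP 562 on Python 3.7+:

```python
# Hypothetical lazy variant of `from . import simplemodels`,
# shown only to contrast with the eager import the patch uses.
import importlib

def __getattr__(name):
    if name == "simplemodels":
        module = importlib.import_module(".simplemodels", __name__)
        globals()[name] = module  # cache so __getattr__ fires only once
        return module
    raise AttributeError("module %r has no attribute %r" % (__name__, name))
```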
gh_patches_debug_3535 | rasdani/github-patches | git_diff | scikit-image__scikit-image-1206 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
canny edge detection throws AttributeError exception
I was trying out http://scikit-image.org/docs/dev/auto_examples/plot_canny.html
And the following lines of code:
# Generate noisy image of a square
im = np.zeros((128, 128))
im[32:-32, 32:-32] = 1
im = ndimage.rotate(im, 15, mode='constant')
im = ndimage.gaussian_filter(im, 4)
im += 0.2 * np.random.random(im.shape)
# Compute the Canny filter for two values of sigma
edges1 = filter.canny(im)
produced this:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/abc/virtualenvs/activepython/lib/python2.7/site-packages/scikit_image-0.11dev-py2.7-linux-x86_64.egg/skimage/_shared/utils.py", line 46, in __call__
    msg = 'Call to deprecated function `%s`.' % func.__name__
AttributeError: 'numpy.ndarray' object has no attribute '__name__'
Any guess? Seems I'm using the latest version of scikit-image.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/filter/__init__.py`
Content:
```
1 from .lpi_filter import inverse, wiener, LPIFilter2D
2 from ._gaussian import gaussian_filter
3 from .edges import (sobel, hsobel, vsobel, scharr, hscharr, vscharr, prewitt,
4 hprewitt, vprewitt, roberts, roberts_positive_diagonal,
5 roberts_negative_diagonal)
6 from ._rank_order import rank_order
7 from ._gabor import gabor_kernel, gabor_filter
8 from .thresholding import (threshold_adaptive, threshold_otsu, threshold_yen,
9 threshold_isodata)
10 from . import rank
11
12 from skimage._shared.utils import deprecated
13 from skimage import restoration
14 denoise_bilateral = deprecated('skimage.restoration.denoise_bilateral')\
15 (restoration.denoise_bilateral)
16 denoise_tv_bregman = deprecated('skimage.restoration.denoise_tv_bregman')\
17 (restoration.denoise_tv_bregman)
18 denoise_tv_chambolle = deprecated('skimage.restoration.denoise_tv_chambolle')\
19 (restoration.denoise_tv_chambolle)
20
21 # Backward compatibility v<0.11
22 @deprecated
23 def canny(*args, **kwargs):
24 # Hack to avoid circular import
25 from skimage.feature._canny import canny as canny_
26 return canny_(*args, **kwargs)
27
28
29 __all__ = ['inverse',
30 'wiener',
31 'LPIFilter2D',
32 'gaussian_filter',
33 'canny',
34 'sobel',
35 'hsobel',
36 'vsobel',
37 'scharr',
38 'hscharr',
39 'vscharr',
40 'prewitt',
41 'hprewitt',
42 'vprewitt',
43 'roberts',
44 'roberts_positive_diagonal',
45 'roberts_negative_diagonal',
46 'denoise_tv_chambolle',
47 'denoise_bilateral',
48 'denoise_tv_bregman',
49 'rank_order',
50 'gabor_kernel',
51 'gabor_filter',
52 'threshold_adaptive',
53 'threshold_otsu',
54 'threshold_yen',
55 'threshold_isodata',
56 'rank']
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/filter/__init__.py b/skimage/filter/__init__.py
--- a/skimage/filter/__init__.py
+++ b/skimage/filter/__init__.py
@@ -19,7 +19,7 @@
(restoration.denoise_tv_chambolle)
# Backward compatibility v<0.11
-@deprecated
+@deprecated('skimage.feature.canny')
def canny(*args, **kwargs):
# Hack to avoid circular import
from skimage.feature._canny import canny as canny_
| {"golden_diff": "diff --git a/skimage/filter/__init__.py b/skimage/filter/__init__.py\n--- a/skimage/filter/__init__.py\n+++ b/skimage/filter/__init__.py\n@@ -19,7 +19,7 @@\n (restoration.denoise_tv_chambolle)\n \n # Backward compatibility v<0.11\n-@deprecated\n+@deprecated('skimage.feature.canny')\n def canny(*args, **kwargs):\n # Hack to avoid circular import\n from skimage.feature._canny import canny as canny_\n", "issue": "canny edge detection throws AttributeError exception\nI was trying out http://scikit-image.org/docs/dev/auto_examples/plot_canny.html \n\nAnd the following lines of code:\n# Generate noisy image of a square\n\nim = np.zeros((128, 128))\nim[32:-32, 32:-32] = 1\n\nim = ndimage.rotate(im, 15, mode='constant')\nim = ndimage.gaussian_filter(im, 4)\nim += 0.2 \\* np.random.random(im.shape)\n# Compute the Canny filter for two values of sigma\n\nedges1 = filter.canny(im)\n\n produced this:\n\nTraceback (most recent call last):\n File \"<stdin>\", line 2, in <module>\n File \"/home/abc/virtualenvs/activepython/lib/python2.7/site-packages/scikit_image-0.11dev-py2.7-linux-x86_64.egg/skimage/_shared/utils.py\", line 46, in __call__\n msg = 'Call to deprecated function `%s`.' % func.**name**\nAttributeError: 'numpy.ndarray' object has no attribute '**name**'\n\nAny guess? Seems I'm using the latest version of scikit-image. \n\n", "before_files": [{"content": "from .lpi_filter import inverse, wiener, LPIFilter2D\nfrom ._gaussian import gaussian_filter\nfrom .edges import (sobel, hsobel, vsobel, scharr, hscharr, vscharr, prewitt,\n hprewitt, vprewitt, roberts, roberts_positive_diagonal,\n roberts_negative_diagonal)\nfrom ._rank_order import rank_order\nfrom ._gabor import gabor_kernel, gabor_filter\nfrom .thresholding import (threshold_adaptive, threshold_otsu, threshold_yen,\n threshold_isodata)\nfrom . import rank\n\nfrom skimage._shared.utils import deprecated\nfrom skimage import restoration\ndenoise_bilateral = deprecated('skimage.restoration.denoise_bilateral')\\\n (restoration.denoise_bilateral)\ndenoise_tv_bregman = deprecated('skimage.restoration.denoise_tv_bregman')\\\n (restoration.denoise_tv_bregman)\ndenoise_tv_chambolle = deprecated('skimage.restoration.denoise_tv_chambolle')\\\n (restoration.denoise_tv_chambolle)\n\n# Backward compatibility v<0.11\n@deprecated\ndef canny(*args, **kwargs):\n # Hack to avoid circular import\n from skimage.feature._canny import canny as canny_\n return canny_(*args, **kwargs)\n\n\n__all__ = ['inverse',\n 'wiener',\n 'LPIFilter2D',\n 'gaussian_filter',\n 'canny',\n 'sobel',\n 'hsobel',\n 'vsobel',\n 'scharr',\n 'hscharr',\n 'vscharr',\n 'prewitt',\n 'hprewitt',\n 'vprewitt',\n 'roberts',\n 'roberts_positive_diagonal',\n 'roberts_negative_diagonal',\n 'denoise_tv_chambolle',\n 'denoise_bilateral',\n 'denoise_tv_bregman',\n 'rank_order',\n 'gabor_kernel',\n 'gabor_filter',\n 'threshold_adaptive',\n 'threshold_otsu',\n 'threshold_yen',\n 'threshold_isodata',\n 'rank']\n", "path": "skimage/filter/__init__.py"}], "after_files": [{"content": "from .lpi_filter import inverse, wiener, LPIFilter2D\nfrom ._gaussian import gaussian_filter\nfrom .edges import (sobel, hsobel, vsobel, scharr, hscharr, vscharr, prewitt,\n hprewitt, vprewitt, roberts, roberts_positive_diagonal,\n roberts_negative_diagonal)\nfrom ._rank_order import rank_order\nfrom ._gabor import gabor_kernel, gabor_filter\nfrom .thresholding import (threshold_adaptive, threshold_otsu, threshold_yen,\n threshold_isodata)\nfrom . 
import rank\n\nfrom skimage._shared.utils import deprecated\nfrom skimage import restoration\ndenoise_bilateral = deprecated('skimage.restoration.denoise_bilateral')\\\n (restoration.denoise_bilateral)\ndenoise_tv_bregman = deprecated('skimage.restoration.denoise_tv_bregman')\\\n (restoration.denoise_tv_bregman)\ndenoise_tv_chambolle = deprecated('skimage.restoration.denoise_tv_chambolle')\\\n (restoration.denoise_tv_chambolle)\n\n# Backward compatibility v<0.11\n@deprecated('skimage.feature.canny')\ndef canny(*args, **kwargs):\n # Hack to avoid circular import\n from skimage.feature._canny import canny as canny_\n return canny_(*args, **kwargs)\n\n\n__all__ = ['inverse',\n 'wiener',\n 'LPIFilter2D',\n 'gaussian_filter',\n 'canny',\n 'sobel',\n 'hsobel',\n 'vsobel',\n 'scharr',\n 'hscharr',\n 'vscharr',\n 'prewitt',\n 'hprewitt',\n 'vprewitt',\n 'roberts',\n 'roberts_positive_diagonal',\n 'roberts_negative_diagonal',\n 'denoise_tv_chambolle',\n 'denoise_bilateral',\n 'denoise_tv_bregman',\n 'rank_order',\n 'gabor_kernel',\n 'gabor_filter',\n 'threshold_adaptive',\n 'threshold_otsu',\n 'threshold_yen',\n 'threshold_isodata',\n 'rank']\n", "path": "skimage/filter/__init__.py"}]} | 1,114 | 125 |
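Why the traceback mentions an ndarray: `deprecated` here is a decorator factory that expects the replacement's dotted name as its argument. Written bare as `@deprecated`, the `canny` function itself lands where the name string belongs, so the object bound to `filter.canny` is a decorator instance still expecting a function to wrap; the first real call hands it the image array instead, producing `'numpy.ndarray' object has no attribute '__name__'`. A simplified sketch of the two usages (hypothetical decorator, not scikit-image's actual implementation):

```python
import functools
import warnings

def deprecated(alt_name):
    """Decorator factory: must be *called* with the replacement's name."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn("Call to deprecated function `%s`; use `%s` instead."
                          % (func.__name__, alt_name), stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("skimage.feature.canny")  # correct: the factory is called
def canny(*args, **kwargs):
    pass

# @deprecated            # wrong: the factory is never called, so `canny`
# def canny(...): ...    # itself is consumed as `alt_name`, and the image
#                        # passed at call time ends up where a function is
#                        # expected -- producing func.__name__ on an ndarray
```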
gh_patches_debug_31778 | rasdani/github-patches | git_diff | pyqtgraph__pyqtgraph-1326 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add MultiPlotItem to docs
I noticed it's missing in reviewing #1324. Also [MultiPlotWidget](https://pyqtgraph.readthedocs.io/en/latest/widgets/multiplotwidget.html) should link to it and indicate it wraps the item's methods. `MultiPlotItem.plot` could also use a docstring.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyqtgraph/widgets/MultiPlotWidget.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 MultiPlotWidget.py - Convenience class--GraphicsView widget displaying a MultiPlotItem
4 Copyright 2010 Luke Campagnola
5 Distributed under MIT/X11 license. See license.txt for more information.
6 """
7 from ..Qt import QtCore
8 from .GraphicsView import GraphicsView
9 from ..graphicsItems import MultiPlotItem as MultiPlotItem
10
11 __all__ = ['MultiPlotWidget']
12 class MultiPlotWidget(GraphicsView):
13 """Widget implementing a graphicsView with a single MultiPlotItem inside."""
14 def __init__(self, parent=None):
15 self.minPlotHeight = 50
16 self.mPlotItem = MultiPlotItem.MultiPlotItem()
17 GraphicsView.__init__(self, parent)
18 self.enableMouse(False)
19 self.setCentralItem(self.mPlotItem)
20 ## Explicitly wrap methods from mPlotItem
21 #for m in ['setData']:
22 #setattr(self, m, getattr(self.mPlotItem, m))
23 self.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)
24 self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)
25
26 def __getattr__(self, attr): ## implicitly wrap methods from plotItem
27 if hasattr(self.mPlotItem, attr):
28 m = getattr(self.mPlotItem, attr)
29 if hasattr(m, '__call__'):
30 return m
31 raise AttributeError(attr)
32
33 def setMinimumPlotHeight(self, min):
34 """Set the minimum height for each sub-plot displayed.
35
36 If the total height of all plots is greater than the height of the
37 widget, then a scroll bar will appear to provide access to the entire
38 set of plots.
39
40 Added in version 0.9.9
41 """
42 self.minPlotHeight = min
43 self.resizeEvent(None)
44
45 def widgetGroupInterface(self):
46 return (None, MultiPlotWidget.saveState, MultiPlotWidget.restoreState)
47
48 def saveState(self):
49 return {}
50 #return self.plotItem.saveState()
51
52 def restoreState(self, state):
53 pass
54 #return self.plotItem.restoreState(state)
55
56 def close(self):
57 self.mPlotItem.close()
58 self.mPlotItem = None
59 self.setParent(None)
60 GraphicsView.close(self)
61
62 def setRange(self, *args, **kwds):
63 GraphicsView.setRange(self, *args, **kwds)
64 if self.centralWidget is not None:
65 r = self.range
66 minHeight = len(self.mPlotItem.plots) * self.minPlotHeight
67 if r.height() < minHeight:
68 r.setHeight(minHeight)
69 r.setWidth(r.width() - self.verticalScrollBar().width())
70 self.centralWidget.setGeometry(r)
71
72 def resizeEvent(self, ev):
73 if self.closed:
74 return
75 if self.autoPixelRange:
76 self.range = QtCore.QRectF(0, 0, self.size().width(), self.size().height())
77 MultiPlotWidget.setRange(self, self.range, padding=0, disableAutoPixel=False) ## we do this because some subclasses like to redefine setRange in an incompatible way.
78 self.updateMatrix()
79
```
Path: `pyqtgraph/graphicsItems/MultiPlotItem.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 MultiPlotItem.py - Graphics item used for displaying an array of PlotItems
4 Copyright 2010 Luke Campagnola
5 Distributed under MIT/X11 license. See license.txt for more information.
6 """
7 from numpy import ndarray
8 from . import GraphicsLayout
9 from ..metaarray import *
10
11
12 __all__ = ['MultiPlotItem']
13 class MultiPlotItem(GraphicsLayout.GraphicsLayout):
14 """
15 Automatically generates a grid of plots from a multi-dimensional array
16 """
17 def __init__(self, *args, **kwds):
18 GraphicsLayout.GraphicsLayout.__init__(self, *args, **kwds)
19 self.plots = []
20
21
22 def plot(self, data, **plotArgs):
23 #self.layout.clear()
24
25 if hasattr(data, 'implements') and data.implements('MetaArray'):
26 if data.ndim != 2:
27 raise Exception("MultiPlot currently only accepts 2D MetaArray.")
28 ic = data.infoCopy()
29 ax = 0
30 for i in [0, 1]:
31 if 'cols' in ic[i]:
32 ax = i
33 break
34 #print "Plotting using axis %d as columns (%d plots)" % (ax, data.shape[ax])
35 for i in range(data.shape[ax]):
36 pi = self.addPlot()
37 self.nextRow()
38 sl = [slice(None)] * 2
39 sl[ax] = i
40 pi.plot(data[tuple(sl)], **plotArgs)
41 #self.layout.addItem(pi, i, 0)
42 self.plots.append((pi, i, 0))
43 info = ic[ax]['cols'][i]
44 title = info.get('title', info.get('name', None))
45 units = info.get('units', None)
46 pi.setLabel('left', text=title, units=units)
47 info = ic[1-ax]
48 title = info.get('title', info.get('name', None))
49 units = info.get('units', None)
50 pi.setLabel('bottom', text=title, units=units)
51 else:
52 raise Exception("Data type %s not (yet?) supported for MultiPlot." % type(data))
53
54 def close(self):
55 for p in self.plots:
56 p[0].close()
57 self.plots = None
58 self.clear()
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyqtgraph/graphicsItems/MultiPlotItem.py b/pyqtgraph/graphicsItems/MultiPlotItem.py
--- a/pyqtgraph/graphicsItems/MultiPlotItem.py
+++ b/pyqtgraph/graphicsItems/MultiPlotItem.py
@@ -4,22 +4,33 @@
Copyright 2010 Luke Campagnola
Distributed under MIT/X11 license. See license.txt for more information.
"""
-from numpy import ndarray
from . import GraphicsLayout
from ..metaarray import *
-
__all__ = ['MultiPlotItem']
+
+
class MultiPlotItem(GraphicsLayout.GraphicsLayout):
"""
- Automatically generates a grid of plots from a multi-dimensional array
+ :class:`~pyqtgraph.GraphicsLayout` that automatically generates a grid of
+ plots from a MetaArray.
+
+ .. seealso:: :class:`~pyqtgraph.MultiPlotWidget`: Widget containing a MultiPlotItem
"""
+
def __init__(self, *args, **kwds):
GraphicsLayout.GraphicsLayout.__init__(self, *args, **kwds)
self.plots = []
-
def plot(self, data, **plotArgs):
+ """Plot the data from a MetaArray with each array column as a separate
+ :class:`~pyqtgraph.PlotItem`.
+
+ Axis labels are automatically extracted from the array info.
+
+ ``plotArgs`` are passed to :meth:`PlotItem.plot
+ <pyqtgraph.PlotItem.plot>`.
+ """
#self.layout.clear()
if hasattr(data, 'implements') and data.implements('MetaArray'):
diff --git a/pyqtgraph/widgets/MultiPlotWidget.py b/pyqtgraph/widgets/MultiPlotWidget.py
--- a/pyqtgraph/widgets/MultiPlotWidget.py
+++ b/pyqtgraph/widgets/MultiPlotWidget.py
@@ -10,7 +10,8 @@
__all__ = ['MultiPlotWidget']
class MultiPlotWidget(GraphicsView):
- """Widget implementing a graphicsView with a single MultiPlotItem inside."""
+ """Widget implementing a :class:`~pyqtgraph.GraphicsView` with a single
+ :class:`~pyqtgraph.MultiPlotItem` inside."""
def __init__(self, parent=None):
self.minPlotHeight = 50
self.mPlotItem = MultiPlotItem.MultiPlotItem()
| {"golden_diff": "diff --git a/pyqtgraph/graphicsItems/MultiPlotItem.py b/pyqtgraph/graphicsItems/MultiPlotItem.py\n--- a/pyqtgraph/graphicsItems/MultiPlotItem.py\n+++ b/pyqtgraph/graphicsItems/MultiPlotItem.py\n@@ -4,22 +4,33 @@\n Copyright 2010 Luke Campagnola\n Distributed under MIT/X11 license. See license.txt for more information.\n \"\"\"\n-from numpy import ndarray\n from . import GraphicsLayout\n from ..metaarray import *\n \n-\n __all__ = ['MultiPlotItem']\n+\n+\n class MultiPlotItem(GraphicsLayout.GraphicsLayout):\n \"\"\"\n- Automatically generates a grid of plots from a multi-dimensional array\n+ :class:`~pyqtgraph.GraphicsLayout` that automatically generates a grid of\n+ plots from a MetaArray.\n+\n+ .. seealso:: :class:`~pyqtgraph.MultiPlotWidget`: Widget containing a MultiPlotItem\n \"\"\"\n+\n def __init__(self, *args, **kwds):\n GraphicsLayout.GraphicsLayout.__init__(self, *args, **kwds)\n self.plots = []\n \n-\n def plot(self, data, **plotArgs):\n+ \"\"\"Plot the data from a MetaArray with each array column as a separate\n+ :class:`~pyqtgraph.PlotItem`.\n+\n+ Axis labels are automatically extracted from the array info.\n+\n+ ``plotArgs`` are passed to :meth:`PlotItem.plot\n+ <pyqtgraph.PlotItem.plot>`.\n+ \"\"\"\n #self.layout.clear()\n \n if hasattr(data, 'implements') and data.implements('MetaArray'):\ndiff --git a/pyqtgraph/widgets/MultiPlotWidget.py b/pyqtgraph/widgets/MultiPlotWidget.py\n--- a/pyqtgraph/widgets/MultiPlotWidget.py\n+++ b/pyqtgraph/widgets/MultiPlotWidget.py\n@@ -10,7 +10,8 @@\n \n __all__ = ['MultiPlotWidget']\n class MultiPlotWidget(GraphicsView):\n- \"\"\"Widget implementing a graphicsView with a single MultiPlotItem inside.\"\"\"\n+ \"\"\"Widget implementing a :class:`~pyqtgraph.GraphicsView` with a single\n+ :class:`~pyqtgraph.MultiPlotItem` inside.\"\"\"\n def __init__(self, parent=None):\n self.minPlotHeight = 50\n self.mPlotItem = MultiPlotItem.MultiPlotItem()\n", "issue": "Add MultiPlotItem to docs\nI noticed it's missing in reviewing #1324. Also [MultiPlotWidget](https://pyqtgraph.readthedocs.io/en/latest/widgets/multiplotwidget.html) should link to it and indicate it wraps the item's methods. `MultiPlotItem.plot` could also use a docstring.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nMultiPlotWidget.py - Convenience class--GraphicsView widget displaying a MultiPlotItem\nCopyright 2010 Luke Campagnola\nDistributed under MIT/X11 license. See license.txt for more information.\n\"\"\"\nfrom ..Qt import QtCore\nfrom .GraphicsView import GraphicsView\nfrom ..graphicsItems import MultiPlotItem as MultiPlotItem\n\n__all__ = ['MultiPlotWidget']\nclass MultiPlotWidget(GraphicsView):\n \"\"\"Widget implementing a graphicsView with a single MultiPlotItem inside.\"\"\"\n def __init__(self, parent=None):\n self.minPlotHeight = 50\n self.mPlotItem = MultiPlotItem.MultiPlotItem()\n GraphicsView.__init__(self, parent)\n self.enableMouse(False)\n self.setCentralItem(self.mPlotItem)\n ## Explicitly wrap methods from mPlotItem\n #for m in ['setData']:\n #setattr(self, m, getattr(self.mPlotItem, m))\n self.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)\n self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)\n \n def __getattr__(self, attr): ## implicitly wrap methods from plotItem\n if hasattr(self.mPlotItem, attr):\n m = getattr(self.mPlotItem, attr)\n if hasattr(m, '__call__'):\n return m\n raise AttributeError(attr)\n\n def setMinimumPlotHeight(self, min):\n \"\"\"Set the minimum height for each sub-plot displayed. 
\n \n If the total height of all plots is greater than the height of the \n widget, then a scroll bar will appear to provide access to the entire\n set of plots.\n \n Added in version 0.9.9\n \"\"\"\n self.minPlotHeight = min\n self.resizeEvent(None)\n\n def widgetGroupInterface(self):\n return (None, MultiPlotWidget.saveState, MultiPlotWidget.restoreState)\n\n def saveState(self):\n return {}\n #return self.plotItem.saveState()\n \n def restoreState(self, state):\n pass\n #return self.plotItem.restoreState(state)\n\n def close(self):\n self.mPlotItem.close()\n self.mPlotItem = None\n self.setParent(None)\n GraphicsView.close(self)\n\n def setRange(self, *args, **kwds):\n GraphicsView.setRange(self, *args, **kwds)\n if self.centralWidget is not None:\n r = self.range\n minHeight = len(self.mPlotItem.plots) * self.minPlotHeight\n if r.height() < minHeight:\n r.setHeight(minHeight)\n r.setWidth(r.width() - self.verticalScrollBar().width())\n self.centralWidget.setGeometry(r)\n\n def resizeEvent(self, ev):\n if self.closed:\n return\n if self.autoPixelRange:\n self.range = QtCore.QRectF(0, 0, self.size().width(), self.size().height())\n MultiPlotWidget.setRange(self, self.range, padding=0, disableAutoPixel=False) ## we do this because some subclasses like to redefine setRange in an incompatible way.\n self.updateMatrix()\n", "path": "pyqtgraph/widgets/MultiPlotWidget.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nMultiPlotItem.py - Graphics item used for displaying an array of PlotItems\nCopyright 2010 Luke Campagnola\nDistributed under MIT/X11 license. See license.txt for more information.\n\"\"\"\nfrom numpy import ndarray\nfrom . import GraphicsLayout\nfrom ..metaarray import *\n\n\n__all__ = ['MultiPlotItem']\nclass MultiPlotItem(GraphicsLayout.GraphicsLayout):\n \"\"\"\n Automatically generates a grid of plots from a multi-dimensional array\n \"\"\"\n def __init__(self, *args, **kwds):\n GraphicsLayout.GraphicsLayout.__init__(self, *args, **kwds)\n self.plots = []\n\n\n def plot(self, data, **plotArgs):\n #self.layout.clear()\n\n if hasattr(data, 'implements') and data.implements('MetaArray'):\n if data.ndim != 2:\n raise Exception(\"MultiPlot currently only accepts 2D MetaArray.\")\n ic = data.infoCopy()\n ax = 0\n for i in [0, 1]:\n if 'cols' in ic[i]:\n ax = i\n break\n #print \"Plotting using axis %d as columns (%d plots)\" % (ax, data.shape[ax])\n for i in range(data.shape[ax]):\n pi = self.addPlot()\n self.nextRow()\n sl = [slice(None)] * 2\n sl[ax] = i\n pi.plot(data[tuple(sl)], **plotArgs)\n #self.layout.addItem(pi, i, 0)\n self.plots.append((pi, i, 0))\n info = ic[ax]['cols'][i]\n title = info.get('title', info.get('name', None))\n units = info.get('units', None)\n pi.setLabel('left', text=title, units=units)\n info = ic[1-ax]\n title = info.get('title', info.get('name', None))\n units = info.get('units', None)\n pi.setLabel('bottom', text=title, units=units)\n else:\n raise Exception(\"Data type %s not (yet?) supported for MultiPlot.\" % type(data))\n\n def close(self):\n for p in self.plots:\n p[0].close()\n self.plots = None\n self.clear()\n", "path": "pyqtgraph/graphicsItems/MultiPlotItem.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nMultiPlotWidget.py - Convenience class--GraphicsView widget displaying a MultiPlotItem\nCopyright 2010 Luke Campagnola\nDistributed under MIT/X11 license. 
See license.txt for more information.\n\"\"\"\nfrom ..Qt import QtCore\nfrom .GraphicsView import GraphicsView\nfrom ..graphicsItems import MultiPlotItem as MultiPlotItem\n\n__all__ = ['MultiPlotWidget']\nclass MultiPlotWidget(GraphicsView):\n \"\"\"Widget implementing a :class:`~pyqtgraph.GraphicsView` with a single\n :class:`~pyqtgraph.MultiPlotItem` inside.\"\"\"\n def __init__(self, parent=None):\n self.minPlotHeight = 50\n self.mPlotItem = MultiPlotItem.MultiPlotItem()\n GraphicsView.__init__(self, parent)\n self.enableMouse(False)\n self.setCentralItem(self.mPlotItem)\n ## Explicitly wrap methods from mPlotItem\n #for m in ['setData']:\n #setattr(self, m, getattr(self.mPlotItem, m))\n self.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)\n self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)\n \n def __getattr__(self, attr): ## implicitly wrap methods from plotItem\n if hasattr(self.mPlotItem, attr):\n m = getattr(self.mPlotItem, attr)\n if hasattr(m, '__call__'):\n return m\n raise AttributeError(attr)\n\n def setMinimumPlotHeight(self, min):\n \"\"\"Set the minimum height for each sub-plot displayed. \n \n If the total height of all plots is greater than the height of the \n widget, then a scroll bar will appear to provide access to the entire\n set of plots.\n \n Added in version 0.9.9\n \"\"\"\n self.minPlotHeight = min\n self.resizeEvent(None)\n\n def widgetGroupInterface(self):\n return (None, MultiPlotWidget.saveState, MultiPlotWidget.restoreState)\n\n def saveState(self):\n return {}\n #return self.plotItem.saveState()\n \n def restoreState(self, state):\n pass\n #return self.plotItem.restoreState(state)\n\n def close(self):\n self.mPlotItem.close()\n self.mPlotItem = None\n self.setParent(None)\n GraphicsView.close(self)\n\n def setRange(self, *args, **kwds):\n GraphicsView.setRange(self, *args, **kwds)\n if self.centralWidget is not None:\n r = self.range\n minHeight = len(self.mPlotItem.plots) * self.minPlotHeight\n if r.height() < minHeight:\n r.setHeight(minHeight)\n r.setWidth(r.width() - self.verticalScrollBar().width())\n self.centralWidget.setGeometry(r)\n\n def resizeEvent(self, ev):\n if self.closed:\n return\n if self.autoPixelRange:\n self.range = QtCore.QRectF(0, 0, self.size().width(), self.size().height())\n MultiPlotWidget.setRange(self, self.range, padding=0, disableAutoPixel=False) ## we do this because some subclasses like to redefine setRange in an incompatible way.\n self.updateMatrix()\n", "path": "pyqtgraph/widgets/MultiPlotWidget.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nMultiPlotItem.py - Graphics item used for displaying an array of PlotItems\nCopyright 2010 Luke Campagnola\nDistributed under MIT/X11 license. See license.txt for more information.\n\"\"\"\nfrom . import GraphicsLayout\nfrom ..metaarray import *\n\n__all__ = ['MultiPlotItem']\n\n\nclass MultiPlotItem(GraphicsLayout.GraphicsLayout):\n \"\"\"\n :class:`~pyqtgraph.GraphicsLayout` that automatically generates a grid of\n plots from a MetaArray.\n\n .. 
seealso:: :class:`~pyqtgraph.MultiPlotWidget`: Widget containing a MultiPlotItem\n \"\"\"\n\n def __init__(self, *args, **kwds):\n GraphicsLayout.GraphicsLayout.__init__(self, *args, **kwds)\n self.plots = []\n\n def plot(self, data, **plotArgs):\n \"\"\"Plot the data from a MetaArray with each array column as a separate\n :class:`~pyqtgraph.PlotItem`.\n\n Axis labels are automatically extracted from the array info.\n\n ``plotArgs`` are passed to :meth:`PlotItem.plot\n <pyqtgraph.PlotItem.plot>`.\n \"\"\"\n #self.layout.clear()\n\n if hasattr(data, 'implements') and data.implements('MetaArray'):\n if data.ndim != 2:\n raise Exception(\"MultiPlot currently only accepts 2D MetaArray.\")\n ic = data.infoCopy()\n ax = 0\n for i in [0, 1]:\n if 'cols' in ic[i]:\n ax = i\n break\n #print \"Plotting using axis %d as columns (%d plots)\" % (ax, data.shape[ax])\n for i in range(data.shape[ax]):\n pi = self.addPlot()\n self.nextRow()\n sl = [slice(None)] * 2\n sl[ax] = i\n pi.plot(data[tuple(sl)], **plotArgs)\n #self.layout.addItem(pi, i, 0)\n self.plots.append((pi, i, 0))\n info = ic[ax]['cols'][i]\n title = info.get('title', info.get('name', None))\n units = info.get('units', None)\n pi.setLabel('left', text=title, units=units)\n info = ic[1-ax]\n title = info.get('title', info.get('name', None))\n units = info.get('units', None)\n pi.setLabel('bottom', text=title, units=units)\n else:\n raise Exception(\"Data type %s not (yet?) supported for MultiPlot.\" % type(data))\n\n def close(self):\n for p in self.plots:\n p[0].close()\n self.plots = None\n self.clear()\n", "path": "pyqtgraph/graphicsItems/MultiPlotItem.py"}]} | 1,776 | 511 |
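A documentation detail worth keeping in mind for the cross-references added above: `MultiPlotWidget` exposes the item's API through `__getattr__`, forwarding any callable attribute lookup to its `mPlotItem`, so methods documented on `MultiPlotItem` are reachable directly on the widget. A stripped-down sketch of that delegation pattern (illustrative only):

```python
class Inner:
    def plot(self, data):
        print("plotting", data)

class Wrapper:
    def __init__(self):
        self.inner = Inner()

    def __getattr__(self, attr):
        # Invoked only when normal attribute lookup fails;
        # forward callables to the wrapped object, as MultiPlotWidget does.
        m = getattr(self.inner, attr, None)
        if callable(m):
            return m
        raise AttributeError(attr)

w = Wrapper()
w.plot([1, 2, 3])  # resolved on Inner via Wrapper.__getattr__
```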
gh_patches_debug_33409 | rasdani/github-patches | git_diff | bridgecrewio__checkov-4614 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False positive for tests in bicep: CKV_AZURE_4, CKV_AZURE_5, CKV_AZURE_7, CKV_AZURE_8
**Describe the issue**
False positives are raised for checks CKV_AZURE_4, CKV_AZURE_5, CKV_AZURE_7 and CKV_AZURE_8.
**Examples**
Here is the Bicep code, shortened for brevity:
```
resource aks 'Microsoft.ContainerService/managedClusters@2022-11-01' = {
  properties: {
    enableRBAC: true
    addonProfiles: {
      omsAgent: {
        enabled: true
        config: {
          logAnalyticsWorkspaceResourceId: logAnalyticsWorkspaceId
        }
      }
      azurePolicy: {
        enabled: true
        config: {
          version: 'v2'
        }
      }
      kubeDashboard: {
        enabled: false
      }
    }
    networkProfile: {
      networkPlugin: 'azure'
      networkPolicy: 'azure'
    }
  }
}
```
**Version (please complete the following information):**
- checkov v2.1.244
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/arm/checks/resource/AKSLoggingEnabled.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.arm.base_resource_check import BaseResourceCheck
3
4
5 class AKSLoggingEnabled(BaseResourceCheck):
6 def __init__(self):
7 # apiVersion 2017-08-03 = Fail - No addonProfiles option to configure
8 name = "Ensure AKS logging to Azure Monitoring is Configured"
9 id = "CKV_AZURE_4"
10 supported_resources = ['Microsoft.ContainerService/managedClusters']
11 categories = [CheckCategories.KUBERNETES]
12 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
13
14 def scan_resource_conf(self, conf):
15 if conf.get("apiVersion"):
16 if conf["apiVersion"] == "2017-08-31":
17 # No addonProfiles option to configure
18 return CheckResult.FAILED
19
20 properties = conf.get("properties")
21 if isinstance(properties, dict):
22 addon_profiles = properties.get("addonProfiles")
23 if isinstance(addon_profiles, dict):
24 omsagent = addon_profiles.get("omsagent")
25 if isinstance(omsagent, dict) and omsagent.get("enabled"):
26 return CheckResult.PASSED
27
28 return CheckResult.FAILED
29
30
31 check = AKSLoggingEnabled()
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/arm/checks/resource/AKSLoggingEnabled.py b/checkov/arm/checks/resource/AKSLoggingEnabled.py
--- a/checkov/arm/checks/resource/AKSLoggingEnabled.py
+++ b/checkov/arm/checks/resource/AKSLoggingEnabled.py
@@ -1,18 +1,22 @@
+from __future__ import annotations
+
+from typing import Any
+
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.arm.base_resource_check import BaseResourceCheck
class AKSLoggingEnabled(BaseResourceCheck):
- def __init__(self):
+ def __init__(self) -> None:
# apiVersion 2017-08-03 = Fail - No addonProfiles option to configure
name = "Ensure AKS logging to Azure Monitoring is Configured"
id = "CKV_AZURE_4"
- supported_resources = ['Microsoft.ContainerService/managedClusters']
- categories = [CheckCategories.KUBERNETES]
+ supported_resources = ("Microsoft.ContainerService/managedClusters",)
+ categories = (CheckCategories.KUBERNETES,)
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
- def scan_resource_conf(self, conf):
- if conf.get("apiVersion"):
+ def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:
+ if "apiVersion" in conf:
if conf["apiVersion"] == "2017-08-31":
# No addonProfiles option to configure
return CheckResult.FAILED
@@ -22,6 +26,10 @@
addon_profiles = properties.get("addonProfiles")
if isinstance(addon_profiles, dict):
omsagent = addon_profiles.get("omsagent")
+ if not omsagent:
+ # it can be written in lowercase or camelCase
+ omsagent = addon_profiles.get("omsAgent")
+
if isinstance(omsagent, dict) and omsagent.get("enabled"):
return CheckResult.PASSED
| {"golden_diff": "diff --git a/checkov/arm/checks/resource/AKSLoggingEnabled.py b/checkov/arm/checks/resource/AKSLoggingEnabled.py\n--- a/checkov/arm/checks/resource/AKSLoggingEnabled.py\n+++ b/checkov/arm/checks/resource/AKSLoggingEnabled.py\n@@ -1,18 +1,22 @@\n+from __future__ import annotations\n+\n+from typing import Any\n+\n from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.arm.base_resource_check import BaseResourceCheck\n \n \n class AKSLoggingEnabled(BaseResourceCheck):\n- def __init__(self):\n+ def __init__(self) -> None:\n # apiVersion 2017-08-03 = Fail - No addonProfiles option to configure\n name = \"Ensure AKS logging to Azure Monitoring is Configured\"\n id = \"CKV_AZURE_4\"\n- supported_resources = ['Microsoft.ContainerService/managedClusters']\n- categories = [CheckCategories.KUBERNETES]\n+ supported_resources = (\"Microsoft.ContainerService/managedClusters\",)\n+ categories = (CheckCategories.KUBERNETES,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n- def scan_resource_conf(self, conf):\n- if conf.get(\"apiVersion\"):\n+ def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:\n+ if \"apiVersion\" in conf:\n if conf[\"apiVersion\"] == \"2017-08-31\":\n # No addonProfiles option to configure\n return CheckResult.FAILED\n@@ -22,6 +26,10 @@\n addon_profiles = properties.get(\"addonProfiles\")\n if isinstance(addon_profiles, dict):\n omsagent = addon_profiles.get(\"omsagent\")\n+ if not omsagent:\n+ # it can be written in lowercase or camelCase\n+ omsagent = addon_profiles.get(\"omsAgent\")\n+\n if isinstance(omsagent, dict) and omsagent.get(\"enabled\"):\n return CheckResult.PASSED\n", "issue": "False positive for tests in bicep: CKV_AZURE_4, CKV_AZURE_5, CKV_AZURE_7, CKV_AZURE_8\n**Describe the issue**\r\nfalse positive for tests CKV_AZURE_4, CKV_AZURE_5, CKV_AZURE_7, CKV_AZURE_8\r\n\r\n\r\n**Examples**\r\nHere is the bicep code\r\n Code made shorter.\r\n```\r\nresource aks 'Microsoft.ContainerService/managedClusters@2022-11-01' = {\r\n properties: {\r\n enableRBAC: true\r\n }\r\n addonProfiles: {\r\n omsAgent: {\r\n enabled: true\r\n config: {\r\n logAnalyticsWorkspaceResourceId: logAnalyticsWorkspaceId\r\n }\r\n }\r\n azurePolicy: {\r\n enabled: true\r\n config: {\r\n version: 'v2'\r\n }\r\n }\r\n kubeDashboard: {\r\n enabled: false\r\n }\r\n }\r\n networkProfile: {\r\n networkPlugin: 'azure'\r\n networkPolicy: 'azure'\r\n }\r\n }\r\n}\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - checkov v2.1.244\r\n\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.arm.base_resource_check import BaseResourceCheck\n\n\nclass AKSLoggingEnabled(BaseResourceCheck):\n def __init__(self):\n # apiVersion 2017-08-03 = Fail - No addonProfiles option to configure\n name = \"Ensure AKS logging to Azure Monitoring is Configured\"\n id = \"CKV_AZURE_4\"\n supported_resources = ['Microsoft.ContainerService/managedClusters']\n categories = [CheckCategories.KUBERNETES]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if conf.get(\"apiVersion\"):\n if conf[\"apiVersion\"] == \"2017-08-31\":\n # No addonProfiles option to configure\n return CheckResult.FAILED\n\n properties = conf.get(\"properties\")\n if isinstance(properties, dict):\n addon_profiles = properties.get(\"addonProfiles\")\n if 
isinstance(addon_profiles, dict):\n omsagent = addon_profiles.get(\"omsagent\")\n if isinstance(omsagent, dict) and omsagent.get(\"enabled\"):\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n\ncheck = AKSLoggingEnabled()\n", "path": "checkov/arm/checks/resource/AKSLoggingEnabled.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.arm.base_resource_check import BaseResourceCheck\n\n\nclass AKSLoggingEnabled(BaseResourceCheck):\n def __init__(self) -> None:\n # apiVersion 2017-08-03 = Fail - No addonProfiles option to configure\n name = \"Ensure AKS logging to Azure Monitoring is Configured\"\n id = \"CKV_AZURE_4\"\n supported_resources = (\"Microsoft.ContainerService/managedClusters\",)\n categories = (CheckCategories.KUBERNETES,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: dict[str, Any]) -> CheckResult:\n if \"apiVersion\" in conf:\n if conf[\"apiVersion\"] == \"2017-08-31\":\n # No addonProfiles option to configure\n return CheckResult.FAILED\n\n properties = conf.get(\"properties\")\n if isinstance(properties, dict):\n addon_profiles = properties.get(\"addonProfiles\")\n if isinstance(addon_profiles, dict):\n omsagent = addon_profiles.get(\"omsagent\")\n if not omsagent:\n # it can be written in lowercase or camelCase\n omsagent = addon_profiles.get(\"omsAgent\")\n\n if isinstance(omsagent, dict) and omsagent.get(\"enabled\"):\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n\ncheck = AKSLoggingEnabled()\n", "path": "checkov/arm/checks/resource/AKSLoggingEnabled.py"}]} | 865 | 454 |
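The merged fix probes both spellings explicitly. Since ARM/Bicep property names are compared case-insensitively in practice, a more general variant (hypothetical, not what the patch does) would normalize the keys once; the dict below mirrors the `properties -> addonProfiles -> omsAgent` shape the check walks:

```python
# Hypothetical generalization of the fix: one case-insensitive lookup
# instead of probing "omsagent" and "omsAgent" separately.
def get_ci(mapping, key):
    """Return the value whose key matches case-insensitively, else None."""
    if not isinstance(mapping, dict):
        return None
    lowered = {k.lower(): v for k, v in mapping.items()}
    return lowered.get(key.lower())

conf = {
    "properties": {
        "addonProfiles": {
            "omsAgent": {"enabled": True},  # camelCase, as in the issue
        }
    }
}

omsagent = get_ci(conf["properties"]["addonProfiles"], "omsagent")
assert omsagent and omsagent.get("enabled")  # passes -> no false positive
```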
gh_patches_debug_18581 | rasdani/github-patches | git_diff | scikit-image__scikit-image-219 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
message about unit tests not available
If scikits-image is installed without nose, a message about unit tests not being available is printed on import, which is a little distracting to the casual end-user.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/__init__.py`
Content:
```
1 """Image Processing SciKit (Toolbox for SciPy)
2
3 ``scikits-image`` (a.k.a. ``skimage``) is a collection of algorithms for image
4 processing and computer vision.
5
6 The main package of ``skimage`` only provides a few utilities for converting
7 between image data types; for most features, you need to import one of the
8 following subpackages:
9
10 Subpackages
11 -----------
12 color
13 Color space conversion.
14 data
15 Test images and example data.
16 draw
17 Image drawing primitives (lines, text, etc.).
18 exposure
19 Image intensity adjustment (e.g., histogram equalization).
20 feature
21 Feature detection (e.g. texture analysis, corners, etc.).
22 filter
23 Sharpening, edge finding, denoising, etc.
24 graph
25 Graph-theoretic operations, e.g. dynamic programming (shortest paths).
26 io
27 Reading, saving, and displaying images and video.
28 measure
29 Measurement of image properties, e.g., similarity and contours.
30 morphology
31 Morphological operations, e.g. opening or skeletonization.
32 segmentation
33 Splitting an image into self-similar regions.
34 transform
35 Geometric and other transforms, e.g. rotation or the Radon transform.
36 util
37 Generic utilities.
38
39 Utility Functions
40 -----------------
41 get_log
42 Returns the ``skimage`` log. Use this to print debug output.
43 img_as_float
44 Convert an image to floating point format, with values in [0, 1].
45 img_as_uint
46 Convert an image to unsigned integer format, with values in [0, 65535].
47 img_as_int
48 Convert an image to signed integer format, with values in [-32768, 32767].
49 img_as_ubyte
50 Convert an image to unsigned byte format, with values in [0, 255].
51
52 """
53
54 import os.path as _osp
55
56 pkg_dir = _osp.abspath(_osp.dirname(__file__))
57 data_dir = _osp.join(pkg_dir, 'data')
58
59 try:
60 from .version import version as __version__
61 except ImportError:
62 __version__ = "unbuilt-dev"
63
64
65 def _setup_test(verbose=False):
66 import functools
67
68 args = ['', '--exe', '-w', pkg_dir]
69 if verbose:
70 args.extend(['-v', '-s'])
71
72 try:
73 import nose as _nose
74 except ImportError:
75 print("Could not load nose. Unit tests not available.")
76 return None
77 else:
78 f = functools.partial(_nose.run, 'skimage', argv=args)
79 f.__doc__ = 'Invoke the skimage test suite.'
80 return f
81
82 test = _setup_test()
83 if test is None:
84 try:
85 del test
86 except NameError:
87 pass
88
89 test_verbose = _setup_test(verbose=True)
90 if test_verbose is None:
91 try:
92 del test
93 except NameError:
94 pass
95
96
97 def get_log(name=None):
98 """Return a console logger.
99
100 Output may be sent to the logger using the `debug`, `info`, `warning`,
101 `error` and `critical` methods.
102
103 Parameters
104 ----------
105 name : str
106 Name of the log.
107
108 References
109 ----------
110 .. [1] Logging facility for Python,
111 http://docs.python.org/library/logging.html
112
113 """
114 import logging
115
116 if name is None:
117 name = 'skimage'
118 else:
119 name = 'skimage.' + name
120
121 log = logging.getLogger(name)
122 return log
123
124
125 def _setup_log():
126 """Configure root logger.
127
128 """
129 import logging
130 import sys
131
132 log = logging.getLogger()
133
134 try:
135 handler = logging.StreamHandler(stream=sys.stdout)
136 except TypeError:
137 handler = logging.StreamHandler(strm=sys.stdout)
138
139 formatter = logging.Formatter(
140 '%(name)s: %(levelname)s: %(message)s'
141 )
142 handler.setFormatter(formatter)
143
144 log.addHandler(handler)
145 log.setLevel(logging.WARNING)
146
147 _setup_log()
148
149 from .util.dtype import *
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/__init__.py b/skimage/__init__.py
--- a/skimage/__init__.py
+++ b/skimage/__init__.py
@@ -72,26 +72,20 @@
try:
import nose as _nose
except ImportError:
- print("Could not load nose. Unit tests not available.")
- return None
+ def broken_test_func():
+ """This would invoke the skimage test suite, but nose couldn't be
+ imported so the test suite can not run.
+ """
+ raise ImportError("Could not load nose. Unit tests not available.")
+ return broken_test_func
else:
f = functools.partial(_nose.run, 'skimage', argv=args)
f.__doc__ = 'Invoke the skimage test suite.'
return f
-test = _setup_test()
-if test is None:
- try:
- del test
- except NameError:
- pass
+test = _setup_test()
test_verbose = _setup_test(verbose=True)
-if test_verbose is None:
- try:
- del test
- except NameError:
- pass
def get_log(name=None):
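The patch above defers the failure from import time to call time: instead of printing during `import skimage`, `_setup_test` returns a stub that raises only if the user actually invokes `skimage.test()`. A condensed, standalone sketch of that pattern (the nose arguments are simplified relative to the original, which also passes `-w pkg_dir`):
```python
import functools


def _setup_test(verbose=False):
    """Return a test entry point without importing nose at module import."""
    args = ['', '--exe']
    if verbose:
        args.extend(['-v', '-s'])
    try:
        import nose
    except ImportError:
        def broken_test_func():
            """Placeholder that explains itself only when called."""
            raise ImportError("Could not load nose. Unit tests not available.")
        return broken_test_func
    return functools.partial(nose.run, 'skimage', argv=args)


test = _setup_test()  # importing stays silent whether or not nose is present
```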
| {"golden_diff": "diff --git a/skimage/__init__.py b/skimage/__init__.py\n--- a/skimage/__init__.py\n+++ b/skimage/__init__.py\n@@ -72,26 +72,20 @@\n try:\n import nose as _nose\n except ImportError:\n- print(\"Could not load nose. Unit tests not available.\")\n- return None\n+ def broken_test_func():\n+ \"\"\"This would invoke the skimage test suite, but nose couldn't be\n+ imported so the test suite can not run.\n+ \"\"\"\n+ raise ImportError(\"Could not load nose. Unit tests not available.\")\n+ return broken_test_func\n else:\n f = functools.partial(_nose.run, 'skimage', argv=args)\n f.__doc__ = 'Invoke the skimage test suite.'\n return f\n \n-test = _setup_test()\n-if test is None:\n- try:\n- del test\n- except NameError:\n- pass\n \n+test = _setup_test()\n test_verbose = _setup_test(verbose=True)\n-if test_verbose is None:\n- try:\n- del test\n- except NameError:\n- pass\n \n \n def get_log(name=None):\n", "issue": "message about unit tests not available\nIf scikits-image is installed without nose, a message is printed about unit tests not being available is printed on import but it is a little bit distracting to the casual end-user.\n\n", "before_files": [{"content": "\"\"\"Image Processing SciKit (Toolbox for SciPy)\n\n``scikits-image`` (a.k.a. ``skimage``) is a collection of algorithms for image\nprocessing and computer vision.\n\nThe main package of ``skimage`` only provides a few utilities for converting\nbetween image data types; for most features, you need to import one of the\nfollowing subpackages:\n\nSubpackages\n-----------\ncolor\n Color space conversion.\ndata\n Test images and example data.\ndraw\n Image drawing primitives (lines, text, etc.).\nexposure\n Image intensity adjustment (e.g., histogram equalization).\nfeature\n Feature detection (e.g. texture analysis, corners, etc.).\nfilter\n Sharpening, edge finding, denoising, etc.\ngraph\n Graph-theoretic operations, e.g. dynamic programming (shortest paths).\nio\n Reading, saving, and displaying images and video.\nmeasure\n Measurement of image properties, e.g., similarity and contours.\nmorphology\n Morphological operations, e.g. opening or skeletonization.\nsegmentation\n Splitting an image into self-similar regions.\ntransform\n Geometric and other transforms, e.g. rotation or the Radon transform.\nutil\n Generic utilities.\n\nUtility Functions\n-----------------\nget_log\n Returns the ``skimage`` log. Use this to print debug output.\nimg_as_float\n Convert an image to floating point format, with values in [0, 1].\nimg_as_uint\n Convert an image to unsigned integer format, with values in [0, 65535].\nimg_as_int\n Convert an image to signed integer format, with values in [-32768, 32767].\nimg_as_ubyte\n Convert an image to unsigned byte format, with values in [0, 255].\n\n\"\"\"\n\nimport os.path as _osp\n\npkg_dir = _osp.abspath(_osp.dirname(__file__))\ndata_dir = _osp.join(pkg_dir, 'data')\n\ntry:\n from .version import version as __version__\nexcept ImportError:\n __version__ = \"unbuilt-dev\"\n\n\ndef _setup_test(verbose=False):\n import functools\n\n args = ['', '--exe', '-w', pkg_dir]\n if verbose:\n args.extend(['-v', '-s'])\n\n try:\n import nose as _nose\n except ImportError:\n print(\"Could not load nose. 
Unit tests not available.\")\n return None\n else:\n f = functools.partial(_nose.run, 'skimage', argv=args)\n f.__doc__ = 'Invoke the skimage test suite.'\n return f\n\ntest = _setup_test()\nif test is None:\n try:\n del test\n except NameError:\n pass\n\ntest_verbose = _setup_test(verbose=True)\nif test_verbose is None:\n try:\n del test\n except NameError:\n pass\n\n\ndef get_log(name=None):\n \"\"\"Return a console logger.\n\n Output may be sent to the logger using the `debug`, `info`, `warning`,\n `error` and `critical` methods.\n\n Parameters\n ----------\n name : str\n Name of the log.\n\n References\n ----------\n .. [1] Logging facility for Python,\n http://docs.python.org/library/logging.html\n\n \"\"\"\n import logging\n\n if name is None:\n name = 'skimage'\n else:\n name = 'skimage.' + name\n\n log = logging.getLogger(name)\n return log\n\n\ndef _setup_log():\n \"\"\"Configure root logger.\n\n \"\"\"\n import logging\n import sys\n\n log = logging.getLogger()\n\n try:\n handler = logging.StreamHandler(stream=sys.stdout)\n except TypeError:\n handler = logging.StreamHandler(strm=sys.stdout)\n\n formatter = logging.Formatter(\n '%(name)s: %(levelname)s: %(message)s'\n )\n handler.setFormatter(formatter)\n\n log.addHandler(handler)\n log.setLevel(logging.WARNING)\n\n_setup_log()\n\nfrom .util.dtype import *\n", "path": "skimage/__init__.py"}], "after_files": [{"content": "\"\"\"Image Processing SciKit (Toolbox for SciPy)\n\n``scikits-image`` (a.k.a. ``skimage``) is a collection of algorithms for image\nprocessing and computer vision.\n\nThe main package of ``skimage`` only provides a few utilities for converting\nbetween image data types; for most features, you need to import one of the\nfollowing subpackages:\n\nSubpackages\n-----------\ncolor\n Color space conversion.\ndata\n Test images and example data.\ndraw\n Image drawing primitives (lines, text, etc.).\nexposure\n Image intensity adjustment (e.g., histogram equalization).\nfeature\n Feature detection (e.g. texture analysis, corners, etc.).\nfilter\n Sharpening, edge finding, denoising, etc.\ngraph\n Graph-theoretic operations, e.g. dynamic programming (shortest paths).\nio\n Reading, saving, and displaying images and video.\nmeasure\n Measurement of image properties, e.g., similarity and contours.\nmorphology\n Morphological operations, e.g. opening or skeletonization.\nsegmentation\n Splitting an image into self-similar regions.\ntransform\n Geometric and other transforms, e.g. rotation or the Radon transform.\nutil\n Generic utilities.\n\nUtility Functions\n-----------------\nget_log\n Returns the ``skimage`` log. 
Use this to print debug output.\nimg_as_float\n Convert an image to floating point format, with values in [0, 1].\nimg_as_uint\n Convert an image to unsigned integer format, with values in [0, 65535].\nimg_as_int\n Convert an image to signed integer format, with values in [-32768, 32767].\nimg_as_ubyte\n Convert an image to unsigned byte format, with values in [0, 255].\n\n\"\"\"\n\nimport os.path as _osp\n\npkg_dir = _osp.abspath(_osp.dirname(__file__))\ndata_dir = _osp.join(pkg_dir, 'data')\n\ntry:\n from .version import version as __version__\nexcept ImportError:\n __version__ = \"unbuilt-dev\"\n\n\ndef _setup_test(verbose=False):\n import functools\n\n args = ['', '--exe', '-w', pkg_dir]\n if verbose:\n args.extend(['-v', '-s'])\n\n try:\n import nose as _nose\n except ImportError:\n def broken_test_func():\n \"\"\"This would invoke the skimage test suite, but nose couldn't be\n imported so the test suite can not run.\n \"\"\"\n raise ImportError(\"Could not load nose. Unit tests not available.\")\n return broken_test_func\n else:\n f = functools.partial(_nose.run, 'skimage', argv=args)\n f.__doc__ = 'Invoke the skimage test suite.'\n return f\n\n\ntest = _setup_test()\ntest_verbose = _setup_test(verbose=True)\n\n\ndef get_log(name=None):\n \"\"\"Return a console logger.\n\n Output may be sent to the logger using the `debug`, `info`, `warning`,\n `error` and `critical` methods.\n\n Parameters\n ----------\n name : str\n Name of the log.\n\n References\n ----------\n .. [1] Logging facility for Python,\n http://docs.python.org/library/logging.html\n\n \"\"\"\n import logging\n\n if name is None:\n name = 'skimage'\n else:\n name = 'skimage.' + name\n\n log = logging.getLogger(name)\n return log\n\n\ndef _setup_log():\n \"\"\"Configure root logger.\n\n \"\"\"\n import logging\n import sys\n\n log = logging.getLogger()\n\n try:\n handler = logging.StreamHandler(stream=sys.stdout)\n except TypeError:\n handler = logging.StreamHandler(strm=sys.stdout)\n\n formatter = logging.Formatter(\n '%(name)s: %(levelname)s: %(message)s'\n )\n handler.setFormatter(formatter)\n\n log.addHandler(handler)\n log.setLevel(logging.WARNING)\n\n_setup_log()\n\nfrom .util.dtype import *\n", "path": "skimage/__init__.py"}]} | 1,516 | 265 |
gh_patches_debug_18589 | rasdani/github-patches | git_diff | ethereum__web3.py-1763 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Upgrade Parity version in test infrastructure
### What was wrong?
Our Parity tests run on version 2.3.5. According to a recent OpenEthereum [blog post](https://medium.com/openethereum/vision-for-openethereum-ex-parity-client-eb7b11f6eef8), the latest stable recommendation is 2.5.13.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `web3/middleware/validation.py`
Content:
```
1 from typing import (
2 TYPE_CHECKING,
3 Any,
4 Callable,
5 )
6
7 from eth_utils.curried import (
8 apply_formatter_at_index,
9 apply_formatter_if,
10 apply_formatters_to_dict,
11 is_null,
12 )
13 from eth_utils.toolz import (
14 complement,
15 compose,
16 curry,
17 dissoc,
18 )
19 from hexbytes import (
20 HexBytes,
21 )
22
23 from web3._utils.rpc_abi import (
24 RPC,
25 )
26 from web3.exceptions import (
27 ExtraDataLengthError,
28 ValidationError,
29 )
30 from web3.middleware.formatting import (
31 construct_web3_formatting_middleware,
32 )
33 from web3.types import (
34 FormattersDict,
35 TxParams,
36 )
37
38 if TYPE_CHECKING:
39 from web3 import Web3 # noqa: F401
40
41 MAX_EXTRADATA_LENGTH = 32
42
43 is_not_null = complement(is_null)
44
45
46 @curry
47 def validate_chain_id(web3: "Web3", chain_id: int) -> int:
48 if int(chain_id) == web3.eth.chainId:
49 return chain_id
50 else:
51 raise ValidationError(
52 "The transaction declared chain ID %r, "
53 "but the connected node is on %r" % (
54 chain_id,
55 web3.eth.chainId,
56 )
57 )
58
59
60 def check_extradata_length(val: Any) -> Any:
61 if not isinstance(val, (str, int, bytes)):
62 return val
63 result = HexBytes(val)
64 if len(result) > MAX_EXTRADATA_LENGTH:
65 raise ExtraDataLengthError(
66 "The field extraData is %d bytes, but should be %d. "
67 "It is quite likely that you are connected to a POA chain. "
68 "Refer to "
69 "http://web3py.readthedocs.io/en/stable/middleware.html#geth-style-proof-of-authority "
70 "for more details. The full extraData is: %r" % (
71 len(result), MAX_EXTRADATA_LENGTH, result
72 )
73 )
74 return val
75
76
77 def transaction_normalizer(transaction: TxParams) -> TxParams:
78 return dissoc(transaction, 'chainId')
79
80
81 def transaction_param_validator(web3: "Web3") -> Callable[..., Any]:
82 transactions_params_validators = {
83 "chainId": apply_formatter_if(
84 # Bypass `validate_chain_id` if chainId can't be determined
85 lambda _: is_not_null(web3.eth.chainId),
86 validate_chain_id(web3),
87 ),
88 }
89 return apply_formatter_at_index(
90 apply_formatters_to_dict(transactions_params_validators),
91 0
92 )
93
94
95 BLOCK_VALIDATORS = {
96 'extraData': check_extradata_length,
97 }
98
99
100 block_validator = apply_formatter_if(
101 is_not_null,
102 apply_formatters_to_dict(BLOCK_VALIDATORS)
103 )
104
105
106 @curry
107 def chain_id_validator(web3: "Web3") -> Callable[..., Any]:
108 return compose(
109 apply_formatter_at_index(transaction_normalizer, 0),
110 transaction_param_validator(web3)
111 )
112
113
114 def build_validators_with_web3(w3: "Web3") -> FormattersDict:
115 return dict(
116 request_formatters={
117 RPC.eth_sendTransaction: chain_id_validator(w3),
118 RPC.eth_estimateGas: chain_id_validator(w3),
119 RPC.eth_call: chain_id_validator(w3),
120 },
121 result_formatters={
122 RPC.eth_getBlockByHash: block_validator,
123 RPC.eth_getBlockByNumber: block_validator,
124 },
125 )
126
127
128 validation_middleware = construct_web3_formatting_middleware(build_validators_with_web3)
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/web3/middleware/validation.py b/web3/middleware/validation.py
--- a/web3/middleware/validation.py
+++ b/web3/middleware/validation.py
@@ -9,6 +9,7 @@
apply_formatter_if,
apply_formatters_to_dict,
is_null,
+ is_string,
)
from eth_utils.toolz import (
complement,
@@ -20,6 +21,9 @@
HexBytes,
)
+from web3._utils.formatters import (
+ hex_to_integer,
+)
from web3._utils.rpc_abi import (
RPC,
)
@@ -42,10 +46,12 @@
is_not_null = complement(is_null)
+to_integer_if_hex = apply_formatter_if(is_string, hex_to_integer)
+
@curry
def validate_chain_id(web3: "Web3", chain_id: int) -> int:
- if int(chain_id) == web3.eth.chainId:
+ if to_integer_if_hex(chain_id) == web3.eth.chainId:
return chain_id
else:
raise ValidationError(
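While the issue asks for a Parity upgrade, the accompanying patch hardens `validate_chain_id` so that a `chainId` supplied as a hex string compares correctly against the integer `web3.eth.chainId`. A plain-Python equivalent of the new `to_integer_if_hex` helper (the real code composes `eth_utils`' curried `apply_formatter_if` with web3's `hex_to_integer`):
```python
def to_integer_if_hex(value):
    """Convert hex strings such as '0x5' to int; pass other values through."""
    return int(value, 16) if isinstance(value, str) else value


assert to_integer_if_hex("0x5") == 5   # hex string from a JSON-RPC caller
assert to_integer_if_hex(5) == 5       # already an integer, left untouched
```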
| {"golden_diff": "diff --git a/web3/middleware/validation.py b/web3/middleware/validation.py\n--- a/web3/middleware/validation.py\n+++ b/web3/middleware/validation.py\n@@ -9,6 +9,7 @@\n apply_formatter_if,\n apply_formatters_to_dict,\n is_null,\n+ is_string,\n )\n from eth_utils.toolz import (\n complement,\n@@ -20,6 +21,9 @@\n HexBytes,\n )\n \n+from web3._utils.formatters import (\n+ hex_to_integer,\n+)\n from web3._utils.rpc_abi import (\n RPC,\n )\n@@ -42,10 +46,12 @@\n \n is_not_null = complement(is_null)\n \n+to_integer_if_hex = apply_formatter_if(is_string, hex_to_integer)\n+\n \n @curry\n def validate_chain_id(web3: \"Web3\", chain_id: int) -> int:\n- if int(chain_id) == web3.eth.chainId:\n+ if to_integer_if_hex(chain_id) == web3.eth.chainId:\n return chain_id\n else:\n raise ValidationError(\n", "issue": "Upgrade Parity version in test infrastructure\n### What was wrong?\r\n\r\nOur Parity tests run on version 2.3.5. According to a recent OpenEthereum [blog post](https://medium.com/openethereum/vision-for-openethereum-ex-parity-client-eb7b11f6eef8), latest stable recommendation is 2.5.13.\n", "before_files": [{"content": "from typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n)\n\nfrom eth_utils.curried import (\n apply_formatter_at_index,\n apply_formatter_if,\n apply_formatters_to_dict,\n is_null,\n)\nfrom eth_utils.toolz import (\n complement,\n compose,\n curry,\n dissoc,\n)\nfrom hexbytes import (\n HexBytes,\n)\n\nfrom web3._utils.rpc_abi import (\n RPC,\n)\nfrom web3.exceptions import (\n ExtraDataLengthError,\n ValidationError,\n)\nfrom web3.middleware.formatting import (\n construct_web3_formatting_middleware,\n)\nfrom web3.types import (\n FormattersDict,\n TxParams,\n)\n\nif TYPE_CHECKING:\n from web3 import Web3 # noqa: F401\n\nMAX_EXTRADATA_LENGTH = 32\n\nis_not_null = complement(is_null)\n\n\n@curry\ndef validate_chain_id(web3: \"Web3\", chain_id: int) -> int:\n if int(chain_id) == web3.eth.chainId:\n return chain_id\n else:\n raise ValidationError(\n \"The transaction declared chain ID %r, \"\n \"but the connected node is on %r\" % (\n chain_id,\n web3.eth.chainId,\n )\n )\n\n\ndef check_extradata_length(val: Any) -> Any:\n if not isinstance(val, (str, int, bytes)):\n return val\n result = HexBytes(val)\n if len(result) > MAX_EXTRADATA_LENGTH:\n raise ExtraDataLengthError(\n \"The field extraData is %d bytes, but should be %d. \"\n \"It is quite likely that you are connected to a POA chain. \"\n \"Refer to \"\n \"http://web3py.readthedocs.io/en/stable/middleware.html#geth-style-proof-of-authority \"\n \"for more details. 
The full extraData is: %r\" % (\n len(result), MAX_EXTRADATA_LENGTH, result\n )\n )\n return val\n\n\ndef transaction_normalizer(transaction: TxParams) -> TxParams:\n return dissoc(transaction, 'chainId')\n\n\ndef transaction_param_validator(web3: \"Web3\") -> Callable[..., Any]:\n transactions_params_validators = {\n \"chainId\": apply_formatter_if(\n # Bypass `validate_chain_id` if chainId can't be determined\n lambda _: is_not_null(web3.eth.chainId),\n validate_chain_id(web3),\n ),\n }\n return apply_formatter_at_index(\n apply_formatters_to_dict(transactions_params_validators),\n 0\n )\n\n\nBLOCK_VALIDATORS = {\n 'extraData': check_extradata_length,\n}\n\n\nblock_validator = apply_formatter_if(\n is_not_null,\n apply_formatters_to_dict(BLOCK_VALIDATORS)\n)\n\n\n@curry\ndef chain_id_validator(web3: \"Web3\") -> Callable[..., Any]:\n return compose(\n apply_formatter_at_index(transaction_normalizer, 0),\n transaction_param_validator(web3)\n )\n\n\ndef build_validators_with_web3(w3: \"Web3\") -> FormattersDict:\n return dict(\n request_formatters={\n RPC.eth_sendTransaction: chain_id_validator(w3),\n RPC.eth_estimateGas: chain_id_validator(w3),\n RPC.eth_call: chain_id_validator(w3),\n },\n result_formatters={\n RPC.eth_getBlockByHash: block_validator,\n RPC.eth_getBlockByNumber: block_validator,\n },\n )\n\n\nvalidation_middleware = construct_web3_formatting_middleware(build_validators_with_web3)\n", "path": "web3/middleware/validation.py"}], "after_files": [{"content": "from typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n)\n\nfrom eth_utils.curried import (\n apply_formatter_at_index,\n apply_formatter_if,\n apply_formatters_to_dict,\n is_null,\n is_string,\n)\nfrom eth_utils.toolz import (\n complement,\n compose,\n curry,\n dissoc,\n)\nfrom hexbytes import (\n HexBytes,\n)\n\nfrom web3._utils.formatters import (\n hex_to_integer,\n)\nfrom web3._utils.rpc_abi import (\n RPC,\n)\nfrom web3.exceptions import (\n ExtraDataLengthError,\n ValidationError,\n)\nfrom web3.middleware.formatting import (\n construct_web3_formatting_middleware,\n)\nfrom web3.types import (\n FormattersDict,\n TxParams,\n)\n\nif TYPE_CHECKING:\n from web3 import Web3 # noqa: F401\n\nMAX_EXTRADATA_LENGTH = 32\n\nis_not_null = complement(is_null)\n\nto_integer_if_hex = apply_formatter_if(is_string, hex_to_integer)\n\n\n@curry\ndef validate_chain_id(web3: \"Web3\", chain_id: int) -> int:\n if to_integer_if_hex(chain_id) == web3.eth.chainId:\n return chain_id\n else:\n raise ValidationError(\n \"The transaction declared chain ID %r, \"\n \"but the connected node is on %r\" % (\n chain_id,\n web3.eth.chainId,\n )\n )\n\n\ndef check_extradata_length(val: Any) -> Any:\n if not isinstance(val, (str, int, bytes)):\n return val\n result = HexBytes(val)\n if len(result) > MAX_EXTRADATA_LENGTH:\n raise ExtraDataLengthError(\n \"The field extraData is %d bytes, but should be %d. \"\n \"It is quite likely that you are connected to a POA chain. \"\n \"Refer to \"\n \"http://web3py.readthedocs.io/en/stable/middleware.html#geth-style-proof-of-authority \"\n \"for more details. 
The full extraData is: %r\" % (\n len(result), MAX_EXTRADATA_LENGTH, result\n )\n )\n return val\n\n\ndef transaction_normalizer(transaction: TxParams) -> TxParams:\n return dissoc(transaction, 'chainId')\n\n\ndef transaction_param_validator(web3: \"Web3\") -> Callable[..., Any]:\n transactions_params_validators = {\n \"chainId\": apply_formatter_if(\n # Bypass `validate_chain_id` if chainId can't be determined\n lambda _: is_not_null(web3.eth.chainId),\n validate_chain_id(web3),\n ),\n }\n return apply_formatter_at_index(\n apply_formatters_to_dict(transactions_params_validators),\n 0\n )\n\n\nBLOCK_VALIDATORS = {\n 'extraData': check_extradata_length,\n}\n\n\nblock_validator = apply_formatter_if(\n is_not_null,\n apply_formatters_to_dict(BLOCK_VALIDATORS)\n)\n\n\n@curry\ndef chain_id_validator(web3: \"Web3\") -> Callable[..., Any]:\n return compose(\n apply_formatter_at_index(transaction_normalizer, 0),\n transaction_param_validator(web3)\n )\n\n\ndef build_validators_with_web3(w3: \"Web3\") -> FormattersDict:\n return dict(\n request_formatters={\n RPC.eth_sendTransaction: chain_id_validator(w3),\n RPC.eth_estimateGas: chain_id_validator(w3),\n RPC.eth_call: chain_id_validator(w3),\n },\n result_formatters={\n RPC.eth_getBlockByHash: block_validator,\n RPC.eth_getBlockByNumber: block_validator,\n },\n )\n\n\nvalidation_middleware = construct_web3_formatting_middleware(build_validators_with_web3)\n", "path": "web3/middleware/validation.py"}]} | 1,386 | 236 |
gh_patches_debug_24807 | rasdani/github-patches | git_diff | akvo__akvo-rsr-1928 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Project list widget throws an internal server error
## Test plan
GIVEN the 'Grab a widget' page
WHEN a project has no partners
THEN the project list and project map options should not be available
WHEN a project has partners
THEN all widgets should be available and not display an error
## Issue description
The project list widget gives an internal server error, and so does the map widget: http://sentry.support.akvo-ops.org/rsr/live/group/727/. This is due to the fact that they don't have a reporting organisation and the map and project listing widgets expect an organisation as an input.
The reporting organisation here should be changed to the project's primary organisation, and we should create a 404 exception when the project has no partners at all, just in case.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rsr/views/widgets.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Akvo RSR is covered by the GNU Affero General Public License.
3
4 See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6 """
7
8 from __future__ import absolute_import
9
10 import random
11
12 from django.shortcuts import get_object_or_404
13 from django.views.generic import TemplateView
14 from akvo.rsr.models import Organisation, Project
15
16
17 __all__ = [
18 'ProjectMapView',
19 'CobrandedBannerView',
20 'ProjectNarrowView',
21 ]
22
23
24 class BaseWidgetView(TemplateView):
25
26 """Setup a common base widget."""
27
28 def get_context_data(self, **kwargs):
29 context = super(BaseWidgetView, self).get_context_data(**kwargs)
30 context['style'] = 'darkBG'
31 if self.request.GET.get('style') == 'light':
32 context['style'] = 'lightBG'
33 return context
34
35
36 class ProjectBaseWidgetView(BaseWidgetView):
37
38 """Extends the base widget with a project from url."""
39
40 def get_context_data(self, **kwargs):
41 context = super(ProjectBaseWidgetView, self).get_context_data(**kwargs)
42 context['project'] = get_object_or_404(
43 Project, pk=int(self.kwargs['project_id']))
44 return context
45
46
47 class RandomBaseWidgetView(BaseWidgetView):
48
49 """Extends the base widget with random project."""
50
51 def get_context_data(self, **kwargs):
52 context = super(RandomBaseWidgetView, self).get_context_data(**kwargs)
53 org_id = self.request.GET.get('organisation_id', '0')
54 organisation = get_object_or_404(Organisation, pk=org_id)
55 context['project'] = random.choice(organisation.active_projects())
56 return context
57
58
59 class CobrandedBannerView(ProjectBaseWidgetView):
60 template_name = 'widgets/cobranded_banner.html'
61
62
63 class RandomCobrandedBannerView(RandomBaseWidgetView):
64 template_name = 'widgets/cobranded_banner.html'
65
66
67 class ProjectNarrowView(ProjectBaseWidgetView):
68 template_name = 'widgets/project_narrow.html'
69
70
71 class RandomProjectNarrowView(RandomBaseWidgetView):
72 template_name = 'widgets/project_narrow.html'
73
74
75 class ProjectSmallView(ProjectBaseWidgetView):
76 template_name = 'widgets/project_small.html'
77
78
79 class RandomProjectSmallView(RandomBaseWidgetView):
80 template_name = 'widgets/project_small.html'
81
82
83 class ProjectListView(BaseWidgetView):
84 template_name = 'widgets/project_list.html'
85
86 def get_context_data(self, **kwargs):
87 context = super(ProjectListView, self).get_context_data(**kwargs)
88 order_by = self.request.GET.get('order_by', 'title')
89 org_id = self.request.GET.get('organisation_id', '0')
90 organisation = get_object_or_404(Organisation, pk=org_id)
91 projects = Project.objects.select_related(
92 'publishingstatus__status',
93 'primary_location',
94 'primary_location__country'
95 ).prefetch_related(
96 'last_update'
97 ).filter(
98 partnerships__organisation__id=org_id,
99 publishingstatus__status__exact='published'
100 ).order_by('-id').distinct()
101
102 if order_by == 'status':
103 projects = projects.order_by('status', 'title')
104 elif order_by == 'last_update':
105 projects = projects.order_by('-last_update', 'title')
106 elif order_by in ['budget', 'funds_needed']:
107 projects = projects.extra(order_by=['-%s' % order_by, 'title'])
108 else:
109 projects = projects.order_by('title') # default to project title
110
111 context['organisation'] = organisation
112 context['projects_count'] = projects.count() # Limit to 100 projects
113 context['projects'] = projects[:100] # Limit to 100 projects
114 return context
115
116
117 class ProjectMapView(BaseWidgetView):
118 template_name = 'widgets/projects_map.html'
119
120 def get_context_data(self, **kwargs):
121 context = super(ProjectMapView, self).get_context_data(**kwargs)
122 context['height'] = self.request.GET.get('height', '300')
123 context['heightpx'] = str(int(context['height']) - 30) + 'px'
124 context['width'] = self.request.GET.get('width', '600')
125 context['widthpx'] = str(context['width']) + 'px'
126 context['style'] = self.request.GET.get('style', 'dark')
127 context['state'] = self.request.GET.get('state', 'dynamic')
128 org_id = self.request.GET.get('organisation_id', '0')
129 org = get_object_or_404(Organisation, pk=org_id)
130 context['projects'] = org.published_projects()
131 return context
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/akvo/rsr/views/widgets.py b/akvo/rsr/views/widgets.py
--- a/akvo/rsr/views/widgets.py
+++ b/akvo/rsr/views/widgets.py
@@ -86,7 +86,7 @@
def get_context_data(self, **kwargs):
context = super(ProjectListView, self).get_context_data(**kwargs)
order_by = self.request.GET.get('order_by', 'title')
- org_id = self.request.GET.get('organisation_id', '0')
+ org_id = self.request.GET.get('organisation_id')
organisation = get_object_or_404(Organisation, pk=org_id)
projects = Project.objects.select_related(
'publishingstatus__status',
@@ -125,7 +125,7 @@
context['widthpx'] = str(context['width']) + 'px'
context['style'] = self.request.GET.get('style', 'dark')
context['state'] = self.request.GET.get('state', 'dynamic')
- org_id = self.request.GET.get('organisation_id', '0')
+ org_id = self.request.GET.get('organisation_id')
org = get_object_or_404(Organisation, pk=org_id)
context['projects'] = org.published_projects()
return context
| {"golden_diff": "diff --git a/akvo/rsr/views/widgets.py b/akvo/rsr/views/widgets.py\n--- a/akvo/rsr/views/widgets.py\n+++ b/akvo/rsr/views/widgets.py\n@@ -86,7 +86,7 @@\n def get_context_data(self, **kwargs):\n context = super(ProjectListView, self).get_context_data(**kwargs)\n order_by = self.request.GET.get('order_by', 'title')\n- org_id = self.request.GET.get('organisation_id', '0')\n+ org_id = self.request.GET.get('organisation_id')\n organisation = get_object_or_404(Organisation, pk=org_id)\n projects = Project.objects.select_related(\n 'publishingstatus__status',\n@@ -125,7 +125,7 @@\n context['widthpx'] = str(context['width']) + 'px'\n context['style'] = self.request.GET.get('style', 'dark')\n context['state'] = self.request.GET.get('state', 'dynamic')\n- org_id = self.request.GET.get('organisation_id', '0')\n+ org_id = self.request.GET.get('organisation_id')\n org = get_object_or_404(Organisation, pk=org_id)\n context['projects'] = org.published_projects()\n return context\n", "issue": "Project list widget throws an internal server error\n## Test plan\n\nGIVEN the 'Grab a widget' page\n\nWHEN a project has no partners\nTHEN the project list and project map options should not be available\n\nWHEN a project has partners\nTHEN all widgets should be available and not display an error\n## Issue description\n\nThe project list widget gives an internal server error. And the map widget as well: http://sentry.support.akvo-ops.org/rsr/live/group/727/. This is due to the fact that they don't have a reporting organisation and the map and project listing widgets expect an organisation as an input.\n\nThe reporting organisation here should be changed to the project's primary organisation, and we should create a 404 exception when the project has no partners at all, just in case.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport random\n\nfrom django.shortcuts import get_object_or_404\nfrom django.views.generic import TemplateView\nfrom akvo.rsr.models import Organisation, Project\n\n\n__all__ = [\n 'ProjectMapView',\n 'CobrandedBannerView',\n 'ProjectNarrowView',\n]\n\n\nclass BaseWidgetView(TemplateView):\n\n \"\"\"Setup a common base widget.\"\"\"\n\n def get_context_data(self, **kwargs):\n context = super(BaseWidgetView, self).get_context_data(**kwargs)\n context['style'] = 'darkBG'\n if self.request.GET.get('style') == 'light':\n context['style'] = 'lightBG'\n return context\n\n\nclass ProjectBaseWidgetView(BaseWidgetView):\n\n \"\"\"Extends the base widget with a project from url.\"\"\"\n\n def get_context_data(self, **kwargs):\n context = super(ProjectBaseWidgetView, self).get_context_data(**kwargs)\n context['project'] = get_object_or_404(\n Project, pk=int(self.kwargs['project_id']))\n return context\n\n\nclass RandomBaseWidgetView(BaseWidgetView):\n\n \"\"\"Extends the base widget with random project.\"\"\"\n\n def get_context_data(self, **kwargs):\n context = super(RandomBaseWidgetView, self).get_context_data(**kwargs)\n org_id = self.request.GET.get('organisation_id', '0')\n organisation = get_object_or_404(Organisation, pk=org_id)\n context['project'] = random.choice(organisation.active_projects())\n return context\n\n\nclass 
CobrandedBannerView(ProjectBaseWidgetView):\n template_name = 'widgets/cobranded_banner.html'\n\n\nclass RandomCobrandedBannerView(RandomBaseWidgetView):\n template_name = 'widgets/cobranded_banner.html'\n\n\nclass ProjectNarrowView(ProjectBaseWidgetView):\n template_name = 'widgets/project_narrow.html'\n\n\nclass RandomProjectNarrowView(RandomBaseWidgetView):\n template_name = 'widgets/project_narrow.html'\n\n\nclass ProjectSmallView(ProjectBaseWidgetView):\n template_name = 'widgets/project_small.html'\n\n\nclass RandomProjectSmallView(RandomBaseWidgetView):\n template_name = 'widgets/project_small.html'\n\n\nclass ProjectListView(BaseWidgetView):\n template_name = 'widgets/project_list.html'\n\n def get_context_data(self, **kwargs):\n context = super(ProjectListView, self).get_context_data(**kwargs)\n order_by = self.request.GET.get('order_by', 'title')\n org_id = self.request.GET.get('organisation_id', '0')\n organisation = get_object_or_404(Organisation, pk=org_id)\n projects = Project.objects.select_related(\n 'publishingstatus__status',\n 'primary_location',\n 'primary_location__country'\n ).prefetch_related(\n 'last_update'\n ).filter(\n partnerships__organisation__id=org_id,\n publishingstatus__status__exact='published'\n ).order_by('-id').distinct()\n\n if order_by == 'status':\n projects = projects.order_by('status', 'title')\n elif order_by == 'last_update':\n projects = projects.order_by('-last_update', 'title')\n elif order_by in ['budget', 'funds_needed']:\n projects = projects.extra(order_by=['-%s' % order_by, 'title'])\n else:\n projects = projects.order_by('title') # default to project title\n\n context['organisation'] = organisation\n context['projects_count'] = projects.count() # Limit to 100 projects\n context['projects'] = projects[:100] # Limit to 100 projects\n return context\n\n\nclass ProjectMapView(BaseWidgetView):\n template_name = 'widgets/projects_map.html'\n\n def get_context_data(self, **kwargs):\n context = super(ProjectMapView, self).get_context_data(**kwargs)\n context['height'] = self.request.GET.get('height', '300')\n context['heightpx'] = str(int(context['height']) - 30) + 'px'\n context['width'] = self.request.GET.get('width', '600')\n context['widthpx'] = str(context['width']) + 'px'\n context['style'] = self.request.GET.get('style', 'dark')\n context['state'] = self.request.GET.get('state', 'dynamic')\n org_id = self.request.GET.get('organisation_id', '0')\n org = get_object_or_404(Organisation, pk=org_id)\n context['projects'] = org.published_projects()\n return context\n", "path": "akvo/rsr/views/widgets.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport random\n\nfrom django.shortcuts import get_object_or_404\nfrom django.views.generic import TemplateView\nfrom akvo.rsr.models import Organisation, Project\n\n\n__all__ = [\n 'ProjectMapView',\n 'CobrandedBannerView',\n 'ProjectNarrowView',\n]\n\n\nclass BaseWidgetView(TemplateView):\n\n \"\"\"Setup a common base widget.\"\"\"\n\n def get_context_data(self, **kwargs):\n context = super(BaseWidgetView, self).get_context_data(**kwargs)\n context['style'] = 'darkBG'\n if self.request.GET.get('style') == 'light':\n context['style'] = 'lightBG'\n return context\n\n\nclass 
ProjectBaseWidgetView(BaseWidgetView):\n\n \"\"\"Extends the base widget with a project from url.\"\"\"\n\n def get_context_data(self, **kwargs):\n context = super(ProjectBaseWidgetView, self).get_context_data(**kwargs)\n context['project'] = get_object_or_404(\n Project, pk=int(self.kwargs['project_id']))\n return context\n\n\nclass RandomBaseWidgetView(BaseWidgetView):\n\n \"\"\"Extends the base widget with random project.\"\"\"\n\n def get_context_data(self, **kwargs):\n context = super(RandomBaseWidgetView, self).get_context_data(**kwargs)\n org_id = self.request.GET.get('organisation_id', '0')\n organisation = get_object_or_404(Organisation, pk=org_id)\n context['project'] = random.choice(organisation.active_projects())\n return context\n\n\nclass CobrandedBannerView(ProjectBaseWidgetView):\n template_name = 'widgets/cobranded_banner.html'\n\n\nclass RandomCobrandedBannerView(RandomBaseWidgetView):\n template_name = 'widgets/cobranded_banner.html'\n\n\nclass ProjectNarrowView(ProjectBaseWidgetView):\n template_name = 'widgets/project_narrow.html'\n\n\nclass RandomProjectNarrowView(RandomBaseWidgetView):\n template_name = 'widgets/project_narrow.html'\n\n\nclass ProjectSmallView(ProjectBaseWidgetView):\n template_name = 'widgets/project_small.html'\n\n\nclass RandomProjectSmallView(RandomBaseWidgetView):\n template_name = 'widgets/project_small.html'\n\n\nclass ProjectListView(BaseWidgetView):\n template_name = 'widgets/project_list.html'\n\n def get_context_data(self, **kwargs):\n context = super(ProjectListView, self).get_context_data(**kwargs)\n order_by = self.request.GET.get('order_by', 'title')\n org_id = self.request.GET.get('organisation_id')\n organisation = get_object_or_404(Organisation, pk=org_id)\n projects = Project.objects.select_related(\n 'publishingstatus__status',\n 'primary_location',\n 'primary_location__country'\n ).prefetch_related(\n 'last_update'\n ).filter(\n partnerships__organisation__id=org_id,\n publishingstatus__status__exact='published'\n ).order_by('-id').distinct()\n\n if order_by == 'status':\n projects = projects.order_by('status', 'title')\n elif order_by == 'last_update':\n projects = projects.order_by('-last_update', 'title')\n elif order_by in ['budget', 'funds_needed']:\n projects = projects.extra(order_by=['-%s' % order_by, 'title'])\n else:\n projects = projects.order_by('title') # default to project title\n\n context['organisation'] = organisation\n context['projects_count'] = projects.count() # Limit to 100 projects\n context['projects'] = projects[:100] # Limit to 100 projects\n return context\n\n\nclass ProjectMapView(BaseWidgetView):\n template_name = 'widgets/projects_map.html'\n\n def get_context_data(self, **kwargs):\n context = super(ProjectMapView, self).get_context_data(**kwargs)\n context['height'] = self.request.GET.get('height', '300')\n context['heightpx'] = str(int(context['height']) - 30) + 'px'\n context['width'] = self.request.GET.get('width', '600')\n context['widthpx'] = str(context['width']) + 'px'\n context['style'] = self.request.GET.get('style', 'dark')\n context['state'] = self.request.GET.get('state', 'dynamic')\n org_id = self.request.GET.get('organisation_id')\n org = get_object_or_404(Organisation, pk=org_id)\n context['projects'] = org.published_projects()\n return context\n", "path": "akvo/rsr/views/widgets.py"}]} | 1,782 | 284 |
gh_patches_debug_18666 | rasdani/github-patches | git_diff | mozilla__pontoon-3061 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fluent pretranslations not serialized in canonical form
Fluent pretranslations are not stored in a canonical form. For example, `key = {example}` should actually be stored as `key = { example }` (mind the whitespace).
In the exported files in the repository, the strings appear as they should, which results in sync trying to import pretranslations, because they aren't the same as in Pontoon DB. That also triggers #2284.
In the UI (editor, string list) the strings are also rendered properly, which only makes the problem more difficult to spot.
We should make sure that pretranslations are always stored in the canonical form.
--- END ISSUE ---
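The canonical form here is whatever `fluent.syntax` itself serializes, so round-tripping a pretranslation through the parser and serializer normalizes it — which is what the fix below does. A quick illustration, assuming python-fluent is installed (expected output follows the serializer's spacing rules):
```python
from fluent.syntax import FluentParser, FluentSerializer

parser = FluentParser()
serializer = FluentSerializer()

raw = "key = {example}\n"                      # non-canonical placeable
entry = parser.parse_entry(raw)                 # parse into a Fluent AST entry
canonical = serializer.serialize_entry(entry)   # serialize back canonically

print(canonical)  # expected: key = { example }
```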
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pontoon/pretranslation/pretranslate.py`
Content:
```
1 import logging
2 import operator
3 import re
4
5 from django.db.models import CharField, Value as V
6 from django.db.models.functions import Concat
7
8 from fluent.syntax import FluentParser, FluentSerializer
9 from functools import reduce
10
11 from pontoon.base.models import User, TranslatedResource
12 from pontoon.machinery.utils import (
13 get_google_translate_data,
14 get_translation_memory_data,
15 )
16 from pontoon.pretranslation import AUTHORS
17 from .transformer import ApplyPretranslation
18
19
20 log = logging.getLogger(__name__)
21
22 parser = FluentParser()
23 serializer = FluentSerializer()
24
25
26 def get_pretranslations(entity, locale, preserve_placeables=False):
27 """
28 Get pretranslations for the entity-locale pair using internal translation memory and
29 Google's machine translation.
30
31 For Fluent strings, uplift SelectExpressions, serialize Placeables as TextElements
32 and then only pretranslate TextElements. Set the most frequent TextElement
33 pretranslation author as the author of the entire pretranslation.
34
35 :arg Entity entity: the Entity object
36 :arg Locale locale: the Locale object
37 :arg boolean preserve_placeables
38
39 :returns: a list of tuples, consisting of:
40 - a pretranslation of the entity
41 - a plural form
42 - a user (representing TM or GT service)
43 """
44 source = entity.string
45 services = {k: User.objects.get(email=email) for k, email in AUTHORS.items()}
46
47 if entity.resource.format == "ftl":
48 entry = parser.parse_entry(source)
49 pretranslate = ApplyPretranslation(
50 locale, entry, get_pretranslated_data, preserve_placeables
51 )
52
53 try:
54 pretranslate.visit(entry)
55 except ValueError as e:
56 log.info(f"Fluent pretranslation error: {e}")
57 return []
58
59 pretranslation = serializer.serialize_entry(entry)
60
61 authors = [services[service] for service in pretranslate.services]
62 author = max(set(authors), key=authors.count) if authors else services["tm"]
63
64 return [(pretranslation, None, author)]
65
66 else:
67 pretranslation, service = get_pretranslated_data(
68 source, locale, preserve_placeables
69 )
70
71 if pretranslation is None:
72 return []
73
74 author = services[service]
75 if entity.string_plural == "":
76 return [(pretranslation, None, author)]
77 else:
78 plural_forms = range(0, locale.nplurals or 1)
79 return [
80 (pretranslation, plural_form, author) for plural_form in plural_forms
81 ]
82
83
84 def get_pretranslated_data(source, locale, preserve_placeables):
85 # Empty strings do not need translation
86 if re.search("^\\s*$", source):
87 return source, "tm"
88
89 # Try to get matches from Translation Memory
90 tm_response = get_translation_memory_data(text=source, locale=locale)
91 tm_perfect = [t for t in tm_response if int(t["quality"]) == 100]
92 if tm_perfect:
93 return tm_perfect[0]["target"], "tm"
94
95 # Fetch from Google Translate
96 elif locale.google_translate_code:
97 gt_response = get_google_translate_data(
98 text=source, locale=locale, preserve_placeables=preserve_placeables
99 )
100 if gt_response["status"]:
101 return gt_response["translation"], "gt"
102
103 return None, None
104
105
106 def update_changed_instances(tr_filter, tr_dict, translations):
107 """
108 Update the latest activity and stats for changed Locales, ProjectLocales
109 & TranslatedResources
110 """
111 tr_filter = tuple(tr_filter)
112     # Combine all generated filters with an OR operator.
113 # `operator.ior` is the '|' Python operator, which turns into a logical OR
114 # when used between django ORM query objects.
115 tr_query = reduce(operator.ior, tr_filter)
116
117 translatedresources = TranslatedResource.objects.filter(tr_query).annotate(
118 locale_resource=Concat(
119 "locale_id", V("-"), "resource_id", output_field=CharField()
120 )
121 )
122
123 translatedresources.update_stats()
124
125 for tr in translatedresources:
126 index = tr_dict[tr.locale_resource]
127 translation = translations[index]
128 translation.update_latest_translation()
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pontoon/pretranslation/pretranslate.py b/pontoon/pretranslation/pretranslate.py
--- a/pontoon/pretranslation/pretranslate.py
+++ b/pontoon/pretranslation/pretranslate.py
@@ -58,6 +58,10 @@
pretranslation = serializer.serialize_entry(entry)
+ # Parse and serialize pretranslation again in order to assure cannonical style
+ parsed_pretranslation = parser.parse_entry(pretranslation)
+ pretranslation = serializer.serialize_entry(parsed_pretranslation)
+
authors = [services[service] for service in pretranslate.services]
author = max(set(authors), key=authors.count) if authors else services["tm"]
@@ -82,7 +86,7 @@
def get_pretranslated_data(source, locale, preserve_placeables):
- # Empty strings do not need translation
+ # Empty strings and strings containing whitespace only do not need translation
if re.search("^\\s*$", source):
return source, "tm"
| {"golden_diff": "diff --git a/pontoon/pretranslation/pretranslate.py b/pontoon/pretranslation/pretranslate.py\n--- a/pontoon/pretranslation/pretranslate.py\n+++ b/pontoon/pretranslation/pretranslate.py\n@@ -58,6 +58,10 @@\n \n pretranslation = serializer.serialize_entry(entry)\n \n+ # Parse and serialize pretranslation again in order to assure cannonical style\n+ parsed_pretranslation = parser.parse_entry(pretranslation)\n+ pretranslation = serializer.serialize_entry(parsed_pretranslation)\n+\n authors = [services[service] for service in pretranslate.services]\n author = max(set(authors), key=authors.count) if authors else services[\"tm\"]\n \n@@ -82,7 +86,7 @@\n \n \n def get_pretranslated_data(source, locale, preserve_placeables):\n- # Empty strings do not need translation\n+ # Empty strings and strings containing whitespace only do not need translation\n if re.search(\"^\\\\s*$\", source):\n return source, \"tm\"\n", "issue": "Fluent pretranslations not serialized in canonical form\nFluent pretranslations are not stored in a canonical form. For example, `key = {example}` should actually be stored as `key = { example }` (mind the whitespace).\r\n\r\nIn the exported files in the repository, the strings appear as they should, which results in sync trying to import pretranslations, because they aren't the same as in Pontoon DB. That also triggers #2284. \r\n\r\nIn the UI (editor, string list) the strings are also rendered properly, which only makes the problem more difficult to spot.\r\n\r\nWe should make sure that pretranslations are always stored them in the canonical form.\n", "before_files": [{"content": "import logging\nimport operator\nimport re\n\nfrom django.db.models import CharField, Value as V\nfrom django.db.models.functions import Concat\n\nfrom fluent.syntax import FluentParser, FluentSerializer\nfrom functools import reduce\n\nfrom pontoon.base.models import User, TranslatedResource\nfrom pontoon.machinery.utils import (\n get_google_translate_data,\n get_translation_memory_data,\n)\nfrom pontoon.pretranslation import AUTHORS\nfrom .transformer import ApplyPretranslation\n\n\nlog = logging.getLogger(__name__)\n\nparser = FluentParser()\nserializer = FluentSerializer()\n\n\ndef get_pretranslations(entity, locale, preserve_placeables=False):\n \"\"\"\n Get pretranslations for the entity-locale pair using internal translation memory and\n Google's machine translation.\n\n For Fluent strings, uplift SelectExpressions, serialize Placeables as TextElements\n and then only pretranslate TextElements. 
Set the most frequent TextElement\n pretranslation author as the author of the entire pretranslation.\n\n :arg Entity entity: the Entity object\n :arg Locale locale: the Locale object\n :arg boolean preserve_placeables\n\n :returns: a list of tuples, consisting of:\n - a pretranslation of the entity\n - a plural form\n - a user (representing TM or GT service)\n \"\"\"\n source = entity.string\n services = {k: User.objects.get(email=email) for k, email in AUTHORS.items()}\n\n if entity.resource.format == \"ftl\":\n entry = parser.parse_entry(source)\n pretranslate = ApplyPretranslation(\n locale, entry, get_pretranslated_data, preserve_placeables\n )\n\n try:\n pretranslate.visit(entry)\n except ValueError as e:\n log.info(f\"Fluent pretranslation error: {e}\")\n return []\n\n pretranslation = serializer.serialize_entry(entry)\n\n authors = [services[service] for service in pretranslate.services]\n author = max(set(authors), key=authors.count) if authors else services[\"tm\"]\n\n return [(pretranslation, None, author)]\n\n else:\n pretranslation, service = get_pretranslated_data(\n source, locale, preserve_placeables\n )\n\n if pretranslation is None:\n return []\n\n author = services[service]\n if entity.string_plural == \"\":\n return [(pretranslation, None, author)]\n else:\n plural_forms = range(0, locale.nplurals or 1)\n return [\n (pretranslation, plural_form, author) for plural_form in plural_forms\n ]\n\n\ndef get_pretranslated_data(source, locale, preserve_placeables):\n # Empty strings do not need translation\n if re.search(\"^\\\\s*$\", source):\n return source, \"tm\"\n\n # Try to get matches from Translation Memory\n tm_response = get_translation_memory_data(text=source, locale=locale)\n tm_perfect = [t for t in tm_response if int(t[\"quality\"]) == 100]\n if tm_perfect:\n return tm_perfect[0][\"target\"], \"tm\"\n\n # Fetch from Google Translate\n elif locale.google_translate_code:\n gt_response = get_google_translate_data(\n text=source, locale=locale, preserve_placeables=preserve_placeables\n )\n if gt_response[\"status\"]:\n return gt_response[\"translation\"], \"gt\"\n\n return None, None\n\n\ndef update_changed_instances(tr_filter, tr_dict, translations):\n \"\"\"\n Update the latest activity and stats for changed Locales, ProjectLocales\n & TranslatedResources\n \"\"\"\n tr_filter = tuple(tr_filter)\n # Combine all generated filters with an OK operator.\n # `operator.ior` is the '|' Python operator, which turns into a logical OR\n # when used between django ORM query objects.\n tr_query = reduce(operator.ior, tr_filter)\n\n translatedresources = TranslatedResource.objects.filter(tr_query).annotate(\n locale_resource=Concat(\n \"locale_id\", V(\"-\"), \"resource_id\", output_field=CharField()\n )\n )\n\n translatedresources.update_stats()\n\n for tr in translatedresources:\n index = tr_dict[tr.locale_resource]\n translation = translations[index]\n translation.update_latest_translation()\n", "path": "pontoon/pretranslation/pretranslate.py"}], "after_files": [{"content": "import logging\nimport operator\nimport re\n\nfrom django.db.models import CharField, Value as V\nfrom django.db.models.functions import Concat\n\nfrom fluent.syntax import FluentParser, FluentSerializer\nfrom functools import reduce\n\nfrom pontoon.base.models import User, TranslatedResource\nfrom pontoon.machinery.utils import (\n get_google_translate_data,\n get_translation_memory_data,\n)\nfrom pontoon.pretranslation import AUTHORS\nfrom .transformer import ApplyPretranslation\n\n\nlog = 
logging.getLogger(__name__)\n\nparser = FluentParser()\nserializer = FluentSerializer()\n\n\ndef get_pretranslations(entity, locale, preserve_placeables=False):\n \"\"\"\n Get pretranslations for the entity-locale pair using internal translation memory and\n Google's machine translation.\n\n For Fluent strings, uplift SelectExpressions, serialize Placeables as TextElements\n and then only pretranslate TextElements. Set the most frequent TextElement\n pretranslation author as the author of the entire pretranslation.\n\n :arg Entity entity: the Entity object\n :arg Locale locale: the Locale object\n :arg boolean preserve_placeables\n\n :returns: a list of tuples, consisting of:\n - a pretranslation of the entity\n - a plural form\n - a user (representing TM or GT service)\n \"\"\"\n source = entity.string\n services = {k: User.objects.get(email=email) for k, email in AUTHORS.items()}\n\n if entity.resource.format == \"ftl\":\n entry = parser.parse_entry(source)\n pretranslate = ApplyPretranslation(\n locale, entry, get_pretranslated_data, preserve_placeables\n )\n\n try:\n pretranslate.visit(entry)\n except ValueError as e:\n log.info(f\"Fluent pretranslation error: {e}\")\n return []\n\n pretranslation = serializer.serialize_entry(entry)\n\n # Parse and serialize pretranslation again in order to assure cannonical style\n parsed_pretranslation = parser.parse_entry(pretranslation)\n pretranslation = serializer.serialize_entry(parsed_pretranslation)\n\n authors = [services[service] for service in pretranslate.services]\n author = max(set(authors), key=authors.count) if authors else services[\"tm\"]\n\n return [(pretranslation, None, author)]\n\n else:\n pretranslation, service = get_pretranslated_data(\n source, locale, preserve_placeables\n )\n\n if pretranslation is None:\n return []\n\n author = services[service]\n if entity.string_plural == \"\":\n return [(pretranslation, None, author)]\n else:\n plural_forms = range(0, locale.nplurals or 1)\n return [\n (pretranslation, plural_form, author) for plural_form in plural_forms\n ]\n\n\ndef get_pretranslated_data(source, locale, preserve_placeables):\n # Empty strings and strings containing whitespace only do not need translation\n if re.search(\"^\\\\s*$\", source):\n return source, \"tm\"\n\n # Try to get matches from Translation Memory\n tm_response = get_translation_memory_data(text=source, locale=locale)\n tm_perfect = [t for t in tm_response if int(t[\"quality\"]) == 100]\n if tm_perfect:\n return tm_perfect[0][\"target\"], \"tm\"\n\n # Fetch from Google Translate\n elif locale.google_translate_code:\n gt_response = get_google_translate_data(\n text=source, locale=locale, preserve_placeables=preserve_placeables\n )\n if gt_response[\"status\"]:\n return gt_response[\"translation\"], \"gt\"\n\n return None, None\n\n\ndef update_changed_instances(tr_filter, tr_dict, translations):\n \"\"\"\n Update the latest activity and stats for changed Locales, ProjectLocales\n & TranslatedResources\n \"\"\"\n tr_filter = tuple(tr_filter)\n # Combine all generated filters with an OK operator.\n # `operator.ior` is the '|' Python operator, which turns into a logical OR\n # when used between django ORM query objects.\n tr_query = reduce(operator.ior, tr_filter)\n\n translatedresources = TranslatedResource.objects.filter(tr_query).annotate(\n locale_resource=Concat(\n \"locale_id\", V(\"-\"), \"resource_id\", output_field=CharField()\n )\n )\n\n translatedresources.update_stats()\n\n for tr in translatedresources:\n index = tr_dict[tr.locale_resource]\n 
translation = translations[index]\n translation.update_latest_translation()\n", "path": "pontoon/pretranslation/pretranslate.py"}]} | 1,575 | 215 |
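A minimal sketch of the normalization step this record's patch adds: the pretranslated Fluent message is parsed and serialized once more so the stored string always uses the serializer's canonical style. The sample message is hypothetical; only the `fluent.syntax` API already imported in the module is used.

```python
from fluent.syntax import FluentParser, FluentSerializer

parser = FluentParser()
serializer = FluentSerializer()

# A valid but loosely formatted Fluent message (hypothetical sample).
raw = "greeting = Hello, { $user }!"

first_pass = serializer.serialize_entry(parser.parse_entry(raw))

# Re-parsing and re-serializing canonical output is a no-op, so the
# round-trip in the patch is idempotent.
second_pass = serializer.serialize_entry(parser.parse_entry(first_pass))
assert first_pass == second_pass
```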
gh_patches_debug_18957 | rasdani/github-patches | git_diff | wagtail__wagtail-8171 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tags field needs universal help text to explain need to quote multi-word tags
Wagtail's tagging system requires multi-word tags to be quoted, to prevent a two-word tag becoming two separate tags. This appears to catch users out repeatedly and has generated several Issues in recent months.
The tag field widget, wherever used (page models, images, documents etc) should have standard help text (appearing beneath it like all help text) explaining the need for quoting. Explaining the need for comma-delimiting might also be worth adding while we're at it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/admin/widgets/tags.py`
Content:
```
1 import json
2
3 from django.conf import settings
4 from django.urls import reverse
5 from taggit.forms import TagWidget
6 from taggit.models import Tag
7
8
9 class AdminTagWidget(TagWidget):
10 template_name = "wagtailadmin/widgets/tag_widget.html"
11
12 def __init__(self, *args, **kwargs):
13 self.tag_model = kwargs.pop("tag_model", Tag)
14 # free_tagging = None means defer to the tag model's setting
15 self.free_tagging = kwargs.pop("free_tagging", None)
16 super().__init__(*args, **kwargs)
17
18 def get_context(self, name, value, attrs):
19 context = super().get_context(name, value, attrs)
20
21 if self.tag_model == Tag:
22 autocomplete_url = reverse("wagtailadmin_tag_autocomplete")
23 else:
24 autocomplete_url = reverse(
25 "wagtailadmin_tag_model_autocomplete",
26 args=(self.tag_model._meta.app_label, self.tag_model._meta.model_name),
27 )
28
29 if self.free_tagging is None:
30 free_tagging = getattr(self.tag_model, "free_tagging", True)
31 else:
32 free_tagging = self.free_tagging
33
34 context["widget"]["autocomplete_url"] = autocomplete_url
35 context["widget"]["options_json"] = json.dumps(
36 {
37 "allowSpaces": getattr(settings, "TAG_SPACES_ALLOWED", True),
38 "tagLimit": getattr(settings, "TAG_LIMIT", None),
39 "autocompleteOnly": not free_tagging,
40 }
41 )
42
43 return context
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/admin/widgets/tags.py b/wagtail/admin/widgets/tags.py
--- a/wagtail/admin/widgets/tags.py
+++ b/wagtail/admin/widgets/tags.py
@@ -2,6 +2,7 @@
from django.conf import settings
from django.urls import reverse
+from django.utils.translation import gettext_lazy as _
from taggit.forms import TagWidget
from taggit.models import Tag
@@ -31,6 +32,15 @@
else:
free_tagging = self.free_tagging
+ tag_spaces_allowed = getattr(settings, "TAG_SPACES_ALLOWED", True)
+ if tag_spaces_allowed:
+ help_text = _(
+ 'Multi-word tags with spaces will automatically be enclosed in double quotes (").'
+ )
+ else:
+ help_text = _("Tags can only consist of a single word, no spaces allowed.")
+
+ context["widget"]["help_text"] = help_text
context["widget"]["autocomplete_url"] = autocomplete_url
context["widget"]["options_json"] = json.dumps(
{
| {"golden_diff": "diff --git a/wagtail/admin/widgets/tags.py b/wagtail/admin/widgets/tags.py\n--- a/wagtail/admin/widgets/tags.py\n+++ b/wagtail/admin/widgets/tags.py\n@@ -2,6 +2,7 @@\n \n from django.conf import settings\n from django.urls import reverse\n+from django.utils.translation import gettext_lazy as _\n from taggit.forms import TagWidget\n from taggit.models import Tag\n \n@@ -31,6 +32,15 @@\n else:\n free_tagging = self.free_tagging\n \n+ tag_spaces_allowed = getattr(settings, \"TAG_SPACES_ALLOWED\", True)\n+ if tag_spaces_allowed:\n+ help_text = _(\n+ 'Multi-word tags with spaces will automatically be enclosed in double quotes (\").'\n+ )\n+ else:\n+ help_text = _(\"Tags can only consist of a single word, no spaces allowed.\")\n+\n+ context[\"widget\"][\"help_text\"] = help_text\n context[\"widget\"][\"autocomplete_url\"] = autocomplete_url\n context[\"widget\"][\"options_json\"] = json.dumps(\n {\n", "issue": "Tags field needs universal help text to explain need to quote multi-word tags\nWagtail's tagging system requires multi-word tags to be quoted, to prevent a two-word tag becoming two separate tags. This appears to catch users out repeatedly and has generated several Issues in recent months.\n\nThe tag field widget, wherever used (page models, images, documents etc) should have standard help text (appearing beneath it like all help text) explaining the need for quoting. Explaining the need for comma-delimiting might also be worth adding while we're at it.\n\n", "before_files": [{"content": "import json\n\nfrom django.conf import settings\nfrom django.urls import reverse\nfrom taggit.forms import TagWidget\nfrom taggit.models import Tag\n\n\nclass AdminTagWidget(TagWidget):\n template_name = \"wagtailadmin/widgets/tag_widget.html\"\n\n def __init__(self, *args, **kwargs):\n self.tag_model = kwargs.pop(\"tag_model\", Tag)\n # free_tagging = None means defer to the tag model's setting\n self.free_tagging = kwargs.pop(\"free_tagging\", None)\n super().__init__(*args, **kwargs)\n\n def get_context(self, name, value, attrs):\n context = super().get_context(name, value, attrs)\n\n if self.tag_model == Tag:\n autocomplete_url = reverse(\"wagtailadmin_tag_autocomplete\")\n else:\n autocomplete_url = reverse(\n \"wagtailadmin_tag_model_autocomplete\",\n args=(self.tag_model._meta.app_label, self.tag_model._meta.model_name),\n )\n\n if self.free_tagging is None:\n free_tagging = getattr(self.tag_model, \"free_tagging\", True)\n else:\n free_tagging = self.free_tagging\n\n context[\"widget\"][\"autocomplete_url\"] = autocomplete_url\n context[\"widget\"][\"options_json\"] = json.dumps(\n {\n \"allowSpaces\": getattr(settings, \"TAG_SPACES_ALLOWED\", True),\n \"tagLimit\": getattr(settings, \"TAG_LIMIT\", None),\n \"autocompleteOnly\": not free_tagging,\n }\n )\n\n return context\n", "path": "wagtail/admin/widgets/tags.py"}], "after_files": [{"content": "import json\n\nfrom django.conf import settings\nfrom django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\nfrom taggit.forms import TagWidget\nfrom taggit.models import Tag\n\n\nclass AdminTagWidget(TagWidget):\n template_name = \"wagtailadmin/widgets/tag_widget.html\"\n\n def __init__(self, *args, **kwargs):\n self.tag_model = kwargs.pop(\"tag_model\", Tag)\n # free_tagging = None means defer to the tag model's setting\n self.free_tagging = kwargs.pop(\"free_tagging\", None)\n super().__init__(*args, **kwargs)\n\n def get_context(self, name, value, attrs):\n context = super().get_context(name, value, 
attrs)\n\n if self.tag_model == Tag:\n autocomplete_url = reverse(\"wagtailadmin_tag_autocomplete\")\n else:\n autocomplete_url = reverse(\n \"wagtailadmin_tag_model_autocomplete\",\n args=(self.tag_model._meta.app_label, self.tag_model._meta.model_name),\n )\n\n if self.free_tagging is None:\n free_tagging = getattr(self.tag_model, \"free_tagging\", True)\n else:\n free_tagging = self.free_tagging\n\n tag_spaces_allowed = getattr(settings, \"TAG_SPACES_ALLOWED\", True)\n if tag_spaces_allowed:\n help_text = _(\n 'Multi-word tags with spaces will automatically be enclosed in double quotes (\").'\n )\n else:\n help_text = _(\"Tags can only consist of a single word, no spaces allowed.\")\n\n context[\"widget\"][\"help_text\"] = help_text\n context[\"widget\"][\"autocomplete_url\"] = autocomplete_url\n context[\"widget\"][\"options_json\"] = json.dumps(\n {\n \"allowSpaces\": getattr(settings, \"TAG_SPACES_ALLOWED\", True),\n \"tagLimit\": getattr(settings, \"TAG_LIMIT\", None),\n \"autocompleteOnly\": not free_tagging,\n }\n )\n\n return context\n", "path": "wagtail/admin/widgets/tags.py"}]} | 781 | 229 |
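A quick, hedged check of the patched widget, assuming it runs inside a configured Wagtail project (the `reverse()` call for tag autocomplete needs URL routing to be set up):

```python
from django.test import override_settings
from wagtail.admin.widgets.tags import AdminTagWidget

context = AdminTagWidget().get_context("tags", None, {})
assert "double quotes" in str(context["widget"]["help_text"])

# With spaces disallowed, the alternative message should be exposed instead.
with override_settings(TAG_SPACES_ALLOWED=False):
    context = AdminTagWidget().get_context("tags", None, {})
    assert "single word" in str(context["widget"]["help_text"])
```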
gh_patches_debug_18003 | rasdani/github-patches | git_diff | rasterio__rasterio-223 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
guard_transform prevents translation of unprojected files
Where by "translation" I mean the pattern:
``` python
with rasterio.open(file1.tif) as src:
with rasterio.open(file2.tif, 'w', **src.meta) as dst:
...
```
Traceback snippet:
```
Traceback (most recent call last):
...
File "copier.py", line 15, in to_png
with rio.open(dst_path, "w", **meta) as dst:
File "xxx/anaconda/lib/python2.7/site-packages/rasterio/__init__.py", line 91, in open
transform = guard_transform(transform)
File "xxx/anaconda/lib/python2.7/site-packages/rasterio/transform.py", line 27, in guard_transform
transform.a, transform.e))
ValueError: Transform has invalid coefficients a, e: (0.000000, 0.000000)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/transform.py`
Content:
```
1
2 import warnings
3
4 from affine import Affine
5
6 IDENTITY = Affine.identity()
7
8 def tastes_like_gdal(t):
9 return t[2] == t[4] == 0.0 and t[1] > 0 and t[5] < 0
10
11 def guard_transform(transform):
12 """Return an Affine transformation instance"""
13 if not isinstance(transform, Affine):
14 if tastes_like_gdal(transform):
15 warnings.warn(
16 "GDAL-style transforms are deprecated and will not "
17 "be supported in Rasterio 1.0.",
18 FutureWarning,
19 stacklevel=2)
20 transform = Affine.from_gdal(*transform)
21 else:
22 transform = Affine(*transform)
23 a, e = transform.a, transform.e
24 if a == 0.0 or e == 0.0:
25 raise ValueError(
26 "Transform has invalid coefficients a, e: (%f, %f)" % (
27 transform.a, transform.e))
28 return transform
29
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rasterio/transform.py b/rasterio/transform.py
--- a/rasterio/transform.py
+++ b/rasterio/transform.py
@@ -1,12 +1,14 @@
-
import warnings
from affine import Affine
IDENTITY = Affine.identity()
-def tastes_like_gdal(t):
- return t[2] == t[4] == 0.0 and t[1] > 0 and t[5] < 0
+
+def tastes_like_gdal(seq):
+ """Return True if `seq` matches the GDAL geotransform pattern."""
+ return seq[2] == seq[4] == 0.0 and seq[1] > 0 and seq[5] < 0
+
def guard_transform(transform):
"""Return an Affine transformation instance"""
@@ -20,10 +22,4 @@
transform = Affine.from_gdal(*transform)
else:
transform = Affine(*transform)
- a, e = transform.a, transform.e
- if a == 0.0 or e == 0.0:
- raise ValueError(
- "Transform has invalid coefficients a, e: (%f, %f)" % (
- transform.a, transform.e))
return transform
-
| {"golden_diff": "diff --git a/rasterio/transform.py b/rasterio/transform.py\n--- a/rasterio/transform.py\n+++ b/rasterio/transform.py\n@@ -1,12 +1,14 @@\n-\n import warnings\n \n from affine import Affine\n \n IDENTITY = Affine.identity()\n \n-def tastes_like_gdal(t):\n- return t[2] == t[4] == 0.0 and t[1] > 0 and t[5] < 0\n+\n+def tastes_like_gdal(seq):\n+ \"\"\"Return True if `seq` matches the GDAL geotransform pattern.\"\"\"\n+ return seq[2] == seq[4] == 0.0 and seq[1] > 0 and seq[5] < 0\n+\n \n def guard_transform(transform):\n \"\"\"Return an Affine transformation instance\"\"\"\n@@ -20,10 +22,4 @@\n transform = Affine.from_gdal(*transform)\n else:\n transform = Affine(*transform)\n- a, e = transform.a, transform.e\n- if a == 0.0 or e == 0.0:\n- raise ValueError(\n- \"Transform has invalid coefficients a, e: (%f, %f)\" % (\n- transform.a, transform.e))\n return transform\n-\n", "issue": "guard_transform prevents translation of unprojected files\nWhere by translation I mean the pattern\n\n``` python\nwith rasterio.open(file1.tif) as src:\n with rasterio.open(file2.tif, 'w', **src.meta) as dst:\n ...\n```\n\nTraceback snippet:\n\n```\nTraceback (most recent call last):\n ...\n File \"copier.py\", line 15, in to_png\n with rio.open(dst_path, \"w\", **meta) as dst:\n File \"xxx/anaconda/lib/python2.7/site-packages/rasterio/__init__.py\", line 91, in open\n transform = guard_transform(transform)\n File \"xxx/anaconda/lib/python2.7/site-packages/rasterio/transform.py\", line 27, in guard_transform\n transform.a, transform.e))\nValueError: Transform has invalid coefficients a, e: (0.000000, 0.000000)\n```\n\n", "before_files": [{"content": "\nimport warnings\n\nfrom affine import Affine\n\nIDENTITY = Affine.identity()\n\ndef tastes_like_gdal(t):\n return t[2] == t[4] == 0.0 and t[1] > 0 and t[5] < 0\n\ndef guard_transform(transform):\n \"\"\"Return an Affine transformation instance\"\"\"\n if not isinstance(transform, Affine):\n if tastes_like_gdal(transform):\n warnings.warn(\n \"GDAL-style transforms are deprecated and will not \"\n \"be supported in Rasterio 1.0.\",\n FutureWarning,\n stacklevel=2)\n transform = Affine.from_gdal(*transform)\n else:\n transform = Affine(*transform)\n a, e = transform.a, transform.e\n if a == 0.0 or e == 0.0:\n raise ValueError(\n \"Transform has invalid coefficients a, e: (%f, %f)\" % (\n transform.a, transform.e))\n return transform\n\n", "path": "rasterio/transform.py"}], "after_files": [{"content": "import warnings\n\nfrom affine import Affine\n\nIDENTITY = Affine.identity()\n\n\ndef tastes_like_gdal(seq):\n \"\"\"Return True if `seq` matches the GDAL geotransform pattern.\"\"\"\n return seq[2] == seq[4] == 0.0 and seq[1] > 0 and seq[5] < 0\n\n\ndef guard_transform(transform):\n \"\"\"Return an Affine transformation instance\"\"\"\n if not isinstance(transform, Affine):\n if tastes_like_gdal(transform):\n warnings.warn(\n \"GDAL-style transforms are deprecated and will not \"\n \"be supported in Rasterio 1.0.\",\n FutureWarning,\n stacklevel=2)\n transform = Affine.from_gdal(*transform)\n else:\n transform = Affine(*transform)\n return transform\n", "path": "rasterio/transform.py"}]} | 735 | 286 |
gh_patches_debug_9370 | rasdani/github-patches | git_diff | liqd__a4-product-360 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[f2f module] wording
Here are the wordings:
1
Edit face-to-face participation information
Informationen zur Vor-Ort-Beteiligung bearbeiten
2
Title
Titel
3
Highlighted Info
Hervorgehobene Information
3a (help text)
Highlight important information like the time or location of your face-to-face event
Zur Hervorhebung von wichtigen Informationen wie Ort oder Zeitraum der Vor-Ort-Beteiligung
4
Description
Beschreibung
5
Face-to-Face Information
Informationen Vor-Ort-Beteiligung
6
Face-to-Face Participation
Vor-Ort-Beteiligung
7
With this module you can provide information about events or phases for face-to-face participation. No online participation is possible in this module.
Mit diesem Modul können Informationen über Veranstaltungen und Phasen zur Vor-Ort-Beteiligung bereitgestellt werden. In diesem Modul ist keine Online-Beteiligung möglich.
8
Phase 1: Provide information about face-to-face participation events
Phase 1: Informationen zur Vor-Ort-Beteiligung bereitstellen


--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `liqd_product/apps/dashboard/blueprints.py`
Content:
```
1 from django.utils.translation import ugettext_lazy as _
2
3 from adhocracy4.dashboard.blueprints import ProjectBlueprint
4 from meinberlin.apps.activities import phases as activities_phases
5 from meinberlin.apps.budgeting import phases as budgeting_phases
6 from meinberlin.apps.documents import phases as documents_phases
7 from meinberlin.apps.ideas import phases as ideas_phases
8 from meinberlin.apps.mapideas import phases as mapideas_phases
9 from meinberlin.apps.polls import phases as poll_phases
10
11 blueprints = [
12 ('brainstorming',
13 ProjectBlueprint(
14 title=_('Brainstorming'),
15 description=_(
16 'Collect first ideas for a specific topic and comment on them.'
17 ),
18 content=[
19 ideas_phases.CollectPhase(),
20 ],
21 image='images/brainstorming.svg',
22 settings_model=None,
23 )),
24 ('map-brainstorming',
25 ProjectBlueprint(
26 title=_('Spatial Brainstorming'),
27 description=_(
28 'Collect location specific ideas for a topic and comment on them.'
29 ),
30 content=[
31 mapideas_phases.CollectPhase(),
32 ],
33 image='images/map-brainstorming.svg',
34 settings_model=('a4maps', 'AreaSettings'),
35 )),
36 ('map-idea-collection',
37 ProjectBlueprint(
38 title=_('Spatial Idea Collection'),
39 description=_(
40 'Collect location specific ideas that can be rated and commented.'
41 ),
42 content=[
43 mapideas_phases.CollectPhase(),
44 mapideas_phases.RatingPhase()
45 ],
46 image='images/map-idea-collection.svg',
47 settings_model=('a4maps', 'AreaSettings'),
48 )),
49 ('agenda-setting',
50 ProjectBlueprint(
51 title=_('Agenda Setting'),
52 description=_(
53 'With Agenda-Setting it’s possible to identify topics and to '
54 'define mission statements. Afterwards anyone can comment and '
55 'rate on different topics.'
56 ),
57 content=[
58 ideas_phases.CollectPhase(),
59 ideas_phases.RatingPhase(),
60 ],
61 image='images/agenda-setting.svg',
62 settings_model=None,
63 )),
64 ('text-review',
65 ProjectBlueprint(
66 title=_('Text Review'),
67 description=_(
68 'In the text-review it’s possible to structure draft texts '
69 'that can be commented.'
70 ),
71 content=[
72 documents_phases.CommentPhase(),
73 ],
74 image='images/text-review.svg',
75 settings_model=None,
76 )),
77 ('poll',
78 ProjectBlueprint(
79 title=_('Poll'),
80 description=_(
81 'Create a poll with multiple questions and possible answers. '
82 'Anyone can cast votes and comment on the poll.'
83 ),
84 content=[
85 poll_phases.VotingPhase(),
86 ],
87 image='images/poll.svg',
88 settings_model=None,
89 )),
90 ('participatory-budgeting',
91 ProjectBlueprint(
92 title=_('Participatory budgeting'),
93 description=_(
94 'With participatory-budgeting it’s possible to make proposals '
95 'with budget specifications and locate them. Anyone can comment '
96 'and rate on different proposals.'),
97 content=[
98 budgeting_phases.RequestPhase()],
99 image='images/participatory-budgeting.svg',
100 settings_model=('a4maps', 'AreaSettings'),
101 )),
102 ('facetoface',
103 ProjectBlueprint(
104 title=_('Face to Face Participation'),
105 description=_(
106 'Share info about a face to face participation event.'
107 ),
108 content=[
109 activities_phases.FaceToFacePhase(),
110 ],
111 image='images/facetoface.svg',
112 settings_model=None,
113 )),
114 ]
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/liqd_product/apps/dashboard/blueprints.py b/liqd_product/apps/dashboard/blueprints.py
--- a/liqd_product/apps/dashboard/blueprints.py
+++ b/liqd_product/apps/dashboard/blueprints.py
@@ -101,9 +101,11 @@
)),
('facetoface',
ProjectBlueprint(
- title=_('Face to Face Participation'),
+ title=_('Face-to-Face Participation'),
description=_(
- 'Share info about a face to face participation event.'
+ 'With this module you can provide information about events or '
+ 'phases for face-to-face participation. No online participation '
+ 'is possible in this module.'
),
content=[
activities_phases.FaceToFacePhase(),
| {"golden_diff": "diff --git a/liqd_product/apps/dashboard/blueprints.py b/liqd_product/apps/dashboard/blueprints.py\n--- a/liqd_product/apps/dashboard/blueprints.py\n+++ b/liqd_product/apps/dashboard/blueprints.py\n@@ -101,9 +101,11 @@\n )),\n ('facetoface',\n ProjectBlueprint(\n- title=_('Face to Face Participation'),\n+ title=_('Face-to-Face Participation'),\n description=_(\n- 'Share info about a face to face participation event.'\n+ 'With this module you can provide information about events or '\n+ 'phases for face-to-face participation. No online participation '\n+ 'is possible in this module.'\n ),\n content=[\n activities_phases.FaceToFacePhase(),\n", "issue": "[f2f module] wording\nHere are the wordings:\r\n\r\n1 \r\nEdit face-to-face participation information\r\nInformationen zur Vor-Ort-Beteiligung bearbeiten\r\n\r\n2\r\nTitle\r\nTitel\r\n\r\n3\r\nHighlighted Info\r\nHervorgehobene Information\r\n\r\n3a (Hilfetext)\r\nHighlight important information like the time or location of your face-to-face event\r\nZur Hervorhebung von wichtigen Informationen wie Ort oder Zeitraum der Vor-Ort-Beteiligung\r\n\r\n4\r\nDescription\r\nBeschreibung\r\n\r\n5\r\nFace-to-Face Information\r\nInformationen Vor-Ort-Beteiligung\r\n\r\n6\r\nFace-to-Face Participation\r\nVor-Ort-Beteiligung\r\n\r\n7\r\nWith this module you can provide information about events or phases for face-to-face participation. No online participation is possible in this module.\r\nMit diesem Modul k\u00f6nnen Informationen \u00fcber Veranstaltungen und Phasen zur Vor-Ort-Beteiligung bereitgestellt werden. In diesem Modul ist keine Online-Beteiligung m\u00f6glich.\r\n\r\n8\r\nPhase 1: Provide information about face-to-face participation events\r\nPhase 1: Informationen zur Vor-Ort-Beteiligung bereitstellen\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from django.utils.translation import ugettext_lazy as _\n\nfrom adhocracy4.dashboard.blueprints import ProjectBlueprint\nfrom meinberlin.apps.activities import phases as activities_phases\nfrom meinberlin.apps.budgeting import phases as budgeting_phases\nfrom meinberlin.apps.documents import phases as documents_phases\nfrom meinberlin.apps.ideas import phases as ideas_phases\nfrom meinberlin.apps.mapideas import phases as mapideas_phases\nfrom meinberlin.apps.polls import phases as poll_phases\n\nblueprints = [\n ('brainstorming',\n ProjectBlueprint(\n title=_('Brainstorming'),\n description=_(\n 'Collect first ideas for a specific topic and comment on them.'\n ),\n content=[\n ideas_phases.CollectPhase(),\n ],\n image='images/brainstorming.svg',\n settings_model=None,\n )),\n ('map-brainstorming',\n ProjectBlueprint(\n title=_('Spatial Brainstorming'),\n description=_(\n 'Collect location specific ideas for a topic and comment on them.'\n ),\n content=[\n mapideas_phases.CollectPhase(),\n ],\n image='images/map-brainstorming.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('map-idea-collection',\n ProjectBlueprint(\n title=_('Spatial Idea Collection'),\n description=_(\n 'Collect location specific ideas that can be rated and commented.'\n ),\n content=[\n mapideas_phases.CollectPhase(),\n mapideas_phases.RatingPhase()\n ],\n image='images/map-idea-collection.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('agenda-setting',\n ProjectBlueprint(\n title=_('Agenda Setting'),\n description=_(\n 'With Agenda-Setting it\u2019s possible to identify topics and to '\n 'define mission statements. 
Afterwards anyone can comment and '\n 'rate on different topics.'\n ),\n content=[\n ideas_phases.CollectPhase(),\n ideas_phases.RatingPhase(),\n ],\n image='images/agenda-setting.svg',\n settings_model=None,\n )),\n ('text-review',\n ProjectBlueprint(\n title=_('Text Review'),\n description=_(\n 'In the text-review it\u2019s possible to structure draft texts '\n 'that can be commented.'\n ),\n content=[\n documents_phases.CommentPhase(),\n ],\n image='images/text-review.svg',\n settings_model=None,\n )),\n ('poll',\n ProjectBlueprint(\n title=_('Poll'),\n description=_(\n 'Create a poll with multiple questions and possible answers. '\n 'Anyone can cast votes and comment on the poll.'\n ),\n content=[\n poll_phases.VotingPhase(),\n ],\n image='images/poll.svg',\n settings_model=None,\n )),\n ('participatory-budgeting',\n ProjectBlueprint(\n title=_('Participatory budgeting'),\n description=_(\n 'With participatory-budgeting it\u2019s possible to make proposals '\n 'with budget specifications and locate them. Anyone can comment '\n 'and rate on different proposals.'),\n content=[\n budgeting_phases.RequestPhase()],\n image='images/participatory-budgeting.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('facetoface',\n ProjectBlueprint(\n title=_('Face to Face Participation'),\n description=_(\n 'Share info about a face to face participation event.'\n ),\n content=[\n activities_phases.FaceToFacePhase(),\n ],\n image='images/facetoface.svg',\n settings_model=None,\n )),\n]\n", "path": "liqd_product/apps/dashboard/blueprints.py"}], "after_files": [{"content": "from django.utils.translation import ugettext_lazy as _\n\nfrom adhocracy4.dashboard.blueprints import ProjectBlueprint\nfrom meinberlin.apps.activities import phases as activities_phases\nfrom meinberlin.apps.budgeting import phases as budgeting_phases\nfrom meinberlin.apps.documents import phases as documents_phases\nfrom meinberlin.apps.ideas import phases as ideas_phases\nfrom meinberlin.apps.mapideas import phases as mapideas_phases\nfrom meinberlin.apps.polls import phases as poll_phases\n\nblueprints = [\n ('brainstorming',\n ProjectBlueprint(\n title=_('Brainstorming'),\n description=_(\n 'Collect first ideas for a specific topic and comment on them.'\n ),\n content=[\n ideas_phases.CollectPhase(),\n ],\n image='images/brainstorming.svg',\n settings_model=None,\n )),\n ('map-brainstorming',\n ProjectBlueprint(\n title=_('Spatial Brainstorming'),\n description=_(\n 'Collect location specific ideas for a topic and comment on them.'\n ),\n content=[\n mapideas_phases.CollectPhase(),\n ],\n image='images/map-brainstorming.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('map-idea-collection',\n ProjectBlueprint(\n title=_('Spatial Idea Collection'),\n description=_(\n 'Collect location specific ideas that can be rated and commented.'\n ),\n content=[\n mapideas_phases.CollectPhase(),\n mapideas_phases.RatingPhase()\n ],\n image='images/map-idea-collection.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('agenda-setting',\n ProjectBlueprint(\n title=_('Agenda Setting'),\n description=_(\n 'With Agenda-Setting it\u2019s possible to identify topics and to '\n 'define mission statements. 
Afterwards anyone can comment and '\n 'rate on different topics.'\n ),\n content=[\n ideas_phases.CollectPhase(),\n ideas_phases.RatingPhase(),\n ],\n image='images/agenda-setting.svg',\n settings_model=None,\n )),\n ('text-review',\n ProjectBlueprint(\n title=_('Text Review'),\n description=_(\n 'In the text-review it\u2019s possible to structure draft texts '\n 'that can be commented.'\n ),\n content=[\n documents_phases.CommentPhase(),\n ],\n image='images/text-review.svg',\n settings_model=None,\n )),\n ('poll',\n ProjectBlueprint(\n title=_('Poll'),\n description=_(\n 'Create a poll with multiple questions and possible answers. '\n 'Anyone can cast votes and comment on the poll.'\n ),\n content=[\n poll_phases.VotingPhase(),\n ],\n image='images/poll.svg',\n settings_model=None,\n )),\n ('participatory-budgeting',\n ProjectBlueprint(\n title=_('Participatory budgeting'),\n description=_(\n 'With participatory-budgeting it\u2019s possible to make proposals '\n 'with budget specifications and locate them. Anyone can comment '\n 'and rate on different proposals.'),\n content=[\n budgeting_phases.RequestPhase()],\n image='images/participatory-budgeting.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('facetoface',\n ProjectBlueprint(\n title=_('Face-to-Face Participation'),\n description=_(\n 'With this module you can provide information about events or '\n 'phases for face-to-face participation. No online participation '\n 'is possible in this module.'\n ),\n content=[\n activities_phases.FaceToFacePhase(),\n ],\n image='images/facetoface.svg',\n settings_model=None,\n )),\n]\n", "path": "liqd_product/apps/dashboard/blueprints.py"}]} | 1,631 | 159 |
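The fix itself only changes wording; the German strings listed in the issue arrive through the translation catalog rather than this module. A small sketch of how the lazy string resolves once the `.po` file is updated (locale code assumed to be "de"):

```python
from django.utils.translation import override, ugettext_lazy as _

title = _("Face-to-Face Participation")

with override("de"):
    # Resolves to "Vor-Ort-Beteiligung" once the German catalog contains
    # the new msgid/msgstr pair.
    print(title)
```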
gh_patches_debug_28372 | rasdani/github-patches | git_diff | pypa__setuptools-4283 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deduplicate testing dependencies by referencing `[testing-integration]`
<!-- First time contributors: Take a moment to review https://setuptools.pypa.io/en/latest/development/developer-guide.html! -->
<!-- Remove sections if not applicable -->
## Summary of changes
This was my original idea, but in https://github.com/pypa/setuptools/pull/4257#discussion_r1514754970, @abravalheri mentioned that:
> [...] I think we just remove the `testing-integration` and use `testing` everywhere...
>
> Although it might add a bit of overhead for the integration tests, it will simplify and streamline the setup... This way we reduce the divergence with `skeleton`. The additional overhead in the integration tests should not be too much, and the integration tests just run before the releases anyway.
But I'm still opening this PR as an option / for comparison.
Accepting this closes #4282
### Pull Request Checklist
- [x] Changes have tests (these are test changes)
- [x] News fragment added in [`newsfragments/`]. (no user facing changes)
_(See [documentation][PR docs] for details)_
[`newsfragments/`]: https://github.com/pypa/setuptools/tree/master/newsfragments
[PR docs]:
https://setuptools.pypa.io/en/latest/development/developer-guide.html#making-a-pull-request
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conftest.py`
Content:
```
1 import sys
2
3 import pytest
4
5
6 pytest_plugins = 'setuptools.tests.fixtures'
7
8
9 def pytest_addoption(parser):
10 parser.addoption(
11 "--package_name",
12 action="append",
13 default=[],
14 help="list of package_name to pass to test functions",
15 )
16 parser.addoption(
17 "--integration",
18 action="store_true",
19 default=False,
20 help="run integration tests (only)",
21 )
22
23
24 def pytest_configure(config):
25 config.addinivalue_line("markers", "integration: integration tests")
26 config.addinivalue_line("markers", "uses_network: tests may try to download files")
27
28
29 collect_ignore = [
30 'tests/manual_test.py',
31 'setuptools/tests/mod_with_constant.py',
32 'setuptools/_distutils',
33 '_distutils_hack',
34 'setuptools/extern',
35 'pkg_resources/extern',
36 'pkg_resources/tests/data',
37 'setuptools/_vendor',
38 'pkg_resources/_vendor',
39 'setuptools/config/_validate_pyproject',
40 'setuptools/modified.py',
41 ]
42
43
44 if sys.version_info < (3, 9) or sys.platform == 'cygwin':
45 collect_ignore.append('tools/finalize.py')
46
47
48 @pytest.fixture(autouse=True)
49 def _skip_integration(request):
50 running_integration_tests = request.config.getoption("--integration")
51 is_integration_test = request.node.get_closest_marker("integration")
52 if running_integration_tests and not is_integration_test:
53 pytest.skip("running integration tests only")
54 if not running_integration_tests and is_integration_test:
55 pytest.skip("skipping integration tests")
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -24,6 +24,7 @@
def pytest_configure(config):
config.addinivalue_line("markers", "integration: integration tests")
config.addinivalue_line("markers", "uses_network: tests may try to download files")
+ _IntegrationTestSpeedups.disable_plugins_already_run(config)
collect_ignore = [
@@ -47,9 +48,25 @@
@pytest.fixture(autouse=True)
def _skip_integration(request):
- running_integration_tests = request.config.getoption("--integration")
- is_integration_test = request.node.get_closest_marker("integration")
- if running_integration_tests and not is_integration_test:
- pytest.skip("running integration tests only")
- if not running_integration_tests and is_integration_test:
- pytest.skip("skipping integration tests")
+ _IntegrationTestSpeedups.conditional_skip(request)
+
+
+class _IntegrationTestSpeedups:
+ """Speed-up integration tests by only running what does not run in other tests."""
+
+ RUNS_ON_NORMAL_TESTS = ("checkdocks", "cov", "mypy", "perf", "ruff")
+
+ @classmethod
+ def disable_plugins_already_run(cls, config):
+ if config.getoption("--integration"):
+ for plugin in cls.RUNS_ON_NORMAL_TESTS: # no need to run again
+ config.pluginmanager.set_blocked(plugin)
+
+ @staticmethod
+ def conditional_skip(request):
+ running_integration_tests = request.config.getoption("--integration")
+ is_integration_test = request.node.get_closest_marker("integration")
+ if running_integration_tests and not is_integration_test:
+ pytest.skip("running integration tests only")
+ if not running_integration_tests and is_integration_test:
+ pytest.skip("skipping integration tests")
| {"golden_diff": "diff --git a/conftest.py b/conftest.py\n--- a/conftest.py\n+++ b/conftest.py\n@@ -24,6 +24,7 @@\n def pytest_configure(config):\n config.addinivalue_line(\"markers\", \"integration: integration tests\")\n config.addinivalue_line(\"markers\", \"uses_network: tests may try to download files\")\n+ _IntegrationTestSpeedups.disable_plugins_already_run(config)\n \n \n collect_ignore = [\n@@ -47,9 +48,25 @@\n \n @pytest.fixture(autouse=True)\n def _skip_integration(request):\n- running_integration_tests = request.config.getoption(\"--integration\")\n- is_integration_test = request.node.get_closest_marker(\"integration\")\n- if running_integration_tests and not is_integration_test:\n- pytest.skip(\"running integration tests only\")\n- if not running_integration_tests and is_integration_test:\n- pytest.skip(\"skipping integration tests\")\n+ _IntegrationTestSpeedups.conditional_skip(request)\n+\n+\n+class _IntegrationTestSpeedups:\n+ \"\"\"Speed-up integration tests by only running what does not run in other tests.\"\"\"\n+\n+ RUNS_ON_NORMAL_TESTS = (\"checkdocks\", \"cov\", \"mypy\", \"perf\", \"ruff\")\n+\n+ @classmethod\n+ def disable_plugins_already_run(cls, config):\n+ if config.getoption(\"--integration\"):\n+ for plugin in cls.RUNS_ON_NORMAL_TESTS: # no need to run again\n+ config.pluginmanager.set_blocked(plugin)\n+\n+ @staticmethod\n+ def conditional_skip(request):\n+ running_integration_tests = request.config.getoption(\"--integration\")\n+ is_integration_test = request.node.get_closest_marker(\"integration\")\n+ if running_integration_tests and not is_integration_test:\n+ pytest.skip(\"running integration tests only\")\n+ if not running_integration_tests and is_integration_test:\n+ pytest.skip(\"skipping integration tests\")\n", "issue": "Deduplicate testing dependencies by referencing `[testing-integration]`\n<!-- First time contributors: Take a moment to review https://setuptools.pypa.io/en/latest/development/developer-guide.html! -->\r\n<!-- Remove sections if not applicable -->\r\n\r\n## Summary of changes\r\n\r\nThis was my original idea, but in https://github.com/pypa/setuptools/pull/4257#discussion_r1514754970, @abravalheri mentioned that:\r\n> [...] I think we just remove the `testing-integration` and use `testing` everywhere...\r\n> \r\n> Although it might add a bit of overhead for the integration tests, it will simplify and streamline the setup... This way we reduce the divergence with `skeleton`. The additional overhead in the integration tests should not be too much, and the integration tests just run before the releases anyway.\r\n\r\nBut I'm still opening this PR as an option / for comparison.\r\n\r\nAccepting this closes #4282\r\n\r\n### Pull Request Checklist\r\n- [x] Changes have tests (these are test changes)\r\n- [x] News fragment added in [`newsfragments/`]. 
(no user facing changes)\r\n _(See [documentation][PR docs] for details)_\r\n\r\n\r\n[`newsfragments/`]: https://github.com/pypa/setuptools/tree/master/newsfragments\r\n[PR docs]:\r\nhttps://setuptools.pypa.io/en/latest/development/developer-guide.html#making-a-pull-request\r\n\n", "before_files": [{"content": "import sys\n\nimport pytest\n\n\npytest_plugins = 'setuptools.tests.fixtures'\n\n\ndef pytest_addoption(parser):\n parser.addoption(\n \"--package_name\",\n action=\"append\",\n default=[],\n help=\"list of package_name to pass to test functions\",\n )\n parser.addoption(\n \"--integration\",\n action=\"store_true\",\n default=False,\n help=\"run integration tests (only)\",\n )\n\n\ndef pytest_configure(config):\n config.addinivalue_line(\"markers\", \"integration: integration tests\")\n config.addinivalue_line(\"markers\", \"uses_network: tests may try to download files\")\n\n\ncollect_ignore = [\n 'tests/manual_test.py',\n 'setuptools/tests/mod_with_constant.py',\n 'setuptools/_distutils',\n '_distutils_hack',\n 'setuptools/extern',\n 'pkg_resources/extern',\n 'pkg_resources/tests/data',\n 'setuptools/_vendor',\n 'pkg_resources/_vendor',\n 'setuptools/config/_validate_pyproject',\n 'setuptools/modified.py',\n]\n\n\nif sys.version_info < (3, 9) or sys.platform == 'cygwin':\n collect_ignore.append('tools/finalize.py')\n\n\[email protected](autouse=True)\ndef _skip_integration(request):\n running_integration_tests = request.config.getoption(\"--integration\")\n is_integration_test = request.node.get_closest_marker(\"integration\")\n if running_integration_tests and not is_integration_test:\n pytest.skip(\"running integration tests only\")\n if not running_integration_tests and is_integration_test:\n pytest.skip(\"skipping integration tests\")\n", "path": "conftest.py"}], "after_files": [{"content": "import sys\n\nimport pytest\n\n\npytest_plugins = 'setuptools.tests.fixtures'\n\n\ndef pytest_addoption(parser):\n parser.addoption(\n \"--package_name\",\n action=\"append\",\n default=[],\n help=\"list of package_name to pass to test functions\",\n )\n parser.addoption(\n \"--integration\",\n action=\"store_true\",\n default=False,\n help=\"run integration tests (only)\",\n )\n\n\ndef pytest_configure(config):\n config.addinivalue_line(\"markers\", \"integration: integration tests\")\n config.addinivalue_line(\"markers\", \"uses_network: tests may try to download files\")\n _IntegrationTestSpeedups.disable_plugins_already_run(config)\n\n\ncollect_ignore = [\n 'tests/manual_test.py',\n 'setuptools/tests/mod_with_constant.py',\n 'setuptools/_distutils',\n '_distutils_hack',\n 'setuptools/extern',\n 'pkg_resources/extern',\n 'pkg_resources/tests/data',\n 'setuptools/_vendor',\n 'pkg_resources/_vendor',\n 'setuptools/config/_validate_pyproject',\n 'setuptools/modified.py',\n]\n\n\nif sys.version_info < (3, 9) or sys.platform == 'cygwin':\n collect_ignore.append('tools/finalize.py')\n\n\[email protected](autouse=True)\ndef _skip_integration(request):\n _IntegrationTestSpeedups.conditional_skip(request)\n\n\nclass _IntegrationTestSpeedups:\n \"\"\"Speed-up integration tests by only running what does not run in other tests.\"\"\"\n\n RUNS_ON_NORMAL_TESTS = (\"checkdocks\", \"cov\", \"mypy\", \"perf\", \"ruff\")\n\n @classmethod\n def disable_plugins_already_run(cls, config):\n if config.getoption(\"--integration\"):\n for plugin in cls.RUNS_ON_NORMAL_TESTS: # no need to run again\n config.pluginmanager.set_blocked(plugin)\n\n @staticmethod\n def conditional_skip(request):\n 
running_integration_tests = request.config.getoption(\"--integration\")\n is_integration_test = request.node.get_closest_marker(\"integration\")\n if running_integration_tests and not is_integration_test:\n pytest.skip(\"running integration tests only\")\n if not running_integration_tests and is_integration_test:\n pytest.skip(\"skipping integration tests\")\n", "path": "conftest.py"}]} | 1,007 | 408 |
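The skip rule in the patch reduces to a small truth table. Reproduced standalone below (function name illustrative) to make the two skip branches explicit:

```python
from typing import Optional


def should_skip(running_integration: bool, is_integration_test: bool) -> Optional[str]:
    if running_integration and not is_integration_test:
        return "running integration tests only"
    if not running_integration and is_integration_test:
        return "skipping integration tests"
    return None  # matching tests are allowed to run


assert should_skip(True, False) == "running integration tests only"
assert should_skip(False, True) == "skipping integration tests"
assert should_skip(True, True) is None
assert should_skip(False, False) is None
```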
gh_patches_debug_63356 | rasdani/github-patches | git_diff | getredash__redash-716 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deleting and re-adding widgets to a dashboard breaks it
There's a specific set of steps that has messed up some of our users' dashboards:
1. Create a new dashboard
2. Add multiple widgets to it.
3. Remove all those widgets from the dashboard
4. Re-add some widgets
5. Click the "Edit Dashboard (Name/Layout)" button
6. Click Save without changing anything.
7. Refresh the page
This makes none of the widgets appear and causes the "Layout" array in the admin panel to contain one or more "null" values (depending on how many widgets you added/deleted):

The only way to recover from this state is to manually delete the "null" values through the admin interface.
This is on re:dash version 0.8.2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/handlers/widgets.py`
Content:
```
1 import json
2
3 from flask import request
4
5 from redash import models
6 from redash.wsgi import api
7 from redash.permissions import require_permission
8 from redash.handlers.base import BaseResource
9
10
11 class WidgetListAPI(BaseResource):
12 @require_permission('edit_dashboard')
13 def post(self):
14 widget_properties = request.get_json(force=True)
15 widget_properties['options'] = json.dumps(widget_properties['options'])
16 widget_properties.pop('id', None)
17 widget_properties['dashboard'] = widget_properties.pop('dashboard_id')
18 widget_properties['visualization'] = widget_properties.pop('visualization_id')
19 widget = models.Widget(**widget_properties)
20 widget.save()
21
22 layout = json.loads(widget.dashboard.layout)
23 new_row = True
24
25 if len(layout) == 0 or widget.width == 2:
26 layout.append([widget.id])
27 elif len(layout[-1]) == 1:
28 neighbour_widget = models.Widget.get(models.Widget.id == layout[-1][0])
29 if neighbour_widget.width == 1:
30 layout[-1].append(widget.id)
31 new_row = False
32 else:
33 layout.append([widget.id])
34 else:
35 layout.append([widget.id])
36
37 widget.dashboard.layout = json.dumps(layout)
38 widget.dashboard.save()
39
40 return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row}
41
42
43 class WidgetAPI(BaseResource):
44 @require_permission('edit_dashboard')
45 def delete(self, widget_id):
46 widget = models.Widget.get(models.Widget.id == widget_id)
47 widget.delete_instance()
48
49 api.add_resource(WidgetListAPI, '/api/widgets', endpoint='widgets')
50 api.add_resource(WidgetAPI, '/api/widgets/<int:widget_id>', endpoint='widget')
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redash/handlers/widgets.py b/redash/handlers/widgets.py
--- a/redash/handlers/widgets.py
+++ b/redash/handlers/widgets.py
@@ -46,5 +46,7 @@
widget = models.Widget.get(models.Widget.id == widget_id)
widget.delete_instance()
+ return {'layout': widget.dashboard.layout }
+
api.add_resource(WidgetListAPI, '/api/widgets', endpoint='widgets')
api.add_resource(WidgetAPI, '/api/widgets/<int:widget_id>', endpoint='widget')
| {"golden_diff": "diff --git a/redash/handlers/widgets.py b/redash/handlers/widgets.py\n--- a/redash/handlers/widgets.py\n+++ b/redash/handlers/widgets.py\n@@ -46,5 +46,7 @@\n widget = models.Widget.get(models.Widget.id == widget_id)\n widget.delete_instance()\n \n+ return {'layout': widget.dashboard.layout }\n+\n api.add_resource(WidgetListAPI, '/api/widgets', endpoint='widgets')\n api.add_resource(WidgetAPI, '/api/widgets/<int:widget_id>', endpoint='widget')\n", "issue": "Deleting and re-adding widgets to a dashboard breaks it\nThere's a specific set of steps that has messed up some of our users' dashboards:\n1. Create a new dashboard\n2. Add multiple widgets to it.\n3. Remove all those widgets from the dashboard\n4. Re-add some widgets\n5. Click the \"Edit Dashboard (Name/Layout)\" button\n6. Click Save without changing anything.\n7. Refresh the page\n\nThis makes none of the widgets appear and causes the \"Layout\" array in the admin panel to contain one or more \"null\" values (depending on how many widgets you added/deleted):\n\n\nThe only way to recover from this state is to manually delete the \"null\" values through the admin interface.\n\nThis is on re:dash version 0.8.2\n\n", "before_files": [{"content": "import json\n\nfrom flask import request\n\nfrom redash import models\nfrom redash.wsgi import api\nfrom redash.permissions import require_permission\nfrom redash.handlers.base import BaseResource\n\n\nclass WidgetListAPI(BaseResource):\n @require_permission('edit_dashboard')\n def post(self):\n widget_properties = request.get_json(force=True)\n widget_properties['options'] = json.dumps(widget_properties['options'])\n widget_properties.pop('id', None)\n widget_properties['dashboard'] = widget_properties.pop('dashboard_id')\n widget_properties['visualization'] = widget_properties.pop('visualization_id')\n widget = models.Widget(**widget_properties)\n widget.save()\n\n layout = json.loads(widget.dashboard.layout)\n new_row = True\n\n if len(layout) == 0 or widget.width == 2:\n layout.append([widget.id])\n elif len(layout[-1]) == 1:\n neighbour_widget = models.Widget.get(models.Widget.id == layout[-1][0])\n if neighbour_widget.width == 1:\n layout[-1].append(widget.id)\n new_row = False\n else:\n layout.append([widget.id])\n else:\n layout.append([widget.id])\n\n widget.dashboard.layout = json.dumps(layout)\n widget.dashboard.save()\n\n return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row}\n\n\nclass WidgetAPI(BaseResource):\n @require_permission('edit_dashboard')\n def delete(self, widget_id):\n widget = models.Widget.get(models.Widget.id == widget_id)\n widget.delete_instance()\n\napi.add_resource(WidgetListAPI, '/api/widgets', endpoint='widgets')\napi.add_resource(WidgetAPI, '/api/widgets/<int:widget_id>', endpoint='widget')\n", "path": "redash/handlers/widgets.py"}], "after_files": [{"content": "import json\n\nfrom flask import request\n\nfrom redash import models\nfrom redash.wsgi import api\nfrom redash.permissions import require_permission\nfrom redash.handlers.base import BaseResource\n\n\nclass WidgetListAPI(BaseResource):\n @require_permission('edit_dashboard')\n def post(self):\n widget_properties = request.get_json(force=True)\n widget_properties['options'] = json.dumps(widget_properties['options'])\n widget_properties.pop('id', None)\n widget_properties['dashboard'] = widget_properties.pop('dashboard_id')\n widget_properties['visualization'] = widget_properties.pop('visualization_id')\n widget = models.Widget(**widget_properties)\n 
widget.save()\n\n layout = json.loads(widget.dashboard.layout)\n new_row = True\n\n if len(layout) == 0 or widget.width == 2:\n layout.append([widget.id])\n elif len(layout[-1]) == 1:\n neighbour_widget = models.Widget.get(models.Widget.id == layout[-1][0])\n if neighbour_widget.width == 1:\n layout[-1].append(widget.id)\n new_row = False\n else:\n layout.append([widget.id])\n else:\n layout.append([widget.id])\n\n widget.dashboard.layout = json.dumps(layout)\n widget.dashboard.save()\n\n return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row}\n\n\nclass WidgetAPI(BaseResource):\n @require_permission('edit_dashboard')\n def delete(self, widget_id):\n widget = models.Widget.get(models.Widget.id == widget_id)\n widget.delete_instance()\n\n return {'layout': widget.dashboard.layout }\n\napi.add_resource(WidgetListAPI, '/api/widgets', endpoint='widgets')\napi.add_resource(WidgetAPI, '/api/widgets/<int:widget_id>', endpoint='widget')\n", "path": "redash/handlers/widgets.py"}]} | 942 | 118 |
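From a client's perspective, the patched endpoint now returns the dashboard layout on delete, so a frontend can reconcile its state instead of accumulating `null` slots. Host, widget id and API key below are hypothetical:

```python
import requests

resp = requests.delete(
    "https://redash.example.com/api/widgets/42",
    headers={"Authorization": "Key <your-api-key>"},
)
layout = resp.json()["layout"]  # serialized layout as stored on the dashboard
```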
gh_patches_debug_1763 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-802 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve push notification form
### Motivation
<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->
### Proposed Solution
<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->
- [x] Make input field to text area
- [x] Show how many characters are remaining
- [x] Rename "Push-Notifications" to "News" / "Nachrichten"
- [x] Change not sent symbol "x" to text "Message not sent yet"
### Additional Context

Files that need some editing for that issue are:
- src/cms/templates/push_notifications/push_notification_form.html (for everything related to the HTML-Input Field)
- src/cms/locale/de/LC_MESSAGES/django.po (for everything that needs some new translation. Search for "Push Notification" and change the German translation to "Nachricht/Nachrichten")
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cms/models/push_notifications/push_notification_translation.py`
Content:
```
1 from django.db import models
2 from django.utils import timezone
3 from django.utils.translation import ugettext_lazy as _
4
5 from ..languages.language import Language
6 from .push_notification import PushNotification
7
8
9 class PushNotificationTranslation(models.Model):
10 """
11 Data model representing a push notification translation
12 """
13
14 title = models.CharField(
15 max_length=250,
16 blank=True,
17 verbose_name=_("title"),
18 )
19 text = models.CharField(
20 max_length=250,
21 blank=True,
22 verbose_name=_("content"),
23 )
24 language = models.ForeignKey(
25 Language,
26 on_delete=models.CASCADE,
27 related_name="push_notification_translations",
28 verbose_name=_("language"),
29 )
30 push_notification = models.ForeignKey(
31 PushNotification,
32 on_delete=models.CASCADE,
33 related_name="translations",
34 verbose_name=_("push notification"),
35 )
36 created_date = models.DateTimeField(
37 default=timezone.now,
38 verbose_name=_("creation date"),
39 )
40 last_updated = models.DateTimeField(
41 auto_now=True,
42 verbose_name=_("modification date"),
43 )
44
45 def __str__(self):
46 """
47 This overwrites the default Django :meth:`~django.db.models.Model.__str__` method which would return ``PushNotificationTranslation object (id)``.
48 It is used in the Django admin backend and as label for ModelChoiceFields.
49
50 :return: A readable string representation of the event
51 :rtype: str
52 """
53 return self.title
54
55 def __repr__(self):
56 """
57 This overwrites the default Django ``__repr__()`` method which would return ``<PushNotificationTranslation: PushNotificationTranslation object (id)>``.
58 It is used for logging.
59
60 :return: The canonical string representation of the event
61 :rtype: str
62 """
63 return f"<PushNotificationTranslation (id: {self.id}, push_notification_id: {self.push_notification.id}, title: {self.title})>"
64
65 class Meta:
66 #: The verbose name of the model
67 verbose_name = _("push notification translation")
68 #: The plural verbose name of the model
69 verbose_name_plural = _("push notification translations")
70 #: The default permissions for this model
71 default_permissions = ()
72 #: Sets of field names that, taken together, must be unique
73 unique_together = ["push_notification", "language"]
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cms/models/push_notifications/push_notification_translation.py b/src/cms/models/push_notifications/push_notification_translation.py
--- a/src/cms/models/push_notifications/push_notification_translation.py
+++ b/src/cms/models/push_notifications/push_notification_translation.py
@@ -16,7 +16,7 @@
blank=True,
verbose_name=_("title"),
)
- text = models.CharField(
+ text = models.TextField(
max_length=250,
blank=True,
verbose_name=_("content"),
| {"golden_diff": "diff --git a/src/cms/models/push_notifications/push_notification_translation.py b/src/cms/models/push_notifications/push_notification_translation.py\n--- a/src/cms/models/push_notifications/push_notification_translation.py\n+++ b/src/cms/models/push_notifications/push_notification_translation.py\n@@ -16,7 +16,7 @@\n blank=True,\n verbose_name=_(\"title\"),\n )\n- text = models.CharField(\n+ text = models.TextField(\n max_length=250,\n blank=True,\n verbose_name=_(\"content\"),\n", "issue": "Improve push notification form\n### Motivation\r\n<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->\r\n\r\n\r\n### Proposed Solution\r\n<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->\r\n- [x] Make input field to text area\r\n- [x] Show how many characters are remaining\r\n- [x] Rename \"Push-Notifications\" to \"News\" / \"Nachrichten\"\r\n- [x] Change not sent symbol \"x\" to text \"Message not sent yet\"\r\n\r\n\r\n### Additional Context\r\n\r\n\r\nFiles that need some editing for that issue are:\r\n\r\n- src/cms/templates/push_notifications/push_notification_form.html (for everything related to the HTML-Input Field)\r\n- src/cms/locale/de/LC_MESSAGES/django.po (for everything that needs some new translation. Search for \"Push Notification\" and change the german translation to \"Nachricht/Nachrichten\"\r\n\r\n\r\n\n", "before_files": [{"content": "from django.db import models\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..languages.language import Language\nfrom .push_notification import PushNotification\n\n\nclass PushNotificationTranslation(models.Model):\n \"\"\"\n Data model representing a push notification translation\n \"\"\"\n\n title = models.CharField(\n max_length=250,\n blank=True,\n verbose_name=_(\"title\"),\n )\n text = models.CharField(\n max_length=250,\n blank=True,\n verbose_name=_(\"content\"),\n )\n language = models.ForeignKey(\n Language,\n on_delete=models.CASCADE,\n related_name=\"push_notification_translations\",\n verbose_name=_(\"language\"),\n )\n push_notification = models.ForeignKey(\n PushNotification,\n on_delete=models.CASCADE,\n related_name=\"translations\",\n verbose_name=_(\"push notification\"),\n )\n created_date = models.DateTimeField(\n default=timezone.now,\n verbose_name=_(\"creation date\"),\n )\n last_updated = models.DateTimeField(\n auto_now=True,\n verbose_name=_(\"modification date\"),\n )\n\n def __str__(self):\n \"\"\"\n This overwrites the default Django :meth:`~django.db.models.Model.__str__` method which would return ``PushNotificationTranslation object (id)``.\n It is used in the Django admin backend and as label for ModelChoiceFields.\n\n :return: A readable string representation of the event\n :rtype: str\n \"\"\"\n return self.title\n\n def __repr__(self):\n \"\"\"\n This overwrites the default Django ``__repr__()`` method which would return ``<PushNotificationTranslation: PushNotificationTranslation object (id)>``.\n It is used for logging.\n\n :return: The canonical string representation of the event\n :rtype: str\n \"\"\"\n return f\"<PushNotificationTranslation (id: {self.id}, push_notification_id: {self.push_notification.id}, title: {self.title})>\"\n\n class Meta:\n #: The verbose name of the model\n verbose_name = _(\"push notification translation\")\n #: The plural verbose name of the model\n verbose_name_plural = 
_(\"push notification translations\")\n #: The default permissions for this model\n default_permissions = ()\n #: Sets of field names that, taken together, must be unique\n unique_together = [\"push_notification\", \"language\"]\n", "path": "src/cms/models/push_notifications/push_notification_translation.py"}], "after_files": [{"content": "from django.db import models\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..languages.language import Language\nfrom .push_notification import PushNotification\n\n\nclass PushNotificationTranslation(models.Model):\n \"\"\"\n Data model representing a push notification translation\n \"\"\"\n\n title = models.CharField(\n max_length=250,\n blank=True,\n verbose_name=_(\"title\"),\n )\n text = models.TextField(\n max_length=250,\n blank=True,\n verbose_name=_(\"content\"),\n )\n language = models.ForeignKey(\n Language,\n on_delete=models.CASCADE,\n related_name=\"push_notification_translations\",\n verbose_name=_(\"language\"),\n )\n push_notification = models.ForeignKey(\n PushNotification,\n on_delete=models.CASCADE,\n related_name=\"translations\",\n verbose_name=_(\"push notification\"),\n )\n created_date = models.DateTimeField(\n default=timezone.now,\n verbose_name=_(\"creation date\"),\n )\n last_updated = models.DateTimeField(\n auto_now=True,\n verbose_name=_(\"modification date\"),\n )\n\n def __str__(self):\n \"\"\"\n This overwrites the default Django :meth:`~django.db.models.Model.__str__` method which would return ``PushNotificationTranslation object (id)``.\n It is used in the Django admin backend and as label for ModelChoiceFields.\n\n :return: A readable string representation of the event\n :rtype: str\n \"\"\"\n return self.title\n\n def __repr__(self):\n \"\"\"\n This overwrites the default Django ``__repr__()`` method which would return ``<PushNotificationTranslation: PushNotificationTranslation object (id)>``.\n It is used for logging.\n\n :return: The canonical string representation of the event\n :rtype: str\n \"\"\"\n return f\"<PushNotificationTranslation (id: {self.id}, push_notification_id: {self.push_notification.id}, title: {self.title})>\"\n\n class Meta:\n #: The verbose name of the model\n verbose_name = _(\"push notification translation\")\n #: The plural verbose name of the model\n verbose_name_plural = _(\"push notification translations\")\n #: The default permissions for this model\n default_permissions = ()\n #: Sets of field names that, taken together, must be unique\n unique_together = [\"push_notification\", \"language\"]\n", "path": "src/cms/models/push_notifications/push_notification_translation.py"}]} | 1,182 | 112 |
gh_patches_debug_3531 | rasdani/github-patches | git_diff | web2py__web2py-928 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typo in models/db.py for mail server
Hi,
I found a small typo that prevents sending mail, in models/db.py line 65:
mail.settings.server = 'logging' if request.is_local else myconf.take('smtp.sender')
should be (smtp.server instead of smtp.sender):
mail.settings.server = 'logging' if request.is_local else myconf.take('smtp.server')
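
For context, both keys are read from the app's `private/appconfig.ini`; a minimal sketch of the difference the fix makes (the example values are placeholders, not from the scaffolding app):

```
# With the typo, the mail *server* gets set to the sender address:
mail.settings.server = myconf.take('smtp.sender')  # e.g. '[email protected]' (wrong)
# With the fix, it gets the SMTP host from the [smtp] section instead:
mail.settings.server = myconf.take('smtp.server')  # e.g. 'smtp.example.com:587'
```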
thanks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/welcome/models/db.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 #########################################################################
4 ## This scaffolding model makes your app work on Google App Engine too
5 ## File is released under public domain and you can use without limitations
6 #########################################################################
7
8 ## if SSL/HTTPS is properly configured and you want all HTTP requests to
9 ## be redirected to HTTPS, uncomment the line below:
10 # request.requires_https()
11
12 ## app configuration made easy. Look inside private/appconfig.ini
13 from gluon.contrib.appconfig import AppConfig
14 ## once in production, remove reload=True to gain full speed
15 myconf = AppConfig(reload=True)
16
17
18 if not request.env.web2py_runtime_gae:
19 ## if NOT running on Google App Engine use SQLite or other DB
20 db = DAL(myconf.take('db.uri'), pool_size=myconf.take('db.pool_size', cast=int), check_reserved=['all'])
21 else:
22 ## connect to Google BigTable (optional 'google:datastore://namespace')
23 db = DAL('google:datastore+ndb')
24 ## store sessions and tickets there
25 session.connect(request, response, db=db)
26 ## or store session in Memcache, Redis, etc.
27 ## from gluon.contrib.memdb import MEMDB
28 ## from google.appengine.api.memcache import Client
29 ## session.connect(request, response, db = MEMDB(Client()))
30
31 ## by default give a view/generic.extension to all actions from localhost
32 ## none otherwise. a pattern can be 'controller/function.extension'
33 response.generic_patterns = ['*'] if request.is_local else []
34 ## choose a style for forms
35 response.formstyle = myconf.take('forms.formstyle') # or 'bootstrap3_stacked' or 'bootstrap2' or other
36 response.form_label_separator = myconf.take('forms.separator')
37
38
39 ## (optional) optimize handling of static files
40 # response.optimize_css = 'concat,minify,inline'
41 # response.optimize_js = 'concat,minify,inline'
42 ## (optional) static assets folder versioning
43 # response.static_version = '0.0.0'
44 #########################################################################
45 ## Here is sample code if you need for
46 ## - email capabilities
47 ## - authentication (registration, login, logout, ... )
48 ## - authorization (role based authorization)
49 ## - services (xml, csv, json, xmlrpc, jsonrpc, amf, rss)
50 ## - old style crud actions
51 ## (more options discussed in gluon/tools.py)
52 #########################################################################
53
54 from gluon.tools import Auth, Service, PluginManager
55
56 auth = Auth(db)
57 service = Service()
58 plugins = PluginManager()
59
60 ## create all tables needed by auth if not custom tables
61 auth.define_tables(username=False, signature=False)
62
63 ## configure email
64 mail = auth.settings.mailer
65 mail.settings.server = 'logging' if request.is_local else myconf.take('smtp.sender')
66 mail.settings.sender = myconf.take('smtp.sender')
67 mail.settings.login = myconf.take('smtp.login')
68
69 ## configure auth policy
70 auth.settings.registration_requires_verification = False
71 auth.settings.registration_requires_approval = False
72 auth.settings.reset_password_requires_verification = True
73
74 #########################################################################
75 ## Define your tables below (or better in another model file) for example
76 ##
77 ## >>> db.define_table('mytable',Field('myfield','string'))
78 ##
79 ## Fields can be 'string','text','password','integer','double','boolean'
80 ## 'date','time','datetime','blob','upload', 'reference TABLENAME'
81 ## There is an implicit 'id integer autoincrement' field
82 ## Consult manual for more options, validators, etc.
83 ##
84 ## More API examples for controllers:
85 ##
86 ## >>> db.mytable.insert(myfield='value')
87 ## >>> rows=db(db.mytable.myfield=='value').select(db.mytable.ALL)
88 ## >>> for row in rows: print row.id, row.myfield
89 #########################################################################
90
91 ## after defining tables, uncomment below to enable auditing
92 # auth.enable_record_versioning(db)
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/applications/welcome/models/db.py b/applications/welcome/models/db.py
--- a/applications/welcome/models/db.py
+++ b/applications/welcome/models/db.py
@@ -62,7 +62,7 @@
## configure email
mail = auth.settings.mailer
-mail.settings.server = 'logging' if request.is_local else myconf.take('smtp.sender')
+mail.settings.server = 'logging' if request.is_local else myconf.take('smtp.server')
mail.settings.sender = myconf.take('smtp.sender')
mail.settings.login = myconf.take('smtp.login')
| {"golden_diff": "diff --git a/applications/welcome/models/db.py b/applications/welcome/models/db.py\n--- a/applications/welcome/models/db.py\n+++ b/applications/welcome/models/db.py\n@@ -62,7 +62,7 @@\n \n ## configure email\n mail = auth.settings.mailer\n-mail.settings.server = 'logging' if request.is_local else myconf.take('smtp.sender')\n+mail.settings.server = 'logging' if request.is_local else myconf.take('smtp.server')\n mail.settings.sender = myconf.take('smtp.sender')\n mail.settings.login = myconf.take('smtp.login')\n", "issue": "Typo in models/db.py for mail server\nHi, \n\nI found a small typo, which prohibits sending mails in models/db.py line 65:\nmail.settings.server = 'logging' if request.is_local else myconf.take('smtp.sender')\n\nshould be (smtp.server instead of smtp.sender):\nmail.settings.server = 'logging' if request.is_local else myconf.take('smtp.server') \n\nthanks\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n#########################################################################\n## This scaffolding model makes your app work on Google App Engine too\n## File is released under public domain and you can use without limitations\n#########################################################################\n\n## if SSL/HTTPS is properly configured and you want all HTTP requests to\n## be redirected to HTTPS, uncomment the line below:\n# request.requires_https()\n\n## app configuration made easy. Look inside private/appconfig.ini\nfrom gluon.contrib.appconfig import AppConfig\n## once in production, remove reload=True to gain full speed\nmyconf = AppConfig(reload=True)\n\n\nif not request.env.web2py_runtime_gae:\n ## if NOT running on Google App Engine use SQLite or other DB\n db = DAL(myconf.take('db.uri'), pool_size=myconf.take('db.pool_size', cast=int), check_reserved=['all'])\nelse:\n ## connect to Google BigTable (optional 'google:datastore://namespace')\n db = DAL('google:datastore+ndb')\n ## store sessions and tickets there\n session.connect(request, response, db=db)\n ## or store session in Memcache, Redis, etc.\n ## from gluon.contrib.memdb import MEMDB\n ## from google.appengine.api.memcache import Client\n ## session.connect(request, response, db = MEMDB(Client()))\n\n## by default give a view/generic.extension to all actions from localhost\n## none otherwise. a pattern can be 'controller/function.extension'\nresponse.generic_patterns = ['*'] if request.is_local else []\n## choose a style for forms\nresponse.formstyle = myconf.take('forms.formstyle') # or 'bootstrap3_stacked' or 'bootstrap2' or other\nresponse.form_label_separator = myconf.take('forms.separator')\n\n\n## (optional) optimize handling of static files\n# response.optimize_css = 'concat,minify,inline'\n# response.optimize_js = 'concat,minify,inline'\n## (optional) static assets folder versioning\n# response.static_version = '0.0.0'\n#########################################################################\n## Here is sample code if you need for\n## - email capabilities\n## - authentication (registration, login, logout, ... 
)\n## - authorization (role based authorization)\n## - services (xml, csv, json, xmlrpc, jsonrpc, amf, rss)\n## - old style crud actions\n## (more options discussed in gluon/tools.py)\n#########################################################################\n\nfrom gluon.tools import Auth, Service, PluginManager\n\nauth = Auth(db)\nservice = Service()\nplugins = PluginManager()\n\n## create all tables needed by auth if not custom tables\nauth.define_tables(username=False, signature=False)\n\n## configure email\nmail = auth.settings.mailer\nmail.settings.server = 'logging' if request.is_local else myconf.take('smtp.sender')\nmail.settings.sender = myconf.take('smtp.sender')\nmail.settings.login = myconf.take('smtp.login')\n\n## configure auth policy\nauth.settings.registration_requires_verification = False\nauth.settings.registration_requires_approval = False\nauth.settings.reset_password_requires_verification = True\n\n#########################################################################\n## Define your tables below (or better in another model file) for example\n##\n## >>> db.define_table('mytable',Field('myfield','string'))\n##\n## Fields can be 'string','text','password','integer','double','boolean'\n## 'date','time','datetime','blob','upload', 'reference TABLENAME'\n## There is an implicit 'id integer autoincrement' field\n## Consult manual for more options, validators, etc.\n##\n## More API examples for controllers:\n##\n## >>> db.mytable.insert(myfield='value')\n## >>> rows=db(db.mytable.myfield=='value').select(db.mytable.ALL)\n## >>> for row in rows: print row.id, row.myfield\n#########################################################################\n\n## after defining tables, uncomment below to enable auditing\n# auth.enable_record_versioning(db)\n", "path": "applications/welcome/models/db.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n#########################################################################\n## This scaffolding model makes your app work on Google App Engine too\n## File is released under public domain and you can use without limitations\n#########################################################################\n\n## if SSL/HTTPS is properly configured and you want all HTTP requests to\n## be redirected to HTTPS, uncomment the line below:\n# request.requires_https()\n\n## app configuration made easy. Look inside private/appconfig.ini\nfrom gluon.contrib.appconfig import AppConfig\n## once in production, remove reload=True to gain full speed\nmyconf = AppConfig(reload=True)\n\n\nif not request.env.web2py_runtime_gae:\n ## if NOT running on Google App Engine use SQLite or other DB\n db = DAL(myconf.take('db.uri'), pool_size=myconf.take('db.pool_size', cast=int), check_reserved=['all'])\nelse:\n ## connect to Google BigTable (optional 'google:datastore://namespace')\n db = DAL('google:datastore+ndb')\n ## store sessions and tickets there\n session.connect(request, response, db=db)\n ## or store session in Memcache, Redis, etc.\n ## from gluon.contrib.memdb import MEMDB\n ## from google.appengine.api.memcache import Client\n ## session.connect(request, response, db = MEMDB(Client()))\n\n## by default give a view/generic.extension to all actions from localhost\n## none otherwise. 
a pattern can be 'controller/function.extension'\nresponse.generic_patterns = ['*'] if request.is_local else []\n## choose a style for forms\nresponse.formstyle = myconf.take('forms.formstyle') # or 'bootstrap3_stacked' or 'bootstrap2' or other\nresponse.form_label_separator = myconf.take('forms.separator')\n\n\n## (optional) optimize handling of static files\n# response.optimize_css = 'concat,minify,inline'\n# response.optimize_js = 'concat,minify,inline'\n## (optional) static assets folder versioning\n# response.static_version = '0.0.0'\n#########################################################################\n## Here is sample code if you need for\n## - email capabilities\n## - authentication (registration, login, logout, ... )\n## - authorization (role based authorization)\n## - services (xml, csv, json, xmlrpc, jsonrpc, amf, rss)\n## - old style crud actions\n## (more options discussed in gluon/tools.py)\n#########################################################################\n\nfrom gluon.tools import Auth, Service, PluginManager\n\nauth = Auth(db)\nservice = Service()\nplugins = PluginManager()\n\n## create all tables needed by auth if not custom tables\nauth.define_tables(username=False, signature=False)\n\n## configure email\nmail = auth.settings.mailer\nmail.settings.server = 'logging' if request.is_local else myconf.take('smtp.server')\nmail.settings.sender = myconf.take('smtp.sender')\nmail.settings.login = myconf.take('smtp.login')\n\n## configure auth policy\nauth.settings.registration_requires_verification = False\nauth.settings.registration_requires_approval = False\nauth.settings.reset_password_requires_verification = True\n\n#########################################################################\n## Define your tables below (or better in another model file) for example\n##\n## >>> db.define_table('mytable',Field('myfield','string'))\n##\n## Fields can be 'string','text','password','integer','double','boolean'\n## 'date','time','datetime','blob','upload', 'reference TABLENAME'\n## There is an implicit 'id integer autoincrement' field\n## Consult manual for more options, validators, etc.\n##\n## More API examples for controllers:\n##\n## >>> db.mytable.insert(myfield='value')\n## >>> rows=db(db.mytable.myfield=='value').select(db.mytable.ALL)\n## >>> for row in rows: print row.id, row.myfield\n#########################################################################\n\n## after defining tables, uncomment below to enable auditing\n# auth.enable_record_versioning(db)\n", "path": "applications/welcome/models/db.py"}]} | 1,325 | 121 |
gh_patches_debug_13859 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5413 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AWS_299 Fails for Values That Can't be Set
**Describe the issue**
Failing on check CKV_AWS_299: "Ensure DMS S3 defines in-transit encryption".
When the endpoint of a DMS migration task is S3, there is no option in AWS to specify ssl_mode; AWS does not have an ssl_mode for DMS endpoints of type S3. Setting this value in extra connection attributes does nothing, and setting it in Terraform does nothing either. I may be crazy, but I think DMS transfers to S3 use SSL by default.
**Examples**
```
from __future__ import annotations
from typing import Any
from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
from checkov.common.models.enums import CheckCategories
class DMSS3DefinesIntransitEncryption(BaseResourceValueCheck):
def __init__(self) -> None:
name = "Ensure DMS S3 defines in-transit encryption"
id = "CKV_AWS_299"
supported_resources = ("aws_dms_s3_endpoint",)
categories = (CheckCategories.ENCRYPTION,)
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def get_inspected_key(self) -> str:
return 'ssl_mode'
def get_expected_values(self) -> list[Any]:
return ["require", "verify-ca", "verify-full"]
check = DMSS3DefinesIntransitEncryption()
```
**Version (please complete the following information):**
- Checkov Version 2.3.343
**Additional context**
This property does nothing in Terraform when the `aws_dms_endpoint` is S3, and I filed an issue over there; however, it was closed with no fix.
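
To illustrate why this bites: the check above inherits `BaseResourceValueCheck`'s default of failing when the inspected key is absent, so a resource that simply omits `ssl_mode` (the only option for an S3 endpoint) fails. A sketch of exercising the pasted check directly, assuming checkov's list-wrapped config format:

```
# Hypothetical direct invocation of the `check` instance defined above:
check.scan_resource_conf({"ssl_mode": ["require"]})  # CheckResult.PASSED
check.scan_resource_conf({})                         # CheckResult.FAILED, although
                                                     # ssl_mode cannot be set here
```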
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/DMSS3DefinesIntransitEncryption.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import Any
4
5 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
6 from checkov.common.models.enums import CheckCategories
7
8
9 class DMSS3DefinesIntransitEncryption(BaseResourceValueCheck):
10 def __init__(self) -> None:
11 name = "Ensure DMS S3 defines in-transit encryption"
12 id = "CKV_AWS_299"
13 supported_resources = ("aws_dms_s3_endpoint",)
14 categories = (CheckCategories.ENCRYPTION,)
15 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
16
17 def get_inspected_key(self) -> str:
18 return 'ssl_mode'
19
20 def get_expected_values(self) -> list[Any]:
21 return ["require", "verify-ca", "verify-full"]
22
23
24 check = DMSS3DefinesIntransitEncryption()
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/aws/DMSS3DefinesIntransitEncryption.py b/checkov/terraform/checks/resource/aws/DMSS3DefinesIntransitEncryption.py
deleted file mode 100644
--- a/checkov/terraform/checks/resource/aws/DMSS3DefinesIntransitEncryption.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from __future__ import annotations
-
-from typing import Any
-
-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
-from checkov.common.models.enums import CheckCategories
-
-
-class DMSS3DefinesIntransitEncryption(BaseResourceValueCheck):
- def __init__(self) -> None:
- name = "Ensure DMS S3 defines in-transit encryption"
- id = "CKV_AWS_299"
- supported_resources = ("aws_dms_s3_endpoint",)
- categories = (CheckCategories.ENCRYPTION,)
- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
-
- def get_inspected_key(self) -> str:
- return 'ssl_mode'
-
- def get_expected_values(self) -> list[Any]:
- return ["require", "verify-ca", "verify-full"]
-
-
-check = DMSS3DefinesIntransitEncryption()
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/DMSS3DefinesIntransitEncryption.py b/checkov/terraform/checks/resource/aws/DMSS3DefinesIntransitEncryption.py\ndeleted file mode 100644\n--- a/checkov/terraform/checks/resource/aws/DMSS3DefinesIntransitEncryption.py\n+++ /dev/null\n@@ -1,24 +0,0 @@\n-from __future__ import annotations\n-\n-from typing import Any\n-\n-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n-from checkov.common.models.enums import CheckCategories\n-\n-\n-class DMSS3DefinesIntransitEncryption(BaseResourceValueCheck):\n- def __init__(self) -> None:\n- name = \"Ensure DMS S3 defines in-transit encryption\"\n- id = \"CKV_AWS_299\"\n- supported_resources = (\"aws_dms_s3_endpoint\",)\n- categories = (CheckCategories.ENCRYPTION,)\n- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n-\n- def get_inspected_key(self) -> str:\n- return 'ssl_mode'\n-\n- def get_expected_values(self) -> list[Any]:\n- return [\"require\", \"verify-ca\", \"verify-full\"]\n-\n-\n-check = DMSS3DefinesIntransitEncryption()\n", "issue": "CKV_AWS_299 Fails for Values That Can't be Set\n**Describe the issue**\r\nFailing on - Check: CKV_AWS_299: \"Ensure DMS S3 defines in-transit encryption\"\r\n\r\nWhen the endpoint of a DMS Migration Task has a Endpoint that is S3, there is no option in AWS to specify ssl_mode. AWS does not have an ssl_mode for DMS Endpoints of type S3. Setting this value on extra connections does nothing. Also, setting this in Terraform does nothing. I may be crazy, but I think DMS transfers to S3 uses SSL by default.\r\n\r\n**Examples**\r\n```\r\n\r\nfrom __future__ import annotations\r\n\r\nfrom typing import Any\r\n\r\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\r\nfrom checkov.common.models.enums import CheckCategories\r\n\r\n\r\nclass DMSS3DefinesIntransitEncryption(BaseResourceValueCheck):\r\n def __init__(self) -> None:\r\n name = \"Ensure DMS S3 defines in-transit encryption\"\r\n id = \"CKV_AWS_299\"\r\n supported_resources = (\"aws_dms_s3_endpoint\",)\r\n categories = (CheckCategories.ENCRYPTION,)\r\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\r\n\r\n def get_inspected_key(self) -> str:\r\n return 'ssl_mode'\r\n\r\n def get_expected_values(self) -> list[Any]:\r\n return [\"require\", \"verify-ca\", \"verify-full\"]\r\n\r\n\r\ncheck = DMSS3DefinesIntransitEncryption()\r\n\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.3.343\r\n\r\n**Additional context**\r\n\r\nThis is property does nothing in terraform when the aws_dme_enpoint is s3 and I file and issue over there, however, it was closed with no fix.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom checkov.common.models.enums import CheckCategories\n\n\nclass DMSS3DefinesIntransitEncryption(BaseResourceValueCheck):\n def __init__(self) -> None:\n name = \"Ensure DMS S3 defines in-transit encryption\"\n id = \"CKV_AWS_299\"\n supported_resources = (\"aws_dms_s3_endpoint\",)\n categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self) -> str:\n return 'ssl_mode'\n\n def get_expected_values(self) -> 
list[Any]:\n return [\"require\", \"verify-ca\", \"verify-full\"]\n\n\ncheck = DMSS3DefinesIntransitEncryption()\n", "path": "checkov/terraform/checks/resource/aws/DMSS3DefinesIntransitEncryption.py"}], "after_files": [{"content": null, "path": "checkov/terraform/checks/resource/aws/DMSS3DefinesIntransitEncryption.py"}]} | 902 | 296 |
gh_patches_debug_25329 | rasdani/github-patches | git_diff | pypa__pip-3540 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Show INSTALLER from PEP376 in pip show
Now that #1646 is implemented, it would be interesting to show this information in `pip show`.
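
For reference, PEP 376 defines `INSTALLER` as a one-line file in the `.dist-info` directory naming the tool that installed the distribution. A minimal sketch of reading it through `pkg_resources`, the metadata API `pip show` already uses (the distribution name is just an example):

```
from pip._vendor import pkg_resources

dist = pkg_resources.get_distribution('pip')
installer = None
if dist.has_metadata('INSTALLER'):
    for line in dist.get_metadata_lines('INSTALLER'):
        if line.strip():
            installer = line.strip()
            break
print(installer)  # e.g. 'pip'
```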
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pip/commands/show.py`
Content:
```
1 from __future__ import absolute_import
2
3 from email.parser import FeedParser
4 import logging
5 import os
6
7 from pip.basecommand import Command
8 from pip.status_codes import SUCCESS, ERROR
9 from pip._vendor import pkg_resources
10
11
12 logger = logging.getLogger(__name__)
13
14
15 class ShowCommand(Command):
16 """Show information about one or more installed packages."""
17 name = 'show'
18 usage = """
19 %prog [options] <package> ..."""
20 summary = 'Show information about installed packages.'
21
22 def __init__(self, *args, **kw):
23 super(ShowCommand, self).__init__(*args, **kw)
24 self.cmd_opts.add_option(
25 '-f', '--files',
26 dest='files',
27 action='store_true',
28 default=False,
29 help='Show the full list of installed files for each package.')
30
31 self.parser.insert_option_group(0, self.cmd_opts)
32
33 def run(self, options, args):
34 if not args:
35 logger.warning('ERROR: Please provide a package name or names.')
36 return ERROR
37 query = args
38
39 results = search_packages_info(query)
40 if not print_results(results, options.files):
41 return ERROR
42 return SUCCESS
43
44
45 def search_packages_info(query):
46 """
47 Gather details from installed distributions. Print distribution name,
48 version, location, and installed files. Installed files requires a
49 pip generated 'installed-files.txt' in the distributions '.egg-info'
50 directory.
51 """
52 installed = dict(
53 [(p.project_name.lower(), p) for p in pkg_resources.working_set])
54 query_names = [name.lower() for name in query]
55 for dist in [installed[pkg] for pkg in query_names if pkg in installed]:
56 package = {
57 'name': dist.project_name,
58 'version': dist.version,
59 'location': dist.location,
60 'requires': [dep.project_name for dep in dist.requires()],
61 }
62 file_list = None
63 metadata = None
64 if isinstance(dist, pkg_resources.DistInfoDistribution):
65 # RECORDs should be part of .dist-info metadatas
66 if dist.has_metadata('RECORD'):
67 lines = dist.get_metadata_lines('RECORD')
68 paths = [l.split(',')[0] for l in lines]
69 paths = [os.path.join(dist.location, p) for p in paths]
70 file_list = [os.path.relpath(p, dist.location) for p in paths]
71
72 if dist.has_metadata('METADATA'):
73 metadata = dist.get_metadata('METADATA')
74 else:
75 # Otherwise use pip's log for .egg-info's
76 if dist.has_metadata('installed-files.txt'):
77 paths = dist.get_metadata_lines('installed-files.txt')
78 paths = [os.path.join(dist.egg_info, p) for p in paths]
79 file_list = [os.path.relpath(p, dist.location) for p in paths]
80
81 if dist.has_metadata('PKG-INFO'):
82 metadata = dist.get_metadata('PKG-INFO')
83
84 if dist.has_metadata('entry_points.txt'):
85 entry_points = dist.get_metadata_lines('entry_points.txt')
86 package['entry_points'] = entry_points
87
88 # @todo: Should pkg_resources.Distribution have a
89 # `get_pkg_info` method?
90 feed_parser = FeedParser()
91 feed_parser.feed(metadata)
92 pkg_info_dict = feed_parser.close()
93 for key in ('metadata-version', 'summary',
94 'home-page', 'author', 'author-email', 'license'):
95 package[key] = pkg_info_dict.get(key)
96
97 # It looks like FeedParser can not deal with repeated headers
98 classifiers = []
99 for line in metadata.splitlines():
100 if not line:
101 break
102 # Classifier: License :: OSI Approved :: MIT License
103 if line.startswith('Classifier: '):
104 classifiers.append(line[len('Classifier: '):])
105 package['classifiers'] = classifiers
106
107 if file_list:
108 package['files'] = sorted(file_list)
109 yield package
110
111
112 def print_results(distributions, list_all_files):
113 """
114 Print the informations from installed distributions found.
115 """
116 results_printed = False
117 for dist in distributions:
118 results_printed = True
119 logger.info("---")
120 logger.info("Metadata-Version: %s", dist.get('metadata-version'))
121 logger.info("Name: %s", dist['name'])
122 logger.info("Version: %s", dist['version'])
123 logger.info("Summary: %s", dist.get('summary'))
124 logger.info("Home-page: %s", dist.get('home-page'))
125 logger.info("Author: %s", dist.get('author'))
126 logger.info("Author-email: %s", dist.get('author-email'))
127 logger.info("License: %s", dist.get('license'))
128 logger.info("Location: %s", dist['location'])
129 logger.info("Requires: %s", ', '.join(dist['requires']))
130 logger.info("Classifiers:")
131 for classifier in dist['classifiers']:
132 logger.info(" %s", classifier)
133 if list_all_files:
134 logger.info("Files:")
135 if 'files' in dist:
136 for line in dist['files']:
137 logger.info(" %s", line.strip())
138 else:
139 logger.info("Cannot locate installed-files.txt")
140 if 'entry_points' in dist:
141 logger.info("Entry-points:")
142 for line in dist['entry_points']:
143 logger.info(" %s", line.strip())
144 return results_printed
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pip/commands/show.py b/pip/commands/show.py
--- a/pip/commands/show.py
+++ b/pip/commands/show.py
@@ -85,6 +85,14 @@
entry_points = dist.get_metadata_lines('entry_points.txt')
package['entry_points'] = entry_points
+ installer = None
+ if dist.has_metadata('INSTALLER'):
+ for line in dist.get_metadata_lines('INSTALLER'):
+ if line.strip():
+ installer = line.strip()
+ break
+ package['installer'] = installer
+
# @todo: Should pkg_resources.Distribution have a
# `get_pkg_info` method?
feed_parser = FeedParser()
@@ -124,6 +132,8 @@
logger.info("Home-page: %s", dist.get('home-page'))
logger.info("Author: %s", dist.get('author'))
logger.info("Author-email: %s", dist.get('author-email'))
+ if dist['installer'] is not None:
+ logger.info("Installer: %s", dist['installer'])
logger.info("License: %s", dist.get('license'))
logger.info("Location: %s", dist['location'])
logger.info("Requires: %s", ', '.join(dist['requires']))
| {"golden_diff": "diff --git a/pip/commands/show.py b/pip/commands/show.py\n--- a/pip/commands/show.py\n+++ b/pip/commands/show.py\n@@ -85,6 +85,14 @@\n entry_points = dist.get_metadata_lines('entry_points.txt')\n package['entry_points'] = entry_points\n \n+ installer = None\n+ if dist.has_metadata('INSTALLER'):\n+ for line in dist.get_metadata_lines('INSTALLER'):\n+ if line.strip():\n+ installer = line.strip()\n+ break\n+ package['installer'] = installer\n+\n # @todo: Should pkg_resources.Distribution have a\n # `get_pkg_info` method?\n feed_parser = FeedParser()\n@@ -124,6 +132,8 @@\n logger.info(\"Home-page: %s\", dist.get('home-page'))\n logger.info(\"Author: %s\", dist.get('author'))\n logger.info(\"Author-email: %s\", dist.get('author-email'))\n+ if dist['installer'] is not None:\n+ logger.info(\"Installer: %s\", dist['installer'])\n logger.info(\"License: %s\", dist.get('license'))\n logger.info(\"Location: %s\", dist['location'])\n logger.info(\"Requires: %s\", ', '.join(dist['requires']))\n", "issue": "Show INSTALLER from PEP376 in pip show\nNow that #1646 is implemented, it would interesting to show this information in `pip show`.\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom email.parser import FeedParser\nimport logging\nimport os\n\nfrom pip.basecommand import Command\nfrom pip.status_codes import SUCCESS, ERROR\nfrom pip._vendor import pkg_resources\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ShowCommand(Command):\n \"\"\"Show information about one or more installed packages.\"\"\"\n name = 'show'\n usage = \"\"\"\n %prog [options] <package> ...\"\"\"\n summary = 'Show information about installed packages.'\n\n def __init__(self, *args, **kw):\n super(ShowCommand, self).__init__(*args, **kw)\n self.cmd_opts.add_option(\n '-f', '--files',\n dest='files',\n action='store_true',\n default=False,\n help='Show the full list of installed files for each package.')\n\n self.parser.insert_option_group(0, self.cmd_opts)\n\n def run(self, options, args):\n if not args:\n logger.warning('ERROR: Please provide a package name or names.')\n return ERROR\n query = args\n\n results = search_packages_info(query)\n if not print_results(results, options.files):\n return ERROR\n return SUCCESS\n\n\ndef search_packages_info(query):\n \"\"\"\n Gather details from installed distributions. Print distribution name,\n version, location, and installed files. 
Installed files requires a\n pip generated 'installed-files.txt' in the distributions '.egg-info'\n directory.\n \"\"\"\n installed = dict(\n [(p.project_name.lower(), p) for p in pkg_resources.working_set])\n query_names = [name.lower() for name in query]\n for dist in [installed[pkg] for pkg in query_names if pkg in installed]:\n package = {\n 'name': dist.project_name,\n 'version': dist.version,\n 'location': dist.location,\n 'requires': [dep.project_name for dep in dist.requires()],\n }\n file_list = None\n metadata = None\n if isinstance(dist, pkg_resources.DistInfoDistribution):\n # RECORDs should be part of .dist-info metadatas\n if dist.has_metadata('RECORD'):\n lines = dist.get_metadata_lines('RECORD')\n paths = [l.split(',')[0] for l in lines]\n paths = [os.path.join(dist.location, p) for p in paths]\n file_list = [os.path.relpath(p, dist.location) for p in paths]\n\n if dist.has_metadata('METADATA'):\n metadata = dist.get_metadata('METADATA')\n else:\n # Otherwise use pip's log for .egg-info's\n if dist.has_metadata('installed-files.txt'):\n paths = dist.get_metadata_lines('installed-files.txt')\n paths = [os.path.join(dist.egg_info, p) for p in paths]\n file_list = [os.path.relpath(p, dist.location) for p in paths]\n\n if dist.has_metadata('PKG-INFO'):\n metadata = dist.get_metadata('PKG-INFO')\n\n if dist.has_metadata('entry_points.txt'):\n entry_points = dist.get_metadata_lines('entry_points.txt')\n package['entry_points'] = entry_points\n\n # @todo: Should pkg_resources.Distribution have a\n # `get_pkg_info` method?\n feed_parser = FeedParser()\n feed_parser.feed(metadata)\n pkg_info_dict = feed_parser.close()\n for key in ('metadata-version', 'summary',\n 'home-page', 'author', 'author-email', 'license'):\n package[key] = pkg_info_dict.get(key)\n\n # It looks like FeedParser can not deal with repeated headers\n classifiers = []\n for line in metadata.splitlines():\n if not line:\n break\n # Classifier: License :: OSI Approved :: MIT License\n if line.startswith('Classifier: '):\n classifiers.append(line[len('Classifier: '):])\n package['classifiers'] = classifiers\n\n if file_list:\n package['files'] = sorted(file_list)\n yield package\n\n\ndef print_results(distributions, list_all_files):\n \"\"\"\n Print the informations from installed distributions found.\n \"\"\"\n results_printed = False\n for dist in distributions:\n results_printed = True\n logger.info(\"---\")\n logger.info(\"Metadata-Version: %s\", dist.get('metadata-version'))\n logger.info(\"Name: %s\", dist['name'])\n logger.info(\"Version: %s\", dist['version'])\n logger.info(\"Summary: %s\", dist.get('summary'))\n logger.info(\"Home-page: %s\", dist.get('home-page'))\n logger.info(\"Author: %s\", dist.get('author'))\n logger.info(\"Author-email: %s\", dist.get('author-email'))\n logger.info(\"License: %s\", dist.get('license'))\n logger.info(\"Location: %s\", dist['location'])\n logger.info(\"Requires: %s\", ', '.join(dist['requires']))\n logger.info(\"Classifiers:\")\n for classifier in dist['classifiers']:\n logger.info(\" %s\", classifier)\n if list_all_files:\n logger.info(\"Files:\")\n if 'files' in dist:\n for line in dist['files']:\n logger.info(\" %s\", line.strip())\n else:\n logger.info(\"Cannot locate installed-files.txt\")\n if 'entry_points' in dist:\n logger.info(\"Entry-points:\")\n for line in dist['entry_points']:\n logger.info(\" %s\", line.strip())\n return results_printed\n", "path": "pip/commands/show.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nfrom 
email.parser import FeedParser\nimport logging\nimport os\n\nfrom pip.basecommand import Command\nfrom pip.status_codes import SUCCESS, ERROR\nfrom pip._vendor import pkg_resources\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ShowCommand(Command):\n \"\"\"Show information about one or more installed packages.\"\"\"\n name = 'show'\n usage = \"\"\"\n %prog [options] <package> ...\"\"\"\n summary = 'Show information about installed packages.'\n\n def __init__(self, *args, **kw):\n super(ShowCommand, self).__init__(*args, **kw)\n self.cmd_opts.add_option(\n '-f', '--files',\n dest='files',\n action='store_true',\n default=False,\n help='Show the full list of installed files for each package.')\n\n self.parser.insert_option_group(0, self.cmd_opts)\n\n def run(self, options, args):\n if not args:\n logger.warning('ERROR: Please provide a package name or names.')\n return ERROR\n query = args\n\n results = search_packages_info(query)\n if not print_results(results, options.files):\n return ERROR\n return SUCCESS\n\n\ndef search_packages_info(query):\n \"\"\"\n Gather details from installed distributions. Print distribution name,\n version, location, and installed files. Installed files requires a\n pip generated 'installed-files.txt' in the distributions '.egg-info'\n directory.\n \"\"\"\n installed = dict(\n [(p.project_name.lower(), p) for p in pkg_resources.working_set])\n query_names = [name.lower() for name in query]\n for dist in [installed[pkg] for pkg in query_names if pkg in installed]:\n package = {\n 'name': dist.project_name,\n 'version': dist.version,\n 'location': dist.location,\n 'requires': [dep.project_name for dep in dist.requires()],\n }\n file_list = None\n metadata = None\n if isinstance(dist, pkg_resources.DistInfoDistribution):\n # RECORDs should be part of .dist-info metadatas\n if dist.has_metadata('RECORD'):\n lines = dist.get_metadata_lines('RECORD')\n paths = [l.split(',')[0] for l in lines]\n paths = [os.path.join(dist.location, p) for p in paths]\n file_list = [os.path.relpath(p, dist.location) for p in paths]\n\n if dist.has_metadata('METADATA'):\n metadata = dist.get_metadata('METADATA')\n else:\n # Otherwise use pip's log for .egg-info's\n if dist.has_metadata('installed-files.txt'):\n paths = dist.get_metadata_lines('installed-files.txt')\n paths = [os.path.join(dist.egg_info, p) for p in paths]\n file_list = [os.path.relpath(p, dist.location) for p in paths]\n\n if dist.has_metadata('PKG-INFO'):\n metadata = dist.get_metadata('PKG-INFO')\n\n if dist.has_metadata('entry_points.txt'):\n entry_points = dist.get_metadata_lines('entry_points.txt')\n package['entry_points'] = entry_points\n\n installer = None\n if dist.has_metadata('INSTALLER'):\n for line in dist.get_metadata_lines('INSTALLER'):\n if line.strip():\n installer = line.strip()\n break\n package['installer'] = installer\n\n # @todo: Should pkg_resources.Distribution have a\n # `get_pkg_info` method?\n feed_parser = FeedParser()\n feed_parser.feed(metadata)\n pkg_info_dict = feed_parser.close()\n for key in ('metadata-version', 'summary',\n 'home-page', 'author', 'author-email', 'license'):\n package[key] = pkg_info_dict.get(key)\n\n # It looks like FeedParser can not deal with repeated headers\n classifiers = []\n for line in metadata.splitlines():\n if not line:\n break\n # Classifier: License :: OSI Approved :: MIT License\n if line.startswith('Classifier: '):\n classifiers.append(line[len('Classifier: '):])\n package['classifiers'] = classifiers\n\n if file_list:\n package['files'] = 
sorted(file_list)\n yield package\n\n\ndef print_results(distributions, list_all_files):\n \"\"\"\n Print the informations from installed distributions found.\n \"\"\"\n results_printed = False\n for dist in distributions:\n results_printed = True\n logger.info(\"---\")\n logger.info(\"Metadata-Version: %s\", dist.get('metadata-version'))\n logger.info(\"Name: %s\", dist['name'])\n logger.info(\"Version: %s\", dist['version'])\n logger.info(\"Summary: %s\", dist.get('summary'))\n logger.info(\"Home-page: %s\", dist.get('home-page'))\n logger.info(\"Author: %s\", dist.get('author'))\n logger.info(\"Author-email: %s\", dist.get('author-email'))\n if dist['installer'] is not None:\n logger.info(\"Installer: %s\", dist['installer'])\n logger.info(\"License: %s\", dist.get('license'))\n logger.info(\"Location: %s\", dist['location'])\n logger.info(\"Requires: %s\", ', '.join(dist['requires']))\n logger.info(\"Classifiers:\")\n for classifier in dist['classifiers']:\n logger.info(\" %s\", classifier)\n if list_all_files:\n logger.info(\"Files:\")\n if 'files' in dist:\n for line in dist['files']:\n logger.info(\" %s\", line.strip())\n else:\n logger.info(\"Cannot locate installed-files.txt\")\n if 'entry_points' in dist:\n logger.info(\"Entry-points:\")\n for line in dist['entry_points']:\n logger.info(\" %s\", line.strip())\n return results_printed\n", "path": "pip/commands/show.py"}]} | 1,783 | 285 |
gh_patches_debug_27488 | rasdani/github-patches | git_diff | coala__coala-2732 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WIP: JuliaRequirement: Add installation methods
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/bears/requirements/JuliaRequirement.py`
Content:
```
1 from coalib.bears.requirements.PackageRequirement import PackageRequirement
2
3
4 class JuliaRequirement(PackageRequirement):
5 """
6 This class is a subclass of ``PackageRequirement``, and helps specifying
7 requirements from ``julia``, without using the manager name.
8 """
9
10 def __init__(self, package, version=""):
11 """
12 Constructs a new ``JuliaRequirement``, using the ``PackageRequirement``
13 constructor.
14
15 >>> pr = JuliaRequirement('Lint', '19.2')
16 >>> pr.manager
17 'julia'
18 >>> pr.package
19 'Lint'
20 >>> pr.version
21 '19.2'
22
23 :param package: A string with the name of the package to be installed.
24 :param version: A version string. Leave empty to specify latest version.
25 """
26 PackageRequirement.__init__(self, 'julia', package, version)
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/coalib/bears/requirements/JuliaRequirement.py b/coalib/bears/requirements/JuliaRequirement.py
--- a/coalib/bears/requirements/JuliaRequirement.py
+++ b/coalib/bears/requirements/JuliaRequirement.py
@@ -1,4 +1,9 @@
+import shlex
+
from coalib.bears.requirements.PackageRequirement import PackageRequirement
+from coalib.misc.Shell import call_without_output
+
+from coala_utils.string_processing import escape
class JuliaRequirement(PackageRequirement):
@@ -24,3 +29,31 @@
:param version: A version string. Leave empty to specify latest version.
"""
PackageRequirement.__init__(self, 'julia', package, version)
+
+ def install_command(self):
+ """
+ Creates the installation command for the instance of the class.
+
+ >>> JuliaRequirement('Lint').install_command()
+ 'julia -e \\'Pkg.add("Lint")\\''
+
+ :return: A string with the installation command.
+ """
+ code = 'Pkg.add("{}")'.format(escape(self.package, '\\"'))
+ args = ('julia', '-e', shlex.quote(code))
+ return ' '.join(args)
+
+ def is_installed(self):
+ """
+ Checks if the dependency is installed.
+
+ :return: ``True`` if dependency is installed, ``False`` otherwise.
+ """
+ # We need to check explicitly if `nothing` is returned, as this happens
+ # when the package is *registered*, but *not installed*. If it's not
+ # even registered, julia will throw an exception which lets julia exit
+ # with an error code different from 0.
+ code = 'Pkg.installed("{}")==nothing?exit(1):exit(0)'.format(
+ escape(self.package, '\\"'))
+ args = ('julia', '-e', code)
+ return not call_without_output(args)
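
Hypothetical usage of the two methods this diff adds, with expected values taken from the docstrings above:

```
req = JuliaRequirement('Lint')
req.install_command()   # 'julia -e \'Pkg.add("Lint")\''
if not req.is_installed():
    print(req.install_command())
```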
| {"golden_diff": "diff --git a/coalib/bears/requirements/JuliaRequirement.py b/coalib/bears/requirements/JuliaRequirement.py\n--- a/coalib/bears/requirements/JuliaRequirement.py\n+++ b/coalib/bears/requirements/JuliaRequirement.py\n@@ -1,4 +1,9 @@\n+import shlex\n+\n from coalib.bears.requirements.PackageRequirement import PackageRequirement\n+from coalib.misc.Shell import call_without_output\n+\n+from coala_utils.string_processing import escape\n \n \n class JuliaRequirement(PackageRequirement):\n@@ -24,3 +29,31 @@\n :param version: A version string. Leave empty to specify latest version.\n \"\"\"\n PackageRequirement.__init__(self, 'julia', package, version)\n+\n+ def install_command(self):\n+ \"\"\"\n+ Creates the installation command for the instance of the class.\n+\n+ >>> JuliaRequirement('Lint').install_command()\n+ 'julia -e \\\\'Pkg.add(\"Lint\")\\\\''\n+\n+ :return: A string with the installation command.\n+ \"\"\"\n+ code = 'Pkg.add(\"{}\")'.format(escape(self.package, '\\\\\"'))\n+ args = ('julia', '-e', shlex.quote(code))\n+ return ' '.join(args)\n+\n+ def is_installed(self):\n+ \"\"\"\n+ Checks if the dependency is installed.\n+\n+ :return: ``True`` if dependency is installed, ``False`` otherwise.\n+ \"\"\"\n+ # We need to check explicitly if `nothing` is returned, as this happens\n+ # when the package is *registered*, but *not installed*. If it's not\n+ # even registered, julia will throw an exception which lets julia exit\n+ # with an error code different from 0.\n+ code = 'Pkg.installed(\"{}\")==nothing?exit(1):exit(0)'.format(\n+ escape(self.package, '\\\\\"'))\n+ args = ('julia', '-e', code)\n+ return not call_without_output(args)\n", "issue": "WIP: JuliaRequirement: Add installation methods\n\n", "before_files": [{"content": "from coalib.bears.requirements.PackageRequirement import PackageRequirement\n\n\nclass JuliaRequirement(PackageRequirement):\n \"\"\"\n This class is a subclass of ``PackageRequirement``, and helps specifying\n requirements from ``julia``, without using the manager name.\n \"\"\"\n\n def __init__(self, package, version=\"\"):\n \"\"\"\n Constructs a new ``JuliaRequirement``, using the ``PackageRequirement``\n constructor.\n\n >>> pr = JuliaRequirement('Lint', '19.2')\n >>> pr.manager\n 'julia'\n >>> pr.package\n 'Lint'\n >>> pr.version\n '19.2'\n\n :param package: A string with the name of the package to be installed.\n :param version: A version string. Leave empty to specify latest version.\n \"\"\"\n PackageRequirement.__init__(self, 'julia', package, version)\n", "path": "coalib/bears/requirements/JuliaRequirement.py"}], "after_files": [{"content": "import shlex\n\nfrom coalib.bears.requirements.PackageRequirement import PackageRequirement\nfrom coalib.misc.Shell import call_without_output\n\nfrom coala_utils.string_processing import escape\n\n\nclass JuliaRequirement(PackageRequirement):\n \"\"\"\n This class is a subclass of ``PackageRequirement``, and helps specifying\n requirements from ``julia``, without using the manager name.\n \"\"\"\n\n def __init__(self, package, version=\"\"):\n \"\"\"\n Constructs a new ``JuliaRequirement``, using the ``PackageRequirement``\n constructor.\n\n >>> pr = JuliaRequirement('Lint', '19.2')\n >>> pr.manager\n 'julia'\n >>> pr.package\n 'Lint'\n >>> pr.version\n '19.2'\n\n :param package: A string with the name of the package to be installed.\n :param version: A version string. 
Leave empty to specify latest version.\n \"\"\"\n PackageRequirement.__init__(self, 'julia', package, version)\n\n def install_command(self):\n \"\"\"\n Creates the installation command for the instance of the class.\n\n >>> JuliaRequirement('Lint').install_command()\n 'julia -e \\\\'Pkg.add(\"Lint\")\\\\''\n\n :return: A string with the installation command.\n \"\"\"\n code = 'Pkg.add(\"{}\")'.format(escape(self.package, '\\\\\"'))\n args = ('julia', '-e', shlex.quote(code))\n return ' '.join(args)\n\n def is_installed(self):\n \"\"\"\n Checks if the dependency is installed.\n\n :return: ``True`` if dependency is installed, ``False`` otherwise.\n \"\"\"\n # We need to check explicitly if `nothing` is returned, as this happens\n # when the package is *registered*, but *not installed*. If it's not\n # even registered, julia will throw an exception which lets julia exit\n # with an error code different from 0.\n code = 'Pkg.installed(\"{}\")==nothing?exit(1):exit(0)'.format(\n escape(self.package, '\\\\\"'))\n args = ('julia', '-e', code)\n return not call_without_output(args)\n", "path": "coalib/bears/requirements/JuliaRequirement.py"}]} | 506 | 446 |
gh_patches_debug_8769 | rasdani/github-patches | git_diff | translate__translate-3429 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Multistring doesn't work as a python2 string
Previously you were able to do ~~`str(multistring("foo"))`~~ but it seems this no longer works.
EDIT: apologies, that was a bad example; it should have read something like `str(multistring("é"))`
I'm wondering if we should add py2 compat similar to the one in Django (https://docs.djangoproject.com/en/1.9/_modules/django/utils/six/#python_2_unicode_compatible)
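
A minimal sketch of such a shim on the class itself (mirroring Django's `python_2_unicode_compatible` idea: return UTF-8 bytes from `__str__` on Python 2 only):

```
import six

class multistring(six.text_type):
    # ... existing methods ...

    def __str__(self):
        if six.PY2:
            return self.encode('utf-8')
        return super(multistring, self).__str__()
```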
@claudep any thoughts on this?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `translate/misc/multistring.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2006 Zuza Software Foundation
4 #
5 # This file is part of translate.
6 #
7 # translate is free software; you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation; either version 2 of the License, or
10 # (at your option) any later version.
11 #
12 # translate is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with this program; if not, see <http://www.gnu.org/licenses/>.
19
20 """Supports a hybrid Unicode string that can also have a list of alternate
21 strings in the strings attribute
22 """
23
24 import six
25
26
27 class multistring(six.text_type):
28
29 def __new__(newtype, string=u"", *args, **kwargs):
30 if isinstance(string, list):
31 if not string:
32 raise ValueError("multistring must contain at least one string")
33 mainstring = string[0]
34 newstring = multistring.__new__(newtype, string[0])
35 newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]
36 else:
37 newstring = six.text_type.__new__(newtype, string)
38 newstring.strings = [newstring]
39 return newstring
40
41 def __init__(self, *args, **kwargs):
42 super(multistring, self).__init__()
43 if not hasattr(self, "strings"):
44 self.strings = []
45
46 def __cmp__(self, otherstring):
47 def cmp_compat(s1, s2):
48 # Python 3 compatible cmp() equivalent
49 return (s1 > s2) - (s1 < s2)
50 if isinstance(otherstring, multistring):
51 parentcompare = cmp_compat(six.text_type(self), otherstring)
52 if parentcompare:
53 return parentcompare
54 else:
55 return cmp_compat(self.strings[1:], otherstring.strings[1:])
56 elif isinstance(otherstring, six.text_type):
57 return cmp_compat(six.text_type(self), otherstring)
58 elif isinstance(otherstring, bytes):
59 return cmp_compat(self.encode('utf-8'), otherstring)
60 elif isinstance(otherstring, list) and otherstring:
61 return cmp_compat(self, multistring(otherstring))
62 else:
63 return cmp_compat(str(type(self)), str(type(otherstring)))
64
65 def __hash__(self):
66 return hash(''.join(self.strings))
67
68 def __ne__(self, otherstring):
69 return self.__cmp__(otherstring) != 0
70
71 def __eq__(self, otherstring):
72 return self.__cmp__(otherstring) == 0
73
74 def __repr__(self):
75 _repr = "multistring([" + u",".join(self.strings) + "])"
76 return _repr.encode('utf-8') if six.PY2 else _repr
77
78 def replace(self, old, new, count=None):
79 if count is None:
80 newstr = multistring(super(multistring, self).replace(old, new))
81 else:
82 newstr = multistring(super(multistring, self).replace(old, new, count))
83 for s in self.strings[1:]:
84 if count is None:
85 newstr.strings.append(s.replace(old, new))
86 else:
87 newstr.strings.append(s.replace(old, new, count))
88 return newstr
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/translate/misc/multistring.py b/translate/misc/multistring.py
--- a/translate/misc/multistring.py
+++ b/translate/misc/multistring.py
@@ -75,6 +75,11 @@
_repr = "multistring([" + u",".join(self.strings) + "])"
return _repr.encode('utf-8') if six.PY2 else _repr
+ def __str__(self):
+ if six.PY2:
+ return self.encode('utf-8')
+ return super(multistring, self).__str__()
+
def replace(self, old, new, count=None):
if count is None:
     newstr = multistring(super(multistring, self).replace(old, new))
```
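
The change mirrors Django's `python_2_unicode_compatible` idea referenced in the issue: define `__str__` so that Python 2's `str()` gets UTF-8 bytes while Python 3 keeps the normal text path. Below is a minimal standalone sketch of that pattern, using a hypothetical `Text` class and assuming `six` is installed; it illustrates the technique and is not the translate-toolkit code:

```python
# -*- coding: utf-8 -*-
import six


class Text(six.text_type):
    """Unicode subclass whose str() works on both Python 2 and 3."""

    def __str__(self):
        if six.PY2:
            # Python 2's str() must return bytes; encoding explicitly keeps
            # str(Text(u"\xe9")) from raising UnicodeEncodeError.
            return self.encode('utf-8')
        return super(Text, self).__str__()


print(str(Text(u"\xe9")))  # works on both major versions
```

With the patch applied, `str(multistring(u"\xe9"))` should behave the same way.
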
| {"golden_diff": "diff --git a/translate/misc/multistring.py b/translate/misc/multistring.py\n--- a/translate/misc/multistring.py\n+++ b/translate/misc/multistring.py\n@@ -75,6 +75,11 @@\n _repr = \"multistring([\" + u\",\".join(self.strings) + \"])\"\n return _repr.encode('utf-8') if six.PY2 else _repr\n \n+ def __str__(self):\n+ if six.PY2:\n+ return self.encode('utf-8')\n+ return super(multistring, self).__str__()\n+\n def replace(self, old, new, count=None):\n if count is None:\n newstr = multistring(super(multistring, self).replace(old, new))\n", "issue": "Multistring doesnt work as a python2 string\nPreviously you were able to so ~~`str(multistring(\"foo\"))`~~ but it seems this no longer works.\n\nEDIT: apologies that was a bad example it should have read something like `str(multistring(\"\u00e9\"))`\n\nIm wondering if we should add py2 compat similar to the one in django (https://docs.djangoproject.com/en/1.9/_modules/django/utils/six/#python_2_unicode_compatible)\n\n@claudep any thoughts on this?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2006 Zuza Software Foundation\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Supports a hybrid Unicode string that can also have a list of alternate\nstrings in the strings attribute\n\"\"\"\n\nimport six\n\n\nclass multistring(six.text_type):\n\n def __new__(newtype, string=u\"\", *args, **kwargs):\n if isinstance(string, list):\n if not string:\n raise ValueError(\"multistring must contain at least one string\")\n mainstring = string[0]\n newstring = multistring.__new__(newtype, string[0])\n newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]\n else:\n newstring = six.text_type.__new__(newtype, string)\n newstring.strings = [newstring]\n return newstring\n\n def __init__(self, *args, **kwargs):\n super(multistring, self).__init__()\n if not hasattr(self, \"strings\"):\n self.strings = []\n\n def __cmp__(self, otherstring):\n def cmp_compat(s1, s2):\n # Python 3 compatible cmp() equivalent\n return (s1 > s2) - (s1 < s2)\n if isinstance(otherstring, multistring):\n parentcompare = cmp_compat(six.text_type(self), otherstring)\n if parentcompare:\n return parentcompare\n else:\n return cmp_compat(self.strings[1:], otherstring.strings[1:])\n elif isinstance(otherstring, six.text_type):\n return cmp_compat(six.text_type(self), otherstring)\n elif isinstance(otherstring, bytes):\n return cmp_compat(self.encode('utf-8'), otherstring)\n elif isinstance(otherstring, list) and otherstring:\n return cmp_compat(self, multistring(otherstring))\n else:\n return cmp_compat(str(type(self)), str(type(otherstring)))\n\n def __hash__(self):\n return hash(''.join(self.strings))\n\n def __ne__(self, otherstring):\n return self.__cmp__(otherstring) != 0\n\n def __eq__(self, otherstring):\n return self.__cmp__(otherstring) == 0\n\n def __repr__(self):\n 
_repr = \"multistring([\" + u\",\".join(self.strings) + \"])\"\n return _repr.encode('utf-8') if six.PY2 else _repr\n\n def replace(self, old, new, count=None):\n if count is None:\n newstr = multistring(super(multistring, self).replace(old, new))\n else:\n newstr = multistring(super(multistring, self).replace(old, new, count))\n for s in self.strings[1:]:\n if count is None:\n newstr.strings.append(s.replace(old, new))\n else:\n newstr.strings.append(s.replace(old, new, count))\n return newstr\n", "path": "translate/misc/multistring.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2006 Zuza Software Foundation\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Supports a hybrid Unicode string that can also have a list of alternate\nstrings in the strings attribute\n\"\"\"\n\nimport six\n\n\nclass multistring(six.text_type):\n\n def __new__(newtype, string=u\"\", *args, **kwargs):\n if isinstance(string, list):\n if not string:\n raise ValueError(\"multistring must contain at least one string\")\n mainstring = string[0]\n newstring = multistring.__new__(newtype, string[0])\n newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]\n else:\n newstring = six.text_type.__new__(newtype, string)\n newstring.strings = [newstring]\n return newstring\n\n def __init__(self, *args, **kwargs):\n super(multistring, self).__init__()\n if not hasattr(self, \"strings\"):\n self.strings = []\n\n def __cmp__(self, otherstring):\n def cmp_compat(s1, s2):\n # Python 3 compatible cmp() equivalent\n return (s1 > s2) - (s1 < s2)\n if isinstance(otherstring, multistring):\n parentcompare = cmp_compat(six.text_type(self), otherstring)\n if parentcompare:\n return parentcompare\n else:\n return cmp_compat(self.strings[1:], otherstring.strings[1:])\n elif isinstance(otherstring, six.text_type):\n return cmp_compat(six.text_type(self), otherstring)\n elif isinstance(otherstring, bytes):\n return cmp_compat(self.encode('utf-8'), otherstring)\n elif isinstance(otherstring, list) and otherstring:\n return cmp_compat(self, multistring(otherstring))\n else:\n return cmp_compat(str(type(self)), str(type(otherstring)))\n\n def __hash__(self):\n return hash(''.join(self.strings))\n\n def __ne__(self, otherstring):\n return self.__cmp__(otherstring) != 0\n\n def __eq__(self, otherstring):\n return self.__cmp__(otherstring) == 0\n\n def __repr__(self):\n _repr = \"multistring([\" + u\",\".join(self.strings) + \"])\"\n return _repr.encode('utf-8') if six.PY2 else _repr\n\n def __str__(self):\n if six.PY2:\n return self.encode('utf-8')\n return super(multistring, self).__str__()\n\n def replace(self, old, new, count=None):\n if count is None:\n newstr = multistring(super(multistring, self).replace(old, new))\n else:\n newstr = multistring(super(multistring, self).replace(old, new, count))\n for s in self.strings[1:]:\n if count is 
None:\n newstr.strings.append(s.replace(old, new))\n else:\n newstr.strings.append(s.replace(old, new, count))\n return newstr\n", "path": "translate/misc/multistring.py"}]} | 1,314 | 167 |
gh_patches_debug_15897 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-563

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve help access and prompts from the CLI
A new user might enter any of the following and expect to be guided to the Help information about using the CLI:
- `cookiecutter`
- `cookiecutter -h`
- `cookiecutter --h`
- `cookiecutter help`
It would be helpful to improve the CLI messages or handle some of the additional cases for a new user.
@audreyr Thanks for the handy tool :bouquet:
<img width="773" alt="screen shot 2015-08-07 at 7 54 57 am" src="https://cloud.githubusercontent.com/assets/2680980/9138629/e64cb42a-3cd9-11e5-8c9c-cb9d726bd1eb.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/cli.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.cli
6 -----------------
7
8 Main `cookiecutter` CLI.
9 """
10
11 import os
12 import sys
13 import logging
14
15 import click
16
17 from cookiecutter import __version__
18 from cookiecutter.main import cookiecutter
19 from cookiecutter.exceptions import (
20 OutputDirExistsException, InvalidModeException
21 )
22
23 logger = logging.getLogger(__name__)
24
25
26 def version_msg():
27 python_version = sys.version[:3]
28 location = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
29 message = u'Cookiecutter %(version)s from {} (Python {})'
30 return message.format(location, python_version)
31
32
33 @click.command()
34 @click.version_option(__version__, u'-V', u'--version', message=version_msg())
35 @click.argument(u'template')
36 @click.option(
37 u'--no-input', is_flag=True,
38 help=u'Do not prompt for parameters and only use cookiecutter.json '
39 u'file content',
40 )
41 @click.option(
42 u'-c', u'--checkout',
43 help=u'branch, tag or commit to checkout after git clone',
44 )
45 @click.option(
46 '-v', '--verbose',
47 is_flag=True, help='Print debug information', default=False
48 )
49 @click.option(
50 u'--replay', is_flag=True,
51 help=u'Do not prompt for parameters and only use information entered '
52 u'previously',
53 )
54 @click.option(
55 u'-f', u'--overwrite-if-exists', is_flag=True,
56 help=u'Overwrite the contents of the output directory if it already exists'
57 )
58 @click.option(
59 u'-o', u'--output-dir', default='.', type=click.Path(),
60 help=u'Where to output the generated project dir into'
61 )
62 def main(template, no_input, checkout, verbose, replay, overwrite_if_exists,
63 output_dir):
64 """Create a project from a Cookiecutter project template (TEMPLATE)."""
65 if verbose:
66 logging.basicConfig(
67 format=u'%(levelname)s %(filename)s: %(message)s',
68 level=logging.DEBUG
69 )
70 else:
71 # Log info and above to console
72 logging.basicConfig(
73 format=u'%(levelname)s: %(message)s',
74 level=logging.INFO
75 )
76
77 try:
78 cookiecutter(
79 template, checkout, no_input,
80 replay=replay,
81 overwrite_if_exists=overwrite_if_exists,
82 output_dir=output_dir
83 )
84 except (OutputDirExistsException, InvalidModeException) as e:
85 click.echo(e)
86 sys.exit(1)
87
88 if __name__ == "__main__":
89 main()
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

```diff
diff --git a/cookiecutter/cli.py b/cookiecutter/cli.py
--- a/cookiecutter/cli.py
+++ b/cookiecutter/cli.py
@@ -30,7 +30,7 @@
return message.format(location, python_version)
[email protected]()
[email protected](context_settings=dict(help_option_names=[u'-h', u'--help']))
@click.version_option(__version__, u'-V', u'--version', message=version_msg())
@click.argument(u'template')
@click.option(
@@ -75,6 +75,13 @@
)
try:
+
+ # If you _need_ to support a local template in a directory
+ # called 'help', use a qualified path to the directory.
+ if template == u'help':
+ click.echo(click.get_current_context().get_help())
+ sys.exit(0)
+
cookiecutter(
template, checkout, no_input,
             replay=replay,
```
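
The patch leans on two Click features: `context_settings` registers `-h` as an alias for `--help`, and `click.get_current_context().get_help()` lets the literal argument `help` print the same usage text. A self-contained sketch of both, as a hypothetical toy command and assuming Click is installed:

```python
import sys

import click


@click.command(context_settings=dict(help_option_names=['-h', '--help']))
@click.argument('template')
def main(template):
    """Create a project from a project template (TEMPLATE)."""
    # Treat a literal 'help' argument as a request for the usage text.
    if template == 'help':
        click.echo(click.get_current_context().get_help())
        sys.exit(0)
    click.echo('would render template: {}'.format(template))


if __name__ == '__main__':
    main()
```

With this in place, `-h`, `--help`, and a bare `help` argument all print usage; running with no argument still produces Click's "missing argument" error, which itself includes the usage line.
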
| {"golden_diff": "diff --git a/cookiecutter/cli.py b/cookiecutter/cli.py\n--- a/cookiecutter/cli.py\n+++ b/cookiecutter/cli.py\n@@ -30,7 +30,7 @@\n return message.format(location, python_version)\n \n \[email protected]()\[email protected](context_settings=dict(help_option_names=[u'-h', u'--help']))\n @click.version_option(__version__, u'-V', u'--version', message=version_msg())\n @click.argument(u'template')\n @click.option(\n@@ -75,6 +75,13 @@\n )\n \n try:\n+\n+ # If you _need_ to support a local template in a directory\n+ # called 'help', use a qualified path to the directory.\n+ if template == u'help':\n+ click.echo(click.get_current_context().get_help())\n+ sys.exit(0)\n+\n cookiecutter(\n template, checkout, no_input,\n replay=replay,\n", "issue": "Improve help access and prompts from the CLI\nA new user might enter any of the following and expect to be guided to the Help information about using the CLI:\n- `cookiecutter`\n- `cookiecutter -h`\n- `cookiecutter --h`\n- `cookiecutter help`\n\nIt would be helpful to improve the CLI messages or handle some of the additional cases for a new user.\n\n@audreyr Thanks for the handy tool :bouquet: \n\n<img width=\"773\" alt=\"screen shot 2015-08-07 at 7 54 57 am\" src=\"https://cloud.githubusercontent.com/assets/2680980/9138629/e64cb42a-3cd9-11e5-8c9c-cb9d726bd1eb.png\">\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.cli\n-----------------\n\nMain `cookiecutter` CLI.\n\"\"\"\n\nimport os\nimport sys\nimport logging\n\nimport click\n\nfrom cookiecutter import __version__\nfrom cookiecutter.main import cookiecutter\nfrom cookiecutter.exceptions import (\n OutputDirExistsException, InvalidModeException\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef version_msg():\n python_version = sys.version[:3]\n location = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n message = u'Cookiecutter %(version)s from {} (Python {})'\n return message.format(location, python_version)\n\n\[email protected]()\[email protected]_option(__version__, u'-V', u'--version', message=version_msg())\[email protected](u'template')\[email protected](\n u'--no-input', is_flag=True,\n help=u'Do not prompt for parameters and only use cookiecutter.json '\n u'file content',\n)\[email protected](\n u'-c', u'--checkout',\n help=u'branch, tag or commit to checkout after git clone',\n)\[email protected](\n '-v', '--verbose',\n is_flag=True, help='Print debug information', default=False\n)\[email protected](\n u'--replay', is_flag=True,\n help=u'Do not prompt for parameters and only use information entered '\n u'previously',\n)\[email protected](\n u'-f', u'--overwrite-if-exists', is_flag=True,\n help=u'Overwrite the contents of the output directory if it already exists'\n)\[email protected](\n u'-o', u'--output-dir', default='.', type=click.Path(),\n help=u'Where to output the generated project dir into'\n)\ndef main(template, no_input, checkout, verbose, replay, overwrite_if_exists,\n output_dir):\n \"\"\"Create a project from a Cookiecutter project template (TEMPLATE).\"\"\"\n if verbose:\n logging.basicConfig(\n format=u'%(levelname)s %(filename)s: %(message)s',\n level=logging.DEBUG\n )\n else:\n # Log info and above to console\n logging.basicConfig(\n format=u'%(levelname)s: %(message)s',\n level=logging.INFO\n )\n\n try:\n cookiecutter(\n template, checkout, no_input,\n replay=replay,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir\n )\n except (OutputDirExistsException, 
InvalidModeException) as e:\n click.echo(e)\n sys.exit(1)\n\nif __name__ == \"__main__\":\n main()\n", "path": "cookiecutter/cli.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.cli\n-----------------\n\nMain `cookiecutter` CLI.\n\"\"\"\n\nimport os\nimport sys\nimport logging\n\nimport click\n\nfrom cookiecutter import __version__\nfrom cookiecutter.main import cookiecutter\nfrom cookiecutter.exceptions import (\n OutputDirExistsException, InvalidModeException\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef version_msg():\n python_version = sys.version[:3]\n location = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n message = u'Cookiecutter %(version)s from {} (Python {})'\n return message.format(location, python_version)\n\n\[email protected](context_settings=dict(help_option_names=[u'-h', u'--help']))\[email protected]_option(__version__, u'-V', u'--version', message=version_msg())\[email protected](u'template')\[email protected](\n u'--no-input', is_flag=True,\n help=u'Do not prompt for parameters and only use cookiecutter.json '\n u'file content',\n)\[email protected](\n u'-c', u'--checkout',\n help=u'branch, tag or commit to checkout after git clone',\n)\[email protected](\n '-v', '--verbose',\n is_flag=True, help='Print debug information', default=False\n)\[email protected](\n u'--replay', is_flag=True,\n help=u'Do not prompt for parameters and only use information entered '\n u'previously',\n)\[email protected](\n u'-f', u'--overwrite-if-exists', is_flag=True,\n help=u'Overwrite the contents of the output directory if it already exists'\n)\[email protected](\n u'-o', u'--output-dir', default='.', type=click.Path(),\n help=u'Where to output the generated project dir into'\n)\ndef main(template, no_input, checkout, verbose, replay, overwrite_if_exists,\n output_dir):\n \"\"\"Create a project from a Cookiecutter project template (TEMPLATE).\"\"\"\n if verbose:\n logging.basicConfig(\n format=u'%(levelname)s %(filename)s: %(message)s',\n level=logging.DEBUG\n )\n else:\n # Log info and above to console\n logging.basicConfig(\n format=u'%(levelname)s: %(message)s',\n level=logging.INFO\n )\n\n try:\n\n # If you _need_ to support a local template in a directory\n # called 'help', use a qualified path to the directory.\n if template == u'help':\n click.echo(click.get_current_context().get_help())\n sys.exit(0)\n\n cookiecutter(\n template, checkout, no_input,\n replay=replay,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir\n )\n except (OutputDirExistsException, InvalidModeException) as e:\n click.echo(e)\n sys.exit(1)\n\nif __name__ == \"__main__\":\n main()\n", "path": "cookiecutter/cli.py"}]} | 1,187 | 214 |
gh_patches_debug_9774 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1214

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NoEcho password warning on a Lambda Function named AccountPassword
*cfn-lint version: 0.25.1*
*Description of issue.*
I'm getting this warning:
```
W2501 Inappropriate map found for password on Resources/AccountPassword
```
on this Resource in my SAM template:
```yaml
AccountPassword:
Type: AWS::Serverless::Function
Properties:
FunctionName: !Sub ${RepositoryName}-account-password-${StageName}
Runtime: nodejs10.x
Layers:
- !Ref LambdaDependencyLayer
CodeUri: dist/account
Handler: password.handler
Description: Reset User Password
Timeout: 20
Events:
GET:
Type: Api
Properties:
RestApiId:
Ref: RestApi
Path: /account/password
Method: get
Role: !GetAtt LambdaExecutionRole.Arn
DeploymentPreference:
Alarms:
- !Ref AccountPasswordLatestVersionErrorMetricGreaterThanZeroAlarm
```
I'm guessing the resolution would be to make sure this check is only done on Parameters and not Resources.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/properties/Password.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import re
6 import six
7 from cfnlint.rules import CloudFormationLintRule
8 from cfnlint.rules import RuleMatch
9 from cfnlint.helpers import REGEX_DYN_REF_SSM, REGEX_DYN_REF
10
11
12 class Password(CloudFormationLintRule):
13 """Check if Password Properties are properly configured"""
14 id = 'W2501'
15 shortdesc = 'Check if Password Properties are correctly configured'
16 description = 'Password properties should be strings and if parameter using NoEcho'
17 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#creds'
18 tags = ['parameters', 'passwords', 'security']
19
20 def match(self, cfn):
21 """Check CloudFormation Password Parameters"""
22
23 matches = []
24 password_properties = ['AccountPassword', 'AdminPassword', 'ADDomainJoinPassword', 'CrossRealmTrustPrincipalPassword',
25 'KdcAdminPassword', 'Password', 'DbPassword', 'MasterUserPassword', 'PasswordParam']
26
27 parameters = cfn.get_parameter_names()
28 fix_params = []
29 for password_property in password_properties:
30 # Build the list of refs
31 refs = cfn.search_deep_keys(password_property)
32 trees = [tree for tree in refs if tree[0] == 'Resources']
33
34 for tree in trees:
35 obj = tree[-1]
36 if isinstance(obj, (six.string_types)):
37 if re.match(REGEX_DYN_REF, obj):
38 if re.match(REGEX_DYN_REF_SSM, obj):
39 message = 'Password should use a secure dynamic reference for %s' % (
40 '/'.join(map(str, tree[:-1])))
41 matches.append(RuleMatch(tree[:-1], message))
42 else:
43 message = 'Password shouldn\'t be hardcoded for %s' % (
44 '/'.join(map(str, tree[:-1])))
45 matches.append(RuleMatch(tree[:-1], message))
46 elif isinstance(obj, dict):
47 if len(obj) == 1:
48 for key, value in obj.items():
49 if key == 'Ref':
50 if value in parameters:
51 param = cfn.template['Parameters'][value]
52 if 'NoEcho' in param:
53 if not param['NoEcho']:
54 fix_params.append(
55 {'Name': value, 'Use': password_property})
56 else:
57 fix_params.append({'Name': value, 'Use': password_property})
58 else:
59 message = 'Inappropriate map found for password on %s' % (
60 '/'.join(map(str, tree[:-1])))
61 matches.append(RuleMatch(tree[:-1], message))
62
63 for paramname in fix_params:
64 message = 'Parameter {} used as {}, therefore NoEcho should be True'.format(
65 paramname['Name'], paramname['Use'])
66 tree = ['Parameters', paramname['Name']]
67 matches.append(RuleMatch(tree, message))
68 return matches
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

```diff
diff --git a/src/cfnlint/rules/resources/properties/Password.py b/src/cfnlint/rules/resources/properties/Password.py
--- a/src/cfnlint/rules/resources/properties/Password.py
+++ b/src/cfnlint/rules/resources/properties/Password.py
@@ -29,7 +29,11 @@
for password_property in password_properties:
# Build the list of refs
refs = cfn.search_deep_keys(password_property)
- trees = [tree for tree in refs if tree[0] == 'Resources']
+ trees = []
+ for tree in refs:
+ if len(tree) > 2:
+ if tree[0] == 'Resources' and tree[2] == 'Properties':
+ trees.append(tree)
for tree in trees:
             obj = tree[-1]
```
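
The key point is that `search_deep_keys` returns the full path to each match, so a resource that merely has a password-like *name* (the Lambda function `AccountPassword`) matches at index 1, while a genuine property lives under `Resources/<name>/Properties/...` with `'Properties'` at index 2. A small sketch of the filter using hand-written paths; the shapes are illustrative, not captured cfn-lint output:

```python
# Hand-written paths shaped like search_deep_keys() results: the last
# element is the matched value, the preceding elements are its location.
refs = [
    # A resource merely *named* AccountPassword -- should now be skipped.
    ['Resources', 'AccountPassword', {'Type': 'AWS::Serverless::Function'}],
    # A real password property -- should still be checked.
    ['Resources', 'MyDB', 'Properties', 'MasterUserPassword',
     {'Ref': 'DBPassword'}],
]

trees = []
for tree in refs:
    if len(tree) > 2:
        if tree[0] == 'Resources' and tree[2] == 'Properties':
            trees.append(tree)

print(trees)  # only the MyDB property path survives the filter
```
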
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/properties/Password.py b/src/cfnlint/rules/resources/properties/Password.py\n--- a/src/cfnlint/rules/resources/properties/Password.py\n+++ b/src/cfnlint/rules/resources/properties/Password.py\n@@ -29,7 +29,11 @@\n for password_property in password_properties:\n # Build the list of refs\n refs = cfn.search_deep_keys(password_property)\n- trees = [tree for tree in refs if tree[0] == 'Resources']\n+ trees = []\n+ for tree in refs:\n+ if len(tree) > 2:\n+ if tree[0] == 'Resources' and tree[2] == 'Properties':\n+ trees.append(tree)\n \n for tree in trees:\n obj = tree[-1]\n", "issue": "NoEcho password warning on a Lambda Function named AccountPassword\n*cfn-lint version: 0.25.1*\r\n\r\n*Description of issue.*\r\n\r\nI'm getting this warning:\r\n\r\n```\r\nW2501 Inappropriate map found for password on Resources/AccountPassword\r\n```\r\n\r\non this Resource in my SAM template:\r\n\r\n```yaml\r\n AccountPassword:\r\n Type: AWS::Serverless::Function\r\n Properties:\r\n FunctionName: !Sub ${RepositoryName}-account-password-${StageName}\r\n Runtime: nodejs10.x\r\n Layers:\r\n - !Ref LambdaDependencyLayer\r\n CodeUri: dist/account\r\n Handler: password.handler\r\n Description: Reset User Password\r\n Timeout: 20\r\n Events:\r\n GET:\r\n Type: Api\r\n Properties:\r\n RestApiId:\r\n Ref: RestApi\r\n Path: /account/password\r\n Method: get\r\n Role: !GetAtt LambdaExecutionRole.Arn\r\n DeploymentPreference:\r\n Alarms:\r\n - !Ref AccountPasswordLatestVersionErrorMetricGreaterThanZeroAlarm\r\n```\r\n\r\nI'm guessing the resolution would be to make sure this check is only done on Parameters and not Resouces.\n", "before_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import REGEX_DYN_REF_SSM, REGEX_DYN_REF\n\n\nclass Password(CloudFormationLintRule):\n \"\"\"Check if Password Properties are properly configured\"\"\"\n id = 'W2501'\n shortdesc = 'Check if Password Properties are correctly configured'\n description = 'Password properties should be strings and if parameter using NoEcho'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#creds'\n tags = ['parameters', 'passwords', 'security']\n\n def match(self, cfn):\n \"\"\"Check CloudFormation Password Parameters\"\"\"\n\n matches = []\n password_properties = ['AccountPassword', 'AdminPassword', 'ADDomainJoinPassword', 'CrossRealmTrustPrincipalPassword',\n 'KdcAdminPassword', 'Password', 'DbPassword', 'MasterUserPassword', 'PasswordParam']\n\n parameters = cfn.get_parameter_names()\n fix_params = []\n for password_property in password_properties:\n # Build the list of refs\n refs = cfn.search_deep_keys(password_property)\n trees = [tree for tree in refs if tree[0] == 'Resources']\n\n for tree in trees:\n obj = tree[-1]\n if isinstance(obj, (six.string_types)):\n if re.match(REGEX_DYN_REF, obj):\n if re.match(REGEX_DYN_REF_SSM, obj):\n message = 'Password should use a secure dynamic reference for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n else:\n message = 'Password shouldn\\'t be hardcoded for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n elif isinstance(obj, dict):\n if len(obj) == 1:\n for key, value in obj.items():\n if key == 'Ref':\n 
if value in parameters:\n param = cfn.template['Parameters'][value]\n if 'NoEcho' in param:\n if not param['NoEcho']:\n fix_params.append(\n {'Name': value, 'Use': password_property})\n else:\n fix_params.append({'Name': value, 'Use': password_property})\n else:\n message = 'Inappropriate map found for password on %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n\n for paramname in fix_params:\n message = 'Parameter {} used as {}, therefore NoEcho should be True'.format(\n paramname['Name'], paramname['Use'])\n tree = ['Parameters', paramname['Name']]\n matches.append(RuleMatch(tree, message))\n return matches\n", "path": "src/cfnlint/rules/resources/properties/Password.py"}], "after_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\nfrom cfnlint.helpers import REGEX_DYN_REF_SSM, REGEX_DYN_REF\n\n\nclass Password(CloudFormationLintRule):\n \"\"\"Check if Password Properties are properly configured\"\"\"\n id = 'W2501'\n shortdesc = 'Check if Password Properties are correctly configured'\n description = 'Password properties should be strings and if parameter using NoEcho'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#creds'\n tags = ['parameters', 'passwords', 'security']\n\n def match(self, cfn):\n \"\"\"Check CloudFormation Password Parameters\"\"\"\n\n matches = []\n password_properties = ['AccountPassword', 'AdminPassword', 'ADDomainJoinPassword', 'CrossRealmTrustPrincipalPassword',\n 'KdcAdminPassword', 'Password', 'DbPassword', 'MasterUserPassword', 'PasswordParam']\n\n parameters = cfn.get_parameter_names()\n fix_params = []\n for password_property in password_properties:\n # Build the list of refs\n refs = cfn.search_deep_keys(password_property)\n trees = []\n for tree in refs:\n if len(tree) > 2:\n if tree[0] == 'Resources' and tree[2] == 'Properties':\n trees.append(tree)\n\n for tree in trees:\n obj = tree[-1]\n if isinstance(obj, (six.string_types)):\n if re.match(REGEX_DYN_REF, obj):\n if re.match(REGEX_DYN_REF_SSM, obj):\n message = 'Password should use a secure dynamic reference for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n else:\n message = 'Password shouldn\\'t be hardcoded for %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n elif isinstance(obj, dict):\n if len(obj) == 1:\n for key, value in obj.items():\n if key == 'Ref':\n if value in parameters:\n param = cfn.template['Parameters'][value]\n if 'NoEcho' in param:\n if not param['NoEcho']:\n fix_params.append(\n {'Name': value, 'Use': password_property})\n else:\n fix_params.append({'Name': value, 'Use': password_property})\n else:\n message = 'Inappropriate map found for password on %s' % (\n '/'.join(map(str, tree[:-1])))\n matches.append(RuleMatch(tree[:-1], message))\n\n for paramname in fix_params:\n message = 'Parameter {} used as {}, therefore NoEcho should be True'.format(\n paramname['Name'], paramname['Use'])\n tree = ['Parameters', paramname['Name']]\n matches.append(RuleMatch(tree, message))\n return matches\n", "path": "src/cfnlint/rules/resources/properties/Password.py"}]} | 1,282 | 177 |
gh_patches_debug_20732 | rasdani/github-patches | git_diff | ocf__ocfweb-124

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stop emailing us when there's a paper jam
```
A problem was encountered and reported via ocflib:
An exception occured in an ocfweb periodic function:
Traceback (most recent call last):
File "/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/caching.py", line 56, in cache_lookup_with_fallback
raise KeyError('Forcing miss as requested.')
KeyError: 'Forcing miss as requested.'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/bin/run_periodic_functions.py", line 47, in run_periodic_functions
pf.update()
File "/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/caching.py", line 214, in update
force_miss=True,
File "/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/caching.py", line 66, in cache_lookup_with_fallback
result = fallback()
File "/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/caching.py", line 170, in function_with_timestamp
return (datetime.now(), self.function())
File "/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/stats/summary.py", line 68, in printers
for printer in PRINTERS
File "/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/stats/summary.py", line 68, in <genexpr>
for printer in PRINTERS
File "/usr/share/python/ocfweb/lib/python3.4/site-packages/ocflib/lab/printing.py", line 40, in get_toner
for oid in (OID_TONER_CUR, OID_TONER_MAX)
File "/usr/share/python/ocfweb/lib/python3.4/site-packages/ocflib/lab/printing.py", line 40, in <genexpr>
for oid in (OID_TONER_CUR, OID_TONER_MAX)
File "/usr/share/python/ocfweb/lib/python3.4/site-packages/ocflib/lab/printing.py", line 30, in _snmp
'Device {} returned error status: {}'.format(host, err_status),
OSError: Device deforestation returned error status: 5
Periodic function:
* Key: ('2016.01.20.18.11-gitbf0aae55', 'ocfweb.stats.summary#printers', (), ())
* Last Update: 2016-01-21 15:48:30.668738 (61.003647 seconds ago)
* Period: 60
* TTL: 120
The background process will now pause for 30 seconds.
====
Hostname: coma.ocf.berkeley.edu
Callstack:
at /usr/share/python/ocfweb/lib/python3.4/site-packages/ocflib/misc/mail.py:59 (send_problem_report)
by /usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/bin/run_periodic_functions.py:73 (run_periodic_functions)
by /usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/bin/run_periodic_functions.py:116 (main)
by /usr/share/python/ocfweb/bin/ocfweb-run-periodic-functions:9 (<module>)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ocfweb/stats/summary.py`
Content:
```
1 from datetime import date
2 from datetime import datetime
3 from operator import attrgetter
4
5 from django.shortcuts import render
6 from ocflib.constants import CURRENT_SEMESTER_START
7 from ocflib.lab.printing import get_maintkit
8 from ocflib.lab.printing import get_toner
9 from ocflib.lab.printing import PRINTERS
10 from ocflib.lab.stats import list_desktops
11 from ocflib.lab.stats import staff_in_lab as real_staff_in_lab
12 from ocflib.lab.stats import STATS_EPOCH
13 from ocflib.lab.stats import top_staff_alltime as real_top_staff_alltime
14 from ocflib.lab.stats import top_staff_semester as real_top_staff_semester
15 from ocflib.lab.stats import users_in_lab_count as real_users_in_lab_count
16 from ocflib.lab.stats import UtilizationProfile
17
18 from ocfweb.caching import periodic
19 from ocfweb.stats.daily_graph import get_open_close
20
21
22 @periodic(60)
23 def desktop_profiles():
24 open_, close = get_open_close(date.today())
25 now = datetime.today()
26
27 # If the lab has opened, but hasn't closed yet, only count
28 # statistics until the current time. If the lab isn't open
29 # yet, then don't count anything, and if it is closed, show
30 # statistics from when it was open during the day.
31 if now > open_ and now < close:
32 end = now
33 elif now <= open_:
34 end = open_
35 else:
36 end = close
37
38 return sorted(
39 UtilizationProfile.from_hostnames(list_desktops(), open_, end).values(),
40 key=attrgetter('hostname'),
41 )
42
43
44 @periodic(30)
45 def staff_in_lab():
46 return real_staff_in_lab()
47
48
49 @periodic(300)
50 def top_staff_alltime():
51 return real_top_staff_alltime()
52
53
54 @periodic(300)
55 def top_staff_semester():
56 return real_top_staff_semester()
57
58
59 @periodic(30)
60 def users_in_lab_count():
61 return real_users_in_lab_count()
62
63
64 @periodic(60)
65 def printers():
66 return sorted(
67 (printer, get_toner(printer), get_maintkit(printer))
68 for printer in PRINTERS
69 )
70
71
72 def summary(request):
73 return render(
74 request,
75 'summary.html',
76 {
77 'title': 'Lab Statistics',
78 'desktop_profiles': desktop_profiles(),
79 'current_semester_start': CURRENT_SEMESTER_START,
80 'stats_epoch': STATS_EPOCH,
81 'staff_in_lab': staff_in_lab(),
82 'top_staff_alltime': top_staff_alltime()[:10],
83 'top_staff_semester': top_staff_semester()[:10],
84 'users_in_lab_count': users_in_lab_count(),
85 'printers': printers(),
86 },
87 )
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

```diff
diff --git a/ocfweb/stats/summary.py b/ocfweb/stats/summary.py
--- a/ocfweb/stats/summary.py
+++ b/ocfweb/stats/summary.py
@@ -1,3 +1,4 @@
+import logging
from datetime import date
from datetime import datetime
from operator import attrgetter
@@ -19,6 +20,9 @@
from ocfweb.stats.daily_graph import get_open_close
+_logger = logging.getLogger(__name__)
+
+
@periodic(60)
def desktop_profiles():
open_, close = get_open_close(date.today())
@@ -63,8 +67,17 @@
@periodic(60)
def printers():
+ def silence(f):
+ def inner(*args, **kwargs):
+ try:
+ return f(*args, **kwargs)
+ except OSError as ex:
+ _logger.warn('Silencing exception reading printer data: {}'.format(ex))
+ return None
+ return inner
+
return sorted(
- (printer, get_toner(printer), get_maintkit(printer))
+ (printer, silence(get_toner)(printer), silence(get_maintkit)(printer))
for printer in PRINTERS
     )
```
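
SNMP error-status 5 is `genErr`, which a jammed or otherwise unhappy printer can return for any read, so the fix downgrades such reads from a crash to a logged warning plus `None`. Here is the decorator in isolation, with a hypothetical stand-in for the real SNMP call:

```python
import logging

logging.basicConfig(level=logging.WARNING)
_logger = logging.getLogger(__name__)


def silence(f):
    """Wrap f so OSError is logged and swallowed instead of propagating."""
    def inner(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except OSError as ex:
            _logger.warning('Silencing exception reading printer data: %s', ex)
            return None
    return inner


def get_toner(printer):
    # Stand-in for the real SNMP query in ocflib.lab.printing.
    raise OSError('Device {} returned error status: 5'.format(printer))


print(silence(get_toner)('deforestation'))  # logs a warning, prints None
```

One consequence worth noting: the stats template presumably has to tolerate `None` toner and maintenance-kit values once this lands.
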
| {"golden_diff": "diff --git a/ocfweb/stats/summary.py b/ocfweb/stats/summary.py\n--- a/ocfweb/stats/summary.py\n+++ b/ocfweb/stats/summary.py\n@@ -1,3 +1,4 @@\n+import logging\n from datetime import date\n from datetime import datetime\n from operator import attrgetter\n@@ -19,6 +20,9 @@\n from ocfweb.stats.daily_graph import get_open_close\n \n \n+_logger = logging.getLogger(__name__)\n+\n+\n @periodic(60)\n def desktop_profiles():\n open_, close = get_open_close(date.today())\n@@ -63,8 +67,17 @@\n \n @periodic(60)\n def printers():\n+ def silence(f):\n+ def inner(*args, **kwargs):\n+ try:\n+ return f(*args, **kwargs)\n+ except OSError as ex:\n+ _logger.warn('Silencing exception reading printer data: {}'.format(ex))\n+ return None\n+ return inner\n+\n return sorted(\n- (printer, get_toner(printer), get_maintkit(printer))\n+ (printer, silence(get_toner)(printer), silence(get_maintkit)(printer))\n for printer in PRINTERS\n )\n", "issue": "Stop emailing us when there's a paper jam\n```\nA problem was encountered and reported via ocflib:\n\nAn exception occured in an ocfweb periodic function:\n\nTraceback (most recent call last):\n File \"/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/caching.py\", line 56, in cache_lookup_with_fallback\n raise KeyError('Forcing miss as requested.')\nKeyError: 'Forcing miss as requested.'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/bin/run_periodic_functions.py\", line 47, in run_periodic_functions\n pf.update()\n File \"/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/caching.py\", line 214, in update\n force_miss=True,\n File \"/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/caching.py\", line 66, in cache_lookup_with_fallback\n result = fallback()\n File \"/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/caching.py\", line 170, in function_with_timestamp\n return (datetime.now(), self.function())\n File \"/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/stats/summary.py\", line 68, in printers\n for printer in PRINTERS\n File \"/usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/stats/summary.py\", line 68, in <genexpr>\n for printer in PRINTERS\n File \"/usr/share/python/ocfweb/lib/python3.4/site-packages/ocflib/lab/printing.py\", line 40, in get_toner\n for oid in (OID_TONER_CUR, OID_TONER_MAX)\n File \"/usr/share/python/ocfweb/lib/python3.4/site-packages/ocflib/lab/printing.py\", line 40, in <genexpr>\n for oid in (OID_TONER_CUR, OID_TONER_MAX)\n File \"/usr/share/python/ocfweb/lib/python3.4/site-packages/ocflib/lab/printing.py\", line 30, in _snmp\n 'Device {} returned error status: {}'.format(host, err_status),\nOSError: Device deforestation returned error status: 5\n\n\nPeriodic function:\n * Key: ('2016.01.20.18.11-gitbf0aae55', 'ocfweb.stats.summary#printers', (), ())\n * Last Update: 2016-01-21 15:48:30.668738 (61.003647 seconds ago)\n\n\n * Period: 60\n * TTL: 120\n\nThe background process will now pause for 30 seconds.\n\n\n====\nHostname: coma.ocf.berkeley.edu\nCallstack:\n at /usr/share/python/ocfweb/lib/python3.4/site-packages/ocflib/misc/mail.py:59 (send_problem_report)\n by /usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/bin/run_periodic_functions.py:73 (run_periodic_functions)\n by /usr/share/python/ocfweb/lib/python3.4/site-packages/ocfweb/bin/run_periodic_functions.py:116 (main)\n by 
/usr/share/python/ocfweb/bin/ocfweb-run-periodic-functions:9 (<module>)\n```\n\n", "before_files": [{"content": "from datetime import date\nfrom datetime import datetime\nfrom operator import attrgetter\n\nfrom django.shortcuts import render\nfrom ocflib.constants import CURRENT_SEMESTER_START\nfrom ocflib.lab.printing import get_maintkit\nfrom ocflib.lab.printing import get_toner\nfrom ocflib.lab.printing import PRINTERS\nfrom ocflib.lab.stats import list_desktops\nfrom ocflib.lab.stats import staff_in_lab as real_staff_in_lab\nfrom ocflib.lab.stats import STATS_EPOCH\nfrom ocflib.lab.stats import top_staff_alltime as real_top_staff_alltime\nfrom ocflib.lab.stats import top_staff_semester as real_top_staff_semester\nfrom ocflib.lab.stats import users_in_lab_count as real_users_in_lab_count\nfrom ocflib.lab.stats import UtilizationProfile\n\nfrom ocfweb.caching import periodic\nfrom ocfweb.stats.daily_graph import get_open_close\n\n\n@periodic(60)\ndef desktop_profiles():\n open_, close = get_open_close(date.today())\n now = datetime.today()\n\n # If the lab has opened, but hasn't closed yet, only count\n # statistics until the current time. If the lab isn't open\n # yet, then don't count anything, and if it is closed, show\n # statistics from when it was open during the day.\n if now > open_ and now < close:\n end = now\n elif now <= open_:\n end = open_\n else:\n end = close\n\n return sorted(\n UtilizationProfile.from_hostnames(list_desktops(), open_, end).values(),\n key=attrgetter('hostname'),\n )\n\n\n@periodic(30)\ndef staff_in_lab():\n return real_staff_in_lab()\n\n\n@periodic(300)\ndef top_staff_alltime():\n return real_top_staff_alltime()\n\n\n@periodic(300)\ndef top_staff_semester():\n return real_top_staff_semester()\n\n\n@periodic(30)\ndef users_in_lab_count():\n return real_users_in_lab_count()\n\n\n@periodic(60)\ndef printers():\n return sorted(\n (printer, get_toner(printer), get_maintkit(printer))\n for printer in PRINTERS\n )\n\n\ndef summary(request):\n return render(\n request,\n 'summary.html',\n {\n 'title': 'Lab Statistics',\n 'desktop_profiles': desktop_profiles(),\n 'current_semester_start': CURRENT_SEMESTER_START,\n 'stats_epoch': STATS_EPOCH,\n 'staff_in_lab': staff_in_lab(),\n 'top_staff_alltime': top_staff_alltime()[:10],\n 'top_staff_semester': top_staff_semester()[:10],\n 'users_in_lab_count': users_in_lab_count(),\n 'printers': printers(),\n },\n )\n", "path": "ocfweb/stats/summary.py"}], "after_files": [{"content": "import logging\nfrom datetime import date\nfrom datetime import datetime\nfrom operator import attrgetter\n\nfrom django.shortcuts import render\nfrom ocflib.constants import CURRENT_SEMESTER_START\nfrom ocflib.lab.printing import get_maintkit\nfrom ocflib.lab.printing import get_toner\nfrom ocflib.lab.printing import PRINTERS\nfrom ocflib.lab.stats import list_desktops\nfrom ocflib.lab.stats import staff_in_lab as real_staff_in_lab\nfrom ocflib.lab.stats import STATS_EPOCH\nfrom ocflib.lab.stats import top_staff_alltime as real_top_staff_alltime\nfrom ocflib.lab.stats import top_staff_semester as real_top_staff_semester\nfrom ocflib.lab.stats import users_in_lab_count as real_users_in_lab_count\nfrom ocflib.lab.stats import UtilizationProfile\n\nfrom ocfweb.caching import periodic\nfrom ocfweb.stats.daily_graph import get_open_close\n\n\n_logger = logging.getLogger(__name__)\n\n\n@periodic(60)\ndef desktop_profiles():\n open_, close = get_open_close(date.today())\n now = datetime.today()\n\n # If the lab has opened, but hasn't closed yet, only 
count\n # statistics until the current time. If the lab isn't open\n # yet, then don't count anything, and if it is closed, show\n # statistics from when it was open during the day.\n if now > open_ and now < close:\n end = now\n elif now <= open_:\n end = open_\n else:\n end = close\n\n return sorted(\n UtilizationProfile.from_hostnames(list_desktops(), open_, end).values(),\n key=attrgetter('hostname'),\n )\n\n\n@periodic(30)\ndef staff_in_lab():\n return real_staff_in_lab()\n\n\n@periodic(300)\ndef top_staff_alltime():\n return real_top_staff_alltime()\n\n\n@periodic(300)\ndef top_staff_semester():\n return real_top_staff_semester()\n\n\n@periodic(30)\ndef users_in_lab_count():\n return real_users_in_lab_count()\n\n\n@periodic(60)\ndef printers():\n def silence(f):\n def inner(*args, **kwargs):\n try:\n return f(*args, **kwargs)\n except OSError as ex:\n _logger.warn('Silencing exception reading printer data: {}'.format(ex))\n return None\n return inner\n\n return sorted(\n (printer, silence(get_toner)(printer), silence(get_maintkit)(printer))\n for printer in PRINTERS\n )\n\n\ndef summary(request):\n return render(\n request,\n 'summary.html',\n {\n 'title': 'Lab Statistics',\n 'desktop_profiles': desktop_profiles(),\n 'current_semester_start': CURRENT_SEMESTER_START,\n 'stats_epoch': STATS_EPOCH,\n 'staff_in_lab': staff_in_lab(),\n 'top_staff_alltime': top_staff_alltime()[:10],\n 'top_staff_semester': top_staff_semester()[:10],\n 'users_in_lab_count': users_in_lab_count(),\n 'printers': printers(),\n },\n )\n", "path": "ocfweb/stats/summary.py"}]} | 1,847 | 271 |
gh_patches_debug_29381 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-193

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bundle system - copy opensearch-tar-install script into bundle
The [opensearch-tar-install](https://github.com/opensearch-project/opensearch-build/blob/main/release/tar/linux/opensearch-tar-install.sh) script needs to be included with assembled bundles similar to [this](https://github.com/opensearch-project/opensearch-build/blob/main/release/tar/linux/opensearch-tar-build.sh#L156) shell script.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bundle-workflow/python/assemble.py`
Content:
```
1 #!/usr/bin/env python
2
3 # Copyright OpenSearch Contributors.
4 # SPDX-License-Identifier: Apache-2.0
5
6 import os
7 import tempfile
8 import argparse
9 from assemble_workflow.bundle import Bundle
10 from assemble_workflow.bundle_recorder import BundleRecorder
11 from manifests.build_manifest import BuildManifest
12
13 parser = argparse.ArgumentParser(description = "Assemble an OpenSearch Bundle")
14 parser.add_argument('manifest', type = argparse.FileType('r'), help="Manifest file.")
15 args = parser.parse_args()
16
17 build_manifest = BuildManifest.from_file(args.manifest)
18 build = build_manifest.build
19 artifacts_dir = os.path.dirname(os.path.realpath(args.manifest.name))
20 output_dir = os.path.join(os.getcwd(), 'bundle')
21 os.makedirs(output_dir, exist_ok=True)
22
23 with tempfile.TemporaryDirectory() as work_dir:
24 print(f'Bundling {build.name} ({build.architecture}) into {output_dir} ...')
25
26 os.chdir(work_dir)
27
28 bundle_recorder = BundleRecorder(build, output_dir, artifacts_dir)
29 bundle = Bundle(build_manifest, artifacts_dir, bundle_recorder)
30
31 bundle.install_plugins()
32 print(f'Installed plugins: {bundle.installed_plugins}')
33
34 # Save a copy of the manifest inside of the tar
35 bundle_recorder.write_manifest(bundle.archive_path)
36 bundle.build_tar(output_dir)
37
38 bundle_recorder.write_manifest(output_dir)
39
40 print(f'Done.')
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

```diff
diff --git a/bundle-workflow/python/assemble.py b/bundle-workflow/python/assemble.py
--- a/bundle-workflow/python/assemble.py
+++ b/bundle-workflow/python/assemble.py
@@ -6,6 +6,7 @@
import os
import tempfile
import argparse
+import shutil
from assemble_workflow.bundle import Bundle
from assemble_workflow.bundle_recorder import BundleRecorder
from manifests.build_manifest import BuildManifest
@@ -14,6 +15,12 @@
parser.add_argument('manifest', type = argparse.FileType('r'), help="Manifest file.")
args = parser.parse_args()
+tarball_installation_script = os.path.join(os.path.dirname(os.path.abspath(__file__)), '../../release/tar/linux/opensearch-tar-install.sh')
+
+if not os.path.isfile(tarball_installation_script):
+ print(f'No installation script found at path: {tarball_installation_script}')
+ exit(1)
+
build_manifest = BuildManifest.from_file(args.manifest)
build = build_manifest.build
artifacts_dir = os.path.dirname(os.path.realpath(args.manifest.name))
@@ -31,6 +38,9 @@
bundle.install_plugins()
print(f'Installed plugins: {bundle.installed_plugins}')
+ # Copy the tar installation script into the bundle
+ shutil.copyfile(tarball_installation_script, os.path.join(bundle.archive_path, os.path.basename(tarball_installation_script)))
+
# Save a copy of the manifest inside of the tar
bundle_recorder.write_manifest(bundle.archive_path)
     bundle.build_tar(output_dir)
```
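
The patch resolves the installation script relative to `__file__` rather than the current working directory, fails fast if it is missing, and copies it into the staged bundle before the tarball is built. The locate/check/copy idiom on its own, with a hypothetical destination directory:

```python
import os
import shutil

# Resolve a sibling resource relative to this script, not the CWD.
script = os.path.join(
    os.path.dirname(os.path.abspath(__file__)),
    '../../release/tar/linux/opensearch-tar-install.sh',
)

if not os.path.isfile(script):
    raise SystemExit('No installation script found at path: {}'.format(script))

bundle_root = '/tmp/bundle-root'  # hypothetical extracted bundle directory
shutil.copyfile(script, os.path.join(bundle_root, os.path.basename(script)))
```

A caveat: `shutil.copyfile` does not preserve permission bits, so if the install script must stay executable inside the tarball, `shutil.copy` (which also copies the mode) may be the safer choice.
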
| {"golden_diff": "diff --git a/bundle-workflow/python/assemble.py b/bundle-workflow/python/assemble.py\n--- a/bundle-workflow/python/assemble.py\n+++ b/bundle-workflow/python/assemble.py\n@@ -6,6 +6,7 @@\n import os\n import tempfile\n import argparse\n+import shutil\n from assemble_workflow.bundle import Bundle\n from assemble_workflow.bundle_recorder import BundleRecorder\n from manifests.build_manifest import BuildManifest\n@@ -14,6 +15,12 @@\n parser.add_argument('manifest', type = argparse.FileType('r'), help=\"Manifest file.\")\n args = parser.parse_args()\n \n+tarball_installation_script = os.path.join(os.path.dirname(os.path.abspath(__file__)), '../../release/tar/linux/opensearch-tar-install.sh')\n+\n+if not os.path.isfile(tarball_installation_script):\n+ print(f'No installation script found at path: {tarball_installation_script}')\n+ exit(1)\n+\n build_manifest = BuildManifest.from_file(args.manifest)\n build = build_manifest.build\n artifacts_dir = os.path.dirname(os.path.realpath(args.manifest.name))\n@@ -31,6 +38,9 @@\n bundle.install_plugins()\n print(f'Installed plugins: {bundle.installed_plugins}')\n \n+ # Copy the tar installation script into the bundle\n+ shutil.copyfile(tarball_installation_script, os.path.join(bundle.archive_path, os.path.basename(tarball_installation_script)))\n+\n # Save a copy of the manifest inside of the tar\n bundle_recorder.write_manifest(bundle.archive_path)\n bundle.build_tar(output_dir)\n", "issue": "Bundle system - copy opensearch-tar-install script into bundle\nThe [opensearch-tar-install](https://github.com/opensearch-project/opensearch-build/blob/main/release/tar/linux/opensearch-tar-install.sh) script needs to be included with assembled bundles similar to [this](https://github.com/opensearch-project/opensearch-build/blob/main/release/tar/linux/opensearch-tar-build.sh#L156) shell script.\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# Copyright OpenSearch Contributors.\n# SPDX-License-Identifier: Apache-2.0\n\nimport os\nimport tempfile\nimport argparse\nfrom assemble_workflow.bundle import Bundle\nfrom assemble_workflow.bundle_recorder import BundleRecorder\nfrom manifests.build_manifest import BuildManifest\n\nparser = argparse.ArgumentParser(description = \"Assemble an OpenSearch Bundle\")\nparser.add_argument('manifest', type = argparse.FileType('r'), help=\"Manifest file.\")\nargs = parser.parse_args()\n\nbuild_manifest = BuildManifest.from_file(args.manifest)\nbuild = build_manifest.build\nartifacts_dir = os.path.dirname(os.path.realpath(args.manifest.name))\noutput_dir = os.path.join(os.getcwd(), 'bundle')\nos.makedirs(output_dir, exist_ok=True)\n\nwith tempfile.TemporaryDirectory() as work_dir:\n print(f'Bundling {build.name} ({build.architecture}) into {output_dir} ...')\n\n os.chdir(work_dir)\n\n bundle_recorder = BundleRecorder(build, output_dir, artifacts_dir)\n bundle = Bundle(build_manifest, artifacts_dir, bundle_recorder)\n\n bundle.install_plugins()\n print(f'Installed plugins: {bundle.installed_plugins}')\n\n # Save a copy of the manifest inside of the tar\n bundle_recorder.write_manifest(bundle.archive_path)\n bundle.build_tar(output_dir)\n\n bundle_recorder.write_manifest(output_dir)\n\nprint(f'Done.')\n", "path": "bundle-workflow/python/assemble.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# Copyright OpenSearch Contributors.\n# SPDX-License-Identifier: Apache-2.0\n\nimport os\nimport tempfile\nimport argparse\nimport shutil\nfrom assemble_workflow.bundle import Bundle\nfrom 
assemble_workflow.bundle_recorder import BundleRecorder\nfrom manifests.build_manifest import BuildManifest\n\nparser = argparse.ArgumentParser(description = \"Assemble an OpenSearch Bundle\")\nparser.add_argument('manifest', type = argparse.FileType('r'), help=\"Manifest file.\")\nargs = parser.parse_args()\n\ntarball_installation_script = os.path.join(os.path.dirname(os.path.abspath(__file__)), '../../release/tar/linux/opensearch-tar-install.sh')\n\nif not os.path.isfile(tarball_installation_script):\n print(f'No installation script found at path: {tarball_installation_script}')\n exit(1)\n\nbuild_manifest = BuildManifest.from_file(args.manifest)\nbuild = build_manifest.build\nartifacts_dir = os.path.dirname(os.path.realpath(args.manifest.name))\noutput_dir = os.path.join(os.getcwd(), 'bundle')\nos.makedirs(output_dir, exist_ok=True)\n\nwith tempfile.TemporaryDirectory() as work_dir:\n print(f'Bundling {build.name} ({build.architecture}) into {output_dir} ...')\n\n os.chdir(work_dir)\n\n bundle_recorder = BundleRecorder(build, output_dir, artifacts_dir)\n bundle = Bundle(build_manifest, artifacts_dir, bundle_recorder)\n\n bundle.install_plugins()\n print(f'Installed plugins: {bundle.installed_plugins}')\n\n # Copy the tar installation script into the bundle\n shutil.copyfile(tarball_installation_script, os.path.join(bundle.archive_path, os.path.basename(tarball_installation_script)))\n\n # Save a copy of the manifest inside of the tar\n bundle_recorder.write_manifest(bundle.archive_path)\n bundle.build_tar(output_dir)\n\n bundle_recorder.write_manifest(output_dir)\n\nprint(f'Done.')\n", "path": "bundle-workflow/python/assemble.py"}]} | 722 | 339 |
gh_patches_debug_2525 | rasdani/github-patches | git_diff | kornia__kornia-679

We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enable typing support
## 🚀 Feature
Enable typing support for `kornia` so other packages can benefit from the type hints.
## Motivation
Currently `kornia` only uses the type hints to check for internal consistency. For other packages you get the following error message when running `mypy`:
```python
import kornia
```
```
error: Cannot find implementation or library stub for module named 'kornia' [import]
```
## Pitch
To enable typing for other packages that depend on `kornia` we just need to put an empty file called `py.typed` in the root of the `kornia` package. See [`PEP561`](https://www.python.org/dev/peps/pep-0561/#packaging-type-information) for details.
--- END ISSUE ---
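
One packaging detail PEP 561 adds on top of the empty marker: the file has to ship in wheels and sdists via `package_data`, and `zip_safe=False` is recommended so type checkers can read the files on disk. A minimal sketch of the `setup()` side (illustrative only; the real `setup.py` shown below differs):

```python
from setuptools import setup, find_packages

setup(
    name='kornia',
    packages=find_packages(),
    # Ship the PEP 561 marker so mypy picks up the inline annotations.
    package_data={'kornia': ['py.typed']},
    zip_safe=False,  # type checkers cannot read hints out of zipped eggs
)
```
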
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Welcome to the Kornia setup.py.
2 #
3
4 import os
5 from setuptools import setup, find_packages
6 import subprocess
7 import distutils.command.clean
8
9
10 ################
11 # The variables below define the current version under
12 # development and the current pytorch supported verions.
13 # WARNING: Becareful and do not touch those variables,
14 # unless you are a maintainer. Otherwise, could brake
15 # the package backward compatibility.
16
17 # NOTE(maintainers): modify this variable each time you do a release
18
19 version = '0.4.1' # this a tag for the current development version
20
21
22 # NOTE(maintainers): update this dictionary each time you do a release
23 # When multiple pytorch versions are associated with a single version of kornia,
24 # the oldest one is the requirement. The versions should be inequalities.
25 # Once a pytorch version (in the future) breaks a kornia version, we could just
26 # add a maximal version.
27 kornia_pt_dependencies = {
28 '0.4.0': '>=1.6.0,<1.7.0',
29 '0.3.2': '>=1.5.0,<1.6.0',
30 '0.3.1': '>=1.5.0',
31 '0.2.2': '>=1.4.0',
32 '0.1.4': '>=1.2.0',
33 }
34
35
36 # version can be overiden eg with KORNIA_BUILD_VERSION so we map each possible kornia version to the dictionary keys
37 def dep_version(version):
38 compatible_versions = [v for v in kornia_pt_dependencies.keys() if v >= version]
39 compatible_versions += [sorted(kornia_pt_dependencies)[-1]]
40 return min(compatible_versions)
41
42
43 #################################
44
45 sha = 'Unknown'
46 package_name = 'kornia'
47
48 cwd = os.path.dirname(os.path.abspath(__file__))
49
50 try:
51 sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()
52 except Exception:
53 pass
54
55 if os.getenv('KORNIA_BUILD_VERSION'):
56 version = os.getenv('KORNIA_BUILD_VERSION')
57 elif sha != 'Unknown':
58 version += '+' + sha[:7]
59 print("Building wheel {}-{}".format(package_name, version))
60
61
62 def write_version_file():
63 version_path = os.path.join(cwd, 'kornia', 'version.py')
64 with open(version_path, 'w') as f:
65 f.write("__version__ = '{}'\n".format(version))
66 f.write("git_version = {}\n".format(repr(sha)))
67
68
69 def read(*names, **kwargs):
70 with io.open(
71 os.path.join(os.path.dirname(__file__), *names),
72 encoding=kwargs.get("encoding", "utf8")
73 ) as fp:
74 return fp.read()
75
76
77 # open readme file and remove logo
78 readme = open('README.rst').read()
79 long_description = '\n'.join(readme.split('\n')[7:])
80
81
82 class clean(distutils.command.clean.clean):
83 def run(self):
84 with open('.gitignore', 'r') as f:
85 ignores = f.read()
86 for wildcard in filter(None, ignores.split('\n')):
87 for filename in glob.glob(wildcard):
88 try:
89 os.remove(filename)
90 except OSError:
91 shutil.rmtree(filename, ignore_errors=True)
92
93 # It's an old-style class in Python 2.7...
94 distutils.command.clean.clean.run(self)
95 # remove compiled and temporary files
96 subprocess.call(['rm -rf dist/ build/ kornia.egg*'], shell=True)
97
98 requirements = [
99 'numpy',
100 'torch' + kornia_pt_dependencies[dep_version(version)],
101 ]
102
103
104 if __name__ == '__main__':
105 write_version_file()
106 setup(
107 # Metadata
108 name=package_name,
109 version=version,
110 author='Edgar Riba',
111 author_email='[email protected]',
112 url='https://github.com/kornia/kornia',
113 description='Open Source Differentiable Computer Vision Library for PyTorch',
114 long_description=long_description,
115 license='Apache License 2.0',
116 python_requires='>=3.6',
117
118 # Test
119 setup_requires=['pytest-runner'],
120 tests_require=['pytest'],
121
122 # Package info
123 packages=find_packages(exclude=('docs', 'test', 'examples',)),
124
125 zip_safe=True,
126 install_requires=requirements,
127 classifiers=[
128 'Intended Audience :: Developers',
129 'Intended Audience :: Education',
130 'Intended Audience :: Science/Research',
131 'Operating System :: POSIX :: Linux',
132 'Programming Language :: Python :: 3 :: Only',
133 'License :: OSI Approved :: Apache Software License',
134 'Topic :: Scientific/Engineering :: Image Recognition',
135 'Topic :: Software Development :: Libraries',
136 ],
137 )
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -122,6 +122,10 @@
# Package info
packages=find_packages(exclude=('docs', 'test', 'examples',)),
+ package_data={
+ "kornia": ["py.typed"],
+ },
+
zip_safe=True,
install_requires=requirements,
classifiers=[
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -122,6 +122,10 @@\n # Package info\n packages=find_packages(exclude=('docs', 'test', 'examples',)),\n \n+ package_data={\n+ \"kornia\": [\"py.typed\"],\n+ },\n+\n zip_safe=True,\n install_requires=requirements,\n classifiers=[\n", "issue": "Enable typing support\n## \ud83d\ude80 Feature\r\n\r\nEnable typing support for `kornia` so other packages can benefit from the type hints.\r\n\r\n## Motivation\r\n\r\nCurrently `kornia` only uses the type hints to check for internal consistency. For other packages you get the following error message when running `mypy`:\r\n\r\n```python\r\nimport kornia\r\n```\r\n\r\n```\r\nerror: Cannot find implementation or library stub for module named 'kornia' [import]\r\n```\r\n\r\n## Pitch\r\n\r\nTo enable typing for other packages that depend on `kornia` we just need to put an empty file called `py.typed` in the root of the `kornia` package. See [`PEP561`](https://www.python.org/dev/peps/pep-0561/#packaging-type-information) for details.\r\n\r\n\n", "before_files": [{"content": "# Welcome to the Kornia setup.py.\n#\n\nimport os\nfrom setuptools import setup, find_packages\nimport subprocess\nimport distutils.command.clean\n\n\n################\n# The variables below define the current version under\n# development and the current pytorch supported verions.\n# WARNING: Becareful and do not touch those variables,\n# unless you are a maintainer. Otherwise, could brake\n# the package backward compatibility.\n\n# NOTE(maintainers): modify this variable each time you do a release\n\nversion = '0.4.1' # this a tag for the current development version\n\n\n# NOTE(maintainers): update this dictionary each time you do a release\n# When multiple pytorch versions are associated with a single version of kornia,\n# the oldest one is the requirement. 
The versions should be inequalities.\n# Once a pytorch version (in the future) breaks a kornia version, we could just\n# add a maximal version.\nkornia_pt_dependencies = {\n '0.4.0': '>=1.6.0,<1.7.0',\n '0.3.2': '>=1.5.0,<1.6.0',\n '0.3.1': '>=1.5.0',\n '0.2.2': '>=1.4.0',\n '0.1.4': '>=1.2.0',\n}\n\n\n# version can be overiden eg with KORNIA_BUILD_VERSION so we map each possible kornia version to the dictionary keys\ndef dep_version(version):\n compatible_versions = [v for v in kornia_pt_dependencies.keys() if v >= version]\n compatible_versions += [sorted(kornia_pt_dependencies)[-1]]\n return min(compatible_versions)\n\n\n#################################\n\nsha = 'Unknown'\npackage_name = 'kornia'\n\ncwd = os.path.dirname(os.path.abspath(__file__))\n\ntry:\n sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()\nexcept Exception:\n pass\n\nif os.getenv('KORNIA_BUILD_VERSION'):\n version = os.getenv('KORNIA_BUILD_VERSION')\nelif sha != 'Unknown':\n version += '+' + sha[:7]\nprint(\"Building wheel {}-{}\".format(package_name, version))\n\n\ndef write_version_file():\n version_path = os.path.join(cwd, 'kornia', 'version.py')\n with open(version_path, 'w') as f:\n f.write(\"__version__ = '{}'\\n\".format(version))\n f.write(\"git_version = {}\\n\".format(repr(sha)))\n\n\ndef read(*names, **kwargs):\n with io.open(\n os.path.join(os.path.dirname(__file__), *names),\n encoding=kwargs.get(\"encoding\", \"utf8\")\n ) as fp:\n return fp.read()\n\n\n# open readme file and remove logo\nreadme = open('README.rst').read()\nlong_description = '\\n'.join(readme.split('\\n')[7:])\n\n\nclass clean(distutils.command.clean.clean):\n def run(self):\n with open('.gitignore', 'r') as f:\n ignores = f.read()\n for wildcard in filter(None, ignores.split('\\n')):\n for filename in glob.glob(wildcard):\n try:\n os.remove(filename)\n except OSError:\n shutil.rmtree(filename, ignore_errors=True)\n\n # It's an old-style class in Python 2.7...\n distutils.command.clean.clean.run(self)\n # remove compiled and temporary files\n subprocess.call(['rm -rf dist/ build/ kornia.egg*'], shell=True)\n\nrequirements = [\n 'numpy',\n 'torch' + kornia_pt_dependencies[dep_version(version)],\n]\n\n\nif __name__ == '__main__':\n write_version_file()\n setup(\n # Metadata\n name=package_name,\n version=version,\n author='Edgar Riba',\n author_email='[email protected]',\n url='https://github.com/kornia/kornia',\n description='Open Source Differentiable Computer Vision Library for PyTorch',\n long_description=long_description,\n license='Apache License 2.0',\n python_requires='>=3.6',\n\n # Test\n setup_requires=['pytest-runner'],\n tests_require=['pytest'],\n\n # Package info\n packages=find_packages(exclude=('docs', 'test', 'examples',)),\n\n zip_safe=True,\n install_requires=requirements,\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 3 :: Only',\n 'License :: OSI Approved :: Apache Software License',\n 'Topic :: Scientific/Engineering :: Image Recognition',\n 'Topic :: Software Development :: Libraries',\n ],\n )\n", "path": "setup.py"}], "after_files": [{"content": "# Welcome to the Kornia setup.py.\n#\n\nimport os\nfrom setuptools import setup, find_packages\nimport subprocess\nimport distutils.command.clean\n\n\n################\n# The variables below define the current version under\n# development and the current pytorch 
supported verions.\n# WARNING: Becareful and do not touch those variables,\n# unless you are a maintainer. Otherwise, could brake\n# the package backward compatibility.\n\n# NOTE(maintainers): modify this variable each time you do a release\n\nversion = '0.4.1' # this a tag for the current development version\n\n\n# NOTE(maintainers): update this dictionary each time you do a release\n# When multiple pytorch versions are associated with a single version of kornia,\n# the oldest one is the requirement. The versions should be inequalities.\n# Once a pytorch version (in the future) breaks a kornia version, we could just\n# add a maximal version.\nkornia_pt_dependencies = {\n '0.4.0': '>=1.6.0,<1.7.0',\n '0.3.2': '>=1.5.0,<1.6.0',\n '0.3.1': '>=1.5.0',\n '0.2.2': '>=1.4.0',\n '0.1.4': '>=1.2.0',\n}\n\n\n# version can be overiden eg with KORNIA_BUILD_VERSION so we map each possible kornia version to the dictionary keys\ndef dep_version(version):\n compatible_versions = [v for v in kornia_pt_dependencies.keys() if v >= version]\n compatible_versions += [sorted(kornia_pt_dependencies)[-1]]\n return min(compatible_versions)\n\n\n#################################\n\nsha = 'Unknown'\npackage_name = 'kornia'\n\ncwd = os.path.dirname(os.path.abspath(__file__))\n\ntry:\n sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=cwd).decode('ascii').strip()\nexcept Exception:\n pass\n\nif os.getenv('KORNIA_BUILD_VERSION'):\n version = os.getenv('KORNIA_BUILD_VERSION')\nelif sha != 'Unknown':\n version += '+' + sha[:7]\nprint(\"Building wheel {}-{}\".format(package_name, version))\n\n\ndef write_version_file():\n version_path = os.path.join(cwd, 'kornia', 'version.py')\n with open(version_path, 'w') as f:\n f.write(\"__version__ = '{}'\\n\".format(version))\n f.write(\"git_version = {}\\n\".format(repr(sha)))\n\n\ndef read(*names, **kwargs):\n with io.open(\n os.path.join(os.path.dirname(__file__), *names),\n encoding=kwargs.get(\"encoding\", \"utf8\")\n ) as fp:\n return fp.read()\n\n\n# open readme file and remove logo\nreadme = open('README.rst').read()\nlong_description = '\\n'.join(readme.split('\\n')[7:])\n\n\nclass clean(distutils.command.clean.clean):\n def run(self):\n with open('.gitignore', 'r') as f:\n ignores = f.read()\n for wildcard in filter(None, ignores.split('\\n')):\n for filename in glob.glob(wildcard):\n try:\n os.remove(filename)\n except OSError:\n shutil.rmtree(filename, ignore_errors=True)\n\n # It's an old-style class in Python 2.7...\n distutils.command.clean.clean.run(self)\n # remove compiled and temporary files\n subprocess.call(['rm -rf dist/ build/ kornia.egg*'], shell=True)\n\nrequirements = [\n 'numpy',\n 'torch' + kornia_pt_dependencies[dep_version(version)],\n]\n\n\nif __name__ == '__main__':\n write_version_file()\n setup(\n # Metadata\n name=package_name,\n version=version,\n author='Edgar Riba',\n author_email='[email protected]',\n url='https://github.com/kornia/kornia',\n description='Open Source Differentiable Computer Vision Library for PyTorch',\n long_description=long_description,\n license='Apache License 2.0',\n python_requires='>=3.6',\n\n # Test\n setup_requires=['pytest-runner'],\n tests_require=['pytest'],\n\n # Package info\n packages=find_packages(exclude=('docs', 'test', 'examples',)),\n\n package_data={\n \"kornia\": [\"py.typed\"],\n },\n\n zip_safe=True,\n install_requires=requirements,\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Operating System :: 
POSIX :: Linux',\n 'Programming Language :: Python :: 3 :: Only',\n 'License :: OSI Approved :: Apache Software License',\n 'Topic :: Scientific/Engineering :: Image Recognition',\n 'Topic :: Software Development :: Libraries',\n ],\n )\n", "path": "setup.py"}]} | 1,789 | 92 |
gh_patches_debug_5960 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-402 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
${aws:username} now fails
*cfn-lint version: 0.8*
E1029 Found an embedded parameter outside of an "Fn::Sub" at Resources/ForceMFAPolicy/Properties/PolicyDocument/Statement/3/Resource/0/arn:aws:iam::*:mfa/${aws:username}
It looks like the IAM variables are now being detected as embedded parameters in cfn-lint.
```yaml
Effect: "Allow"
Action:
  - "iam:UploadSSHPublicKey"
Resource: "arn:aws:iam::*:user/${aws:username}"
```
--- END ISSUE ---
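The collision here is between CloudFormation's `${...}` substitution syntax and IAM policy variables, which use the same delimiters but are resolved by IAM rather than by `Fn::Sub`. A standalone sketch, simplified from the rule's logic (the extended excludes list is an assumption about the fix), shows why excluding the `Resource` path segment suppresses the false positive:

```python
# Standalone sketch of the rule's decision: a value containing ${...}
# is only flagged when no step of its template path is Fn::Sub or an
# excluded free-form property.
import re

PARAMETER_SEARCH = re.compile(r"^(.*\$\{.*\}.*(\$\{.*\}.*)*)$")
EXCLUDES = ["UserData", "ZipFile", "Resource", "Condition"]  # assumed fix

path = ["Resources", "ForceMFAPolicy", "Properties", "PolicyDocument",
        "Statement", 3, "Resource", 0]
value = "arn:aws:iam::*:user/${aws:username}"

flagged = bool(PARAMETER_SEARCH.match(value)) and not any(
    step == "Fn::Sub" or step in EXCLUDES for step in path
)
print(flagged)  # False: the IAM policy variable is no longer reported
```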
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/functions/SubNeeded.py`
Content:
```
1 """
2 Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import re
18 from cfnlint import CloudFormationLintRule
19 from cfnlint import RuleMatch
20
21 class SubNeeded(CloudFormationLintRule):
22 """Check if a substitution string exists without a substitution function"""
23 id = 'E1029'
24 shortdesc = 'Sub is required if a variable is used in a string'
25 description = 'If a substitution variable exists in a string but isn\'t wrapped with the Fn::Sub function the deployment will fail.'
26 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'
27 tags = ['functions', 'sub']
28
29 # Free-form text properties to exclude from this rule
30 excludes = ['UserData', 'ZipFile']
31
32 def _match_values(self, searchRegex, cfnelem, path):
33 """Recursively search for values matching the searchRegex"""
34 values = []
35 if isinstance(cfnelem, dict):
36 for key in cfnelem:
37 pathprop = path[:]
38 pathprop.append(key)
39 values.extend(self._match_values(searchRegex, cfnelem[key], pathprop))
40 elif isinstance(cfnelem, list):
41 for index, item in enumerate(cfnelem):
42 pathprop = path[:]
43 pathprop.append(index)
44 values.extend(self._match_values(searchRegex, item, pathprop))
45 else:
46 # Leaf node
47 if isinstance(cfnelem, str) and re.match(searchRegex, cfnelem):
48 values.append(path + [cfnelem])
49
50 return values
51
52 def match_values(self, searchRegex, cfn):
53 """
54 Search for values in all parts of the templates that match the searchRegex
55 """
56 results = []
57 results.extend(self._match_values(searchRegex, cfn.template, []))
58 # Globals are removed during a transform. They need to be checked manually
59 results.extend(self._match_values(searchRegex, cfn.template.get('Globals', {}), []))
60 return results
61
62 def match(self, cfn):
63 """Basic Rule Matching"""
64
65 matches = []
66
67 # Generic regex to match a string containing at least one ${parameter}
68 parameter_search = re.compile(r'^(.*\$\{.*\}.*(\$\{.*\}.*)*)$')
69
70 # Get a list of paths to every leaf node string containing at least one ${parameter}
71 parameter_string_paths = self.match_values(parameter_search, cfn)
72
73 # We want to search all of the paths to check if each one contains an 'Fn::Sub'
74 for parameter_string_path in parameter_string_paths:
75 found_sub = False
76
77 # Does the path contain an 'Fn::Sub'?
78 for step in parameter_string_path:
79 if step == 'Fn::Sub' or step in self.excludes:
80 found_sub = True
81
82 # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly
83 if not found_sub:
84 message = 'Found an embedded parameter outside of an "Fn::Sub" at {}'.format('/'.join(map(str, parameter_string_path)))
85 matches.append(RuleMatch(parameter_string_path, message))
86
87 return matches
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py
--- a/src/cfnlint/rules/functions/SubNeeded.py
+++ b/src/cfnlint/rules/functions/SubNeeded.py
@@ -27,7 +27,7 @@
tags = ['functions', 'sub']
# Free-form text properties to exclude from this rule
- excludes = ['UserData', 'ZipFile']
+ excludes = ['UserData', 'ZipFile', 'Resource', 'Condition']
def _match_values(self, searchRegex, cfnelem, path):
"""Recursively search for values matching the searchRegex"""
| {"golden_diff": "diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py\n--- a/src/cfnlint/rules/functions/SubNeeded.py\n+++ b/src/cfnlint/rules/functions/SubNeeded.py\n@@ -27,7 +27,7 @@\n tags = ['functions', 'sub']\n \n # Free-form text properties to exclude from this rule\n- excludes = ['UserData', 'ZipFile']\n+ excludes = ['UserData', 'ZipFile', 'Resource', 'Condition']\n \n def _match_values(self, searchRegex, cfnelem, path):\n \"\"\"Recursively search for values matching the searchRegex\"\"\"\n", "issue": "${aws:username} now fails\n*cfn-lint version: 0.8\r\n\r\nE1029 Found an embedded parameter outside of an \"Fn::Sub\" at Resources/ForceMFAPolicy/Properties/PolicyDocument/Statement/3/Resource/0/arn:aws:iam::*:mfa/${aws:username}\r\n\r\nIt looks like the IAM variables are now being detected as embedded parameters in cfn-lint.\r\n\r\n Effect: \"Allow\"\r\n Action:\r\n - \"iam:UploadSSHPublicKey\"\r\n Resource: \"arn:aws:iam::*:user/${aws:username}\"\n", "before_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport re\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\nclass SubNeeded(CloudFormationLintRule):\n \"\"\"Check if a substitution string exists without a substitution function\"\"\"\n id = 'E1029'\n shortdesc = 'Sub is required if a variable is used in a string'\n description = 'If a substitution variable exists in a string but isn\\'t wrapped with the Fn::Sub function the deployment will fail.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub']\n\n # Free-form text properties to exclude from this rule\n excludes = ['UserData', 'ZipFile']\n\n def _match_values(self, searchRegex, cfnelem, path):\n \"\"\"Recursively search for values matching the searchRegex\"\"\"\n values = []\n if isinstance(cfnelem, dict):\n for key in cfnelem:\n pathprop = path[:]\n pathprop.append(key)\n values.extend(self._match_values(searchRegex, cfnelem[key], pathprop))\n elif isinstance(cfnelem, list):\n for index, item in enumerate(cfnelem):\n pathprop = path[:]\n pathprop.append(index)\n values.extend(self._match_values(searchRegex, item, pathprop))\n else:\n # Leaf node\n if isinstance(cfnelem, str) and re.match(searchRegex, cfnelem):\n values.append(path + [cfnelem])\n\n return values\n\n def match_values(self, searchRegex, cfn):\n \"\"\"\n Search for values in all parts of the templates that match the searchRegex\n \"\"\"\n results = []\n results.extend(self._match_values(searchRegex, cfn.template, 
[]))\n # Globals are removed during a transform. They need to be checked manually\n results.extend(self._match_values(searchRegex, cfn.template.get('Globals', {}), []))\n return results\n\n def match(self, cfn):\n \"\"\"Basic Rule Matching\"\"\"\n\n matches = []\n\n # Generic regex to match a string containing at least one ${parameter}\n parameter_search = re.compile(r'^(.*\\$\\{.*\\}.*(\\$\\{.*\\}.*)*)$')\n\n # Get a list of paths to every leaf node string containing at least one ${parameter}\n parameter_string_paths = self.match_values(parameter_search, cfn)\n\n # We want to search all of the paths to check if each one contains an 'Fn::Sub'\n for parameter_string_path in parameter_string_paths:\n found_sub = False\n\n # Does the path contain an 'Fn::Sub'?\n for step in parameter_string_path:\n if step == 'Fn::Sub' or step in self.excludes:\n found_sub = True\n\n # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly\n if not found_sub:\n message = 'Found an embedded parameter outside of an \"Fn::Sub\" at {}'.format('/'.join(map(str, parameter_string_path)))\n matches.append(RuleMatch(parameter_string_path, message))\n\n return matches\n", "path": "src/cfnlint/rules/functions/SubNeeded.py"}], "after_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport re\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\nclass SubNeeded(CloudFormationLintRule):\n \"\"\"Check if a substitution string exists without a substitution function\"\"\"\n id = 'E1029'\n shortdesc = 'Sub is required if a variable is used in a string'\n description = 'If a substitution variable exists in a string but isn\\'t wrapped with the Fn::Sub function the deployment will fail.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub']\n\n # Free-form text properties to exclude from this rule\n excludes = ['UserData', 'ZipFile', 'Resource', 'Condition']\n\n def _match_values(self, searchRegex, cfnelem, path):\n \"\"\"Recursively search for values matching the searchRegex\"\"\"\n values = []\n if isinstance(cfnelem, dict):\n for key in cfnelem:\n pathprop = path[:]\n pathprop.append(key)\n values.extend(self._match_values(searchRegex, cfnelem[key], pathprop))\n elif isinstance(cfnelem, list):\n for index, item in enumerate(cfnelem):\n pathprop = path[:]\n pathprop.append(index)\n values.extend(self._match_values(searchRegex, item, pathprop))\n else:\n # Leaf node\n if isinstance(cfnelem, str) and re.match(searchRegex, cfnelem):\n values.append(path + [cfnelem])\n\n return values\n\n def match_values(self, searchRegex, cfn):\n \"\"\"\n Search for values in all parts of the templates that match the searchRegex\n \"\"\"\n results = []\n results.extend(self._match_values(searchRegex, cfn.template, []))\n # Globals are removed during a transform. They need to be checked manually\n results.extend(self._match_values(searchRegex, cfn.template.get('Globals', {}), []))\n return results\n\n def match(self, cfn):\n \"\"\"Basic Rule Matching\"\"\"\n\n matches = []\n\n # Generic regex to match a string containing at least one ${parameter}\n parameter_search = re.compile(r'^(.*\\$\\{.*\\}.*(\\$\\{.*\\}.*)*)$')\n\n # Get a list of paths to every leaf node string containing at least one ${parameter}\n parameter_string_paths = self.match_values(parameter_search, cfn)\n\n # We want to search all of the paths to check if each one contains an 'Fn::Sub'\n for parameter_string_path in parameter_string_paths:\n found_sub = False\n\n # Does the path contain an 'Fn::Sub'?\n for step in parameter_string_path:\n if step == 'Fn::Sub' or step in self.excludes:\n found_sub = True\n\n # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly\n if not found_sub:\n message = 'Found an embedded parameter outside of an \"Fn::Sub\" at {}'.format('/'.join(map(str, parameter_string_path)))\n matches.append(RuleMatch(parameter_string_path, message))\n\n return matches\n", "path": "src/cfnlint/rules/functions/SubNeeded.py"}]} | 1,445 | 139 |
gh_patches_debug_15881 | rasdani/github-patches | git_diff | svthalia__concrexit-2496 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add album to event foreign key to api
Feature request (I'm making this via the GitHub app so the template doesn't work)
See title, Julian wants this :)
--- END ISSUE ---
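In Django REST Framework, a feature like this usually amounts to listing the `ForeignKey` in the serializer's `fields`, which exposes it as the related object's primary key. A hypothetical sketch (the `event` field name is assumed from the request):

```python
# Hypothetical sketch: exposing an album's related event as its PK.
from rest_framework import serializers
from photos.models import Album

class AlbumSerializer(serializers.ModelSerializer):
    class Meta:
        model = Album
        # "event" becomes a PrimaryKeyRelatedField by default, so API
        # clients receive the event's id alongside the album fields.
        fields = ("slug", "title", "event")
```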
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/photos/api/v2/serializers/album.py`
Content:
```
1 from rest_framework import serializers
2
3 from photos import services
4 from photos.api.v2.serializers.photo import PhotoSerializer, PhotoListSerializer
5 from photos.models import Album
6 from thaliawebsite.api.v2.serializers.cleaned_model_serializer import (
7 CleanedModelSerializer,
8 )
9
10
11 class AlbumSerializer(CleanedModelSerializer):
12 """API serializer for albums."""
13
14 class Meta:
15 """Meta class for the serializer."""
16
17 model = Album
18 fields = ("slug", "title", "accessible", "shareable", "cover", "photos")
19
20 cover = PhotoSerializer()
21 accessible = serializers.SerializerMethodField("_accessible")
22 photos = PhotoListSerializer(source="photo_set", many=True)
23
24 def _accessible(self, obj):
25 return services.is_album_accessible(self.context["request"], obj)
26
27
28 class AlbumListSerializer(AlbumSerializer):
29 class Meta:
30 """Meta class for the serializer."""
31
32 model = Album
33 fields = ("slug", "title", "accessible", "shareable", "cover")
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/photos/api/v2/serializers/album.py b/website/photos/api/v2/serializers/album.py
--- a/website/photos/api/v2/serializers/album.py
+++ b/website/photos/api/v2/serializers/album.py
@@ -15,7 +15,15 @@
"""Meta class for the serializer."""
model = Album
- fields = ("slug", "title", "accessible", "shareable", "cover", "photos")
+ fields = (
+ "slug",
+ "title",
+ "accessible",
+ "shareable",
+ "cover",
+ "photos",
+ "event",
+ )
cover = PhotoSerializer()
accessible = serializers.SerializerMethodField("_accessible")
@@ -30,4 +38,4 @@
"""Meta class for the serializer."""
model = Album
- fields = ("slug", "title", "accessible", "shareable", "cover")
+ fields = ("slug", "title", "accessible", "shareable", "cover", "event")
| {"golden_diff": "diff --git a/website/photos/api/v2/serializers/album.py b/website/photos/api/v2/serializers/album.py\n--- a/website/photos/api/v2/serializers/album.py\n+++ b/website/photos/api/v2/serializers/album.py\n@@ -15,7 +15,15 @@\n \"\"\"Meta class for the serializer.\"\"\"\n \n model = Album\n- fields = (\"slug\", \"title\", \"accessible\", \"shareable\", \"cover\", \"photos\")\n+ fields = (\n+ \"slug\",\n+ \"title\",\n+ \"accessible\",\n+ \"shareable\",\n+ \"cover\",\n+ \"photos\",\n+ \"event\",\n+ )\n \n cover = PhotoSerializer()\n accessible = serializers.SerializerMethodField(\"_accessible\")\n@@ -30,4 +38,4 @@\n \"\"\"Meta class for the serializer.\"\"\"\n \n model = Album\n- fields = (\"slug\", \"title\", \"accessible\", \"shareable\", \"cover\")\n+ fields = (\"slug\", \"title\", \"accessible\", \"shareable\", \"cover\", \"event\")\n", "issue": "Add album to event foreign key to api\nFeature request (I'm making this via the Github app so the template doesn't work) \n\n\nSee title, Julian wants this :)\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom photos import services\nfrom photos.api.v2.serializers.photo import PhotoSerializer, PhotoListSerializer\nfrom photos.models import Album\nfrom thaliawebsite.api.v2.serializers.cleaned_model_serializer import (\n CleanedModelSerializer,\n)\n\n\nclass AlbumSerializer(CleanedModelSerializer):\n \"\"\"API serializer for albums.\"\"\"\n\n class Meta:\n \"\"\"Meta class for the serializer.\"\"\"\n\n model = Album\n fields = (\"slug\", \"title\", \"accessible\", \"shareable\", \"cover\", \"photos\")\n\n cover = PhotoSerializer()\n accessible = serializers.SerializerMethodField(\"_accessible\")\n photos = PhotoListSerializer(source=\"photo_set\", many=True)\n\n def _accessible(self, obj):\n return services.is_album_accessible(self.context[\"request\"], obj)\n\n\nclass AlbumListSerializer(AlbumSerializer):\n class Meta:\n \"\"\"Meta class for the serializer.\"\"\"\n\n model = Album\n fields = (\"slug\", \"title\", \"accessible\", \"shareable\", \"cover\")\n", "path": "website/photos/api/v2/serializers/album.py"}], "after_files": [{"content": "from rest_framework import serializers\n\nfrom photos import services\nfrom photos.api.v2.serializers.photo import PhotoSerializer, PhotoListSerializer\nfrom photos.models import Album\nfrom thaliawebsite.api.v2.serializers.cleaned_model_serializer import (\n CleanedModelSerializer,\n)\n\n\nclass AlbumSerializer(CleanedModelSerializer):\n \"\"\"API serializer for albums.\"\"\"\n\n class Meta:\n \"\"\"Meta class for the serializer.\"\"\"\n\n model = Album\n fields = (\n \"slug\",\n \"title\",\n \"accessible\",\n \"shareable\",\n \"cover\",\n \"photos\",\n \"event\",\n )\n\n cover = PhotoSerializer()\n accessible = serializers.SerializerMethodField(\"_accessible\")\n photos = PhotoListSerializer(source=\"photo_set\", many=True)\n\n def _accessible(self, obj):\n return services.is_album_accessible(self.context[\"request\"], obj)\n\n\nclass AlbumListSerializer(AlbumSerializer):\n class Meta:\n \"\"\"Meta class for the serializer.\"\"\"\n\n model = Album\n fields = (\"slug\", \"title\", \"accessible\", \"shareable\", \"cover\", \"event\")\n", "path": "website/photos/api/v2/serializers/album.py"}]} | 574 | 240 |
gh_patches_debug_35760 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-1406 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PS-1] RPC service `push_gradient` async-SGD implementation
Asynchronously apply gradients to non-embedding parameters. Updating embedding parameters will be supported in issue #1306.
--- END ISSUE ---
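In async SGD the parameter server applies each worker's gradients as soon as they arrive, with no accumulation barrier; only the version counter needs synchronization. A standalone sketch under those assumptions (parameter names and shapes are illustrative, not the project's API):

```python
# Standalone sketch of async-SGD on a parameter server: apply each
# incoming gradient immediately, then bump the model version under a
# lock so concurrent pushes stay consistent.
import threading
import tensorflow as tf

params = {"dense/kernel": tf.Variable(tf.zeros([4, 2]))}
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
version = 0
version_lock = threading.Lock()

def push_gradient(named_grads):
    """named_grads: iterable of (parameter_name, gradient tensor) pairs."""
    global version
    grad_vars = [(g, params[n]) for n, g in named_grads if n in params]
    optimizer.apply_gradients(grad_vars)  # no accumulation in async mode
    with version_lock:
        version += 1
    return version

print(push_gradient([("dense/kernel", tf.ones([4, 2]))]))  # -> 1
```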
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/ps/parameters.py`
Content:
```
1 import tensorflow as tf
2
3 from elasticdl.python.common.tensor import tensor_pb_to_ndarray
4 from elasticdl.python.ps.embedding_table import create_embedding_table
5
6
7 class Parameters(object):
8 """
9 There are two kinds of parameters:
10
11 1. non-embedding parameters, or dense tensor parameters. We save it
12 in a hashmap `non-embedding_params`, the key is the parameter name,
13 the value is a tf.Variable` object.
14 2. embedding parameters, or row-sparse parameters. We save it in a
15 hashmap `embedding_params`, the key is the embedding layer name,
16 the value is an `EmbeddingTable` object.
17
18 """
19
20 def __init__(self):
21 self.version = 0
22 self.init_status = False
23 self.non_embedding_params = {}
24 self.embedding_params = {}
25
26 def get_embedding_param(self, name, indices):
27 if name not in self.embedding_params:
28 raise ValueError(
29 "Please initialize embedding param %s first!", name
30 )
31 return self.embedding_params[name].get(indices)
32
33 def set_embedding_param(self, name, indices, values):
34 if name not in self.embedding_params:
35 raise ValueError(
36 "Please initialize embedding param %s first!", name
37 )
38 self.embedding_params[name].set(indices, values)
39
40 def init_from_model_pb(self, model_pb):
41 if not self.init_status:
42 tensors_pb = model_pb.param
43 embeddings_pb = model_pb.embedding_table_info
44 self._init_non_embedding_params(tensors_pb)
45 self._init_embedding_params(embeddings_pb)
46 self.version = model_pb.version
47 self.init_status = True
48
49 def _init_non_embedding_params(self, tensors_pb):
50 for pb in tensors_pb:
51 name = pb.name
52 arr = tensor_pb_to_ndarray(pb)
53 var = tf.Variable(name=name, initial_value=arr, trainable=True)
54 self.non_embedding_params[name] = var
55
56 def _init_embedding_params(self, embeddings_pb):
57 for pb in embeddings_pb:
58 self.embedding_params[pb.name] = create_embedding_table(pb)
59
```
Path: `elasticdl/python/ps/servicer.py`
Content:
```
1 import threading
2
3 from google.protobuf import empty_pb2
4
5 from elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc
6 from elasticdl.python.common.dtypes import dtype_numpy_to_tensor
7 from elasticdl.python.common.tensor import Tensor, serialize_tensor
8
9
10 class PserverServicer(elasticdl_pb2_grpc.PserverServicer):
11 """PS service implementation"""
12
13 def __init__(
14 self,
15 parameters,
16 grads_to_wait,
17 optimizer,
18 lr_staleness_modulation=False,
19 use_async=False,
20 ):
21 self._parameters = parameters
22 self._grads_to_wait = grads_to_wait
23 self._optimizer = optimizer
24 self._lr_staleness_modulation = lr_staleness_modulation
25 self._use_async = use_async
26 self._version = 0
27 self._lock = threading.Lock()
28
29 def pull_variable(self, request, _):
30 """
31 Response with all non-embedding parameters if initialized.
32 """
33 res = elasticdl_pb2.PullVariableResponse()
34 if not self._parameters.init_status:
35 res.model_init_status = False
36 return res
37
38 # Only sync-SGD needs lock
39 # TODO: use a read-write lock to support multiple concurrent reads
40 if not self._use_async:
41 self._lock.acquire()
42 res.model.version = self._parameters.version
43 for name, var in self._parameters.non_embedding_params.items():
44 tensor = res.model.param.add()
45 tensor.name = name
46 tensor.dim.extend(var.shape.as_list())
47 var_values = var.numpy()
48 tensor.content = var_values.tobytes()
49 tensor.dtype = dtype_numpy_to_tensor(var_values.dtype)
50 if not self._use_async:
51 self._lock.release()
52 res.model_init_status = True
53 return res
54
55 def pull_embedding_vector(self, request, _):
56 ret = elasticdl_pb2.Tensor()
57 if not request.ids:
58 return ret
59 embedding_vectors = self._parameters.get_embedding_param(
60 request.name, request.ids
61 )
62 tensor = Tensor(values=embedding_vectors)
63 serialize_tensor(tensor, ret)
64 return ret
65
66 def push_model(self, request, _):
67 with self._lock:
68 self._parameters.init_from_model_pb(request)
69 return empty_pb2.Empty()
70
71 def push_gradient(self, request, _):
72 # TODO: implement this RPC service
73 return elasticdl_pb2.PushGradientResponse()
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/python/ps/parameters.py b/elasticdl/python/ps/parameters.py
--- a/elasticdl/python/ps/parameters.py
+++ b/elasticdl/python/ps/parameters.py
@@ -23,6 +23,9 @@
self.non_embedding_params = {}
self.embedding_params = {}
+ def get_non_embedding_param(self, name, default_value=None):
+ return self.non_embedding_params.get(name, default_value)
+
def get_embedding_param(self, name, indices):
if name not in self.embedding_params:
raise ValueError(
diff --git a/elasticdl/python/ps/servicer.py b/elasticdl/python/ps/servicer.py
--- a/elasticdl/python/ps/servicer.py
+++ b/elasticdl/python/ps/servicer.py
@@ -4,6 +4,7 @@
from elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc
from elasticdl.python.common.dtypes import dtype_numpy_to_tensor
+from elasticdl.python.common.log_utils import default_logger as logger
from elasticdl.python.common.tensor import Tensor, serialize_tensor
@@ -23,7 +24,7 @@
self._optimizer = optimizer
self._lr_staleness_modulation = lr_staleness_modulation
self._use_async = use_async
- self._version = 0
+ self._version_lock = threading.Lock()
self._lock = threading.Lock()
def pull_variable(self, request, _):
@@ -69,5 +70,29 @@
return empty_pb2.Empty()
def push_gradient(self, request, _):
- # TODO: implement this RPC service
+ if self._use_async:
+ grad_vars = []
+ for pb in request.gradients:
+ tensor = Tensor.from_tensor_pb(pb)
+ var = self._parameters.get_non_embedding_param(tensor.name)
+ if var is None:
+ logger.warning(
+ "Gradients with invalid name %s" % tensor.name
+ )
+ continue
+ grad = tensor.to_tf_tensor()
+ grad_vars.append((grad, var))
+
+ self._optimizer.apply_gradients(grad_vars)
+ with self._version_lock:
+ self._parameters.version += 1
+
+ res = elasticdl_pb2.PushGradientResponse()
+ res.accepted = True
+ res.model_version = self._parameters.version
+ return res
+
+ raise NotImplementedError(
+ "Updating parameters synchronously is not implemented."
+ )
return elasticdl_pb2.PushGradientResponse()
| {"golden_diff": "diff --git a/elasticdl/python/ps/parameters.py b/elasticdl/python/ps/parameters.py\n--- a/elasticdl/python/ps/parameters.py\n+++ b/elasticdl/python/ps/parameters.py\n@@ -23,6 +23,9 @@\n self.non_embedding_params = {}\n self.embedding_params = {}\n \n+ def get_non_embedding_param(self, name, default_value=None):\n+ return self.non_embedding_params.get(name, default_value)\n+\n def get_embedding_param(self, name, indices):\n if name not in self.embedding_params:\n raise ValueError(\ndiff --git a/elasticdl/python/ps/servicer.py b/elasticdl/python/ps/servicer.py\n--- a/elasticdl/python/ps/servicer.py\n+++ b/elasticdl/python/ps/servicer.py\n@@ -4,6 +4,7 @@\n \n from elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc\n from elasticdl.python.common.dtypes import dtype_numpy_to_tensor\n+from elasticdl.python.common.log_utils import default_logger as logger\n from elasticdl.python.common.tensor import Tensor, serialize_tensor\n \n \n@@ -23,7 +24,7 @@\n self._optimizer = optimizer\n self._lr_staleness_modulation = lr_staleness_modulation\n self._use_async = use_async\n- self._version = 0\n+ self._version_lock = threading.Lock()\n self._lock = threading.Lock()\n \n def pull_variable(self, request, _):\n@@ -69,5 +70,29 @@\n return empty_pb2.Empty()\n \n def push_gradient(self, request, _):\n- # TODO: implement this RPC service\n+ if self._use_async:\n+ grad_vars = []\n+ for pb in request.gradients:\n+ tensor = Tensor.from_tensor_pb(pb)\n+ var = self._parameters.get_non_embedding_param(tensor.name)\n+ if var is None:\n+ logger.warning(\n+ \"Gradients with invalid name %s\" % tensor.name\n+ )\n+ continue\n+ grad = tensor.to_tf_tensor()\n+ grad_vars.append((grad, var))\n+\n+ self._optimizer.apply_gradients(grad_vars)\n+ with self._version_lock:\n+ self._parameters.version += 1\n+\n+ res = elasticdl_pb2.PushGradientResponse()\n+ res.accepted = True\n+ res.model_version = self._parameters.version\n+ return res\n+\n+ raise NotImplementedError(\n+ \"Updating parameters synchronously is not implemented.\"\n+ )\n return elasticdl_pb2.PushGradientResponse()\n", "issue": "[PS-1] RPC service `push_gradient` async-SGD implementation\nAsync apply gradients to non-embedding parameters. Updating embedding parameters will be supported in issue #1306 .\n", "before_files": [{"content": "import tensorflow as tf\n\nfrom elasticdl.python.common.tensor import tensor_pb_to_ndarray\nfrom elasticdl.python.ps.embedding_table import create_embedding_table\n\n\nclass Parameters(object):\n \"\"\"\n There are two kinds of parameters:\n\n 1. non-embedding parameters, or dense tensor parameters. We save it\n in a hashmap `non-embedding_params`, the key is the parameter name,\n the value is a tf.Variable` object.\n 2. embedding parameters, or row-sparse parameters. 
We save it in a\n hashmap `embedding_params`, the key is the embedding layer name,\n the value is an `EmbeddingTable` object.\n\n \"\"\"\n\n def __init__(self):\n self.version = 0\n self.init_status = False\n self.non_embedding_params = {}\n self.embedding_params = {}\n\n def get_embedding_param(self, name, indices):\n if name not in self.embedding_params:\n raise ValueError(\n \"Please initialize embedding param %s first!\", name\n )\n return self.embedding_params[name].get(indices)\n\n def set_embedding_param(self, name, indices, values):\n if name not in self.embedding_params:\n raise ValueError(\n \"Please initialize embedding param %s first!\", name\n )\n self.embedding_params[name].set(indices, values)\n\n def init_from_model_pb(self, model_pb):\n if not self.init_status:\n tensors_pb = model_pb.param\n embeddings_pb = model_pb.embedding_table_info\n self._init_non_embedding_params(tensors_pb)\n self._init_embedding_params(embeddings_pb)\n self.version = model_pb.version\n self.init_status = True\n\n def _init_non_embedding_params(self, tensors_pb):\n for pb in tensors_pb:\n name = pb.name\n arr = tensor_pb_to_ndarray(pb)\n var = tf.Variable(name=name, initial_value=arr, trainable=True)\n self.non_embedding_params[name] = var\n\n def _init_embedding_params(self, embeddings_pb):\n for pb in embeddings_pb:\n self.embedding_params[pb.name] = create_embedding_table(pb)\n", "path": "elasticdl/python/ps/parameters.py"}, {"content": "import threading\n\nfrom google.protobuf import empty_pb2\n\nfrom elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc\nfrom elasticdl.python.common.dtypes import dtype_numpy_to_tensor\nfrom elasticdl.python.common.tensor import Tensor, serialize_tensor\n\n\nclass PserverServicer(elasticdl_pb2_grpc.PserverServicer):\n \"\"\"PS service implementation\"\"\"\n\n def __init__(\n self,\n parameters,\n grads_to_wait,\n optimizer,\n lr_staleness_modulation=False,\n use_async=False,\n ):\n self._parameters = parameters\n self._grads_to_wait = grads_to_wait\n self._optimizer = optimizer\n self._lr_staleness_modulation = lr_staleness_modulation\n self._use_async = use_async\n self._version = 0\n self._lock = threading.Lock()\n\n def pull_variable(self, request, _):\n \"\"\"\n Response with all non-embedding parameters if initialized.\n \"\"\"\n res = elasticdl_pb2.PullVariableResponse()\n if not self._parameters.init_status:\n res.model_init_status = False\n return res\n\n # Only sync-SGD needs lock\n # TODO: use a read-write lock to support multiple concurrent reads\n if not self._use_async:\n self._lock.acquire()\n res.model.version = self._parameters.version\n for name, var in self._parameters.non_embedding_params.items():\n tensor = res.model.param.add()\n tensor.name = name\n tensor.dim.extend(var.shape.as_list())\n var_values = var.numpy()\n tensor.content = var_values.tobytes()\n tensor.dtype = dtype_numpy_to_tensor(var_values.dtype)\n if not self._use_async:\n self._lock.release()\n res.model_init_status = True\n return res\n\n def pull_embedding_vector(self, request, _):\n ret = elasticdl_pb2.Tensor()\n if not request.ids:\n return ret\n embedding_vectors = self._parameters.get_embedding_param(\n request.name, request.ids\n )\n tensor = Tensor(values=embedding_vectors)\n serialize_tensor(tensor, ret)\n return ret\n\n def push_model(self, request, _):\n with self._lock:\n self._parameters.init_from_model_pb(request)\n return empty_pb2.Empty()\n\n def push_gradient(self, request, _):\n # TODO: implement this RPC service\n return 
elasticdl_pb2.PushGradientResponse()\n", "path": "elasticdl/python/ps/servicer.py"}], "after_files": [{"content": "import tensorflow as tf\n\nfrom elasticdl.python.common.tensor import tensor_pb_to_ndarray\nfrom elasticdl.python.ps.embedding_table import create_embedding_table\n\n\nclass Parameters(object):\n \"\"\"\n There are two kinds of parameters:\n\n 1. non-embedding parameters, or dense tensor parameters. We save it\n in a hashmap `non-embedding_params`, the key is the parameter name,\n the value is a tf.Variable` object.\n 2. embedding parameters, or row-sparse parameters. We save it in a\n hashmap `embedding_params`, the key is the embedding layer name,\n the value is an `EmbeddingTable` object.\n\n \"\"\"\n\n def __init__(self):\n self.version = 0\n self.init_status = False\n self.non_embedding_params = {}\n self.embedding_params = {}\n\n def get_non_embedding_param(self, name, default_value=None):\n return self.non_embedding_params.get(name, default_value)\n\n def get_embedding_param(self, name, indices):\n if name not in self.embedding_params:\n raise ValueError(\n \"Please initialize embedding param %s first!\", name\n )\n return self.embedding_params[name].get(indices)\n\n def set_embedding_param(self, name, indices, values):\n if name not in self.embedding_params:\n raise ValueError(\n \"Please initialize embedding param %s first!\", name\n )\n self.embedding_params[name].set(indices, values)\n\n def init_from_model_pb(self, model_pb):\n if not self.init_status:\n tensors_pb = model_pb.param\n embeddings_pb = model_pb.embedding_table_info\n self._init_non_embedding_params(tensors_pb)\n self._init_embedding_params(embeddings_pb)\n self.version = model_pb.version\n self.init_status = True\n\n def _init_non_embedding_params(self, tensors_pb):\n for pb in tensors_pb:\n name = pb.name\n arr = tensor_pb_to_ndarray(pb)\n var = tf.Variable(name=name, initial_value=arr, trainable=True)\n self.non_embedding_params[name] = var\n\n def _init_embedding_params(self, embeddings_pb):\n for pb in embeddings_pb:\n self.embedding_params[pb.name] = create_embedding_table(pb)\n", "path": "elasticdl/python/ps/parameters.py"}, {"content": "import threading\n\nfrom google.protobuf import empty_pb2\n\nfrom elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc\nfrom elasticdl.python.common.dtypes import dtype_numpy_to_tensor\nfrom elasticdl.python.common.log_utils import default_logger as logger\nfrom elasticdl.python.common.tensor import Tensor, serialize_tensor\n\n\nclass PserverServicer(elasticdl_pb2_grpc.PserverServicer):\n \"\"\"PS service implementation\"\"\"\n\n def __init__(\n self,\n parameters,\n grads_to_wait,\n optimizer,\n lr_staleness_modulation=False,\n use_async=False,\n ):\n self._parameters = parameters\n self._grads_to_wait = grads_to_wait\n self._optimizer = optimizer\n self._lr_staleness_modulation = lr_staleness_modulation\n self._use_async = use_async\n self._version_lock = threading.Lock()\n self._lock = threading.Lock()\n\n def pull_variable(self, request, _):\n \"\"\"\n Response with all non-embedding parameters if initialized.\n \"\"\"\n res = elasticdl_pb2.PullVariableResponse()\n if not self._parameters.init_status:\n res.model_init_status = False\n return res\n\n # Only sync-SGD needs lock\n # TODO: use a read-write lock to support multiple concurrent reads\n if not self._use_async:\n self._lock.acquire()\n res.model.version = self._parameters.version\n for name, var in self._parameters.non_embedding_params.items():\n tensor = res.model.param.add()\n tensor.name = 
name\n tensor.dim.extend(var.shape.as_list())\n var_values = var.numpy()\n tensor.content = var_values.tobytes()\n tensor.dtype = dtype_numpy_to_tensor(var_values.dtype)\n if not self._use_async:\n self._lock.release()\n res.model_init_status = True\n return res\n\n def pull_embedding_vector(self, request, _):\n ret = elasticdl_pb2.Tensor()\n if not request.ids:\n return ret\n embedding_vectors = self._parameters.get_embedding_param(\n request.name, request.ids\n )\n tensor = Tensor(values=embedding_vectors)\n serialize_tensor(tensor, ret)\n return ret\n\n def push_model(self, request, _):\n with self._lock:\n self._parameters.init_from_model_pb(request)\n return empty_pb2.Empty()\n\n def push_gradient(self, request, _):\n if self._use_async:\n grad_vars = []\n for pb in request.gradients:\n tensor = Tensor.from_tensor_pb(pb)\n var = self._parameters.get_non_embedding_param(tensor.name)\n if var is None:\n logger.warning(\n \"Gradients with invalid name %s\" % tensor.name\n )\n continue\n grad = tensor.to_tf_tensor()\n grad_vars.append((grad, var))\n\n self._optimizer.apply_gradients(grad_vars)\n with self._version_lock:\n self._parameters.version += 1\n\n res = elasticdl_pb2.PushGradientResponse()\n res.accepted = True\n res.model_version = self._parameters.version\n return res\n\n raise NotImplementedError(\n \"Updating parameters synchronously is not implemented.\"\n )\n return elasticdl_pb2.PushGradientResponse()\n", "path": "elasticdl/python/ps/servicer.py"}]} | 1,532 | 567 |
gh_patches_debug_6233 | rasdani/github-patches | git_diff | WeblateOrg__weblate-6217 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Auth provider title and image override does not work for auth providers with '-' in them
**Describe the issue**
The new feature to override the name and image for an auth provider (introduced in 159ae591e91e7c2065ede9c39f14d743f5ab07e5) does not work for auth providers with '-' in them.
**I already tried**
- [x] I've read and searched [the docs](https://docs.weblate.org/) and did not find the answer there.
**To Reproduce the issue**
Steps to reproduce the behavior:
1. Add an auth provider to your settings, e.g. `SOCIAL_AUTH_AZUREAD_TENANT_OAUTH2_KEY`
2. Set the title and image override (`SOCIAL_AUTH_AZUREAD_TENANT_OAUTH2_TITLE` and `SOCIAL_AUTH_AZUREAD_TENANT_OAUTH2_IMAGE`)
3. Go to the login page
4. The title and image are not overridden
**Expected behavior**
The title and image should be overridden.
--- END ISSUE ---
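The mismatch is mechanical: social-auth backend ids such as `azuread-tenant-oauth2` contain '-', while Django setting names can only use '_', so a settings lookup built from the raw backend id can never match. A minimal sketch of the normalization the lookup needs:

```python
# Sketch: normalize the backend id before building the settings key,
# because Django setting names cannot contain '-'.
def settings_key(auth: str, suffix: str) -> str:
    return f"SOCIAL_AUTH_{auth.upper().replace('-', '_')}_{suffix}"

assert settings_key("azuread-tenant-oauth2", "TITLE") == (
    "SOCIAL_AUTH_AZUREAD_TENANT_OAUTH2_TITLE"
)
```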
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `weblate/accounts/templatetags/authnames.py`
Content:
```
1 #
2 # Copyright © 2012 - 2021 Michal Čihař <[email protected]>
3 #
4 # This file is part of Weblate <https://weblate.org/>
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this program. If not, see <https://www.gnu.org/licenses/>.
18 #
19 """Provide user friendly names for social authentication methods."""
20
21 from django import template
22 from django.conf import settings
23 from django.contrib.staticfiles.storage import staticfiles_storage
24 from django.utils.safestring import mark_safe
25 from django.utils.translation import gettext_lazy
26
27 register = template.Library()
28
29 SOCIALS = {
30 "auth0": {"name": "Auth0", "image": "auth0.svg"},
31 "saml": {"name": "SAML", "image": "saml.svg"},
32 "google": {"name": "Google", "image": "google.svg"},
33 "google-oauth2": {"name": "Google", "image": "google.svg"},
34 "google-plus": {"name": "Google+", "image": "google.svg"},
35 "password": {"name": gettext_lazy("Password"), "image": "password.svg"},
36 "email": {"name": gettext_lazy("E-mail"), "image": "email.svg"},
37 "ubuntu": {"name": "Ubuntu", "image": "ubuntu.svg"},
38 "opensuse": {"name": "openSUSE", "image": "opensuse.svg"},
39 "fedora": {"name": "Fedora", "image": "fedora.svg"},
40 "facebook": {"name": "Facebook", "image": "facebook.svg"},
41 "github": {"name": "GitHub", "image": "github.svg"},
42 "github-enterprise": {"name": "GitHub Enterprise", "image": "github.svg"},
43 "bitbucket": {"name": "Bitbucket", "image": "bitbucket.svg"},
44 "bitbucket-oauth2": {"name": "Bitbucket", "image": "bitbucket.svg"},
45 "azuread-oauth2": {"name": "Azure", "image": "azure.svg"},
46 "azuread-tenant-oauth2": {"name": "Azure", "image": "azure.svg"},
47 "gitlab": {"name": "GitLab", "image": "gitlab.svg"},
48 "amazon": {"name": "Amazon", "image": "amazon.svg"},
49 "twitter": {"name": "Twitter", "image": "twitter.svg"},
50 "stackoverflow": {"name": "Stack Overflow", "image": "stackoverflow.svg"},
51 }
52
53 IMAGE_SOCIAL_TEMPLATE = """
54 <img class="auth-image" src="{image}" />
55 """
56
57 SOCIAL_TEMPLATE = """
58 {icon}
59 {separator}
60 {name}
61 """
62
63
64 def get_auth_params(auth: str):
65 """Returns authentication parameters."""
66 # Fallback values
67 params = {"name": auth, "image": "password.svg"}
68
69 # Hardcoded names
70 if auth in SOCIALS:
71 params.update(SOCIALS[auth])
72
73 # Settings override
74 settings_params = {
75 "name": f"SOCIAL_AUTH_{auth.upper()}_TITLE",
76 "image": f"SOCIAL_AUTH_{auth.upper()}_IMAGE",
77 }
78 for target, source in settings_params.items():
79 value = getattr(settings, source, None)
80 if value:
81 params[target] = value
82
83 return params
84
85
86 @register.simple_tag
87 def auth_name(auth: str, separator: str = "<br />"):
88 """Create HTML markup for social authentication method."""
89 params = get_auth_params(auth)
90
91 if not params["image"].startswith("http"):
92 params["image"] = staticfiles_storage.url("auth/" + params["image"])
93 params["icon"] = IMAGE_SOCIAL_TEMPLATE.format(separator=separator, **params)
94
95 return mark_safe(SOCIAL_TEMPLATE.format(separator=separator, **params))
96
97
98 def get_auth_name(auth: str):
99 """Get nice name for authentication backend."""
100 return get_auth_params(auth)["name"]
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/weblate/accounts/templatetags/authnames.py b/weblate/accounts/templatetags/authnames.py
--- a/weblate/accounts/templatetags/authnames.py
+++ b/weblate/accounts/templatetags/authnames.py
@@ -72,8 +72,8 @@
# Settings override
settings_params = {
- "name": f"SOCIAL_AUTH_{auth.upper()}_TITLE",
- "image": f"SOCIAL_AUTH_{auth.upper()}_IMAGE",
+ "name": f"SOCIAL_AUTH_{auth.upper().replace('-','_')}_TITLE",
+ "image": f"SOCIAL_AUTH_{auth.upper().replace('-','_')}_IMAGE",
}
for target, source in settings_params.items():
value = getattr(settings, source, None)
| {"golden_diff": "diff --git a/weblate/accounts/templatetags/authnames.py b/weblate/accounts/templatetags/authnames.py\n--- a/weblate/accounts/templatetags/authnames.py\n+++ b/weblate/accounts/templatetags/authnames.py\n@@ -72,8 +72,8 @@\n \n # Settings override\n settings_params = {\n- \"name\": f\"SOCIAL_AUTH_{auth.upper()}_TITLE\",\n- \"image\": f\"SOCIAL_AUTH_{auth.upper()}_IMAGE\",\n+ \"name\": f\"SOCIAL_AUTH_{auth.upper().replace('-','_')}_TITLE\",\n+ \"image\": f\"SOCIAL_AUTH_{auth.upper().replace('-','_')}_IMAGE\",\n }\n for target, source in settings_params.items():\n value = getattr(settings, source, None)\n", "issue": "Auth provider title and image override do not work for auth providers with '-' in it\n**Describe the issue**\r\n\r\nThe new feature to override the name and image for an auth provider (introduced in 159ae591e91e7c2065ede9c39f14d743f5ab07e5) does not work for auth providers with '-' in them. \r\n\r\n<!--\r\nA clear and concise description of the problem you are facing.\r\n-->\r\n\r\n**I already tried**\r\n\r\nDescribe the steps you tried to solve the problem yourself.\r\n\r\n- [x] I've read and searched [the docs](https://docs.weblate.org/) and did not find the answer there.\r\n If you didn\u2019t try already, try to search there what you wrote above.\r\n\r\n**To Reproduce the issue**\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Add an auth to your settings - e.g. `SOCIAL_AUTH_AZUREAD_TENANT_OAUTH2_KEY`\r\n2. Set the title and image override (`SOCIAL_AUTH_AZUREAD_TENANT_OAUTH2_TITLE` and `SOCIAL_AUTH_AZUREAD_TENANT_OAUTH2_IMAGE`)\r\n3. Go to the login page\r\n4. The title and image are not overriden\r\n\r\n**Expected behavior**\r\n\r\nThe title and image should be overridden. \r\n\n", "before_files": [{"content": "#\n# Copyright \u00a9 2012 - 2021 Michal \u010ciha\u0159 <[email protected]>\n#\n# This file is part of Weblate <https://weblate.org/>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <https://www.gnu.org/licenses/>.\n#\n\"\"\"Provide user friendly names for social authentication methods.\"\"\"\n\nfrom django import template\nfrom django.conf import settings\nfrom django.contrib.staticfiles.storage import staticfiles_storage\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy\n\nregister = template.Library()\n\nSOCIALS = {\n \"auth0\": {\"name\": \"Auth0\", \"image\": \"auth0.svg\"},\n \"saml\": {\"name\": \"SAML\", \"image\": \"saml.svg\"},\n \"google\": {\"name\": \"Google\", \"image\": \"google.svg\"},\n \"google-oauth2\": {\"name\": \"Google\", \"image\": \"google.svg\"},\n \"google-plus\": {\"name\": \"Google+\", \"image\": \"google.svg\"},\n \"password\": {\"name\": gettext_lazy(\"Password\"), \"image\": \"password.svg\"},\n \"email\": {\"name\": gettext_lazy(\"E-mail\"), \"image\": \"email.svg\"},\n \"ubuntu\": {\"name\": \"Ubuntu\", \"image\": \"ubuntu.svg\"},\n \"opensuse\": {\"name\": \"openSUSE\", \"image\": \"opensuse.svg\"},\n \"fedora\": {\"name\": \"Fedora\", \"image\": \"fedora.svg\"},\n \"facebook\": {\"name\": \"Facebook\", \"image\": \"facebook.svg\"},\n \"github\": {\"name\": \"GitHub\", \"image\": \"github.svg\"},\n \"github-enterprise\": {\"name\": \"GitHub Enterprise\", \"image\": \"github.svg\"},\n \"bitbucket\": {\"name\": \"Bitbucket\", \"image\": \"bitbucket.svg\"},\n \"bitbucket-oauth2\": {\"name\": \"Bitbucket\", \"image\": \"bitbucket.svg\"},\n \"azuread-oauth2\": {\"name\": \"Azure\", \"image\": \"azure.svg\"},\n \"azuread-tenant-oauth2\": {\"name\": \"Azure\", \"image\": \"azure.svg\"},\n \"gitlab\": {\"name\": \"GitLab\", \"image\": \"gitlab.svg\"},\n \"amazon\": {\"name\": \"Amazon\", \"image\": \"amazon.svg\"},\n \"twitter\": {\"name\": \"Twitter\", \"image\": \"twitter.svg\"},\n \"stackoverflow\": {\"name\": \"Stack Overflow\", \"image\": \"stackoverflow.svg\"},\n}\n\nIMAGE_SOCIAL_TEMPLATE = \"\"\"\n<img class=\"auth-image\" src=\"{image}\" />\n\"\"\"\n\nSOCIAL_TEMPLATE = \"\"\"\n{icon}\n{separator}\n{name}\n\"\"\"\n\n\ndef get_auth_params(auth: str):\n \"\"\"Returns authentication parameters.\"\"\"\n # Fallback values\n params = {\"name\": auth, \"image\": \"password.svg\"}\n\n # Hardcoded names\n if auth in SOCIALS:\n params.update(SOCIALS[auth])\n\n # Settings override\n settings_params = {\n \"name\": f\"SOCIAL_AUTH_{auth.upper()}_TITLE\",\n \"image\": f\"SOCIAL_AUTH_{auth.upper()}_IMAGE\",\n }\n for target, source in settings_params.items():\n value = getattr(settings, source, None)\n if value:\n params[target] = value\n\n return params\n\n\[email protected]_tag\ndef auth_name(auth: str, separator: str = \"<br />\"):\n \"\"\"Create HTML markup for social authentication method.\"\"\"\n params = get_auth_params(auth)\n\n if not params[\"image\"].startswith(\"http\"):\n params[\"image\"] = staticfiles_storage.url(\"auth/\" + params[\"image\"])\n params[\"icon\"] = IMAGE_SOCIAL_TEMPLATE.format(separator=separator, **params)\n\n return mark_safe(SOCIAL_TEMPLATE.format(separator=separator, **params))\n\n\ndef get_auth_name(auth: str):\n \"\"\"Get nice name for authentication backend.\"\"\"\n return get_auth_params(auth)[\"name\"]\n", "path": "weblate/accounts/templatetags/authnames.py"}], "after_files": [{"content": "#\n# Copyright \u00a9 2012 - 2021 Michal \u010ciha\u0159 <[email protected]>\n#\n# This file is part of Weblate <https://weblate.org/>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License 
as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <https://www.gnu.org/licenses/>.\n#\n\"\"\"Provide user friendly names for social authentication methods.\"\"\"\n\nfrom django import template\nfrom django.conf import settings\nfrom django.contrib.staticfiles.storage import staticfiles_storage\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy\n\nregister = template.Library()\n\nSOCIALS = {\n \"auth0\": {\"name\": \"Auth0\", \"image\": \"auth0.svg\"},\n \"saml\": {\"name\": \"SAML\", \"image\": \"saml.svg\"},\n \"google\": {\"name\": \"Google\", \"image\": \"google.svg\"},\n \"google-oauth2\": {\"name\": \"Google\", \"image\": \"google.svg\"},\n \"google-plus\": {\"name\": \"Google+\", \"image\": \"google.svg\"},\n \"password\": {\"name\": gettext_lazy(\"Password\"), \"image\": \"password.svg\"},\n \"email\": {\"name\": gettext_lazy(\"E-mail\"), \"image\": \"email.svg\"},\n \"ubuntu\": {\"name\": \"Ubuntu\", \"image\": \"ubuntu.svg\"},\n \"opensuse\": {\"name\": \"openSUSE\", \"image\": \"opensuse.svg\"},\n \"fedora\": {\"name\": \"Fedora\", \"image\": \"fedora.svg\"},\n \"facebook\": {\"name\": \"Facebook\", \"image\": \"facebook.svg\"},\n \"github\": {\"name\": \"GitHub\", \"image\": \"github.svg\"},\n \"github-enterprise\": {\"name\": \"GitHub Enterprise\", \"image\": \"github.svg\"},\n \"bitbucket\": {\"name\": \"Bitbucket\", \"image\": \"bitbucket.svg\"},\n \"bitbucket-oauth2\": {\"name\": \"Bitbucket\", \"image\": \"bitbucket.svg\"},\n \"azuread-oauth2\": {\"name\": \"Azure\", \"image\": \"azure.svg\"},\n \"azuread-tenant-oauth2\": {\"name\": \"Azure\", \"image\": \"azure.svg\"},\n \"gitlab\": {\"name\": \"GitLab\", \"image\": \"gitlab.svg\"},\n \"amazon\": {\"name\": \"Amazon\", \"image\": \"amazon.svg\"},\n \"twitter\": {\"name\": \"Twitter\", \"image\": \"twitter.svg\"},\n \"stackoverflow\": {\"name\": \"Stack Overflow\", \"image\": \"stackoverflow.svg\"},\n}\n\nIMAGE_SOCIAL_TEMPLATE = \"\"\"\n<img class=\"auth-image\" src=\"{image}\" />\n\"\"\"\n\nSOCIAL_TEMPLATE = \"\"\"\n{icon}\n{separator}\n{name}\n\"\"\"\n\n\ndef get_auth_params(auth: str):\n \"\"\"Returns authentication parameters.\"\"\"\n # Fallback values\n params = {\"name\": auth, \"image\": \"password.svg\"}\n\n # Hardcoded names\n if auth in SOCIALS:\n params.update(SOCIALS[auth])\n\n # Settings override\n settings_params = {\n \"name\": f\"SOCIAL_AUTH_{auth.upper().replace('-','_')}_TITLE\",\n \"image\": f\"SOCIAL_AUTH_{auth.upper().replace('-','_')}_IMAGE\",\n }\n for target, source in settings_params.items():\n value = getattr(settings, source, None)\n if value:\n params[target] = value\n\n return params\n\n\[email protected]_tag\ndef auth_name(auth: str, separator: str = \"<br />\"):\n \"\"\"Create HTML markup for social authentication method.\"\"\"\n params = get_auth_params(auth)\n\n if not params[\"image\"].startswith(\"http\"):\n params[\"image\"] = staticfiles_storage.url(\"auth/\" + params[\"image\"])\n params[\"icon\"] = IMAGE_SOCIAL_TEMPLATE.format(separator=separator, **params)\n\n return 
mark_safe(SOCIAL_TEMPLATE.format(separator=separator, **params))\n\n\ndef get_auth_name(auth: str):\n \"\"\"Get nice name for authentication backend.\"\"\"\n return get_auth_params(auth)[\"name\"]\n", "path": "weblate/accounts/templatetags/authnames.py"}]} | 1,719 | 183 |
gh_patches_debug_33535 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-479 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dataset edit: The country functionality should not be in the optional section and should work as it does on the Dataset Creation page
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_theme/ckanext/hdx_theme/licenses.py`
Content:
```
1 '''
2 Created on May 12, 2014
3
4 @author: alexandru-m-g
5 '''
6
7 from ckan.common import _
8 from ckan.model.license import DefaultLicense
9
10
11 class LicenseCreativeCommonsIntergovernmentalOrgs(DefaultLicense):
12 # domain_content = True
13 # domain_data = True
14 id = "cc-by-igo"
15 is_okd_compliant = False
16 url = "http://creativecommons.org/licenses/by/3.0/igo/legalcode"
17
18 @property
19 def title(self):
20 return _("Creative Commons Attribution for Intergovernmental Organisations")
21
22 class LicenseCreativeCommonsNoDerives(DefaultLicense):
23 # domain_content = True
24 # domain_data = True
25 id = "cc-by-nd"
26 is_okd_compliant = False
27 url = "http://creativecommons.org/licenses/by-nd/3.0/legalcode"
28
29 @property
30 def title(self):
31 return _("Creative Commons Attribution-NoDerives")
32
33 class LicenseOtherPublicDomainNoRestrictions(DefaultLicense):
34 # domain_content = True
35 id = "other-pd-nr"
36 is_generic = True
37 is_okd_compliant = True
38
39 @property
40 def title(self):
41 return _("Public Domain / No Restrictions")
42
43 class LicenseHdxMultiple(DefaultLicense):
44 # domain_content = True
45 id = "hdx-multi"
46 # is_generic = True
47 # is_okd_compliant = True
48
49 @property
50 def title(self):
51 return _("Multiple Licenses")
52
53 class LicenseHdxOther(DefaultLicense):
54 # domain_content = True
55 id = "hdx-other"
56 # is_generic = True
57 # is_okd_compliant = True
58
59 @property
60 def title(self):
61 return _("Other")
62
63
```
Path: `ckanext-hdx_theme/ckanext/hdx_theme/plugin.py`
Content:
```
1 import ckanext.hdx_theme.licenses as hdx_licenses
2
3 import ckan.plugins as plugins
4 import ckan.plugins.toolkit as toolkit
5 import ckan.model.package as package
6 import ckan.model.license as license
7 import version
8
9 import ckanext.hdx_theme.caching as caching
10
11
12
13 def run_on_startup():
14 _generate_license_list()
15
16 caching.cached_get_group_package_stuff()
17
18
19 def _generate_license_list():
20 package.Package._license_register = license.LicenseRegister()
21 package.Package._license_register.licenses = [
22 license.License(hdx_licenses.LicenseCreativeCommonsIntergovernmentalOrgs()),
23 license.License(license.LicenseCreativeCommonsAttribution()),
24 license.License(license.LicenseCreativeCommonsAttributionShareAlike()),
25 license.License(hdx_licenses.LicenseCreativeCommonsNoDerives()),
26 license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),
27 license.License(hdx_licenses.LicenseHdxMultiple()),
28 license.License(hdx_licenses.LicenseHdxOther())
29 ]
30
31 class HDXThemePlugin(plugins.SingletonPlugin):
32 plugins.implements(plugins.IConfigurer)
33 plugins.implements(plugins.IRoutes, inherit=True)
34 plugins.implements(plugins.ITemplateHelpers)
35 plugins.implements(plugins.IActions)
36 plugins.implements(plugins.IGroupController, inherit=True)
37 plugins.implements(plugins.IMiddleware, inherit=True)
38
39 def update_config(self, config):
40 toolkit.add_template_directory(config, 'templates')
41 toolkit.add_public_directory(config, 'public')
42 toolkit.add_resource('fanstatic', 'hdx_theme')
43
44
45 def before_map(self, map):
46 map.connect('home', '/', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='index')
47 map.connect('/count/dataset', controller='ckanext.hdx_theme.count:CountController', action='dataset')
48 map.connect('/count/country', controller='ckanext.hdx_theme.count:CountController', action='country')
49 map.connect('/count/source', controller='ckanext.hdx_theme.count:CountController', action='source')
50 map.connect('/user/logged_in', controller='ckanext.hdx_theme.login:LoginController', action='logged_in')
51 map.connect('/contribute', controller='ckanext.hdx_theme.login:LoginController', action='contribute')
52
53 map.connect('/count/test', controller='ckanext.hdx_theme.count:CountController', action='test')
54
55 return map
56
57 def create(self, entity):
58 caching.invalidate_group_caches()
59
60 def edit(self, entity):
61 caching.invalidate_group_caches()
62
63 def get_helpers(self):
64 from ckanext.hdx_theme import helpers as hdx_helpers
65 return {
66 'is_downloadable': hdx_helpers.is_downloadable,
67 'get_facet_items_dict':hdx_helpers.get_facet_items_dict,
68 'get_last_modifier_user': hdx_helpers.get_last_modifier_user,
69 'get_filtered_params_list':hdx_helpers.get_filtered_params_list,
70 'get_last_revision_package':hdx_helpers.get_last_revision_package,
71 'get_last_modifier_user':hdx_helpers.get_last_modifier_user,
72 'get_last_revision_group':hdx_helpers.get_last_revision_group,
73 'get_group_followers':hdx_helpers.get_group_followers,
74 'get_group_members':hdx_helpers.get_group_members,
75 'markdown_extract_strip':hdx_helpers.markdown_extract_strip,
76 'render_date_from_concat_str':hdx_helpers.render_date_from_concat_str,
77 'hdx_version':hdx_helpers.hdx_version,
78 'hdx_build_nav_icon_with_message':hdx_helpers.hdx_build_nav_icon_with_message,
79 'hdx_num_of_new_related_items':hdx_helpers.hdx_num_of_new_related_items
80 }
81
82 def get_actions(self):
83 from ckanext.hdx_theme import actions as hdx_actions
84 return {
85 'organization_list_for_user':hdx_actions.organization_list_for_user,
86 'cached_group_list': hdx_actions.cached_group_list
87
88 }
89
90 def make_middleware(self, app, config):
91 run_on_startup()
92 return app
93
94
95
96
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py b/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py
@@ -19,16 +19,16 @@
def title(self):
return _("Creative Commons Attribution for Intergovernmental Organisations")
-class LicenseCreativeCommonsNoDerives(DefaultLicense):
+#class LicenseCreativeCommonsNoDerives(DefaultLicense):
# domain_content = True
# domain_data = True
- id = "cc-by-nd"
- is_okd_compliant = False
- url = "http://creativecommons.org/licenses/by-nd/3.0/legalcode"
+# id = "cc-by-nd"
+# is_okd_compliant = False
+# url = "http://creativecommons.org/licenses/by-nd/3.0/legalcode"
- @property
- def title(self):
- return _("Creative Commons Attribution-NoDerives")
+# @property
+# def title(self):
+# return _("Creative Commons Attribution-NoDerives")
class LicenseOtherPublicDomainNoRestrictions(DefaultLicense):
# domain_content = True
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py
@@ -21,7 +21,6 @@
license.License(hdx_licenses.LicenseCreativeCommonsIntergovernmentalOrgs()),
license.License(license.LicenseCreativeCommonsAttribution()),
license.License(license.LicenseCreativeCommonsAttributionShareAlike()),
- license.License(hdx_licenses.LicenseCreativeCommonsNoDerives()),
license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),
license.License(hdx_licenses.LicenseHdxMultiple()),
license.License(hdx_licenses.LicenseHdxOther())
| {"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py b/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/licenses.py\n@@ -19,16 +19,16 @@\n def title(self):\n return _(\"Creative Commons Attribution for Intergovernmental Organisations\")\n \n-class LicenseCreativeCommonsNoDerives(DefaultLicense):\n+#class LicenseCreativeCommonsNoDerives(DefaultLicense):\n # domain_content = True\n # domain_data = True\n- id = \"cc-by-nd\"\n- is_okd_compliant = False\n- url = \"http://creativecommons.org/licenses/by-nd/3.0/legalcode\"\n+# id = \"cc-by-nd\"\n+# is_okd_compliant = False\n+# url = \"http://creativecommons.org/licenses/by-nd/3.0/legalcode\"\n \n- @property\n- def title(self):\n- return _(\"Creative Commons Attribution-NoDerives\")\n+# @property\n+# def title(self):\n+# return _(\"Creative Commons Attribution-NoDerives\")\n \n class LicenseOtherPublicDomainNoRestrictions(DefaultLicense):\n # domain_content = True\ndiff --git a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/plugin.py\n@@ -21,7 +21,6 @@\n license.License(hdx_licenses.LicenseCreativeCommonsIntergovernmentalOrgs()),\n license.License(license.LicenseCreativeCommonsAttribution()),\n license.License(license.LicenseCreativeCommonsAttributionShareAlike()),\n- license.License(hdx_licenses.LicenseCreativeCommonsNoDerives()),\n license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),\n license.License(hdx_licenses.LicenseHdxMultiple()),\n license.License(hdx_licenses.LicenseHdxOther())\n", "issue": "Dataset edit: The country functionality should not be in optional and should work as it does on the Dataset Creation page\n\n", "before_files": [{"content": "'''\nCreated on May 12, 2014\n\n@author: alexandru-m-g\n'''\n\nfrom ckan.common import _\nfrom ckan.model.license import DefaultLicense\n\n\nclass LicenseCreativeCommonsIntergovernmentalOrgs(DefaultLicense):\n# domain_content = True\n# domain_data = True\n id = \"cc-by-igo\"\n is_okd_compliant = False\n url = \"http://creativecommons.org/licenses/by/3.0/igo/legalcode\"\n\n @property\n def title(self):\n return _(\"Creative Commons Attribution for Intergovernmental Organisations\")\n \nclass LicenseCreativeCommonsNoDerives(DefaultLicense):\n# domain_content = True\n# domain_data = True\n id = \"cc-by-nd\"\n is_okd_compliant = False\n url = \"http://creativecommons.org/licenses/by-nd/3.0/legalcode\"\n\n @property\n def title(self):\n return _(\"Creative Commons Attribution-NoDerives\")\n \nclass LicenseOtherPublicDomainNoRestrictions(DefaultLicense):\n# domain_content = True\n id = \"other-pd-nr\"\n is_generic = True\n is_okd_compliant = True\n\n @property\n def title(self):\n return _(\"Public Domain / No Restrictions\")\n\nclass LicenseHdxMultiple(DefaultLicense):\n# domain_content = True\n id = \"hdx-multi\"\n# is_generic = True\n# is_okd_compliant = True\n\n @property\n def title(self):\n return _(\"Multiple Licenses\")\n\nclass LicenseHdxOther(DefaultLicense):\n# domain_content = True\n id = \"hdx-other\"\n# is_generic = True\n# is_okd_compliant = True\n\n @property\n def title(self):\n return _(\"Other\")\n\n ", "path": "ckanext-hdx_theme/ckanext/hdx_theme/licenses.py"}, {"content": "import ckanext.hdx_theme.licenses as hdx_licenses\n\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as 
toolkit\nimport ckan.model.package as package\nimport ckan.model.license as license\nimport version\n\nimport ckanext.hdx_theme.caching as caching\n\n\n\ndef run_on_startup():\n _generate_license_list()\n \n caching.cached_get_group_package_stuff()\n \n\ndef _generate_license_list():\n package.Package._license_register = license.LicenseRegister() \n package.Package._license_register.licenses = [\n license.License(hdx_licenses.LicenseCreativeCommonsIntergovernmentalOrgs()),\n license.License(license.LicenseCreativeCommonsAttribution()),\n license.License(license.LicenseCreativeCommonsAttributionShareAlike()),\n license.License(hdx_licenses.LicenseCreativeCommonsNoDerives()),\n license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),\n license.License(hdx_licenses.LicenseHdxMultiple()),\n license.License(hdx_licenses.LicenseHdxOther())\n ]\n\nclass HDXThemePlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers)\n plugins.implements(plugins.IActions)\n plugins.implements(plugins.IGroupController, inherit=True)\n plugins.implements(plugins.IMiddleware, inherit=True)\n \n def update_config(self, config):\n toolkit.add_template_directory(config, 'templates')\n toolkit.add_public_directory(config, 'public')\n toolkit.add_resource('fanstatic', 'hdx_theme')\n \n\n def before_map(self, map):\n map.connect('home', '/', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='index')\n map.connect('/count/dataset', controller='ckanext.hdx_theme.count:CountController', action='dataset')\n map.connect('/count/country', controller='ckanext.hdx_theme.count:CountController', action='country')\n map.connect('/count/source', controller='ckanext.hdx_theme.count:CountController', action='source')\n map.connect('/user/logged_in', controller='ckanext.hdx_theme.login:LoginController', action='logged_in')\n map.connect('/contribute', controller='ckanext.hdx_theme.login:LoginController', action='contribute')\n \n map.connect('/count/test', controller='ckanext.hdx_theme.count:CountController', action='test')\n \n return map\n \n def create(self, entity):\n caching.invalidate_group_caches()\n\n def edit(self, entity):\n caching.invalidate_group_caches()\n\n def get_helpers(self):\n from ckanext.hdx_theme import helpers as hdx_helpers\n return {\n 'is_downloadable': hdx_helpers.is_downloadable,\n 'get_facet_items_dict':hdx_helpers.get_facet_items_dict,\n 'get_last_modifier_user': hdx_helpers.get_last_modifier_user,\n 'get_filtered_params_list':hdx_helpers.get_filtered_params_list,\n 'get_last_revision_package':hdx_helpers.get_last_revision_package,\n 'get_last_modifier_user':hdx_helpers.get_last_modifier_user,\n 'get_last_revision_group':hdx_helpers.get_last_revision_group,\n 'get_group_followers':hdx_helpers.get_group_followers,\n 'get_group_members':hdx_helpers.get_group_members,\n 'markdown_extract_strip':hdx_helpers.markdown_extract_strip,\n 'render_date_from_concat_str':hdx_helpers.render_date_from_concat_str,\n 'hdx_version':hdx_helpers.hdx_version,\n 'hdx_build_nav_icon_with_message':hdx_helpers.hdx_build_nav_icon_with_message,\n 'hdx_num_of_new_related_items':hdx_helpers.hdx_num_of_new_related_items\n }\n \n def get_actions(self):\n from ckanext.hdx_theme import actions as hdx_actions\n return {\n 'organization_list_for_user':hdx_actions.organization_list_for_user, \n 'cached_group_list': hdx_actions.cached_group_list\n \n }\n \n def make_middleware(self, app, 
config):\n run_on_startup()\n return app\n\n \n \n\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/plugin.py"}], "after_files": [{"content": "'''\nCreated on May 12, 2014\n\n@author: alexandru-m-g\n'''\n\nfrom ckan.common import _\nfrom ckan.model.license import DefaultLicense\n\n\nclass LicenseCreativeCommonsIntergovernmentalOrgs(DefaultLicense):\n# domain_content = True\n# domain_data = True\n id = \"cc-by-igo\"\n is_okd_compliant = False\n url = \"http://creativecommons.org/licenses/by/3.0/igo/legalcode\"\n\n @property\n def title(self):\n return _(\"Creative Commons Attribution for Intergovernmental Organisations\")\n \n#class LicenseCreativeCommonsNoDerives(DefaultLicense):\n# domain_content = True\n# domain_data = True\n# id = \"cc-by-nd\"\n# is_okd_compliant = False\n# url = \"http://creativecommons.org/licenses/by-nd/3.0/legalcode\"\n\n# @property\n# def title(self):\n# return _(\"Creative Commons Attribution-NoDerives\")\n \nclass LicenseOtherPublicDomainNoRestrictions(DefaultLicense):\n# domain_content = True\n id = \"other-pd-nr\"\n is_generic = True\n is_okd_compliant = True\n\n @property\n def title(self):\n return _(\"Public Domain / No Restrictions\")\n\nclass LicenseHdxMultiple(DefaultLicense):\n# domain_content = True\n id = \"hdx-multi\"\n# is_generic = True\n# is_okd_compliant = True\n\n @property\n def title(self):\n return _(\"Multiple Licenses\")\n\nclass LicenseHdxOther(DefaultLicense):\n# domain_content = True\n id = \"hdx-other\"\n# is_generic = True\n# is_okd_compliant = True\n\n @property\n def title(self):\n return _(\"Other\")\n\n ", "path": "ckanext-hdx_theme/ckanext/hdx_theme/licenses.py"}, {"content": "import ckanext.hdx_theme.licenses as hdx_licenses\nfrom beaker.cache import cache_regions\n\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as toolkit\nimport ckan.model.package as package\nimport ckan.model.license as license\nimport version;\n\ncache_regions.update({\n 'hdx_memory_cache':{\n 'expire': 172800, # 2 days\n 'type':'memory',\n 'key_length': 250\n }\n })\n\ndef _generate_license_list():\n package.Package._license_register = license.LicenseRegister() \n package.Package._license_register.licenses = [\n license.License(hdx_licenses.LicenseCreativeCommonsIntergovernmentalOrgs()),\n license.License(license.LicenseCreativeCommonsAttribution()),\n license.License(license.LicenseCreativeCommonsAttributionShareAlike()),\n license.License(hdx_licenses.LicenseOtherPublicDomainNoRestrictions()),\n license.License(hdx_licenses.LicenseHdxMultiple()),\n license.License(hdx_licenses.LicenseHdxOther())\n ]\n\nclass HDXThemePlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers)\n plugins.implements(plugins.IActions)\n\n def update_config(self, config):\n toolkit.add_template_directory(config, 'templates')\n toolkit.add_public_directory(config, 'public')\n toolkit.add_resource('fanstatic', 'hdx_theme')\n\n def before_map(self, map):\n map.connect('home', '/', controller='ckanext.hdx_theme.splash_page:SplashPageController', action='index')\n map.connect('/count/dataset', controller='ckanext.hdx_theme.count:CountController', action='dataset')\n map.connect('/count/country', controller='ckanext.hdx_theme.count:CountController', action='country')\n map.connect('/count/source', controller='ckanext.hdx_theme.count:CountController', action='source')\n map.connect('/user/logged_in', controller='ckanext.hdx_theme.login:LoginController', 
action='logged_in')\n map.connect('/contribute', controller='ckanext.hdx_theme.login:LoginController', action='contribute')\n \n map.connect('/count/test', controller='ckanext.hdx_theme.count:CountController', action='test')\n \n # this is actually a HACK to force the customization of the license list.\n # the license list should be changed to be based on a JSON rest service\n _generate_license_list()\n \n return map\n\n def get_helpers(self):\n from ckanext.hdx_theme import helpers as hdx_helpers\n return {\n 'is_downloadable': hdx_helpers.is_downloadable,\n 'get_facet_items_dict':hdx_helpers.get_facet_items_dict,\n 'get_last_modifier_user': hdx_helpers.get_last_modifier_user,\n 'get_filtered_params_list':hdx_helpers.get_filtered_params_list,\n 'get_last_revision_package':hdx_helpers.get_last_revision_package,\n 'get_last_modifier_user':hdx_helpers.get_last_modifier_user,\n 'get_last_revision_group':hdx_helpers.get_last_revision_group,\n 'get_group_followers':hdx_helpers.get_group_followers,\n 'get_group_members':hdx_helpers.get_group_members,\n 'markdown_extract_strip':hdx_helpers.markdown_extract_strip,\n 'render_date_from_concat_str':hdx_helpers.render_date_from_concat_str,\n 'hdx_version':hdx_helpers.hdx_version,\n 'hdx_build_nav_icon_with_message':hdx_helpers.hdx_build_nav_icon_with_message,\n 'hdx_num_of_new_related_items':hdx_helpers.hdx_num_of_new_related_items\n }\n \n def get_actions(self):\n from ckanext.hdx_theme import actions as hdx_actions\n return {\n 'organization_list_for_user':hdx_actions.organization_list_for_user\n }\n \n \n\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/plugin.py"}]} | 1,920 | 493 |
gh_patches_debug_18558 | rasdani/github-patches | git_diff | litestar-org__litestar-2592 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Caching route handlers with the same path but different methods leads to overwriting of cache
### Description
Caching route handlers with the same path (`/` in this example) but different methods (`GET` and `POST` in this example) leads to overwriting of cache. The comments above the assert statements illustrate the current vs expected behavior.
This is related to https://github.com/litestar-org/litestar/issues/2573 and is not restricted to just `OPTIONS` and `GET`.
### URL to code causing the issue
_No response_
### MCVE
```python
from litestar import get, post
from litestar.testing import create_test_client
@get(cache=10)
async def something() -> str:
return "text"
@post(cache=10)
async def something_post() -> str:
return "text_post"
with create_test_client([something, something_post]) as client:
response = client.post("")
assert response.status_code == 201
assert response.text == "text_post"
# these shouldn't pass, but they do
response = client.get("")
assert response.status_code == 201
assert response.text == "text_post"
# these should pass, but they don't
response = client.get("")
assert response.status_code == 200
assert response.text == "text"
```
### Steps to reproduce
```bash
1. Run the code
2. The second set of asserts should fail, the third set of asserts should pass
```
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
GitHub - main (fd06486e2ad4ed0a41636659fec4f093a09e3dd0) as of creating this issue
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
<!-- POLAR PLEDGE BADGE START -->
---
> [!NOTE]
> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and
> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.
>
> Check out all issues funded or available for funding [on our Polar.sh Litestar dashboard](https://polar.sh/litestar-org)
> * If you would like to see an issue prioritized, make a pledge towards it!
> * We receive the pledge once the issue is completed & verified
> * This, along with engagement in the community, helps us know which features are a priority to our users.
<a href="https://polar.sh/litestar-org/litestar/issues/2588">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/litestar-org/litestar/issues/2588/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/litestar-org/litestar/issues/2588/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/config/response_cache.py`
Content:
```
1 from __future__ import annotations
2
3 from dataclasses import dataclass, field
4 from typing import TYPE_CHECKING, Any, Callable, final
5 from urllib.parse import urlencode
6
7 from litestar.status_codes import (
8 HTTP_200_OK,
9 HTTP_300_MULTIPLE_CHOICES,
10 HTTP_301_MOVED_PERMANENTLY,
11 HTTP_308_PERMANENT_REDIRECT,
12 )
13
14 if TYPE_CHECKING:
15 from litestar import Litestar
16 from litestar.connection import Request
17 from litestar.stores.base import Store
18 from litestar.types import CacheKeyBuilder, HTTPScope
19
20 __all__ = ("ResponseCacheConfig", "default_cache_key_builder", "CACHE_FOREVER")
21
22
23 @final
24 class CACHE_FOREVER: # noqa: N801
25 """Sentinel value indicating that a cached response should be stored without an expiration, explicitly skipping the
26 default expiration
27 """
28
29
30 def default_cache_key_builder(request: Request[Any, Any, Any]) -> str:
31 """Given a request object, returns a cache key by combining the path with the sorted query params.
32
33 Args:
34 request: request used to generate cache key.
35
36 Returns:
37 A combination of url path and query parameters
38 """
39 query_params: list[tuple[str, Any]] = list(request.query_params.dict().items())
40 query_params.sort(key=lambda x: x[0])
41 return request.url.path + urlencode(query_params, doseq=True)
42
43
44 def default_do_cache_predicate(_: HTTPScope, status_code: int) -> bool:
45 """Given a status code, returns a boolean indicating whether the response should be cached.
46
47 Args:
48 _: ASGI scope.
49 status_code: status code of the response.
50
51 Returns:
52 A boolean indicating whether the response should be cached.
53 """
54 return HTTP_200_OK <= status_code < HTTP_300_MULTIPLE_CHOICES or status_code in (
55 HTTP_301_MOVED_PERMANENTLY,
56 HTTP_308_PERMANENT_REDIRECT,
57 )
58
59
60 @dataclass
61 class ResponseCacheConfig:
62 """Configuration for response caching.
63
64 To enable response caching, pass an instance of this class to :class:`Litestar <.app.Litestar>` using the
65 ``response_cache_config`` key.
66 """
67
68 default_expiration: int | None = 60
69 """Default cache expiration in seconds used when a route handler is configured with ``cache=True``."""
70 key_builder: CacheKeyBuilder = field(default=default_cache_key_builder)
71 """:class:`CacheKeyBuilder <.types.CacheKeyBuilder>`. Defaults to :func:`default_cache_key_builder`."""
72 store: str = "response_cache"
73 """Name of the :class:`Store <.stores.base.Store>` to use."""
74 cache_response_filter: Callable[[HTTPScope, int], bool] = field(default=default_do_cache_predicate)
75 """A callable that receives connection scope and a status code, and returns a boolean indicating whether the
76 response should be cached."""
77
78 def get_store_from_app(self, app: Litestar) -> Store:
79 """Get the store defined in :attr:`store` from an :class:`Litestar <.app.Litestar>` instance."""
80 return app.stores.get(self.store)
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/config/response_cache.py b/litestar/config/response_cache.py
--- a/litestar/config/response_cache.py
+++ b/litestar/config/response_cache.py
@@ -28,7 +28,8 @@
def default_cache_key_builder(request: Request[Any, Any, Any]) -> str:
- """Given a request object, returns a cache key by combining the path with the sorted query params.
+ """Given a request object, returns a cache key by combining
+ the request method and path with the sorted query params.
Args:
request: request used to generate cache key.
@@ -38,7 +39,7 @@
"""
query_params: list[tuple[str, Any]] = list(request.query_params.dict().items())
query_params.sort(key=lambda x: x[0])
- return request.url.path + urlencode(query_params, doseq=True)
+ return request.method + request.url.path + urlencode(query_params, doseq=True)
def default_do_cache_predicate(_: HTTPScope, status_code: int) -> bool:
| {"golden_diff": "diff --git a/litestar/config/response_cache.py b/litestar/config/response_cache.py\n--- a/litestar/config/response_cache.py\n+++ b/litestar/config/response_cache.py\n@@ -28,7 +28,8 @@\n \n \n def default_cache_key_builder(request: Request[Any, Any, Any]) -> str:\n- \"\"\"Given a request object, returns a cache key by combining the path with the sorted query params.\n+ \"\"\"Given a request object, returns a cache key by combining\n+ the request method and path with the sorted query params.\n \n Args:\n request: request used to generate cache key.\n@@ -38,7 +39,7 @@\n \"\"\"\n query_params: list[tuple[str, Any]] = list(request.query_params.dict().items())\n query_params.sort(key=lambda x: x[0])\n- return request.url.path + urlencode(query_params, doseq=True)\n+ return request.method + request.url.path + urlencode(query_params, doseq=True)\n \n \n def default_do_cache_predicate(_: HTTPScope, status_code: int) -> bool:\n", "issue": "Bug: Caching route handlers with the same path but different methods leads to overwriting of cache\n### Description\r\n\r\nCaching route handlers with the same path (`/` in this example) but different methods (`GET` and `POST` in this example) leads to overwriting of cache. The comments above the assert statements illustrate the current vs expected behavior.\r\n\r\nThis is related to https://github.com/litestar-org/litestar/issues/2573 and is not restricted to just `OPTIONS` and `GET`\r\n\r\n### URL to code causing the issue\r\n\r\n_No response_\r\n\r\n### MCVE\r\n\r\n```python\r\nfrom litestar import get, post\r\nfrom litestar.testing import create_test_client\r\n\r\n\r\n@get(cache=10)\r\nasync def something() -> str:\r\n return \"text\"\r\n\r\n\r\n@post(cache=10)\r\nasync def something_post() -> str:\r\n return \"text_post\"\r\n\r\n\r\nwith create_test_client([something, something_post]) as client:\r\n response = client.post(\"\")\r\n assert response.status_code == 201\r\n assert response.text == \"text_post\"\r\n\r\n # these shouldn't pass, but they do\r\n response = client.get(\"\")\r\n assert response.status_code == 201\r\n assert response.text == \"text_post\"\r\n\r\n # these should pass, but they don't\r\n response = client.get(\"\")\r\n assert response.status_code == 200\r\n assert response.text == \"text\"\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\n```bash\r\n1. Run the code\r\n2. 
The second set of asserts should fail, the third set of asserts should pass\r\n```\r\n\r\n\r\n### Screenshots\r\n\r\n```bash\r\n\"\"\r\n```\r\n\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### Litestar Version\r\n\r\nGitHub - main (fd06486e2ad4ed0a41636659fec4f093a09e3dd0) as of creating this issue\r\n\r\n### Platform\r\n\r\n- [ ] Linux\r\n- [X] Mac\r\n- [ ] Windows\r\n- [ ] Other (Please specify in the description above)\r\n\r\n<!-- POLAR PLEDGE BADGE START -->\r\n---\r\n> [!NOTE] \r\n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \r\n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\r\n>\r\n> Check out all issues funded or available for funding [on our Polar.sh Litestar dashboard](https://polar.sh/litestar-org)\r\n> * If you would like to see an issue prioritized, make a pledge towards it!\r\n> * We receive the pledge once the issue is completed & verified\r\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\r\n\r\n<a href=\"https://polar.sh/litestar-org/litestar/issues/2588\">\r\n<picture>\r\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/2588/pledge.svg?darkmode=1\">\r\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/2588/pledge.svg\">\r\n</picture>\r\n</a>\r\n<!-- POLAR PLEDGE BADGE END -->\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom dataclasses import dataclass, field\nfrom typing import TYPE_CHECKING, Any, Callable, final\nfrom urllib.parse import urlencode\n\nfrom litestar.status_codes import (\n HTTP_200_OK,\n HTTP_300_MULTIPLE_CHOICES,\n HTTP_301_MOVED_PERMANENTLY,\n HTTP_308_PERMANENT_REDIRECT,\n)\n\nif TYPE_CHECKING:\n from litestar import Litestar\n from litestar.connection import Request\n from litestar.stores.base import Store\n from litestar.types import CacheKeyBuilder, HTTPScope\n\n__all__ = (\"ResponseCacheConfig\", \"default_cache_key_builder\", \"CACHE_FOREVER\")\n\n\n@final\nclass CACHE_FOREVER: # noqa: N801\n \"\"\"Sentinel value indicating that a cached response should be stored without an expiration, explicitly skipping the\n default expiration\n \"\"\"\n\n\ndef default_cache_key_builder(request: Request[Any, Any, Any]) -> str:\n \"\"\"Given a request object, returns a cache key by combining the path with the sorted query params.\n\n Args:\n request: request used to generate cache key.\n\n Returns:\n A combination of url path and query parameters\n \"\"\"\n query_params: list[tuple[str, Any]] = list(request.query_params.dict().items())\n query_params.sort(key=lambda x: x[0])\n return request.url.path + urlencode(query_params, doseq=True)\n\n\ndef default_do_cache_predicate(_: HTTPScope, status_code: int) -> bool:\n \"\"\"Given a status code, returns a boolean indicating whether the response should be cached.\n\n Args:\n _: ASGI scope.\n status_code: status code of the response.\n\n Returns:\n A boolean indicating whether the response should be cached.\n \"\"\"\n return HTTP_200_OK <= status_code < HTTP_300_MULTIPLE_CHOICES or status_code in (\n HTTP_301_MOVED_PERMANENTLY,\n HTTP_308_PERMANENT_REDIRECT,\n )\n\n\n@dataclass\nclass ResponseCacheConfig:\n \"\"\"Configuration for response caching.\n\n To enable response caching, pass an instance of this class to :class:`Litestar <.app.Litestar>` using the\n 
``response_cache_config`` key.\n \"\"\"\n\n default_expiration: int | None = 60\n \"\"\"Default cache expiration in seconds used when a route handler is configured with ``cache=True``.\"\"\"\n key_builder: CacheKeyBuilder = field(default=default_cache_key_builder)\n \"\"\":class:`CacheKeyBuilder <.types.CacheKeyBuilder>`. Defaults to :func:`default_cache_key_builder`.\"\"\"\n store: str = \"response_cache\"\n \"\"\"Name of the :class:`Store <.stores.base.Store>` to use.\"\"\"\n cache_response_filter: Callable[[HTTPScope, int], bool] = field(default=default_do_cache_predicate)\n \"\"\"A callable that receives connection scope and a status code, and returns a boolean indicating whether the\n response should be cached.\"\"\"\n\n def get_store_from_app(self, app: Litestar) -> Store:\n \"\"\"Get the store defined in :attr:`store` from an :class:`Litestar <.app.Litestar>` instance.\"\"\"\n return app.stores.get(self.store)\n", "path": "litestar/config/response_cache.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom dataclasses import dataclass, field\nfrom typing import TYPE_CHECKING, Any, Callable, final\nfrom urllib.parse import urlencode\n\nfrom litestar.status_codes import (\n HTTP_200_OK,\n HTTP_300_MULTIPLE_CHOICES,\n HTTP_301_MOVED_PERMANENTLY,\n HTTP_308_PERMANENT_REDIRECT,\n)\n\nif TYPE_CHECKING:\n from litestar import Litestar\n from litestar.connection import Request\n from litestar.stores.base import Store\n from litestar.types import CacheKeyBuilder, HTTPScope\n\n__all__ = (\"ResponseCacheConfig\", \"default_cache_key_builder\", \"CACHE_FOREVER\")\n\n\n@final\nclass CACHE_FOREVER: # noqa: N801\n \"\"\"Sentinel value indicating that a cached response should be stored without an expiration, explicitly skipping the\n default expiration\n \"\"\"\n\n\ndef default_cache_key_builder(request: Request[Any, Any, Any]) -> str:\n \"\"\"Given a request object, returns a cache key by combining\n the request method and path with the sorted query params.\n\n Args:\n request: request used to generate cache key.\n\n Returns:\n A combination of url path and query parameters\n \"\"\"\n query_params: list[tuple[str, Any]] = list(request.query_params.dict().items())\n query_params.sort(key=lambda x: x[0])\n return request.method + request.url.path + urlencode(query_params, doseq=True)\n\n\ndef default_do_cache_predicate(_: HTTPScope, status_code: int) -> bool:\n \"\"\"Given a status code, returns a boolean indicating whether the response should be cached.\n\n Args:\n _: ASGI scope.\n status_code: status code of the response.\n\n Returns:\n A boolean indicating whether the response should be cached.\n \"\"\"\n return HTTP_200_OK <= status_code < HTTP_300_MULTIPLE_CHOICES or status_code in (\n HTTP_301_MOVED_PERMANENTLY,\n HTTP_308_PERMANENT_REDIRECT,\n )\n\n\n@dataclass\nclass ResponseCacheConfig:\n \"\"\"Configuration for response caching.\n\n To enable response caching, pass an instance of this class to :class:`Litestar <.app.Litestar>` using the\n ``response_cache_config`` key.\n \"\"\"\n\n default_expiration: int | None = 60\n \"\"\"Default cache expiration in seconds used when a route handler is configured with ``cache=True``.\"\"\"\n key_builder: CacheKeyBuilder = field(default=default_cache_key_builder)\n \"\"\":class:`CacheKeyBuilder <.types.CacheKeyBuilder>`. 
Defaults to :func:`default_cache_key_builder`.\"\"\"\n store: str = \"response_cache\"\n \"\"\"Name of the :class:`Store <.stores.base.Store>` to use.\"\"\"\n cache_response_filter: Callable[[HTTPScope, int], bool] = field(default=default_do_cache_predicate)\n \"\"\"A callable that receives connection scope and a status code, and returns a boolean indicating whether the\n response should be cached.\"\"\"\n\n def get_store_from_app(self, app: Litestar) -> Store:\n \"\"\"Get the store defined in :attr:`store` from an :class:`Litestar <.app.Litestar>` instance.\"\"\"\n return app.stores.get(self.store)\n", "path": "litestar/config/response_cache.py"}]} | 1,849 | 231 |
gh_patches_debug_696 | rasdani/github-patches | git_diff | boto__botocore-658 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pin jmespath dependency version
Can this library pin its jmespath dependency to a specific version? Currently, it depends on the development branch of the jmespath GitHub repo - which is neither stable nor deterministic.
Currently, this project's setup.py requires version 0.7.1 but the upstream GitHub repo/branch does not deliver that version - so this project's dependency graph is disconnected.
This can result in runtime errors for downstream consumers - like my organization did today.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import botocore
3 import sys
4
5 from setuptools import setup, find_packages
6
7
8 requires = ['jmespath==0.7.1',
9 'python-dateutil>=2.1,<3.0.0',
10 'docutils>=0.10']
11
12
13 if sys.version_info[:2] == (2, 6):
14 # For python2.6 we have a few other dependencies.
15 # First we need an ordered dictionary so we use the
16 # 2.6 backport.
17 requires.append('ordereddict==1.1')
18 # Then we need simplejson. This is because we need
19 # a json version that allows us to specify we want to
20 # use an ordereddict instead of a normal dict for the
21 # JSON objects. The 2.7 json module has this. For 2.6
22 # we need simplejson.
23 requires.append('simplejson==3.3.0')
24
25
26 setup(
27 name='botocore',
28 version=botocore.__version__,
29 description='Low-level, data-driven core of boto 3.',
30 long_description=open('README.rst').read(),
31 author='Amazon Web Services',
32 url='https://github.com/boto/botocore',
33 scripts=[],
34 packages=find_packages(exclude=['tests*']),
35 package_data={'botocore': ['data/*.json', 'data/*/*.json'],
36 'botocore.vendored.requests': ['*.pem']},
37 include_package_data=True,
38 install_requires=requires,
39 extras_require={
40 ':python_version=="2.6"': [
41 'ordereddict==1.1',
42 'simplejson==3.3.0',
43 ]
44 },
45 license="Apache License 2.0",
46 classifiers=(
47 'Development Status :: 5 - Production/Stable',
48 'Intended Audience :: Developers',
49 'Intended Audience :: System Administrators',
50 'Natural Language :: English',
51 'License :: OSI Approved :: Apache Software License',
52 'Programming Language :: Python',
53 'Programming Language :: Python :: 2.6',
54 'Programming Language :: Python :: 2.7',
55 'Programming Language :: Python :: 3',
56 'Programming Language :: Python :: 3.3',
57 'Programming Language :: Python :: 3.4',
58 ),
59 )
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -5,7 +5,7 @@
from setuptools import setup, find_packages
-requires = ['jmespath==0.7.1',
+requires = ['jmespath>=0.7.1,<1.0.0',
'python-dateutil>=2.1,<3.0.0',
'docutils>=0.10']
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -5,7 +5,7 @@\n from setuptools import setup, find_packages\n \n \n-requires = ['jmespath==0.7.1',\n+requires = ['jmespath>=0.7.1,<1.0.0',\n 'python-dateutil>=2.1,<3.0.0',\n 'docutils>=0.10']\n", "issue": "Pin jmespatch dependency version\nCan this library pin its jmespath dependency to a specific version? Currently, it depends on the development branch of the jmespath GitHub repo - which is not stable nor deterministic.\n\nCurrently, this project's setup.py requires version 0.7.1 but the upstream GitHub repo/branch does not deliver that version - so this project's dependency graph is disconnected.\n\nThis can result in runtime errors for downstream consumers - like my organization did today.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport botocore\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nrequires = ['jmespath==0.7.1',\n 'python-dateutil>=2.1,<3.0.0',\n 'docutils>=0.10']\n\n\nif sys.version_info[:2] == (2, 6):\n # For python2.6 we have a few other dependencies.\n # First we need an ordered dictionary so we use the\n # 2.6 backport.\n requires.append('ordereddict==1.1')\n # Then we need simplejson. This is because we need\n # a json version that allows us to specify we want to\n # use an ordereddict instead of a normal dict for the\n # JSON objects. The 2.7 json module has this. For 2.6\n # we need simplejson.\n requires.append('simplejson==3.3.0')\n\n\nsetup(\n name='botocore',\n version=botocore.__version__,\n description='Low-level, data-driven core of boto 3.',\n long_description=open('README.rst').read(),\n author='Amazon Web Services',\n url='https://github.com/boto/botocore',\n scripts=[],\n packages=find_packages(exclude=['tests*']),\n package_data={'botocore': ['data/*.json', 'data/*/*.json'],\n 'botocore.vendored.requests': ['*.pem']},\n include_package_data=True,\n install_requires=requires,\n extras_require={\n ':python_version==\"2.6\"': [\n 'ordereddict==1.1',\n 'simplejson==3.3.0',\n ]\n },\n license=\"Apache License 2.0\",\n classifiers=(\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n ),\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport botocore\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nrequires = ['jmespath>=0.7.1,<1.0.0',\n 'python-dateutil>=2.1,<3.0.0',\n 'docutils>=0.10']\n\n\nif sys.version_info[:2] == (2, 6):\n # For python2.6 we have a few other dependencies.\n # First we need an ordered dictionary so we use the\n # 2.6 backport.\n requires.append('ordereddict==1.1')\n # Then we need simplejson. This is because we need\n # a json version that allows us to specify we want to\n # use an ordereddict instead of a normal dict for the\n # JSON objects. The 2.7 json module has this. 
For 2.6\n # we need simplejson.\n requires.append('simplejson==3.3.0')\n\n\nsetup(\n name='botocore',\n version=botocore.__version__,\n description='Low-level, data-driven core of boto 3.',\n long_description=open('README.rst').read(),\n author='Amazon Web Services',\n url='https://github.com/boto/botocore',\n scripts=[],\n packages=find_packages(exclude=['tests*']),\n package_data={'botocore': ['data/*.json', 'data/*/*.json'],\n 'botocore.vendored.requests': ['*.pem']},\n include_package_data=True,\n install_requires=requires,\n extras_require={\n ':python_version==\"2.6\"': [\n 'ordereddict==1.1',\n 'simplejson==3.3.0',\n ]\n },\n license=\"Apache License 2.0\",\n classifiers=(\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n ),\n)\n", "path": "setup.py"}]} | 973 | 99 |
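For context on the patched specifier, a quick sketch of which jmespath releases PEP 440 accepts under the new range — this uses the third-party `packaging` library, which is an assumption for illustration and not a botocore dependency:

```python
# Sketch: evaluate candidate versions against the patched requirement.
from packaging.specifiers import SpecifierSet  # pip install packaging

spec = SpecifierSet(">=0.7.1,<1.0.0")  # the range the patch introduces

for candidate in ["0.7.0", "0.7.1", "0.9.5", "1.0.0"]:
    # contains() applies PEP 440 rules, so only released versions in range pass.
    print(candidate, spec.contains(candidate))
# 0.7.0 False / 0.7.1 True / 0.9.5 True / 1.0.0 False
```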
gh_patches_debug_21576 | rasdani/github-patches | git_diff | shuup__shuup-1707 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Admin: Product delete returns 404
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shuup/admin/modules/products/views/delete.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This file is part of Shuup.
3 #
4 # Copyright (c) 2012-2019, Shoop Commerce Ltd. All rights reserved.
5 #
6 # This source code is licensed under the OSL-3.0 license found in the
7 # LICENSE file in the root directory of this source tree.
8
9 from __future__ import unicode_literals
10
11 from django.contrib import messages
12 from django.core.urlresolvers import reverse
13 from django.http.response import HttpResponseRedirect
14 from django.utils.translation import ugettext as _
15 from django.views.generic import DetailView
16
17 from shuup.admin.utils.urls import get_model_url
18 from shuup.core.models import Product
19
20
21 class ProductDeleteView(DetailView):
22 model = Product
23 context_object_name = "product"
24
25 def get(self, request, *args, **kwargs):
26 return HttpResponseRedirect(get_model_url(self.get_object(), shop=self.request.shop))
27
28 def post(self, request, *args, **kwargs):
29 product = self.get_object()
30 product.soft_delete(user=request.user)
31 messages.success(request, _(u"%s has been marked deleted.") % product)
32 return HttpResponseRedirect(reverse("shuup_admin:shop_product.list"))
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shuup/admin/modules/products/views/delete.py b/shuup/admin/modules/products/views/delete.py
--- a/shuup/admin/modules/products/views/delete.py
+++ b/shuup/admin/modules/products/views/delete.py
@@ -15,18 +15,19 @@
from django.views.generic import DetailView
from shuup.admin.utils.urls import get_model_url
-from shuup.core.models import Product
+from shuup.core.models import ShopProduct
class ProductDeleteView(DetailView):
- model = Product
+ model = ShopProduct
context_object_name = "product"
def get(self, request, *args, **kwargs):
- return HttpResponseRedirect(get_model_url(self.get_object(), shop=self.request.shop))
+ product = self.get_object().product
+ return HttpResponseRedirect(get_model_url(product, shop=self.request.shop))
def post(self, request, *args, **kwargs):
- product = self.get_object()
+ product = self.get_object().product
product.soft_delete(user=request.user)
messages.success(request, _(u"%s has been marked deleted.") % product)
return HttpResponseRedirect(reverse("shuup_admin:shop_product.list"))
| {"golden_diff": "diff --git a/shuup/admin/modules/products/views/delete.py b/shuup/admin/modules/products/views/delete.py\n--- a/shuup/admin/modules/products/views/delete.py\n+++ b/shuup/admin/modules/products/views/delete.py\n@@ -15,18 +15,19 @@\n from django.views.generic import DetailView\n \n from shuup.admin.utils.urls import get_model_url\n-from shuup.core.models import Product\n+from shuup.core.models import ShopProduct\n \n \n class ProductDeleteView(DetailView):\n- model = Product\n+ model = ShopProduct\n context_object_name = \"product\"\n \n def get(self, request, *args, **kwargs):\n- return HttpResponseRedirect(get_model_url(self.get_object(), shop=self.request.shop))\n+ product = self.get_object().product\n+ return HttpResponseRedirect(get_model_url(product, shop=self.request.shop))\n \n def post(self, request, *args, **kwargs):\n- product = self.get_object()\n+ product = self.get_object().product\n product.soft_delete(user=request.user)\n messages.success(request, _(u\"%s has been marked deleted.\") % product)\n return HttpResponseRedirect(reverse(\"shuup_admin:shop_product.list\"))\n", "issue": "Admin: Product delete returns 404\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of Shuup.\n#\n# Copyright (c) 2012-2019, Shoop Commerce Ltd. All rights reserved.\n#\n# This source code is licensed under the OSL-3.0 license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom __future__ import unicode_literals\n\nfrom django.contrib import messages\nfrom django.core.urlresolvers import reverse\nfrom django.http.response import HttpResponseRedirect\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic import DetailView\n\nfrom shuup.admin.utils.urls import get_model_url\nfrom shuup.core.models import Product\n\n\nclass ProductDeleteView(DetailView):\n model = Product\n context_object_name = \"product\"\n\n def get(self, request, *args, **kwargs):\n return HttpResponseRedirect(get_model_url(self.get_object(), shop=self.request.shop))\n\n def post(self, request, *args, **kwargs):\n product = self.get_object()\n product.soft_delete(user=request.user)\n messages.success(request, _(u\"%s has been marked deleted.\") % product)\n return HttpResponseRedirect(reverse(\"shuup_admin:shop_product.list\"))\n", "path": "shuup/admin/modules/products/views/delete.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of Shuup.\n#\n# Copyright (c) 2012-2019, Shoop Commerce Ltd. 
All rights reserved.\n#\n# This source code is licensed under the OSL-3.0 license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom __future__ import unicode_literals\n\nfrom django.contrib import messages\nfrom django.core.urlresolvers import reverse\nfrom django.http.response import HttpResponseRedirect\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic import DetailView\n\nfrom shuup.admin.utils.urls import get_model_url\nfrom shuup.core.models import ShopProduct\n\n\nclass ProductDeleteView(DetailView):\n model = ShopProduct\n context_object_name = \"product\"\n\n def get(self, request, *args, **kwargs):\n product = self.get_object().product\n return HttpResponseRedirect(get_model_url(product, shop=self.request.shop))\n\n def post(self, request, *args, **kwargs):\n product = self.get_object().product\n product.soft_delete(user=request.user)\n messages.success(request, _(u\"%s has been marked deleted.\") % product)\n return HttpResponseRedirect(reverse(\"shuup_admin:shop_product.list\"))\n", "path": "shuup/admin/modules/products/views/delete.py"}]} | 589 | 257 |
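The root cause is a primary-key mismatch: the admin routes by `ShopProduct` pk while the view resolved `Product`, so any pk drift between the two tables produced the 404. A hedged regression-test sketch — `admin_client` is pytest-django's stock fixture, but the `shop_product` fixture, the URL route, and the `deleted` flag are illustrative assumptions, not shuup's actual test suite:

```python
# Hypothetical pytest-django regression test for the patched view.
def test_product_delete_resolves_shop_product(admin_client, shop_product):
    # Post to the delete URL using the ShopProduct pk (route is assumed).
    response = admin_client.post(f"/sa/products/{shop_product.pk}/delete/")
    assert response.status_code == 302  # redirect to the list, not a 404
    shop_product.product.refresh_from_db()
    assert shop_product.product.deleted  # soft-deleted, row still present
```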
gh_patches_debug_9949 | rasdani/github-patches | git_diff | inducer__relate-607 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update pysaml2 to 4.6.0 (or higher) to fix CVE-2017-1000246
* https://nvd.nist.gov/vuln/detail/CVE-2017-1000246
* https://github.com/IdentityPython/pysaml2/issues/417
Currently blocked on https://github.com/knaperek/djangosaml2/issues/129.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from setuptools import setup, find_packages
5
6 # This script (for now) is only intended to install the 'relate' content helper
7 # script. relate-validate. Its use is not needed for (and unrelated to)
8 # deploying RELATE as a web service.
9
10 # Use 'pip install -r requirements.txt' first to install prerequisites.
11
12 setup(name="relate-courseware",
13 version="2016.1",
14 description="RELATE courseware",
15 long_description=open("README.rst", "rt").read(),
16
17 scripts=["bin/relate"],
18 author="Andreas Kloeckner",
19 url="https://github.com/inducer/relate",
20 author_email="[email protected]",
21 license="MIT",
22 packages=find_packages(exclude=['tests']),
23 install_requires=[
24 "django>=1.10,<2.1",
25 "django-crispy-forms>=1.5.1",
26 "colorama",
27 "markdown",
28 "dulwich",
29 "pyyaml",
30 "nbconvert>=5.2.1",
31 "pymbolic",
32 "sympy",
33 ],
34 package_data={
35 "relate": [
36 "templates/*.html",
37 ],
38 "course": [
39 "templates/course/*.html",
40 "templates/course/jinja2/*.tpl",
41 ],
42 },
43 )
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,10 +4,11 @@
from setuptools import setup, find_packages
# This script (for now) is only intended to install the 'relate' content helper
-# script. relate-validate. Its use is not needed for (and unrelated to)
-# deploying RELATE as a web service.
+# script. Its use is not needed for (and unrelated to) deploying RELATE as a
+# web service.
-# Use 'pip install -r requirements.txt' first to install prerequisites.
+# Use 'pip install -r requirements.txt' to install prerequisites for RELATE as
+# a web service.
setup(name="relate-courseware",
version="2016.1",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -4,10 +4,11 @@\n from setuptools import setup, find_packages\n \n # This script (for now) is only intended to install the 'relate' content helper\n-# script. relate-validate. Its use is not needed for (and unrelated to)\n-# deploying RELATE as a web service.\n+# script. Its use is not needed for (and unrelated to) deploying RELATE as a\n+# web service.\n \n-# Use 'pip install -r requirements.txt' first to install prerequisites.\n+# Use 'pip install -r requirements.txt' to install prerequisites for RELATE as\n+# a web service.\n \n setup(name=\"relate-courseware\",\n version=\"2016.1\",\n", "issue": "Update pysaml2 to 4.6.0 (or higher) to fix CVE-2017-1000246\n* https://nvd.nist.gov/vuln/detail/CVE-2017-1000246\r\n* https://github.com/IdentityPython/pysaml2/issues/417\r\n\r\nCurrently blocked on https://github.com/knaperek/djangosaml2/issues/129.\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n# This script (for now) is only intended to install the 'relate' content helper\n# script. relate-validate. Its use is not needed for (and unrelated to)\n# deploying RELATE as a web service.\n\n# Use 'pip install -r requirements.txt' first to install prerequisites.\n\nsetup(name=\"relate-courseware\",\n version=\"2016.1\",\n description=\"RELATE courseware\",\n long_description=open(\"README.rst\", \"rt\").read(),\n\n scripts=[\"bin/relate\"],\n author=\"Andreas Kloeckner\",\n url=\"https://github.com/inducer/relate\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n packages=find_packages(exclude=['tests']),\n install_requires=[\n \"django>=1.10,<2.1\",\n \"django-crispy-forms>=1.5.1\",\n \"colorama\",\n \"markdown\",\n \"dulwich\",\n \"pyyaml\",\n \"nbconvert>=5.2.1\",\n \"pymbolic\",\n \"sympy\",\n ],\n package_data={\n \"relate\": [\n \"templates/*.html\",\n ],\n \"course\": [\n \"templates/course/*.html\",\n \"templates/course/jinja2/*.tpl\",\n ],\n },\n )\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n# This script (for now) is only intended to install the 'relate' content helper\n# script. Its use is not needed for (and unrelated to) deploying RELATE as a\n# web service.\n\n# Use 'pip install -r requirements.txt' to install prerequisites for RELATE as\n# a web service.\n\nsetup(name=\"relate-courseware\",\n version=\"2016.1\",\n description=\"RELATE courseware\",\n long_description=open(\"README.rst\", \"rt\").read(),\n\n scripts=[\"bin/relate\"],\n author=\"Andreas Kloeckner\",\n url=\"https://github.com/inducer/relate\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n packages=find_packages(exclude=['tests']),\n install_requires=[\n \"django>=1.10,<2.1\",\n \"django-crispy-forms>=1.5.1\",\n \"colorama\",\n \"markdown\",\n \"dulwich\",\n \"pyyaml\",\n \"nbconvert>=5.2.1\",\n \"pymbolic\",\n \"sympy\",\n ],\n package_data={\n \"relate\": [\n \"templates/*.html\",\n ],\n \"course\": [\n \"templates/course/*.html\",\n \"templates/course/jinja2/*.tpl\",\n ],\n },\n )\n", "path": "setup.py"}]} | 745 | 172 |
gh_patches_debug_34167 | rasdani/github-patches | git_diff | PyGithub__PyGithub-1810 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Attributes "maintain" and "triage" missing in class "Permissions"
Current Github API returns a permission values for `admin`, `push`, `maintain`, `triage` and `pull` (see <https://docs.github.com/en/free-pro-team@latest/rest/reference/teams#check-team-permissions-for-a-repository>).
PyGithub so far only supports `admin`, `push` and `pull`.
This is a feature request to add the two missing attributes to class `github.Permissions.Permissions`.
https://github.com/PyGithub/PyGithub/blob/34d097ce473601624722b90fc5d0396011dd3acb/github/Permissions.py#L34
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `github/Permissions.py`
Content:
```
1 ############################ Copyrights and license ############################
2 # #
3 # Copyright 2012 Vincent Jacques <[email protected]> #
4 # Copyright 2012 Zearin <[email protected]> #
5 # Copyright 2013 AKFish <[email protected]> #
6 # Copyright 2013 Vincent Jacques <[email protected]> #
7 # Copyright 2014 Vincent Jacques <[email protected]> #
8 # Copyright 2016 Jannis Gebauer <[email protected]> #
9 # Copyright 2016 Peter Buckley <[email protected]> #
10 # Copyright 2018 Wan Liuyang <[email protected]> #
11 # Copyright 2018 sfdye <[email protected]> #
12 # #
13 # This file is part of PyGithub. #
14 # http://pygithub.readthedocs.io/ #
15 # #
16 # PyGithub is free software: you can redistribute it and/or modify it under #
17 # the terms of the GNU Lesser General Public License as published by the Free #
18 # Software Foundation, either version 3 of the License, or (at your option) #
19 # any later version. #
20 # #
21 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
22 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
23 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
24 # details. #
25 # #
26 # You should have received a copy of the GNU Lesser General Public License #
27 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
28 # #
29 ################################################################################
30
31 import github.GithubObject
32
33
34 class Permissions(github.GithubObject.NonCompletableGithubObject):
35 """
36 This class represents Permissions
37 """
38
39 def __repr__(self):
40 return self.get__repr__(
41 {
42 "admin": self._admin.value,
43 "pull": self._pull.value,
44 "push": self._push.value,
45 }
46 )
47
48 @property
49 def admin(self):
50 """
51 :type: bool
52 """
53 return self._admin.value
54
55 @property
56 def pull(self):
57 """
58 :type: bool
59 """
60 return self._pull.value
61
62 @property
63 def push(self):
64 """
65 :type: bool
66 """
67 return self._push.value
68
69 def _initAttributes(self):
70 self._admin = github.GithubObject.NotSet
71 self._pull = github.GithubObject.NotSet
72 self._push = github.GithubObject.NotSet
73
74 def _useAttributes(self, attributes):
75 if "admin" in attributes: # pragma no branch
76 self._admin = self._makeBoolAttribute(attributes["admin"])
77 if "pull" in attributes: # pragma no branch
78 self._pull = self._makeBoolAttribute(attributes["pull"])
79 if "push" in attributes: # pragma no branch
80 self._push = self._makeBoolAttribute(attributes["push"])
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/github/Permissions.py b/github/Permissions.py
--- a/github/Permissions.py
+++ b/github/Permissions.py
@@ -40,8 +40,10 @@
return self.get__repr__(
{
"admin": self._admin.value,
+ "maintain": self._maintain.value,
"pull": self._pull.value,
"push": self._push.value,
+ "triage": self._triage.value,
}
)
@@ -52,6 +54,13 @@
"""
return self._admin.value
+ @property
+ def maintain(self):
+ """
+ :type: bool
+ """
+ return self._maintain.value
+
@property
def pull(self):
"""
@@ -66,15 +75,28 @@
"""
return self._push.value
+ @property
+ def triage(self):
+ """
+ :type: bool
+ """
+ return self._triage.value
+
def _initAttributes(self):
self._admin = github.GithubObject.NotSet
+ self._maintain = github.GithubObject.NotSet
self._pull = github.GithubObject.NotSet
self._push = github.GithubObject.NotSet
+ self._triage = github.GithubObject.NotSet
def _useAttributes(self, attributes):
if "admin" in attributes: # pragma no branch
self._admin = self._makeBoolAttribute(attributes["admin"])
+ if "maintain" in attributes: # pragma no branch
+ self._maintain = self._makeBoolAttribute(attributes["maintain"])
if "pull" in attributes: # pragma no branch
self._pull = self._makeBoolAttribute(attributes["pull"])
if "push" in attributes: # pragma no branch
self._push = self._makeBoolAttribute(attributes["push"])
+ if "triage" in attributes: # pragma no branch
+ self._triage = self._makeBoolAttribute(attributes["triage"])
| {"golden_diff": "diff --git a/github/Permissions.py b/github/Permissions.py\n--- a/github/Permissions.py\n+++ b/github/Permissions.py\n@@ -40,8 +40,10 @@\n return self.get__repr__(\n {\n \"admin\": self._admin.value,\n+ \"maintain\": self._maintain.value,\n \"pull\": self._pull.value,\n \"push\": self._push.value,\n+ \"triage\": self._triage.value,\n }\n )\n \n@@ -52,6 +54,13 @@\n \"\"\"\n return self._admin.value\n \n+ @property\n+ def maintain(self):\n+ \"\"\"\n+ :type: bool\n+ \"\"\"\n+ return self._maintain.value\n+\n @property\n def pull(self):\n \"\"\"\n@@ -66,15 +75,28 @@\n \"\"\"\n return self._push.value\n \n+ @property\n+ def triage(self):\n+ \"\"\"\n+ :type: bool\n+ \"\"\"\n+ return self._triage.value\n+\n def _initAttributes(self):\n self._admin = github.GithubObject.NotSet\n+ self._maintain = github.GithubObject.NotSet\n self._pull = github.GithubObject.NotSet\n self._push = github.GithubObject.NotSet\n+ self._triage = github.GithubObject.NotSet\n \n def _useAttributes(self, attributes):\n if \"admin\" in attributes: # pragma no branch\n self._admin = self._makeBoolAttribute(attributes[\"admin\"])\n+ if \"maintain\" in attributes: # pragma no branch\n+ self._maintain = self._makeBoolAttribute(attributes[\"maintain\"])\n if \"pull\" in attributes: # pragma no branch\n self._pull = self._makeBoolAttribute(attributes[\"pull\"])\n if \"push\" in attributes: # pragma no branch\n self._push = self._makeBoolAttribute(attributes[\"push\"])\n+ if \"triage\" in attributes: # pragma no branch\n+ self._triage = self._makeBoolAttribute(attributes[\"triage\"])\n", "issue": "Attributes \"maintain\" and \"triage\" missing in class \"Permissions\"\nCurrent Github API returns a permission values for `admin`, `push`, `maintain`, `triage` and `pull` (see <https://docs.github.com/en/free-pro-team@latest/rest/reference/teams#check-team-permissions-for-a-repository>).\r\nPyGithub so far only supports `admin`, `push` and `pull`.\r\n\r\nThis is a feature request to add the two missing attributes to class `github.Permissions.Permissions`.\r\nhttps://github.com/PyGithub/PyGithub/blob/34d097ce473601624722b90fc5d0396011dd3acb/github/Permissions.py#L34\r\n\n", "before_files": [{"content": "############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2016 Jannis Gebauer <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\nimport github.GithubObject\n\n\nclass Permissions(github.GithubObject.NonCompletableGithubObject):\n \"\"\"\n This class represents Permissions\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__(\n {\n \"admin\": self._admin.value,\n \"pull\": self._pull.value,\n \"push\": self._push.value,\n }\n )\n\n @property\n def admin(self):\n \"\"\"\n :type: bool\n \"\"\"\n return self._admin.value\n\n @property\n def pull(self):\n \"\"\"\n :type: bool\n \"\"\"\n return self._pull.value\n\n @property\n def push(self):\n \"\"\"\n :type: bool\n \"\"\"\n return self._push.value\n\n def _initAttributes(self):\n self._admin = github.GithubObject.NotSet\n self._pull = github.GithubObject.NotSet\n self._push = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"admin\" in attributes: # pragma no branch\n self._admin = self._makeBoolAttribute(attributes[\"admin\"])\n if \"pull\" in attributes: # pragma no branch\n self._pull = self._makeBoolAttribute(attributes[\"pull\"])\n if \"push\" in attributes: # pragma no branch\n self._push = self._makeBoolAttribute(attributes[\"push\"])\n", "path": "github/Permissions.py"}], "after_files": [{"content": "############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2016 Jannis Gebauer <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\nimport github.GithubObject\n\n\nclass Permissions(github.GithubObject.NonCompletableGithubObject):\n \"\"\"\n This class represents Permissions\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__(\n {\n \"admin\": self._admin.value,\n \"maintain\": self._maintain.value,\n \"pull\": self._pull.value,\n \"push\": self._push.value,\n \"triage\": self._triage.value,\n }\n )\n\n @property\n def admin(self):\n \"\"\"\n :type: bool\n \"\"\"\n return self._admin.value\n\n @property\n def maintain(self):\n \"\"\"\n :type: bool\n \"\"\"\n return self._maintain.value\n\n @property\n def pull(self):\n \"\"\"\n :type: bool\n \"\"\"\n return self._pull.value\n\n @property\n def push(self):\n \"\"\"\n :type: bool\n \"\"\"\n return self._push.value\n\n @property\n def triage(self):\n \"\"\"\n :type: bool\n \"\"\"\n return self._triage.value\n\n def _initAttributes(self):\n self._admin = github.GithubObject.NotSet\n self._maintain = github.GithubObject.NotSet\n self._pull = github.GithubObject.NotSet\n self._push = github.GithubObject.NotSet\n self._triage = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"admin\" in attributes: # pragma no branch\n self._admin = self._makeBoolAttribute(attributes[\"admin\"])\n if \"maintain\" in attributes: # pragma no branch\n self._maintain = self._makeBoolAttribute(attributes[\"maintain\"])\n if \"pull\" in attributes: # pragma no branch\n self._pull = self._makeBoolAttribute(attributes[\"pull\"])\n if \"push\" in attributes: # pragma no branch\n self._push = self._makeBoolAttribute(attributes[\"push\"])\n if \"triage\" in attributes: # pragma no branch\n self._triage = self._makeBoolAttribute(attributes[\"triage\"])\n", "path": "github/Permissions.py"}]} | 1,270 | 461 |
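With the two properties wired into `_useAttributes`, any payload that carries `maintain`/`triage` now surfaces them alongside the existing booleans. A usage sketch against the live API — it needs network access and a real token, and the repository name is a placeholder:

```python
from github import Github

gh = Github("<personal-access-token>")  # placeholder credential
perms = gh.get_repo("octocat/Hello-World").permissions  # caller's permissions
# The repos API returns all five flags for authenticated requests; each maps
# onto one of the boolean properties added or kept by this patch.
print(perms.admin, perms.maintain, perms.push, perms.triage, perms.pull)
```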
gh_patches_debug_7969 | rasdani/github-patches | git_diff | urllib3__urllib3-3034 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add OpenGraph information to our documentation
Would be good to fill in some open graph fields for our documentation so when we link to things they show up with rich output / images rather than as a standard link. Really works nicely for Social Media, Slack, etc.
More information: https://ogp.me/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 from __future__ import annotations
2
3 import os
4 import sys
5 from datetime import date
6
7 # If extensions (or modules to document with autodoc) are in another directory,
8 # add these directories to sys.path here. If the directory is relative to the
9 # documentation root, use os.path.abspath to make it absolute, like shown here.
10
11 root_path = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
12 sys.path.insert(0, root_path)
13
14 # https://docs.readthedocs.io/en/stable/builds.html#build-environment
15 if "READTHEDOCS" in os.environ:
16 import glob
17
18 if glob.glob("../changelog/*.*.rst"):
19 print("-- Found changes; running towncrier --", flush=True)
20 import subprocess
21
22 subprocess.run(
23 ["towncrier", "--yes", "--date", "not released yet"], cwd="..", check=True
24 )
25
26 import urllib3
27
28 # -- General configuration -----------------------------------------------------
29
30
31 # Add any Sphinx extension module names here, as strings. They can be extensions
32 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
33 extensions = [
34 "sphinx.ext.autodoc",
35 "sphinx_copybutton",
36 "sphinx.ext.doctest",
37 "sphinx.ext.intersphinx",
38 ]
39
40 # Test code blocks only when explicitly specified
41 doctest_test_doctest_blocks = ""
42
43 # Add any paths that contain templates here, relative to this directory.
44 templates_path = ["_templates"]
45
46 # The suffix of source filenames.
47 source_suffix = ".rst"
48
49 # The master toctree document.
50 master_doc = "index"
51
52 # General information about the project.
53 project = "urllib3"
54 copyright = f"{date.today().year}, Andrey Petrov"
55
56 # The short X.Y version.
57 version = urllib3.__version__
58 # The full version, including alpha/beta/rc tags.
59 release = version
60
61 # List of patterns, relative to source directory, that match files and
62 # directories to ignore when looking for source files.
63 exclude_patterns = ["_build"]
64
65 # The name of the Pygments (syntax highlighting) style to use.
66 pygments_style = "friendly"
67
68 # The theme to use for HTML and HTML Help pages. See the documentation for
69 # a list of builtin themes.
70 html_theme = "furo"
71 html_favicon = "images/favicon.png"
72
73 html_static_path = ["_static"]
74 html_theme_options = {
75 "announcement": """
76 <a style=\"text-decoration: none; color: white;\"
77 href=\"https://github.com/sponsors/urllib3\">
78 <img src=\"/en/latest/_static/favicon.png\"/> Support urllib3 on GitHub Sponsors
79 </a>
80 """,
81 "sidebar_hide_name": True,
82 "light_logo": "banner.svg",
83 "dark_logo": "dark-logo.svg",
84 }
85
86 intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
87
88 # Show typehints as content of the function or method
89 autodoc_typehints = "description"
90
91 # Warn about all references to unknown targets
92 nitpicky = True
93 # Except for these ones, which we expect to point to unknown targets:
94 nitpick_ignore = [
95 ("py:class", "_TYPE_SOCKS_OPTIONS"),
96 ("py:class", "_TYPE_SOCKET_OPTIONS"),
97 ("py:class", "_TYPE_TIMEOUT"),
98 ("py:class", "_TYPE_FIELD_VALUE"),
99 ("py:class", "_TYPE_BODY"),
100 ("py:class", "_HttplibHTTPResponse"),
101 ("py:class", "_HttplibHTTPMessage"),
102 ("py:class", "TracebackType"),
103 ("py:class", "Literal"),
104 ("py:class", "email.errors.MessageDefect"),
105 ("py:class", "MessageDefect"),
106 ("py:class", "http.client.HTTPMessage"),
107 ("py:class", "RequestHistory"),
108 ("py:class", "SSLTransportType"),
109 ("py:class", "VerifyMode"),
110 ("py:class", "_ssl._SSLContext"),
111 ("py:class", "urllib3._collections.HTTPHeaderDict"),
112 ("py:class", "urllib3._collections.RecentlyUsedContainer"),
113 ("py:class", "urllib3._request_methods.RequestMethods"),
114 ("py:class", "urllib3.contrib.socks._TYPE_SOCKS_OPTIONS"),
115 ("py:class", "urllib3.util.timeout._TYPE_DEFAULT"),
116 ("py:class", "BaseHTTPConnection"),
117 ]
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -35,8 +35,16 @@
"sphinx_copybutton",
"sphinx.ext.doctest",
"sphinx.ext.intersphinx",
+ "sphinxext.opengraph",
]
+# Open Graph metadata
+ogp_title = "urllib3 documentation"
+ogp_site_url = "https://urllib3.readthedocs.io"
+ogp_type = "website"
+ogp_image = "https://github.com/urllib3/urllib3/raw/main/docs/_static/banner_github.svg"
+ogp_description = "urllib3 is a user-friendly HTTP client library for Python."
+
# Test code blocks only when explicitly specified
doctest_test_doctest_blocks = ""
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -35,8 +35,16 @@\n \"sphinx_copybutton\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n+ \"sphinxext.opengraph\",\n ]\n \n+# Open Graph metadata\n+ogp_title = \"urllib3 documentation\"\n+ogp_site_url = \"https://urllib3.readthedocs.io\"\n+ogp_type = \"website\"\n+ogp_image = \"https://github.com/urllib3/urllib3/raw/main/docs/_static/banner_github.svg\"\n+ogp_description = \"urllib3 is a user-friendly HTTP client library for Python.\"\n+\n # Test code blocks only when explicitly specified\n doctest_test_doctest_blocks = \"\"\n", "issue": "Add OpenGraph information to our documentation\nWould be good to fill in some open graph fields for our documentation so when we link to things they show up with rich output / images rather than as a standard link. Really works nicely for Social Media, Slack, etc.\r\n\r\nMore information: https://ogp.me/\n", "before_files": [{"content": "from __future__ import annotations\n\nimport os\nimport sys\nfrom datetime import date\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nroot_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))\nsys.path.insert(0, root_path)\n\n# https://docs.readthedocs.io/en/stable/builds.html#build-environment\nif \"READTHEDOCS\" in os.environ:\n import glob\n\n if glob.glob(\"../changelog/*.*.rst\"):\n print(\"-- Found changes; running towncrier --\", flush=True)\n import subprocess\n\n subprocess.run(\n [\"towncrier\", \"--yes\", \"--date\", \"not released yet\"], cwd=\"..\", check=True\n )\n\nimport urllib3\n\n# -- General configuration -----------------------------------------------------\n\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx_copybutton\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n]\n\n# Test code blocks only when explicitly specified\ndoctest_test_doctest_blocks = \"\"\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix of source filenames.\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"urllib3\"\ncopyright = f\"{date.today().year}, Andrey Petrov\"\n\n# The short X.Y version.\nversion = urllib3.__version__\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = [\"_build\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"friendly\"\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\nhtml_theme = \"furo\"\nhtml_favicon = \"images/favicon.png\"\n\nhtml_static_path = [\"_static\"]\nhtml_theme_options = {\n \"announcement\": \"\"\"\n <a style=\\\"text-decoration: none; color: white;\\\" \n href=\\\"https://github.com/sponsors/urllib3\\\">\n <img src=\\\"/en/latest/_static/favicon.png\\\"/> Support urllib3 on GitHub Sponsors\n </a>\n \"\"\",\n \"sidebar_hide_name\": True,\n \"light_logo\": \"banner.svg\",\n \"dark_logo\": \"dark-logo.svg\",\n}\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n\n# Show typehints as content of the function or method\nautodoc_typehints = \"description\"\n\n# Warn about all references to unknown targets\nnitpicky = True\n# Except for these ones, which we expect to point to unknown targets:\nnitpick_ignore = [\n (\"py:class\", \"_TYPE_SOCKS_OPTIONS\"),\n (\"py:class\", \"_TYPE_SOCKET_OPTIONS\"),\n (\"py:class\", \"_TYPE_TIMEOUT\"),\n (\"py:class\", \"_TYPE_FIELD_VALUE\"),\n (\"py:class\", \"_TYPE_BODY\"),\n (\"py:class\", \"_HttplibHTTPResponse\"),\n (\"py:class\", \"_HttplibHTTPMessage\"),\n (\"py:class\", \"TracebackType\"),\n (\"py:class\", \"Literal\"),\n (\"py:class\", \"email.errors.MessageDefect\"),\n (\"py:class\", \"MessageDefect\"),\n (\"py:class\", \"http.client.HTTPMessage\"),\n (\"py:class\", \"RequestHistory\"),\n (\"py:class\", \"SSLTransportType\"),\n (\"py:class\", \"VerifyMode\"),\n (\"py:class\", \"_ssl._SSLContext\"),\n (\"py:class\", \"urllib3._collections.HTTPHeaderDict\"),\n (\"py:class\", \"urllib3._collections.RecentlyUsedContainer\"),\n (\"py:class\", \"urllib3._request_methods.RequestMethods\"),\n (\"py:class\", \"urllib3.contrib.socks._TYPE_SOCKS_OPTIONS\"),\n (\"py:class\", \"urllib3.util.timeout._TYPE_DEFAULT\"),\n (\"py:class\", \"BaseHTTPConnection\"),\n]\n", "path": "docs/conf.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport os\nimport sys\nfrom datetime import date\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nroot_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))\nsys.path.insert(0, root_path)\n\n# https://docs.readthedocs.io/en/stable/builds.html#build-environment\nif \"READTHEDOCS\" in os.environ:\n import glob\n\n if glob.glob(\"../changelog/*.*.rst\"):\n print(\"-- Found changes; running towncrier --\", flush=True)\n import subprocess\n\n subprocess.run(\n [\"towncrier\", \"--yes\", \"--date\", \"not released yet\"], cwd=\"..\", check=True\n )\n\nimport urllib3\n\n# -- General configuration -----------------------------------------------------\n\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx_copybutton\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n \"sphinxext.opengraph\",\n]\n\n# Open Graph metadata\nogp_title = \"urllib3 documentation\"\nogp_site_url = \"https://urllib3.readthedocs.io\"\nogp_type = \"website\"\nogp_image = \"https://github.com/urllib3/urllib3/raw/main/docs/_static/banner_github.svg\"\nogp_description = \"urllib3 is a user-friendly HTTP client library for Python.\"\n\n# Test code blocks only when explicitly specified\ndoctest_test_doctest_blocks = \"\"\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix of source filenames.\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"urllib3\"\ncopyright = f\"{date.today().year}, Andrey Petrov\"\n\n# The short X.Y version.\nversion = urllib3.__version__\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = [\"_build\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"friendly\"\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"furo\"\nhtml_favicon = \"images/favicon.png\"\n\nhtml_static_path = [\"_static\"]\nhtml_theme_options = {\n \"announcement\": \"\"\"\n <a style=\\\"text-decoration: none; color: white;\\\" \n href=\\\"https://github.com/sponsors/urllib3\\\">\n <img src=\\\"/en/latest/_static/favicon.png\\\"/> Support urllib3 on GitHub Sponsors\n </a>\n \"\"\",\n \"sidebar_hide_name\": True,\n \"light_logo\": \"banner.svg\",\n \"dark_logo\": \"dark-logo.svg\",\n}\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n\n# Show typehints as content of the function or method\nautodoc_typehints = \"description\"\n\n# Warn about all references to unknown targets\nnitpicky = True\n# Except for these ones, which we expect to point to unknown targets:\nnitpick_ignore = [\n (\"py:class\", \"_TYPE_SOCKS_OPTIONS\"),\n (\"py:class\", \"_TYPE_SOCKET_OPTIONS\"),\n (\"py:class\", \"_TYPE_TIMEOUT\"),\n (\"py:class\", \"_TYPE_FIELD_VALUE\"),\n (\"py:class\", \"_TYPE_BODY\"),\n (\"py:class\", \"_HttplibHTTPResponse\"),\n (\"py:class\", \"_HttplibHTTPMessage\"),\n (\"py:class\", \"TracebackType\"),\n (\"py:class\", \"Literal\"),\n (\"py:class\", \"email.errors.MessageDefect\"),\n (\"py:class\", \"MessageDefect\"),\n (\"py:class\", \"http.client.HTTPMessage\"),\n (\"py:class\", \"RequestHistory\"),\n (\"py:class\", \"SSLTransportType\"),\n (\"py:class\", \"VerifyMode\"),\n (\"py:class\", \"_ssl._SSLContext\"),\n (\"py:class\", \"urllib3._collections.HTTPHeaderDict\"),\n (\"py:class\", \"urllib3._collections.RecentlyUsedContainer\"),\n (\"py:class\", \"urllib3._request_methods.RequestMethods\"),\n (\"py:class\", \"urllib3.contrib.socks._TYPE_SOCKS_OPTIONS\"),\n (\"py:class\", \"urllib3.util.timeout._TYPE_DEFAULT\"),\n (\"py:class\", \"BaseHTTPConnection\"),\n]\n", "path": "docs/conf.py"}]} | 1,513 | 178 |
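Since `sphinxext.opengraph` is a third-party extension, the docs environment additionally needs `pip install sphinxext-opengraph` for the new config to load. A post-build sanity check might look like this sketch — the `_build` output path is an assumption about a default local Sphinx build:

```python
# Sketch: confirm the built HTML now carries Open Graph <meta> tags.
from pathlib import Path

html = Path("docs/_build/html/index.html").read_text(encoding="utf-8")
for prop in ("og:title", "og:type", "og:image", "og:description"):
    assert f'property="{prop}"' in html, f"missing {prop} tag"
```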
gh_patches_debug_17514 | rasdani/github-patches | git_diff | pydantic__pydantic-7883 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
@validate_call on instance method of class with `__slots__` raises "AttributeError: object attribute is read-only"
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
When the @validate_call decorator is applied to an instance method of a class with `__slots__` set, calls to the instance method raise an AttributeError:
```
Traceback (most recent call last):
File "./test_validate_call.py", line 25, in <module>
c.any_instance_method()
File "./.venv/lib/python3.8/site-packages/pydantic/_internal/_validate_call.py", line 118, in __get__
object.__setattr__(obj, self._name, result)
AttributeError: 'ClassWithSlots' object attribute 'any_instance_method' is read-only
```
This error is not raised when using the `@validate_arguments` decorator instead of `@validate_call`.
### Example Code
```Python
from pydantic import validate_call
class ClassWithSlots():
__slots__ = {}
@validate_call
def any_instance_method(self):
pass
c = ClassWithSlots()
c.any_instance_method()
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.3.0
pydantic-core version: 2.6.3
pydantic-core build: profile=release pgo=true
install path: ./.venv/lib/python3.8/site-packages/pydantic
python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0]
platform: Linux-5.15.0-83-generic-x86_64-with-glibc2.29
optional deps. installed: ['typing-extensions']
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pydantic/_internal/_validate_call.py`
Content:
```
1 from __future__ import annotations as _annotations
2
3 import inspect
4 from dataclasses import dataclass
5 from functools import partial
6 from typing import Any, Awaitable, Callable
7
8 import pydantic_core
9
10 from ..config import ConfigDict
11 from ..plugin._schema_validator import create_schema_validator
12 from . import _generate_schema, _typing_extra
13 from ._config import ConfigWrapper
14
15
16 @dataclass
17 class CallMarker:
18 function: Callable[..., Any]
19 validate_return: bool
20
21
22 class ValidateCallWrapper:
23 """This is a wrapper around a function that validates the arguments passed to it, and optionally the return value.
24
25 It's partially inspired by `wraps` which in turn uses `partial`, but extended to be a descriptor so
26 these functions can be applied to instance methods, class methods, static methods, as well as normal functions.
27 """
28
29 __slots__ = (
30 'raw_function',
31 '_config',
32 '_validate_return',
33 '__pydantic_core_schema__',
34 '__pydantic_validator__',
35 '__signature__',
36 '__name__',
37 '__qualname__',
38 '__annotations__',
39 '__dict__', # required for __module__
40 )
41
42 def __init__(self, function: Callable[..., Any], config: ConfigDict | None, validate_return: bool):
43 self.raw_function = function
44 self._config = config
45 self._validate_return = validate_return
46 self.__signature__ = inspect.signature(function)
47 if isinstance(function, partial):
48 func = function.func
49 self.__name__ = f'partial({func.__name__})'
50 self.__qualname__ = f'partial({func.__qualname__})'
51 self.__annotations__ = func.__annotations__
52 self.__module__ = func.__module__
53 self.__doc__ = func.__doc__
54 else:
55 self.__name__ = function.__name__
56 self.__qualname__ = function.__qualname__
57 self.__annotations__ = function.__annotations__
58 self.__module__ = function.__module__
59 self.__doc__ = function.__doc__
60
61 namespace = _typing_extra.add_module_globals(function, None)
62 config_wrapper = ConfigWrapper(config)
63 gen_schema = _generate_schema.GenerateSchema(config_wrapper, namespace)
64 schema = gen_schema.clean_schema(gen_schema.generate_schema(function))
65 self.__pydantic_core_schema__ = schema
66 core_config = config_wrapper.core_config(self)
67 self.__pydantic_validator__ = create_schema_validator(schema, core_config, config_wrapper.plugin_settings)
68
69 if self._validate_return:
70 return_type = (
71 self.__signature__.return_annotation
72 if self.__signature__.return_annotation is not self.__signature__.empty
73 else Any
74 )
75 gen_schema = _generate_schema.GenerateSchema(config_wrapper, namespace)
76 schema = gen_schema.clean_schema(gen_schema.generate_schema(return_type))
77 self.__return_pydantic_core_schema__ = schema
78 validator = create_schema_validator(schema, core_config, config_wrapper.plugin_settings)
79 if inspect.iscoroutinefunction(self.raw_function):
80
81 async def return_val_wrapper(aw: Awaitable[Any]) -> None:
82 return validator.validate_python(await aw)
83
84 self.__return_pydantic_validator__ = return_val_wrapper
85 else:
86 self.__return_pydantic_validator__ = validator.validate_python
87 else:
88 self.__return_pydantic_core_schema__ = None
89 self.__return_pydantic_validator__ = None
90
91 self._name: str | None = None # set by __get__, used to set the instance attribute when decorating methods
92
93 def __call__(self, *args: Any, **kwargs: Any) -> Any:
94 res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs))
95 if self.__return_pydantic_validator__:
96 return self.__return_pydantic_validator__(res)
97 return res
98
99 def __get__(self, obj: Any, objtype: type[Any] | None = None) -> ValidateCallWrapper:
100 """Bind the raw function and return another ValidateCallWrapper wrapping that."""
101 if obj is None:
102 try:
103 # Handle the case where a method is accessed as a class attribute
104 return objtype.__getattribute__(objtype, self._name) # type: ignore
105 except AttributeError:
106 # This will happen the first time the attribute is accessed
107 pass
108
109 bound_function = self.raw_function.__get__(obj, objtype)
110 result = self.__class__(bound_function, self._config, self._validate_return)
111 if self._name is not None:
112 if obj is not None:
113 object.__setattr__(obj, self._name, result)
114 else:
115 object.__setattr__(objtype, self._name, result)
116 return result
117
118 def __set_name__(self, owner: Any, name: str) -> None:
119 self._name = name
120
121 def __repr__(self) -> str:
122 return f'ValidateCallWrapper({self.raw_function})'
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pydantic/_internal/_validate_call.py b/pydantic/_internal/_validate_call.py
--- a/pydantic/_internal/_validate_call.py
+++ b/pydantic/_internal/_validate_call.py
@@ -108,6 +108,11 @@
bound_function = self.raw_function.__get__(obj, objtype)
result = self.__class__(bound_function, self._config, self._validate_return)
+
+ # skip binding to instance when obj or objtype has __slots__ attribute
+ if hasattr(obj, '__slots__') or hasattr(objtype, '__slots__'):
+ return result
+
if self._name is not None:
if obj is not None:
object.__setattr__(obj, self._name, result)
@@ -120,3 +125,6 @@
def __repr__(self) -> str:
return f'ValidateCallWrapper({self.raw_function})'
+
+ def __eq__(self, other):
+ return self.raw_function == other.raw_function
| {"golden_diff": "diff --git a/pydantic/_internal/_validate_call.py b/pydantic/_internal/_validate_call.py\n--- a/pydantic/_internal/_validate_call.py\n+++ b/pydantic/_internal/_validate_call.py\n@@ -108,6 +108,11 @@\n \n bound_function = self.raw_function.__get__(obj, objtype)\n result = self.__class__(bound_function, self._config, self._validate_return)\n+\n+ # skip binding to instance when obj or objtype has __slots__ attribute\n+ if hasattr(obj, '__slots__') or hasattr(objtype, '__slots__'):\n+ return result\n+\n if self._name is not None:\n if obj is not None:\n object.__setattr__(obj, self._name, result)\n@@ -120,3 +125,6 @@\n \n def __repr__(self) -> str:\n return f'ValidateCallWrapper({self.raw_function})'\n+\n+ def __eq__(self, other):\n+ return self.raw_function == other.raw_function\n", "issue": "@validate_call on instance method of class with `__slots__` raises \"AttributeError: object attribute is read-only\"\n### Initial Checks\n\n- [X] I confirm that I'm using Pydantic V2\n\n### Description\n\nWhen the @validate_call decorator is applied to an instance method of a class with `__slots__` set, calls to the instance method raise an AttributeError:\r\n```\r\nTraceback (most recent call last):\r\n File \"./test_validate_call.py\", line 25, in <module>\r\n c.any_instance_method()\r\n File \"./.venv/lib/python3.8/site-packages/pydantic/_internal/_validate_call.py\", line 118, in __get__\r\n object.__setattr__(obj, self._name, result)\r\nAttributeError: 'ClassWithSlots' object attribute 'any_instance_method' is read-only\r\n```\r\n\r\nThis error is not raised when using the `@validate_arguments` decorator instead of `@validate_call`.\n\n### Example Code\n\n```Python\nfrom pydantic import validate_call\r\n\r\nclass ClassWithSlots():\r\n __slots__ = {}\r\n\r\n @validate_call\r\n def any_instance_method(self):\r\n pass\r\n\r\nc = ClassWithSlots()\r\nc.any_instance_method()\n```\n\n\n### Python, Pydantic & OS Version\n\n```Text\npydantic version: 2.3.0\r\n pydantic-core version: 2.6.3\r\n pydantic-core build: profile=release pgo=true\r\n install path: ./.venv/lib/python3.8/site-packages/pydantic\r\n python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0]\r\n platform: Linux-5.15.0-83-generic-x86_64-with-glibc2.29\r\n optional deps. installed: ['typing-extensions']\n```\n\n", "before_files": [{"content": "from __future__ import annotations as _annotations\n\nimport inspect\nfrom dataclasses import dataclass\nfrom functools import partial\nfrom typing import Any, Awaitable, Callable\n\nimport pydantic_core\n\nfrom ..config import ConfigDict\nfrom ..plugin._schema_validator import create_schema_validator\nfrom . 
import _generate_schema, _typing_extra\nfrom ._config import ConfigWrapper\n\n\n@dataclass\nclass CallMarker:\n function: Callable[..., Any]\n validate_return: bool\n\n\nclass ValidateCallWrapper:\n \"\"\"This is a wrapper around a function that validates the arguments passed to it, and optionally the return value.\n\n It's partially inspired by `wraps` which in turn uses `partial`, but extended to be a descriptor so\n these functions can be applied to instance methods, class methods, static methods, as well as normal functions.\n \"\"\"\n\n __slots__ = (\n 'raw_function',\n '_config',\n '_validate_return',\n '__pydantic_core_schema__',\n '__pydantic_validator__',\n '__signature__',\n '__name__',\n '__qualname__',\n '__annotations__',\n '__dict__', # required for __module__\n )\n\n def __init__(self, function: Callable[..., Any], config: ConfigDict | None, validate_return: bool):\n self.raw_function = function\n self._config = config\n self._validate_return = validate_return\n self.__signature__ = inspect.signature(function)\n if isinstance(function, partial):\n func = function.func\n self.__name__ = f'partial({func.__name__})'\n self.__qualname__ = f'partial({func.__qualname__})'\n self.__annotations__ = func.__annotations__\n self.__module__ = func.__module__\n self.__doc__ = func.__doc__\n else:\n self.__name__ = function.__name__\n self.__qualname__ = function.__qualname__\n self.__annotations__ = function.__annotations__\n self.__module__ = function.__module__\n self.__doc__ = function.__doc__\n\n namespace = _typing_extra.add_module_globals(function, None)\n config_wrapper = ConfigWrapper(config)\n gen_schema = _generate_schema.GenerateSchema(config_wrapper, namespace)\n schema = gen_schema.clean_schema(gen_schema.generate_schema(function))\n self.__pydantic_core_schema__ = schema\n core_config = config_wrapper.core_config(self)\n self.__pydantic_validator__ = create_schema_validator(schema, core_config, config_wrapper.plugin_settings)\n\n if self._validate_return:\n return_type = (\n self.__signature__.return_annotation\n if self.__signature__.return_annotation is not self.__signature__.empty\n else Any\n )\n gen_schema = _generate_schema.GenerateSchema(config_wrapper, namespace)\n schema = gen_schema.clean_schema(gen_schema.generate_schema(return_type))\n self.__return_pydantic_core_schema__ = schema\n validator = create_schema_validator(schema, core_config, config_wrapper.plugin_settings)\n if inspect.iscoroutinefunction(self.raw_function):\n\n async def return_val_wrapper(aw: Awaitable[Any]) -> None:\n return validator.validate_python(await aw)\n\n self.__return_pydantic_validator__ = return_val_wrapper\n else:\n self.__return_pydantic_validator__ = validator.validate_python\n else:\n self.__return_pydantic_core_schema__ = None\n self.__return_pydantic_validator__ = None\n\n self._name: str | None = None # set by __get__, used to set the instance attribute when decorating methods\n\n def __call__(self, *args: Any, **kwargs: Any) -> Any:\n res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs))\n if self.__return_pydantic_validator__:\n return self.__return_pydantic_validator__(res)\n return res\n\n def __get__(self, obj: Any, objtype: type[Any] | None = None) -> ValidateCallWrapper:\n \"\"\"Bind the raw function and return another ValidateCallWrapper wrapping that.\"\"\"\n if obj is None:\n try:\n # Handle the case where a method is accessed as a class attribute\n return objtype.__getattribute__(objtype, self._name) # type: ignore\n except 
AttributeError:\n # This will happen the first time the attribute is accessed\n pass\n\n bound_function = self.raw_function.__get__(obj, objtype)\n result = self.__class__(bound_function, self._config, self._validate_return)\n if self._name is not None:\n if obj is not None:\n object.__setattr__(obj, self._name, result)\n else:\n object.__setattr__(objtype, self._name, result)\n return result\n\n def __set_name__(self, owner: Any, name: str) -> None:\n self._name = name\n\n def __repr__(self) -> str:\n return f'ValidateCallWrapper({self.raw_function})'\n", "path": "pydantic/_internal/_validate_call.py"}], "after_files": [{"content": "from __future__ import annotations as _annotations\n\nimport inspect\nfrom dataclasses import dataclass\nfrom functools import partial\nfrom typing import Any, Awaitable, Callable\n\nimport pydantic_core\n\nfrom ..config import ConfigDict\nfrom ..plugin._schema_validator import create_schema_validator\nfrom . import _generate_schema, _typing_extra\nfrom ._config import ConfigWrapper\n\n\n@dataclass\nclass CallMarker:\n function: Callable[..., Any]\n validate_return: bool\n\n\nclass ValidateCallWrapper:\n \"\"\"This is a wrapper around a function that validates the arguments passed to it, and optionally the return value.\n\n It's partially inspired by `wraps` which in turn uses `partial`, but extended to be a descriptor so\n these functions can be applied to instance methods, class methods, static methods, as well as normal functions.\n \"\"\"\n\n __slots__ = (\n 'raw_function',\n '_config',\n '_validate_return',\n '__pydantic_core_schema__',\n '__pydantic_validator__',\n '__signature__',\n '__name__',\n '__qualname__',\n '__annotations__',\n '__dict__', # required for __module__\n )\n\n def __init__(self, function: Callable[..., Any], config: ConfigDict | None, validate_return: bool):\n self.raw_function = function\n self._config = config\n self._validate_return = validate_return\n self.__signature__ = inspect.signature(function)\n if isinstance(function, partial):\n func = function.func\n self.__name__ = f'partial({func.__name__})'\n self.__qualname__ = f'partial({func.__qualname__})'\n self.__annotations__ = func.__annotations__\n self.__module__ = func.__module__\n self.__doc__ = func.__doc__\n else:\n self.__name__ = function.__name__\n self.__qualname__ = function.__qualname__\n self.__annotations__ = function.__annotations__\n self.__module__ = function.__module__\n self.__doc__ = function.__doc__\n\n namespace = _typing_extra.add_module_globals(function, None)\n config_wrapper = ConfigWrapper(config)\n gen_schema = _generate_schema.GenerateSchema(config_wrapper, namespace)\n schema = gen_schema.clean_schema(gen_schema.generate_schema(function))\n self.__pydantic_core_schema__ = schema\n core_config = config_wrapper.core_config(self)\n self.__pydantic_validator__ = create_schema_validator(schema, core_config, config_wrapper.plugin_settings)\n\n if self._validate_return:\n return_type = (\n self.__signature__.return_annotation\n if self.__signature__.return_annotation is not self.__signature__.empty\n else Any\n )\n gen_schema = _generate_schema.GenerateSchema(config_wrapper, namespace)\n schema = gen_schema.clean_schema(gen_schema.generate_schema(return_type))\n self.__return_pydantic_core_schema__ = schema\n validator = create_schema_validator(schema, core_config, config_wrapper.plugin_settings)\n if inspect.iscoroutinefunction(self.raw_function):\n\n async def return_val_wrapper(aw: Awaitable[Any]) -> None:\n return validator.validate_python(await 
aw)\n\n self.__return_pydantic_validator__ = return_val_wrapper\n else:\n self.__return_pydantic_validator__ = validator.validate_python\n else:\n self.__return_pydantic_core_schema__ = None\n self.__return_pydantic_validator__ = None\n\n self._name: str | None = None # set by __get__, used to set the instance attribute when decorating methods\n\n def __call__(self, *args: Any, **kwargs: Any) -> Any:\n res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs))\n if self.__return_pydantic_validator__:\n return self.__return_pydantic_validator__(res)\n return res\n\n def __get__(self, obj: Any, objtype: type[Any] | None = None) -> ValidateCallWrapper:\n \"\"\"Bind the raw function and return another ValidateCallWrapper wrapping that.\"\"\"\n if obj is None:\n try:\n # Handle the case where a method is accessed as a class attribute\n return objtype.__getattribute__(objtype, self._name) # type: ignore\n except AttributeError:\n # This will happen the first time the attribute is accessed\n pass\n\n bound_function = self.raw_function.__get__(obj, objtype)\n result = self.__class__(bound_function, self._config, self._validate_return)\n\n # skip binding to instance when obj or objtype has __slots__ attribute\n if hasattr(obj, '__slots__') or hasattr(objtype, '__slots__'):\n return result\n\n if self._name is not None:\n if obj is not None:\n object.__setattr__(obj, self._name, result)\n else:\n object.__setattr__(objtype, self._name, result)\n return result\n\n def __set_name__(self, owner: Any, name: str) -> None:\n self._name = name\n\n def __repr__(self) -> str:\n return f'ValidateCallWrapper({self.raw_function})'\n\n def __eq__(self, other):\n return self.raw_function == other.raw_function\n", "path": "pydantic/_internal/_validate_call.py"}]} | 2,015 | 231 |
gh_patches_debug_12165 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-4599 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cannot download Daum tvpot url
cannot download video from Daum tvpot which is Korean internet portal service
http://tvpot.daum.net/v/vfb45BtrWBWWHKEkOwXEghB
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/daum.py`
Content:
```
1 # encoding: utf-8
2
3 from __future__ import unicode_literals
4
5 import re
6
7 from .common import InfoExtractor
8 from ..compat import (
9 compat_urllib_parse,
10 )
11
12
13 class DaumIE(InfoExtractor):
14 _VALID_URL = r'https?://(?:m\.)?tvpot\.daum\.net/(?:v/|.*?clipid=)(?P<id>[^?#&]+)'
15 IE_NAME = 'daum.net'
16
17 _TESTS = [{
18 'url': 'http://tvpot.daum.net/clip/ClipView.do?clipid=52554690',
19 'info_dict': {
20 'id': '52554690',
21 'ext': 'mp4',
22 'title': 'DOTA 2GETHER 시즌2 6회 - 2부',
23 'description': 'DOTA 2GETHER 시즌2 6회 - 2부',
24 'upload_date': '20130831',
25 'duration': 3868,
26 },
27 }, {
28 'url': 'http://tvpot.daum.net/v/vab4dyeDBysyBssyukBUjBz',
29 'only_matching': True,
30 }, {
31 'url': 'http://tvpot.daum.net/v/07dXWRka62Y%24',
32 'only_matching': True,
33 }]
34
35 def _real_extract(self, url):
36 mobj = re.match(self._VALID_URL, url)
37 video_id = mobj.group('id')
38 canonical_url = 'http://tvpot.daum.net/v/%s' % video_id
39 webpage = self._download_webpage(canonical_url, video_id)
40 full_id = self._search_regex(
41 r'<iframe src="http://videofarm.daum.net/controller/video/viewer/Video.html\?.*?vid=(.+?)[&"]',
42 webpage, 'full id')
43 query = compat_urllib_parse.urlencode({'vid': full_id})
44 info = self._download_xml(
45 'http://tvpot.daum.net/clip/ClipInfoXml.do?' + query, video_id,
46 'Downloading video info')
47 urls = self._download_xml(
48 'http://videofarm.daum.net/controller/api/open/v1_2/MovieData.apixml?' + query,
49 video_id, 'Downloading video formats info')
50
51 formats = []
52 for format_el in urls.findall('result/output_list/output_list'):
53 profile = format_el.attrib['profile']
54 format_query = compat_urllib_parse.urlencode({
55 'vid': full_id,
56 'profile': profile,
57 })
58 url_doc = self._download_xml(
59 'http://videofarm.daum.net/controller/api/open/v1_2/MovieLocation.apixml?' + format_query,
60 video_id, note='Downloading video data for %s format' % profile)
61 format_url = url_doc.find('result/url').text
62 formats.append({
63 'url': format_url,
64 'format_id': profile,
65 })
66
67 return {
68 'id': video_id,
69 'title': info.find('TITLE').text,
70 'formats': formats,
71 'thumbnail': self._og_search_thumbnail(webpage),
72 'description': info.find('CONTENTS').text,
73 'duration': int(info.find('DURATION').text),
74 'upload_date': info.find('REGDTTM').text[:8],
75 }
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/youtube_dl/extractor/daum.py b/youtube_dl/extractor/daum.py
--- a/youtube_dl/extractor/daum.py
+++ b/youtube_dl/extractor/daum.py
@@ -38,7 +38,7 @@
canonical_url = 'http://tvpot.daum.net/v/%s' % video_id
webpage = self._download_webpage(canonical_url, video_id)
full_id = self._search_regex(
- r'<iframe src="http://videofarm.daum.net/controller/video/viewer/Video.html\?.*?vid=(.+?)[&"]',
+ r'src=["\']http://videofarm.daum.net/controller/video/viewer/Video.html\?.*?vid=(.+?)[&"\']',
webpage, 'full id')
query = compat_urllib_parse.urlencode({'vid': full_id})
info = self._download_xml(
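For illustration, a quick check of why the patch widens the pattern; the HTML snippets below are hypothetical stand-ins for the tvpot page markup:

```python
import re

OLD = r'<iframe src="http://videofarm.daum.net/controller/video/viewer/Video.html\?.*?vid=(.+?)[&"]'
NEW = r'src=["\']http://videofarm.daum.net/controller/video/viewer/Video.html\?.*?vid=(.+?)[&"\']'

pages = [
    '<iframe src="http://videofarm.daum.net/controller/video/viewer/Video.html?vid=abc123&ref=x">',
    "<div src='http://videofarm.daum.net/controller/video/viewer/Video.html?vid=abc123'>",
]
for page in pages:
    # old pattern: only the double-quoted <iframe> form; new pattern: either
    # quote style on any tag carrying the viewer URL in its src attribute
    print(bool(re.search(OLD, page)), bool(re.search(NEW, page)))
# -> True True / False True
```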
| {"golden_diff": "diff --git a/youtube_dl/extractor/daum.py b/youtube_dl/extractor/daum.py\n--- a/youtube_dl/extractor/daum.py\n+++ b/youtube_dl/extractor/daum.py\n@@ -38,7 +38,7 @@\n canonical_url = 'http://tvpot.daum.net/v/%s' % video_id\n webpage = self._download_webpage(canonical_url, video_id)\n full_id = self._search_regex(\n- r'<iframe src=\"http://videofarm.daum.net/controller/video/viewer/Video.html\\?.*?vid=(.+?)[&\"]',\n+ r'src=[\"\\']http://videofarm.daum.net/controller/video/viewer/Video.html\\?.*?vid=(.+?)[&\"\\']',\n webpage, 'full id')\n query = compat_urllib_parse.urlencode({'vid': full_id})\n info = self._download_xml(\n", "issue": "cannot download Daum tvpot url\ncannot download video from Daum tvpot which is Korean internet portal service\nhttp://tvpot.daum.net/v/vfb45BtrWBWWHKEkOwXEghB\n\n", "before_files": [{"content": "# encoding: utf-8\n\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..compat import (\n compat_urllib_parse,\n)\n\n\nclass DaumIE(InfoExtractor):\n _VALID_URL = r'https?://(?:m\\.)?tvpot\\.daum\\.net/(?:v/|.*?clipid=)(?P<id>[^?#&]+)'\n IE_NAME = 'daum.net'\n\n _TESTS = [{\n 'url': 'http://tvpot.daum.net/clip/ClipView.do?clipid=52554690',\n 'info_dict': {\n 'id': '52554690',\n 'ext': 'mp4',\n 'title': 'DOTA 2GETHER \uc2dc\uc98c2 6\ud68c - 2\ubd80',\n 'description': 'DOTA 2GETHER \uc2dc\uc98c2 6\ud68c - 2\ubd80',\n 'upload_date': '20130831',\n 'duration': 3868,\n },\n }, {\n 'url': 'http://tvpot.daum.net/v/vab4dyeDBysyBssyukBUjBz',\n 'only_matching': True,\n }, {\n 'url': 'http://tvpot.daum.net/v/07dXWRka62Y%24',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n video_id = mobj.group('id')\n canonical_url = 'http://tvpot.daum.net/v/%s' % video_id\n webpage = self._download_webpage(canonical_url, video_id)\n full_id = self._search_regex(\n r'<iframe src=\"http://videofarm.daum.net/controller/video/viewer/Video.html\\?.*?vid=(.+?)[&\"]',\n webpage, 'full id')\n query = compat_urllib_parse.urlencode({'vid': full_id})\n info = self._download_xml(\n 'http://tvpot.daum.net/clip/ClipInfoXml.do?' + query, video_id,\n 'Downloading video info')\n urls = self._download_xml(\n 'http://videofarm.daum.net/controller/api/open/v1_2/MovieData.apixml?' + query,\n video_id, 'Downloading video formats info')\n\n formats = []\n for format_el in urls.findall('result/output_list/output_list'):\n profile = format_el.attrib['profile']\n format_query = compat_urllib_parse.urlencode({\n 'vid': full_id,\n 'profile': profile,\n })\n url_doc = self._download_xml(\n 'http://videofarm.daum.net/controller/api/open/v1_2/MovieLocation.apixml?' 
+ format_query,\n video_id, note='Downloading video data for %s format' % profile)\n format_url = url_doc.find('result/url').text\n formats.append({\n 'url': format_url,\n 'format_id': profile,\n })\n\n return {\n 'id': video_id,\n 'title': info.find('TITLE').text,\n 'formats': formats,\n 'thumbnail': self._og_search_thumbnail(webpage),\n 'description': info.find('CONTENTS').text,\n 'duration': int(info.find('DURATION').text),\n 'upload_date': info.find('REGDTTM').text[:8],\n }\n", "path": "youtube_dl/extractor/daum.py"}], "after_files": [{"content": "# encoding: utf-8\n\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..compat import (\n compat_urllib_parse,\n)\n\n\nclass DaumIE(InfoExtractor):\n _VALID_URL = r'https?://(?:m\\.)?tvpot\\.daum\\.net/(?:v/|.*?clipid=)(?P<id>[^?#&]+)'\n IE_NAME = 'daum.net'\n\n _TESTS = [{\n 'url': 'http://tvpot.daum.net/clip/ClipView.do?clipid=52554690',\n 'info_dict': {\n 'id': '52554690',\n 'ext': 'mp4',\n 'title': 'DOTA 2GETHER \uc2dc\uc98c2 6\ud68c - 2\ubd80',\n 'description': 'DOTA 2GETHER \uc2dc\uc98c2 6\ud68c - 2\ubd80',\n 'upload_date': '20130831',\n 'duration': 3868,\n },\n }, {\n 'url': 'http://tvpot.daum.net/v/vab4dyeDBysyBssyukBUjBz',\n 'only_matching': True,\n }, {\n 'url': 'http://tvpot.daum.net/v/07dXWRka62Y%24',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n video_id = mobj.group('id')\n canonical_url = 'http://tvpot.daum.net/v/%s' % video_id\n webpage = self._download_webpage(canonical_url, video_id)\n full_id = self._search_regex(\n r'src=[\"\\']http://videofarm.daum.net/controller/video/viewer/Video.html\\?.*?vid=(.+?)[&\"\\']',\n webpage, 'full id')\n query = compat_urllib_parse.urlencode({'vid': full_id})\n info = self._download_xml(\n 'http://tvpot.daum.net/clip/ClipInfoXml.do?' + query, video_id,\n 'Downloading video info')\n urls = self._download_xml(\n 'http://videofarm.daum.net/controller/api/open/v1_2/MovieData.apixml?' + query,\n video_id, 'Downloading video formats info')\n\n formats = []\n for format_el in urls.findall('result/output_list/output_list'):\n profile = format_el.attrib['profile']\n format_query = compat_urllib_parse.urlencode({\n 'vid': full_id,\n 'profile': profile,\n })\n url_doc = self._download_xml(\n 'http://videofarm.daum.net/controller/api/open/v1_2/MovieLocation.apixml?' + format_query,\n video_id, note='Downloading video data for %s format' % profile)\n format_url = url_doc.find('result/url').text\n formats.append({\n 'url': format_url,\n 'format_id': profile,\n })\n\n return {\n 'id': video_id,\n 'title': info.find('TITLE').text,\n 'formats': formats,\n 'thumbnail': self._og_search_thumbnail(webpage),\n 'description': info.find('CONTENTS').text,\n 'duration': int(info.find('DURATION').text),\n 'upload_date': info.find('REGDTTM').text[:8],\n }\n", "path": "youtube_dl/extractor/daum.py"}]} | 1,217 | 209 |
gh_patches_debug_28355 | rasdani/github-patches | git_diff | pymeasure__pymeasure-982 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Find USB device from hardware information
The COM port of virtual Com ports via USB may change from time to time.
I propose a simple helper method, which retrieves the COM port of a USB virtual port according to hardware information (vendor id, product id, serial number).
This method allows to specify the hardware information and then to connect always to the correct device, independent of the COM port number assigned at that time.
Rough idea:
~~~python
from serial.tools import list_ports
def findPort(vid: int, pid: int, sn: str) -> str:
"""Find the port name ('COM5') of the first device with the given USB information.
:param int vid: Vendor ID.
:param int pid: Product ID.
:param str sn: Serial number.
:return: Port as a string as `device` attribute returned from `list_ports.comports`.
"""
for port in sorted(list_ports.comports()):
if port.vid == vid and port.pid == pid and port.serial_number == str(sn):
return port.device
raise AttributeError("No device found for the given data.")
~~~
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pymeasure/instruments/__init__.py`
Content:
```
1 #
2 # This file is part of the PyMeasure package.
3 #
4 # Copyright (c) 2013-2023 PyMeasure Developers
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
22 # THE SOFTWARE.
23 #
24
25 from ..errors import RangeError, RangeException
26 from .channel import Channel
27 from .instrument import Instrument
28 from .resources import list_resources
29 from .validators import discreteTruncate
30
31 from . import activetechnologies
32 from . import advantest
33 from . import agilent
34 from . import aja
35 from . import ametek
36 from . import ami
37 from . import anaheimautomation
38 from . import anapico
39 from . import andeenhagerling
40 from . import anritsu
41 from . import attocube
42 from . import bkprecision
43 from . import danfysik
44 from . import deltaelektronika
45 from . import edwards
46 from . import eurotest
47 from . import fluke
48 from . import fwbell
49 from . import hcp
50 from . import heidenhain
51 from . import hp
52 from . import ipgphotonics
53 from . import keithley
54 from . import keysight
55 from . import lakeshore
56 from . import lecroy
57 from . import mksinst
58 from . import newport
59 from . import ni
60 from . import novanta
61 from . import oxfordinstruments
62 from . import parker
63 from . import pendulum
64 from . import razorbill
65 from . import rohdeschwarz
66 from . import siglenttechnologies
67 from . import signalrecovery
68 from . import srs
69 from . import tcpowerconversion
70 from . import tektronix
71 from . import teledyne
72 from . import temptronic
73 from . import texio
74 from . import thermotron
75 from . import thorlabs
76 from . import thyracont
77 from . import toptica
78 from . import velleman
79 from . import yokogawa
80
```
Path: `pymeasure/instruments/resources.py`
Content:
```
1 #
2 # This file is part of the PyMeasure package.
3 #
4 # Copyright (c) 2013-2023 PyMeasure Developers
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
22 # THE SOFTWARE.
23 #
24
25 import pyvisa
26
27
28 def list_resources():
29 """
30 Prints the available resources, and returns a list of VISA resource names
31
32 .. code-block:: python
33
34 resources = list_resources()
35 #prints (e.g.)
36 #0 : GPIB0::22::INSTR : Agilent Technologies,34410A,******
37 #1 : GPIB0::26::INSTR : Keithley Instruments Inc., Model 2612, *****
38 dmm = Agilent34410(resources[0])
39
40 """
41 rm = pyvisa.ResourceManager()
42 instrs = rm.list_resources()
43 for n, instr in enumerate(instrs):
44 # trying to catch errors in comunication
45 try:
46 res = rm.open_resource(instr)
47 # try to avoid errors from *idn?
48 try:
49 # noinspection PyUnresolvedReferences
50 idn = res.query('*idn?')[:-1]
51 except pyvisa.Error:
52 idn = "Not known"
53 finally:
54 res.close()
55 print(n, ":", instr, ":", idn)
56 except pyvisa.VisaIOError as e:
57 print(n, ":", instr, ":", "Visa IO Error: check connections")
58 print(e)
59 rm.close()
60 return instrs
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pymeasure/instruments/__init__.py b/pymeasure/instruments/__init__.py
--- a/pymeasure/instruments/__init__.py
+++ b/pymeasure/instruments/__init__.py
@@ -25,7 +25,7 @@
from ..errors import RangeError, RangeException
from .channel import Channel
from .instrument import Instrument
-from .resources import list_resources
+from .resources import find_serial_port, list_resources
from .validators import discreteTruncate
from . import activetechnologies
diff --git a/pymeasure/instruments/resources.py b/pymeasure/instruments/resources.py
--- a/pymeasure/instruments/resources.py
+++ b/pymeasure/instruments/resources.py
@@ -23,6 +23,7 @@
#
import pyvisa
+from serial.tools import list_ports
def list_resources():
@@ -58,3 +59,28 @@
print(e)
rm.close()
return instrs
+
+
+def find_serial_port(vendor_id=None, product_id=None, serial_number=None):
+ """Find the VISA port name of the first serial device with the given USB information.
+
+ Use `None` as a value if you do not want to check for that parameter.
+
+ .. code-block:: python
+
+ resource_name = find_serial_port(vendor_id=1256, serial_number="SN12345")
+ dmm = Agilent34410(resource_name)
+
+ :param int vid: Vendor ID.
+ :param int pid: Product ID.
+ :param str sn: Serial number.
+ :return str: Port as a VISA string for a serial device (e.g. "ASRL5" or "ASRL/dev/ttyACM5").
+ """
+ for port in sorted(list_ports.comports()):
+ if ((vendor_id is None or port.vid == vendor_id)
+ and (product_id is None or port.pid == product_id)
+ and (serial_number is None or port.serial_number == str(serial_number))):
+ # remove "COM" from windows serial port names.
+ port_name = port.device.replace("COM", "")
+ return "ASRL" + port_name
+ raise AttributeError("No device found for the given data.")
| {"golden_diff": "diff --git a/pymeasure/instruments/__init__.py b/pymeasure/instruments/__init__.py\n--- a/pymeasure/instruments/__init__.py\n+++ b/pymeasure/instruments/__init__.py\n@@ -25,7 +25,7 @@\n from ..errors import RangeError, RangeException\n from .channel import Channel\n from .instrument import Instrument\n-from .resources import list_resources\n+from .resources import find_serial_port, list_resources\n from .validators import discreteTruncate\n \n from . import activetechnologies\ndiff --git a/pymeasure/instruments/resources.py b/pymeasure/instruments/resources.py\n--- a/pymeasure/instruments/resources.py\n+++ b/pymeasure/instruments/resources.py\n@@ -23,6 +23,7 @@\n #\n \n import pyvisa\n+from serial.tools import list_ports\n \n \n def list_resources():\n@@ -58,3 +59,28 @@\n print(e)\n rm.close()\n return instrs\n+\n+\n+def find_serial_port(vendor_id=None, product_id=None, serial_number=None):\n+ \"\"\"Find the VISA port name of the first serial device with the given USB information.\n+\n+ Use `None` as a value if you do not want to check for that parameter.\n+\n+ .. code-block:: python\n+\n+ resource_name = find_serial_port(vendor_id=1256, serial_number=\"SN12345\")\n+ dmm = Agilent34410(resource_name)\n+\n+ :param int vid: Vendor ID.\n+ :param int pid: Product ID.\n+ :param str sn: Serial number.\n+ :return str: Port as a VISA string for a serial device (e.g. \"ASRL5\" or \"ASRL/dev/ttyACM5\").\n+ \"\"\"\n+ for port in sorted(list_ports.comports()):\n+ if ((vendor_id is None or port.vid == vendor_id)\n+ and (product_id is None or port.pid == product_id)\n+ and (serial_number is None or port.serial_number == str(serial_number))):\n+ # remove \"COM\" from windows serial port names.\n+ port_name = port.device.replace(\"COM\", \"\")\n+ return \"ASRL\" + port_name\n+ raise AttributeError(\"No device found for the given data.\")\n", "issue": "Find USB device from hardware information\nThe COM port of virtual Com ports via USB may change from time to time.\r\nI propose a simple helper method, which retrieves the COM port of a USB virtual port according to hardware information (vendor id, product id, serial number).\r\nThis method allows to specify the hardware information and then to connect always to the correct device, independent of the COM port number assigned at that time.\r\n\r\nRough idea:\r\n\r\n~~~python\r\nfrom serial.tools import list_ports\r\n\r\ndef findPort(vid: int, pid: int, sn: str) -> str:\r\n \"\"\"Find the port name ('COM5') of the first device with the given USB information.\r\n\r\n :param int vid: Vendor ID.\r\n :param int pid: Product ID.\r\n :param str sn: Serial number.\r\n :return: Port as a string as `device` attribute returned from `list_ports.comports`.\r\n \"\"\"\r\n for port in sorted(list_ports.comports()):\r\n if port.vid == vid and port.pid == pid and port.serial_number == str(sn):\r\n return port.device\r\n raise AttributeError(\"No device found for the given data.\")\r\n~~~\n", "before_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2023 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following 
conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nfrom ..errors import RangeError, RangeException\nfrom .channel import Channel\nfrom .instrument import Instrument\nfrom .resources import list_resources\nfrom .validators import discreteTruncate\n\nfrom . import activetechnologies\nfrom . import advantest\nfrom . import agilent\nfrom . import aja\nfrom . import ametek\nfrom . import ami\nfrom . import anaheimautomation\nfrom . import anapico\nfrom . import andeenhagerling\nfrom . import anritsu\nfrom . import attocube\nfrom . import bkprecision\nfrom . import danfysik\nfrom . import deltaelektronika\nfrom . import edwards\nfrom . import eurotest\nfrom . import fluke\nfrom . import fwbell\nfrom . import hcp\nfrom . import heidenhain\nfrom . import hp\nfrom . import ipgphotonics\nfrom . import keithley\nfrom . import keysight\nfrom . import lakeshore\nfrom . import lecroy\nfrom . import mksinst\nfrom . import newport\nfrom . import ni\nfrom . import novanta\nfrom . import oxfordinstruments\nfrom . import parker\nfrom . import pendulum\nfrom . import razorbill\nfrom . import rohdeschwarz\nfrom . import siglenttechnologies\nfrom . import signalrecovery\nfrom . import srs\nfrom . import tcpowerconversion\nfrom . import tektronix\nfrom . import teledyne\nfrom . import temptronic\nfrom . import texio\nfrom . import thermotron\nfrom . import thorlabs\nfrom . import thyracont\nfrom . import toptica\nfrom . import velleman\nfrom . import yokogawa\n", "path": "pymeasure/instruments/__init__.py"}, {"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2023 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport pyvisa\n\n\ndef list_resources():\n \"\"\"\n Prints the available resources, and returns a list of VISA resource names\n\n .. 
code-block:: python\n\n resources = list_resources()\n #prints (e.g.)\n #0 : GPIB0::22::INSTR : Agilent Technologies,34410A,******\n #1 : GPIB0::26::INSTR : Keithley Instruments Inc., Model 2612, *****\n dmm = Agilent34410(resources[0])\n\n \"\"\"\n rm = pyvisa.ResourceManager()\n instrs = rm.list_resources()\n for n, instr in enumerate(instrs):\n # trying to catch errors in comunication\n try:\n res = rm.open_resource(instr)\n # try to avoid errors from *idn?\n try:\n # noinspection PyUnresolvedReferences\n idn = res.query('*idn?')[:-1]\n except pyvisa.Error:\n idn = \"Not known\"\n finally:\n res.close()\n print(n, \":\", instr, \":\", idn)\n except pyvisa.VisaIOError as e:\n print(n, \":\", instr, \":\", \"Visa IO Error: check connections\")\n print(e)\n rm.close()\n return instrs\n", "path": "pymeasure/instruments/resources.py"}], "after_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2023 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nfrom ..errors import RangeError, RangeException\nfrom .channel import Channel\nfrom .instrument import Instrument\nfrom .resources import find_serial_port, list_resources\nfrom .validators import discreteTruncate\n\nfrom . import activetechnologies\nfrom . import advantest\nfrom . import agilent\nfrom . import aja\nfrom . import ametek\nfrom . import ami\nfrom . import anaheimautomation\nfrom . import anapico\nfrom . import andeenhagerling\nfrom . import anritsu\nfrom . import attocube\nfrom . import bkprecision\nfrom . import danfysik\nfrom . import deltaelektronika\nfrom . import edwards\nfrom . import eurotest\nfrom . import fluke\nfrom . import fwbell\nfrom . import hcp\nfrom . import heidenhain\nfrom . import hp\nfrom . import ipgphotonics\nfrom . import keithley\nfrom . import keysight\nfrom . import lakeshore\nfrom . import lecroy\nfrom . import mksinst\nfrom . import newport\nfrom . import ni\nfrom . import novanta\nfrom . import oxfordinstruments\nfrom . import parker\nfrom . import pendulum\nfrom . import razorbill\nfrom . import rohdeschwarz\nfrom . import siglenttechnologies\nfrom . import signalrecovery\nfrom . import srs\nfrom . import tcpowerconversion\nfrom . import tektronix\nfrom . import teledyne\nfrom . import temptronic\nfrom . import texio\nfrom . import thermotron\nfrom . import thorlabs\nfrom . import thyracont\nfrom . import toptica\nfrom . import velleman\nfrom . 
import yokogawa\n", "path": "pymeasure/instruments/__init__.py"}, {"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2023 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport pyvisa\nfrom serial.tools import list_ports\n\n\ndef list_resources():\n \"\"\"\n Prints the available resources, and returns a list of VISA resource names\n\n .. code-block:: python\n\n resources = list_resources()\n #prints (e.g.)\n #0 : GPIB0::22::INSTR : Agilent Technologies,34410A,******\n #1 : GPIB0::26::INSTR : Keithley Instruments Inc., Model 2612, *****\n dmm = Agilent34410(resources[0])\n\n \"\"\"\n rm = pyvisa.ResourceManager()\n instrs = rm.list_resources()\n for n, instr in enumerate(instrs):\n # trying to catch errors in comunication\n try:\n res = rm.open_resource(instr)\n # try to avoid errors from *idn?\n try:\n # noinspection PyUnresolvedReferences\n idn = res.query('*idn?')[:-1]\n except pyvisa.Error:\n idn = \"Not known\"\n finally:\n res.close()\n print(n, \":\", instr, \":\", idn)\n except pyvisa.VisaIOError as e:\n print(n, \":\", instr, \":\", \"Visa IO Error: check connections\")\n print(e)\n rm.close()\n return instrs\n\n\ndef find_serial_port(vendor_id=None, product_id=None, serial_number=None):\n \"\"\"Find the VISA port name of the first serial device with the given USB information.\n\n Use `None` as a value if you do not want to check for that parameter.\n\n .. code-block:: python\n\n resource_name = find_serial_port(vendor_id=1256, serial_number=\"SN12345\")\n dmm = Agilent34410(resource_name)\n\n :param int vid: Vendor ID.\n :param int pid: Product ID.\n :param str sn: Serial number.\n :return str: Port as a VISA string for a serial device (e.g. \"ASRL5\" or \"ASRL/dev/ttyACM5\").\n \"\"\"\n for port in sorted(list_ports.comports()):\n if ((vendor_id is None or port.vid == vendor_id)\n and (product_id is None or port.pid == product_id)\n and (serial_number is None or port.serial_number == str(serial_number))):\n # remove \"COM\" from windows serial port names.\n port_name = port.device.replace(\"COM\", \"\")\n return \"ASRL\" + port_name\n raise AttributeError(\"No device found for the given data.\")\n", "path": "pymeasure/instruments/resources.py"}]} | 1,944 | 504 |
gh_patches_debug_7980 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-508 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add bwctest.sh in OpenSearch and one plugin
- [x] OpenSearch Core - https://github.com/opensearch-project/OpenSearch/pull/1150
- [x] Anomaly Detection Plugin - https://github.com/opensearch-project/anomaly-detection/pull/189
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bundle-workflow/src/paths/script_finder.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import os
8
9
10 class ScriptFinder:
11 class ScriptNotFoundError(Exception):
12 def __init__(self, kind, paths):
13 self.kind = kind
14 self.paths = paths
15 super().__init__(f"Could not find {kind} script. Looked in {paths}.")
16
17 component_scripts_path = os.path.realpath(
18 os.path.join(
19 os.path.dirname(os.path.abspath(__file__)), "../../scripts/components"
20 )
21 )
22
23 default_scripts_path = os.path.realpath(
24 os.path.join(
25 os.path.dirname(os.path.abspath(__file__)), "../../scripts/default"
26 )
27 )
28
29 """
30 ScriptFinder is a helper that abstracts away the details of where to look for build, test and install scripts.
31
32 For build.sh and integtest.sh scripts, given a component name and a checked-out Git repository,
33 it will look in the following locations, in order:
34 * Root of the Git repository
35 * /scripts/<script-name> in the Git repository
36 * <component_scripts_path>/<component_name>/<script-name>
37 * <default_scripts_path>/<script-name>
38
39 For install.sh scripts, given a component name, it will look in the following locations, in order:
40 * <component_scripts_path>/<component_name>/<script-name>
41 * <default_scripts_path>/<script-name>
42 """
43
44 @classmethod
45 def __find_script(cls, name, paths):
46 script = next(filter(lambda path: os.path.exists(path), paths), None)
47 if script is None:
48 raise ScriptFinder.ScriptNotFoundError(name, paths)
49 return script
50
51 @classmethod
52 def find_build_script(cls, component_name, git_dir):
53 paths = [
54 os.path.realpath(os.path.join(git_dir, "build.sh")),
55 os.path.realpath(os.path.join(git_dir, "scripts/build.sh")),
56 os.path.realpath(
57 os.path.join(cls.component_scripts_path, component_name, "build.sh")
58 ),
59 os.path.realpath(os.path.join(cls.default_scripts_path, "build.sh")),
60 ]
61
62 return cls.__find_script("build.sh", paths)
63
64 @classmethod
65 def find_integ_test_script(cls, component_name, git_dir):
66 paths = [
67 # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. See issue #497
68 # os.path.realpath(os.path.join(git_dir, "integtest.sh")),
69 # os.path.realpath(os.path.join(git_dir, "scripts/integtest.sh")),
70 os.path.realpath(
71 os.path.join(cls.component_scripts_path, component_name, "integtest.sh")
72 ),
73 os.path.realpath(os.path.join(cls.default_scripts_path, "integtest.sh")),
74 ]
75
76 return cls.__find_script("integtest.sh", paths)
77
78 @classmethod
79 def find_install_script(cls, component_name):
80 paths = [
81 os.path.realpath(
82 os.path.join(cls.component_scripts_path, component_name, "install.sh")
83 ),
84 os.path.realpath(os.path.join(cls.default_scripts_path, "install.sh")),
85 ]
86
87 return cls.__find_script("install.sh", paths)
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bundle-workflow/src/paths/script_finder.py b/bundle-workflow/src/paths/script_finder.py
--- a/bundle-workflow/src/paths/script_finder.py
+++ b/bundle-workflow/src/paths/script_finder.py
@@ -85,3 +85,16 @@
]
return cls.__find_script("install.sh", paths)
+
+ @classmethod
+ def find_bwc_test_script(cls, component_name, git_dir):
+ paths = [
+ os.path.realpath(os.path.join(git_dir, "bwctest.sh")),
+ os.path.realpath(os.path.join(git_dir, "scripts/bwctest.sh")),
+ os.path.realpath(
+ os.path.join(cls.component_scripts_path, component_name, "bwctest.sh")
+ ),
+ os.path.realpath(os.path.join(cls.default_scripts_path, "bwctest.sh")),
+ ]
+
+ return cls.__find_script("bwctest.sh", paths)
| {"golden_diff": "diff --git a/bundle-workflow/src/paths/script_finder.py b/bundle-workflow/src/paths/script_finder.py\n--- a/bundle-workflow/src/paths/script_finder.py\n+++ b/bundle-workflow/src/paths/script_finder.py\n@@ -85,3 +85,16 @@\n ]\n \n return cls.__find_script(\"install.sh\", paths)\n+\n+ @classmethod\n+ def find_bwc_test_script(cls, component_name, git_dir):\n+ paths = [\n+ os.path.realpath(os.path.join(git_dir, \"bwctest.sh\")),\n+ os.path.realpath(os.path.join(git_dir, \"scripts/bwctest.sh\")),\n+ os.path.realpath(\n+ os.path.join(cls.component_scripts_path, component_name, \"bwctest.sh\")\n+ ),\n+ os.path.realpath(os.path.join(cls.default_scripts_path, \"bwctest.sh\")),\n+ ]\n+\n+ return cls.__find_script(\"bwctest.sh\", paths)\n", "issue": "Add bwctest.sh in OpenSearch and one plugin\n- [x] OpenSearch Core - https://github.com/opensearch-project/OpenSearch/pull/1150\r\n- [x] Anomaly Detection Plugin - https://github.com/opensearch-project/anomaly-detection/pull/189\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\n\nclass ScriptFinder:\n class ScriptNotFoundError(Exception):\n def __init__(self, kind, paths):\n self.kind = kind\n self.paths = paths\n super().__init__(f\"Could not find {kind} script. Looked in {paths}.\")\n\n component_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/components\"\n )\n )\n\n default_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/default\"\n )\n )\n\n \"\"\"\n ScriptFinder is a helper that abstracts away the details of where to look for build, test and install scripts.\n\n For build.sh and integtest.sh scripts, given a component name and a checked-out Git repository,\n it will look in the following locations, in order:\n * Root of the Git repository\n * /scripts/<script-name> in the Git repository\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n\n For install.sh scripts, given a component name, it will look in the following locations, in order:\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n \"\"\"\n\n @classmethod\n def __find_script(cls, name, paths):\n script = next(filter(lambda path: os.path.exists(path), paths), None)\n if script is None:\n raise ScriptFinder.ScriptNotFoundError(name, paths)\n return script\n\n @classmethod\n def find_build_script(cls, component_name, git_dir):\n paths = [\n os.path.realpath(os.path.join(git_dir, \"build.sh\")),\n os.path.realpath(os.path.join(git_dir, \"scripts/build.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"build.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"build.sh\")),\n ]\n\n return cls.__find_script(\"build.sh\", paths)\n\n @classmethod\n def find_integ_test_script(cls, component_name, git_dir):\n paths = [\n # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. 
See issue #497\n # os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n # os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"integtest.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"integtest.sh\")),\n ]\n\n return cls.__find_script(\"integtest.sh\", paths)\n\n @classmethod\n def find_install_script(cls, component_name):\n paths = [\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"install.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"install.sh\")),\n ]\n\n return cls.__find_script(\"install.sh\", paths)\n", "path": "bundle-workflow/src/paths/script_finder.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\n\nclass ScriptFinder:\n class ScriptNotFoundError(Exception):\n def __init__(self, kind, paths):\n self.kind = kind\n self.paths = paths\n super().__init__(f\"Could not find {kind} script. Looked in {paths}.\")\n\n component_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/components\"\n )\n )\n\n default_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/default\"\n )\n )\n\n \"\"\"\n ScriptFinder is a helper that abstracts away the details of where to look for build, test and install scripts.\n\n For build.sh and integtest.sh scripts, given a component name and a checked-out Git repository,\n it will look in the following locations, in order:\n * Root of the Git repository\n * /scripts/<script-name> in the Git repository\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n\n For install.sh scripts, given a component name, it will look in the following locations, in order:\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n \"\"\"\n\n @classmethod\n def __find_script(cls, name, paths):\n script = next(filter(lambda path: os.path.exists(path), paths), None)\n if script is None:\n raise ScriptFinder.ScriptNotFoundError(name, paths)\n return script\n\n @classmethod\n def find_build_script(cls, component_name, git_dir):\n paths = [\n os.path.realpath(os.path.join(git_dir, \"build.sh\")),\n os.path.realpath(os.path.join(git_dir, \"scripts/build.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"build.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"build.sh\")),\n ]\n\n return cls.__find_script(\"build.sh\", paths)\n\n @classmethod\n def find_integ_test_script(cls, component_name, git_dir):\n paths = [\n # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. 
See issue #497\n # os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n # os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"integtest.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"integtest.sh\")),\n ]\n\n return cls.__find_script(\"integtest.sh\", paths)\n\n @classmethod\n def find_install_script(cls, component_name):\n paths = [\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"install.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"install.sh\")),\n ]\n\n return cls.__find_script(\"install.sh\", paths)\n\n @classmethod\n def find_bwc_test_script(cls, component_name, git_dir):\n paths = [\n os.path.realpath(os.path.join(git_dir, \"bwctest.sh\")),\n os.path.realpath(os.path.join(git_dir, \"scripts/bwctest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"bwctest.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"bwctest.sh\")),\n ]\n\n return cls.__find_script(\"bwctest.sh\", paths)\n", "path": "bundle-workflow/src/paths/script_finder.py"}]} | 1,200 | 206 |
gh_patches_debug_645 | rasdani/github-patches | git_diff | zulip__zulip-12366 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scrollbar drag can result in unintended click actions
Split off from #11792:
> * on the settings pages, if you click on the scrollbar, drag it down, and then release your click when the mouse is outside the settings modal (e.g. below it or to the right), it closes the settings modal. I don't know if this is an existing thing or a regression, but I ran into it a bunch of times when testing even after knowing the behavior.
This was not a regression from perfect-scrollbar, but I fixed it in Grsmto/simplebar#312 and Grsmto/simplebar#317. Just waiting for the fixes to be included in a new upstream release.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `version.py`
Content:
```
1 ZULIP_VERSION = "2.0.3+git"
2 LATEST_MAJOR_VERSION = "2.0"
3 LATEST_RELEASE_VERSION = "2.0.3"
4 LATEST_RELEASE_ANNOUNCEMENT = "https://blog.zulip.org/2019/03/01/zulip-2-0-released/"
5
6 # Bump the minor PROVISION_VERSION to indicate that folks should provision
7 # only when going from an old version of the code to a newer version. Bump
8 # the major version to indicate that folks should provision in both
9 # directions.
10
11 # Typically, adding a dependency only requires a minor version bump, and
12 # removing a dependency requires a major version bump.
13
14 PROVISION_VERSION = '32.0'
15
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -11,4 +11,4 @@
# Typically, adding a dependency only requires a minor version bump, and
# removing a dependency requires a major version bump.
-PROVISION_VERSION = '32.0'
+PROVISION_VERSION = '32.1'
| {"golden_diff": "diff --git a/version.py b/version.py\n--- a/version.py\n+++ b/version.py\n@@ -11,4 +11,4 @@\n # Typically, adding a dependency only requires a minor version bump, and\n # removing a dependency requires a major version bump.\n \n-PROVISION_VERSION = '32.0'\n+PROVISION_VERSION = '32.1'\n", "issue": "Scrollbar drag can result in unintended click actions\nSplit off from #11792:\r\n\r\n> * on the settings pages, if you click on the scrollbar, drag it down, and then release your click when the mouse is outside the settings modal (e.g. below it or to the right), it closes the settings modal. I don't know if this is an existing thing or a regression, but I ran into it a bunch of times when testing even after knowing the behavior.\r\n\r\nThis was not a regression from perfect-scrollbar, but I fixed it in Grsmto/simplebar#312 and Grsmto/simplebar#317. Just waiting for the fixes to be included in a new upstream release.\n", "before_files": [{"content": "ZULIP_VERSION = \"2.0.3+git\"\nLATEST_MAJOR_VERSION = \"2.0\"\nLATEST_RELEASE_VERSION = \"2.0.3\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.org/2019/03/01/zulip-2-0-released/\"\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically, adding a dependency only requires a minor version bump, and\n# removing a dependency requires a major version bump.\n\nPROVISION_VERSION = '32.0'\n", "path": "version.py"}], "after_files": [{"content": "ZULIP_VERSION = \"2.0.3+git\"\nLATEST_MAJOR_VERSION = \"2.0\"\nLATEST_RELEASE_VERSION = \"2.0.3\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.org/2019/03/01/zulip-2-0-released/\"\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically, adding a dependency only requires a minor version bump, and\n# removing a dependency requires a major version bump.\n\nPROVISION_VERSION = '32.1'\n", "path": "version.py"}]} | 584 | 80 |
gh_patches_debug_13680 | rasdani/github-patches | git_diff | sanic-org__sanic-2919 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
(websockets 12.0) DeprecationWarning: websockets.connection was renamed to websockets.protocol and Connection was renamed to Protocol
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
The DeprecationWarning is thrown here:
https://github.com/sanic-org/sanic/blob/82bf46bea3b55e6b1371107cccda280add8e70eb/sanic/server/protocols/websocket_protocol.py#L4-L9
With websockets 12 the try block would run successfully with the warning while the catch block does not have chance to be run.
### Code snippet
_No response_
### Expected Behavior
The catch block is being run instead.
### How do you run Sanic?
Sanic CLI
### Operating System
MacOS
### Sanic Version
Sanic 23.6.0; Routing 23.6.0
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sanic/server/protocols/websocket_protocol.py`
Content:
```
1 from typing import Optional, Sequence, cast
2
3
4 try: # websockets < 11.0
5 from websockets.connection import State
6 from websockets.server import ServerConnection as ServerProtocol
7 except ImportError: # websockets >= 11.0
8 from websockets.protocol import State # type: ignore
9 from websockets.server import ServerProtocol # type: ignore
10
11 from websockets import http11
12 from websockets.datastructures import Headers as WSHeaders
13 from websockets.typing import Subprotocol
14
15 from sanic.exceptions import SanicException
16 from sanic.log import logger
17 from sanic.request import Request
18 from sanic.server import HttpProtocol
19
20 from ..websockets.impl import WebsocketImplProtocol
21
22
23 OPEN = State.OPEN
24 CLOSING = State.CLOSING
25 CLOSED = State.CLOSED
26
27
28 class WebSocketProtocol(HttpProtocol):
29 __slots__ = (
30 "websocket",
31 "websocket_timeout",
32 "websocket_max_size",
33 "websocket_ping_interval",
34 "websocket_ping_timeout",
35 )
36
37 def __init__(
38 self,
39 *args,
40 websocket_timeout: float = 10.0,
41 websocket_max_size: Optional[int] = None,
42 websocket_ping_interval: Optional[float] = 20.0,
43 websocket_ping_timeout: Optional[float] = 20.0,
44 **kwargs,
45 ):
46 super().__init__(*args, **kwargs)
47 self.websocket: Optional[WebsocketImplProtocol] = None
48 self.websocket_timeout = websocket_timeout
49 self.websocket_max_size = websocket_max_size
50 self.websocket_ping_interval = websocket_ping_interval
51 self.websocket_ping_timeout = websocket_ping_timeout
52
53 def connection_lost(self, exc):
54 if self.websocket is not None:
55 self.websocket.connection_lost(exc)
56 super().connection_lost(exc)
57
58 def data_received(self, data):
59 if self.websocket is not None:
60 self.websocket.data_received(data)
61 else:
62 # Pass it to HttpProtocol handler first
63 # That will (hopefully) upgrade it to a websocket.
64 super().data_received(data)
65
66 def eof_received(self) -> Optional[bool]:
67 if self.websocket is not None:
68 return self.websocket.eof_received()
69 else:
70 return False
71
72 def close(self, timeout: Optional[float] = None):
73 # Called by HttpProtocol at the end of connection_task
74 # If we've upgraded to websocket, we do our own closing
75 if self.websocket is not None:
76 # Note, we don't want to use websocket.close()
77 # That is used for user's application code to send a
78 # websocket close packet. This is different.
79 self.websocket.end_connection(1001)
80 else:
81 super().close()
82
83 def close_if_idle(self):
84 # Called by Sanic Server when shutting down
85 # If we've upgraded to websocket, shut it down
86 if self.websocket is not None:
87 if self.websocket.ws_proto.state in (CLOSING, CLOSED):
88 return True
89 elif self.websocket.loop is not None:
90 self.websocket.loop.create_task(self.websocket.close(1001))
91 else:
92 self.websocket.end_connection(1001)
93 else:
94 return super().close_if_idle()
95
96 @staticmethod
97 def sanic_request_to_ws_request(request: Request):
98 return http11.Request(
99 path=request.path,
100 headers=WSHeaders(request.headers),
101 )
102
103 async def websocket_handshake(
104 self, request, subprotocols: Optional[Sequence[str]] = None
105 ):
106 # let the websockets package do the handshake with the client
107 try:
108 if subprotocols is not None:
109 # subprotocols can be a set or frozenset,
110 # but ServerProtocol needs a list
111 subprotocols = cast(
112 Optional[Sequence[Subprotocol]],
113 list(
114 [
115 Subprotocol(subprotocol)
116 for subprotocol in subprotocols
117 ]
118 ),
119 )
120 ws_proto = ServerProtocol(
121 max_size=self.websocket_max_size,
122 subprotocols=subprotocols,
123 state=OPEN,
124 logger=logger,
125 )
126 resp = ws_proto.accept(self.sanic_request_to_ws_request(request))
127 except Exception:
128 msg = (
129 "Failed to open a WebSocket connection.\n"
130 "See server log for more information.\n"
131 )
132 raise SanicException(msg, status_code=500)
133 if 100 <= resp.status_code <= 299:
134 first_line = (
135 f"HTTP/1.1 {resp.status_code} {resp.reason_phrase}\r\n"
136 ).encode()
137 rbody = bytearray(first_line)
138 rbody += (
139 "".join([f"{k}: {v}\r\n" for k, v in resp.headers.items()])
140 ).encode()
141 rbody += b"\r\n"
142 if resp.body is not None:
143 rbody += resp.body
144 rbody += b"\r\n\r\n"
145 await super().send(rbody)
146 else:
147 raise SanicException(resp.body, resp.status_code)
148 self.websocket = WebsocketImplProtocol(
149 ws_proto,
150 ping_interval=self.websocket_ping_interval,
151 ping_timeout=self.websocket_ping_timeout,
152 close_timeout=self.websocket_timeout,
153 )
154 loop = (
155 request.transport.loop
156 if hasattr(request, "transport")
157 and hasattr(request.transport, "loop")
158 else None
159 )
160 await self.websocket.connection_made(self, loop=loop)
161 return self.websocket
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sanic/server/protocols/websocket_protocol.py b/sanic/server/protocols/websocket_protocol.py
--- a/sanic/server/protocols/websocket_protocol.py
+++ b/sanic/server/protocols/websocket_protocol.py
@@ -1,12 +1,12 @@
from typing import Optional, Sequence, cast
-try: # websockets < 11.0
- from websockets.connection import State
- from websockets.server import ServerConnection as ServerProtocol
-except ImportError: # websockets >= 11.0
+try: # websockets >= 11.0
from websockets.protocol import State # type: ignore
from websockets.server import ServerProtocol # type: ignore
+except ImportError: # websockets < 11.0
+ from websockets.connection import State
+ from websockets.server import ServerConnection as ServerProtocol
from websockets import http11
from websockets.datastructures import Headers as WSHeaders
| {"golden_diff": "diff --git a/sanic/server/protocols/websocket_protocol.py b/sanic/server/protocols/websocket_protocol.py\n--- a/sanic/server/protocols/websocket_protocol.py\n+++ b/sanic/server/protocols/websocket_protocol.py\n@@ -1,12 +1,12 @@\n from typing import Optional, Sequence, cast\n \n \n-try: # websockets < 11.0\n- from websockets.connection import State\n- from websockets.server import ServerConnection as ServerProtocol\n-except ImportError: # websockets >= 11.0\n+try: # websockets >= 11.0\n from websockets.protocol import State # type: ignore\n from websockets.server import ServerProtocol # type: ignore\n+except ImportError: # websockets < 11.0\n+ from websockets.connection import State\n+ from websockets.server import ServerConnection as ServerProtocol\n \n from websockets import http11\n from websockets.datastructures import Headers as WSHeaders\n", "issue": "(websockets 12.0) DeprecationWarning: websockets.connection was renamed to websockets.protocol and Connection was renamed to Protocol\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Describe the bug\n\nThe DeprecationWarning is thrown here:\r\n\r\nhttps://github.com/sanic-org/sanic/blob/82bf46bea3b55e6b1371107cccda280add8e70eb/sanic/server/protocols/websocket_protocol.py#L4-L9\r\n\r\nWith websockets 12 the try block would run successfully with the warning while the catch block does not have chance to be run.\n\n### Code snippet\n\n_No response_\n\n### Expected Behavior\n\nThe catch block is being run instead.\n\n### How do you run Sanic?\n\nSanic CLI\n\n### Operating System\n\nMacOS\n\n### Sanic Version\n\nSanic 23.6.0; Routing 23.6.0\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "from typing import Optional, Sequence, cast\n\n\ntry: # websockets < 11.0\n from websockets.connection import State\n from websockets.server import ServerConnection as ServerProtocol\nexcept ImportError: # websockets >= 11.0\n from websockets.protocol import State # type: ignore\n from websockets.server import ServerProtocol # type: ignore\n\nfrom websockets import http11\nfrom websockets.datastructures import Headers as WSHeaders\nfrom websockets.typing import Subprotocol\n\nfrom sanic.exceptions import SanicException\nfrom sanic.log import logger\nfrom sanic.request import Request\nfrom sanic.server import HttpProtocol\n\nfrom ..websockets.impl import WebsocketImplProtocol\n\n\nOPEN = State.OPEN\nCLOSING = State.CLOSING\nCLOSED = State.CLOSED\n\n\nclass WebSocketProtocol(HttpProtocol):\n __slots__ = (\n \"websocket\",\n \"websocket_timeout\",\n \"websocket_max_size\",\n \"websocket_ping_interval\",\n \"websocket_ping_timeout\",\n )\n\n def __init__(\n self,\n *args,\n websocket_timeout: float = 10.0,\n websocket_max_size: Optional[int] = None,\n websocket_ping_interval: Optional[float] = 20.0,\n websocket_ping_timeout: Optional[float] = 20.0,\n **kwargs,\n ):\n super().__init__(*args, **kwargs)\n self.websocket: Optional[WebsocketImplProtocol] = None\n self.websocket_timeout = websocket_timeout\n self.websocket_max_size = websocket_max_size\n self.websocket_ping_interval = websocket_ping_interval\n self.websocket_ping_timeout = websocket_ping_timeout\n\n def connection_lost(self, exc):\n if self.websocket is not None:\n self.websocket.connection_lost(exc)\n super().connection_lost(exc)\n\n def data_received(self, data):\n if self.websocket is not None:\n self.websocket.data_received(data)\n else:\n # Pass it to HttpProtocol handler first\n # That will (hopefully) 
upgrade it to a websocket.\n super().data_received(data)\n\n def eof_received(self) -> Optional[bool]:\n if self.websocket is not None:\n return self.websocket.eof_received()\n else:\n return False\n\n def close(self, timeout: Optional[float] = None):\n # Called by HttpProtocol at the end of connection_task\n # If we've upgraded to websocket, we do our own closing\n if self.websocket is not None:\n # Note, we don't want to use websocket.close()\n # That is used for user's application code to send a\n # websocket close packet. This is different.\n self.websocket.end_connection(1001)\n else:\n super().close()\n\n def close_if_idle(self):\n # Called by Sanic Server when shutting down\n # If we've upgraded to websocket, shut it down\n if self.websocket is not None:\n if self.websocket.ws_proto.state in (CLOSING, CLOSED):\n return True\n elif self.websocket.loop is not None:\n self.websocket.loop.create_task(self.websocket.close(1001))\n else:\n self.websocket.end_connection(1001)\n else:\n return super().close_if_idle()\n\n @staticmethod\n def sanic_request_to_ws_request(request: Request):\n return http11.Request(\n path=request.path,\n headers=WSHeaders(request.headers),\n )\n\n async def websocket_handshake(\n self, request, subprotocols: Optional[Sequence[str]] = None\n ):\n # let the websockets package do the handshake with the client\n try:\n if subprotocols is not None:\n # subprotocols can be a set or frozenset,\n # but ServerProtocol needs a list\n subprotocols = cast(\n Optional[Sequence[Subprotocol]],\n list(\n [\n Subprotocol(subprotocol)\n for subprotocol in subprotocols\n ]\n ),\n )\n ws_proto = ServerProtocol(\n max_size=self.websocket_max_size,\n subprotocols=subprotocols,\n state=OPEN,\n logger=logger,\n )\n resp = ws_proto.accept(self.sanic_request_to_ws_request(request))\n except Exception:\n msg = (\n \"Failed to open a WebSocket connection.\\n\"\n \"See server log for more information.\\n\"\n )\n raise SanicException(msg, status_code=500)\n if 100 <= resp.status_code <= 299:\n first_line = (\n f\"HTTP/1.1 {resp.status_code} {resp.reason_phrase}\\r\\n\"\n ).encode()\n rbody = bytearray(first_line)\n rbody += (\n \"\".join([f\"{k}: {v}\\r\\n\" for k, v in resp.headers.items()])\n ).encode()\n rbody += b\"\\r\\n\"\n if resp.body is not None:\n rbody += resp.body\n rbody += b\"\\r\\n\\r\\n\"\n await super().send(rbody)\n else:\n raise SanicException(resp.body, resp.status_code)\n self.websocket = WebsocketImplProtocol(\n ws_proto,\n ping_interval=self.websocket_ping_interval,\n ping_timeout=self.websocket_ping_timeout,\n close_timeout=self.websocket_timeout,\n )\n loop = (\n request.transport.loop\n if hasattr(request, \"transport\")\n and hasattr(request.transport, \"loop\")\n else None\n )\n await self.websocket.connection_made(self, loop=loop)\n return self.websocket\n", "path": "sanic/server/protocols/websocket_protocol.py"}], "after_files": [{"content": "from typing import Optional, Sequence, cast\n\n\ntry: # websockets >= 11.0\n from websockets.protocol import State # type: ignore\n from websockets.server import ServerProtocol # type: ignore\nexcept ImportError: # websockets < 11.0\n from websockets.connection import State\n from websockets.server import ServerConnection as ServerProtocol\n\nfrom websockets import http11\nfrom websockets.datastructures import Headers as WSHeaders\nfrom websockets.typing import Subprotocol\n\nfrom sanic.exceptions import SanicException\nfrom sanic.log import logger\nfrom sanic.request import Request\nfrom sanic.server import 
HttpProtocol\n\nfrom ..websockets.impl import WebsocketImplProtocol\n\n\nOPEN = State.OPEN\nCLOSING = State.CLOSING\nCLOSED = State.CLOSED\n\n\nclass WebSocketProtocol(HttpProtocol):\n __slots__ = (\n \"websocket\",\n \"websocket_timeout\",\n \"websocket_max_size\",\n \"websocket_ping_interval\",\n \"websocket_ping_timeout\",\n )\n\n def __init__(\n self,\n *args,\n websocket_timeout: float = 10.0,\n websocket_max_size: Optional[int] = None,\n websocket_ping_interval: Optional[float] = 20.0,\n websocket_ping_timeout: Optional[float] = 20.0,\n **kwargs,\n ):\n super().__init__(*args, **kwargs)\n self.websocket: Optional[WebsocketImplProtocol] = None\n self.websocket_timeout = websocket_timeout\n self.websocket_max_size = websocket_max_size\n self.websocket_ping_interval = websocket_ping_interval\n self.websocket_ping_timeout = websocket_ping_timeout\n\n def connection_lost(self, exc):\n if self.websocket is not None:\n self.websocket.connection_lost(exc)\n super().connection_lost(exc)\n\n def data_received(self, data):\n if self.websocket is not None:\n self.websocket.data_received(data)\n else:\n # Pass it to HttpProtocol handler first\n # That will (hopefully) upgrade it to a websocket.\n super().data_received(data)\n\n def eof_received(self) -> Optional[bool]:\n if self.websocket is not None:\n return self.websocket.eof_received()\n else:\n return False\n\n def close(self, timeout: Optional[float] = None):\n # Called by HttpProtocol at the end of connection_task\n # If we've upgraded to websocket, we do our own closing\n if self.websocket is not None:\n # Note, we don't want to use websocket.close()\n # That is used for user's application code to send a\n # websocket close packet. This is different.\n self.websocket.end_connection(1001)\n else:\n super().close()\n\n def close_if_idle(self):\n # Called by Sanic Server when shutting down\n # If we've upgraded to websocket, shut it down\n if self.websocket is not None:\n if self.websocket.ws_proto.state in (CLOSING, CLOSED):\n return True\n elif self.websocket.loop is not None:\n self.websocket.loop.create_task(self.websocket.close(1001))\n else:\n self.websocket.end_connection(1001)\n else:\n return super().close_if_idle()\n\n @staticmethod\n def sanic_request_to_ws_request(request: Request):\n return http11.Request(\n path=request.path,\n headers=WSHeaders(request.headers),\n )\n\n async def websocket_handshake(\n self, request, subprotocols: Optional[Sequence[str]] = None\n ):\n # let the websockets package do the handshake with the client\n try:\n if subprotocols is not None:\n # subprotocols can be a set or frozenset,\n # but ServerProtocol needs a list\n subprotocols = cast(\n Optional[Sequence[Subprotocol]],\n list(\n [\n Subprotocol(subprotocol)\n for subprotocol in subprotocols\n ]\n ),\n )\n ws_proto = ServerProtocol(\n max_size=self.websocket_max_size,\n subprotocols=subprotocols,\n state=OPEN,\n logger=logger,\n )\n resp = ws_proto.accept(self.sanic_request_to_ws_request(request))\n except Exception:\n msg = (\n \"Failed to open a WebSocket connection.\\n\"\n \"See server log for more information.\\n\"\n )\n raise SanicException(msg, status_code=500)\n if 100 <= resp.status_code <= 299:\n first_line = (\n f\"HTTP/1.1 {resp.status_code} {resp.reason_phrase}\\r\\n\"\n ).encode()\n rbody = bytearray(first_line)\n rbody += (\n \"\".join([f\"{k}: {v}\\r\\n\" for k, v in resp.headers.items()])\n ).encode()\n rbody += b\"\\r\\n\"\n if resp.body is not None:\n rbody += resp.body\n rbody += b\"\\r\\n\\r\\n\"\n await super().send(rbody)\n 
else:\n raise SanicException(resp.body, resp.status_code)\n self.websocket = WebsocketImplProtocol(\n ws_proto,\n ping_interval=self.websocket_ping_interval,\n ping_timeout=self.websocket_ping_timeout,\n close_timeout=self.websocket_timeout,\n )\n loop = (\n request.transport.loop\n if hasattr(request, \"transport\")\n and hasattr(request.transport, \"loop\")\n else None\n )\n await self.websocket.connection_made(self, loop=loop)\n return self.websocket\n", "path": "sanic/server/protocols/websocket_protocol.py"}]} | 2,029 | 218 |
gh_patches_debug_10296 | rasdani/github-patches | git_diff | getredash__redash-3634 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Queries search is not working on release/7.0.x
### Issue Summary
I tried building a Docker image with the Redash Dockerfile and starting it with docker-compose, but the queries search is not working.
### Steps to Reproduce

Check out the queries table and find that search_vector is null.

### Technical details:
* Redash Version: release/7.0.x
* Browser/OS: Chrome Mac
* How did you install Redash: redash setup
--- END ISSUE ---
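As background for the fix, note that sqlalchemy-searchable registers its `search_vector` trigger DDL in mapper-configuration hooks, so those hooks must fire before the tables are created. A minimal sketch of the required ordering (assuming the models use sqlalchemy-searchable, as Redash's do):
```python
import sqlalchemy

from redash.models import db

# Without configure_mappers(), create_all() emits the tables but never the
# search triggers, leaving search_vector permanently NULL.
sqlalchemy.orm.configure_mappers()
db.create_all()
```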
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/cli/database.py`
Content:
```
1 import time
2
3 from flask.cli import AppGroup
4 from flask_migrate import stamp
5 from sqlalchemy.exc import DatabaseError
6
7 manager = AppGroup(help="Manage the database (create/drop tables).")
8
9
10 def _wait_for_db_connection(db):
11 retried = False
12 while not retried:
13 try:
14 db.engine.execute('SELECT 1;')
15 return
16 except DatabaseError:
17 time.sleep(30)
18
19 retried = True
20
21
22 @manager.command()
23 def create_tables():
24 """Create the database tables."""
25 from redash.models import db
26
27 _wait_for_db_connection(db)
28 db.create_all()
29
30 # Need to mark current DB as up to date
31 stamp()
32
33
34 @manager.command()
35 def drop_tables():
36 """Drop the database tables."""
37 from redash.models import db
38
39 _wait_for_db_connection(db)
40 db.drop_all()
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redash/cli/database.py b/redash/cli/database.py
--- a/redash/cli/database.py
+++ b/redash/cli/database.py
@@ -1,5 +1,6 @@
import time
+import sqlalchemy
from flask.cli import AppGroup
from flask_migrate import stamp
from sqlalchemy.exc import DatabaseError
@@ -25,6 +26,8 @@
from redash.models import db
_wait_for_db_connection(db)
+ # To create triggers for searchable models, we need to call configure_mappers().
+ sqlalchemy.orm.configure_mappers()
db.create_all()
# Need to mark current DB as up to date
| {"golden_diff": "diff --git a/redash/cli/database.py b/redash/cli/database.py\n--- a/redash/cli/database.py\n+++ b/redash/cli/database.py\n@@ -1,5 +1,6 @@\n import time\n \n+import sqlalchemy\n from flask.cli import AppGroup\n from flask_migrate import stamp\n from sqlalchemy.exc import DatabaseError\n@@ -25,6 +26,8 @@\n from redash.models import db\n \n _wait_for_db_connection(db)\n+ # To create triggers for searchable models, we need to call configure_mappers().\n+ sqlalchemy.orm.configure_mappers()\n db.create_all()\n \n # Need to mark current DB as up to date\n", "issue": "Queries search is not working on release/7.0.x\n### Issue Summary\r\n\r\nI try building docker image with the redash dockerfile and start by docker-compose. But queries search is not working\r\n\r\n### Steps to Reproduce\r\n\r\n\r\n\r\nCheck out the queries table and find that search_vector is null.\r\n\r\n\r\n### Technical details:\r\n\r\n* Redash Version: release/7.0.x\r\n* Browser/OS: Chrome Mac\r\n* How did you install Redash: redash setup\r\n\n", "before_files": [{"content": "import time\n\nfrom flask.cli import AppGroup\nfrom flask_migrate import stamp\nfrom sqlalchemy.exc import DatabaseError\n\nmanager = AppGroup(help=\"Manage the database (create/drop tables).\")\n\n\ndef _wait_for_db_connection(db):\n retried = False\n while not retried:\n try:\n db.engine.execute('SELECT 1;')\n return\n except DatabaseError:\n time.sleep(30)\n\n retried = True\n\n\[email protected]()\ndef create_tables():\n \"\"\"Create the database tables.\"\"\"\n from redash.models import db\n\n _wait_for_db_connection(db)\n db.create_all()\n\n # Need to mark current DB as up to date\n stamp()\n\n\[email protected]()\ndef drop_tables():\n \"\"\"Drop the database tables.\"\"\"\n from redash.models import db\n\n _wait_for_db_connection(db)\n db.drop_all()\n", "path": "redash/cli/database.py"}], "after_files": [{"content": "import time\n\nimport sqlalchemy\nfrom flask.cli import AppGroup\nfrom flask_migrate import stamp\nfrom sqlalchemy.exc import DatabaseError\n\nmanager = AppGroup(help=\"Manage the database (create/drop tables).\")\n\n\ndef _wait_for_db_connection(db):\n retried = False\n while not retried:\n try:\n db.engine.execute('SELECT 1;')\n return\n except DatabaseError:\n time.sleep(30)\n\n retried = True\n\n\[email protected]()\ndef create_tables():\n \"\"\"Create the database tables.\"\"\"\n from redash.models import db\n\n _wait_for_db_connection(db)\n # To create triggers for searchable models, we need to call configure_mappers().\n sqlalchemy.orm.configure_mappers()\n db.create_all()\n\n # Need to mark current DB as up to date\n stamp()\n\n\[email protected]()\ndef drop_tables():\n \"\"\"Drop the database tables.\"\"\"\n from redash.models import db\n\n _wait_for_db_connection(db)\n db.drop_all()\n", "path": "redash/cli/database.py"}]} | 749 | 142 |
gh_patches_debug_27768 | rasdani/github-patches | git_diff | svthalia__concrexit-2021 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Registration message in calendar wrong
### Describe the bug
In the calendar, when you have to register for an event, the block says: "You can register on ......."
However, this seems to be the date when the event will take place (or the registration deadline; I can't tell for sure).
See screenshots for an example.
### How to reproduce
Steps to reproduce the behaviour:
1. Go to the calendar and look for an event with a registration.
2. Look at the message in the calendar view
3. Compare with the registration date in the event itself.
### Expected behaviour
The message in the calendar view matches the date when registrations open.
### Screenshots


--- END ISSUE ---
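A short sketch of the suspected mix-up, using the field names from the serializer shown below (`instance` stands in for some Event with registration enabled):
```python
from datetime import timedelta

from django.utils import timezone

now = timezone.now()
# Reported behaviour: the "You can register ..." message is formatted from
# registration_end (the deadline), not registration_start (the opening time).
buggy_message = f"You can register on {instance.registration_end:%d %B}"
# Expected: base the message on when registrations actually open.
fixed_message = f"Registrations open on {instance.registration_start:%d %B}"
```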
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/events/api/calendarjs/serializers.py`
Content:
```
1 from datetime import timedelta
2
3 from django.contrib.humanize.templatetags.humanize import naturaltime
4 from django.template.defaultfilters import date
5 from django.utils import timezone
6 from django.utils.translation import gettext_lazy as _
7 from rest_framework.reverse import reverse
8
9 from events import services
10 from events.models import Event
11 from thaliawebsite.api.calendarjs.serializers import CalenderJSSerializer
12
13
14 class EventsCalenderJSSerializer(CalenderJSSerializer):
15 class Meta(CalenderJSSerializer.Meta):
16 model = Event
17
18 def _url(self, instance):
19 return reverse("events:event", kwargs={"pk": instance.id})
20
21 def _class_names(self, instance):
22 if self.context["member"] and services.is_user_registered(
23 self.context["member"], instance
24 ):
25 if services.user_registration_pending(self.context["member"], instance):
26 return ["regular-event-pending-registration"]
27 else:
28 return ["regular-event-has-registration"]
29 elif (not instance.registration_required) or instance.registration_allowed:
30 return ["regular-event-registration-open"]
31 else:
32 # I think this handles the case that registration is needed, but not yet possible
33 return ["regular-event-registration-closed"]
34
35 def _registration_info(self, instance: Event):
36 # If registered in some way
37 if self.context["member"] and services.is_user_registered(
38 self.context["member"], instance
39 ):
40 queue_pos = services.user_registration_pending(
41 self.context["member"], instance
42 )
43 # In waiting list
44 if type(queue_pos) is int:
45 return _("In waiting list at position {queue_pos}").format(
46 queue_pos=queue_pos
47 )
48 # Actually registered
49 else:
50 return _("You are registered for this event")
51 # Optional registration possible
52 elif instance.optional_registration_allowed:
53 return _("You can optionally register for this event")
54 # No places left
55 elif instance.reached_participants_limit():
56 return _("You can put yourself on the waiting list for this event")
57 # Registration still possible
58 elif instance.registration_allowed:
59 return _("You can register for this event")
60 # Not registration time yet
61 elif instance.registration_end:
62 now = timezone.now()
63 if instance.registration_end < now:
64 return _("Registrations have been closed")
65 elif instance.registration_end <= now + timedelta(days=2):
66 return _("You can register {at_time}").format(
67 at_time=naturaltime(instance.registration_end)
68 )
69 else:
70 return _("You can register on {date}").format(
71 date=date(instance.registration_end)
72 )
73
74
75 class UnpublishedEventsCalenderJSSerializer(CalenderJSSerializer):
76 """See CalenderJSSerializer, customised classes."""
77
78 class Meta(CalenderJSSerializer.Meta):
79 model = Event
80
81 def _class_names(self, instance):
82 return ["unpublished-event"]
83
84 def _url(self, instance):
85 return reverse("admin:events_event_details", kwargs={"pk": instance.id})
86
87 def _registration_info(self, instance):
88 return "Unpublished event"
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/events/api/calendarjs/serializers.py b/website/events/api/calendarjs/serializers.py
--- a/website/events/api/calendarjs/serializers.py
+++ b/website/events/api/calendarjs/serializers.py
@@ -50,7 +50,7 @@
return _("You are registered for this event")
# Optional registration possible
elif instance.optional_registration_allowed:
- return _("You can optionally register for this event")
+ return _("Registering for this event is optional")
# No places left
elif instance.reached_participants_limit():
return _("You can put yourself on the waiting list for this event")
@@ -62,13 +62,13 @@
now = timezone.now()
if instance.registration_end < now:
return _("Registrations have been closed")
- elif instance.registration_end <= now + timedelta(days=2):
- return _("You can register {at_time}").format(
- at_time=naturaltime(instance.registration_end)
+ elif instance.registration_start <= now + timedelta(days=2):
+ return _("Registrations open {at_time}").format(
+ at_time=naturaltime(instance.registration_start)
)
else:
- return _("You can register on {date}").format(
- date=date(instance.registration_end)
+ return _("Registrations open {date}").format(
+ date=date(instance.registration_start)
)
| {"golden_diff": "diff --git a/website/events/api/calendarjs/serializers.py b/website/events/api/calendarjs/serializers.py\n--- a/website/events/api/calendarjs/serializers.py\n+++ b/website/events/api/calendarjs/serializers.py\n@@ -50,7 +50,7 @@\n return _(\"You are registered for this event\")\n # Optional registration possible\n elif instance.optional_registration_allowed:\n- return _(\"You can optionally register for this event\")\n+ return _(\"Registering for this event is optional\")\n # No places left\n elif instance.reached_participants_limit():\n return _(\"You can put yourself on the waiting list for this event\")\n@@ -62,13 +62,13 @@\n now = timezone.now()\n if instance.registration_end < now:\n return _(\"Registrations have been closed\")\n- elif instance.registration_end <= now + timedelta(days=2):\n- return _(\"You can register {at_time}\").format(\n- at_time=naturaltime(instance.registration_end)\n+ elif instance.registration_start <= now + timedelta(days=2):\n+ return _(\"Registrations open {at_time}\").format(\n+ at_time=naturaltime(instance.registration_start)\n )\n else:\n- return _(\"You can register on {date}\").format(\n- date=date(instance.registration_end)\n+ return _(\"Registrations open {date}\").format(\n+ date=date(instance.registration_start)\n )\n", "issue": "Registration message in calendar wrong\n### Describe the bug\r\nIn the calendar, when you have to register for an event, it says in the block; You can register on .......\r\nHowever, this seems to be the date of when the event will take place (or the registration deadline, can't tell for sure).\r\nSee screenshots for an example.\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to the calendar and look for an event with a registration.\r\n2. Look at the message in the calendar view\r\n3. 
Compare with the registration date in the event itself.\r\n\r\n### Expected behaviour\r\nThe message in the calendar view matches the date of when the registrations will open.\r\n\r\n### Screenshots\r\n\r\n\r\n\n", "before_files": [{"content": "from datetime import timedelta\n\nfrom django.contrib.humanize.templatetags.humanize import naturaltime\nfrom django.template.defaultfilters import date\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\nfrom rest_framework.reverse import reverse\n\nfrom events import services\nfrom events.models import Event\nfrom thaliawebsite.api.calendarjs.serializers import CalenderJSSerializer\n\n\nclass EventsCalenderJSSerializer(CalenderJSSerializer):\n class Meta(CalenderJSSerializer.Meta):\n model = Event\n\n def _url(self, instance):\n return reverse(\"events:event\", kwargs={\"pk\": instance.id})\n\n def _class_names(self, instance):\n if self.context[\"member\"] and services.is_user_registered(\n self.context[\"member\"], instance\n ):\n if services.user_registration_pending(self.context[\"member\"], instance):\n return [\"regular-event-pending-registration\"]\n else:\n return [\"regular-event-has-registration\"]\n elif (not instance.registration_required) or instance.registration_allowed:\n return [\"regular-event-registration-open\"]\n else:\n # I think this handles the case that registration is needed, but not yet possible\n return [\"regular-event-registration-closed\"]\n\n def _registration_info(self, instance: Event):\n # If registered in some way\n if self.context[\"member\"] and services.is_user_registered(\n self.context[\"member\"], instance\n ):\n queue_pos = services.user_registration_pending(\n self.context[\"member\"], instance\n )\n # In waiting list\n if type(queue_pos) is int:\n return _(\"In waiting list at position {queue_pos}\").format(\n queue_pos=queue_pos\n )\n # Actually registered\n else:\n return _(\"You are registered for this event\")\n # Optional registration possible\n elif instance.optional_registration_allowed:\n return _(\"You can optionally register for this event\")\n # No places left\n elif instance.reached_participants_limit():\n return _(\"You can put yourself on the waiting list for this event\")\n # Registration still possible\n elif instance.registration_allowed:\n return _(\"You can register for this event\")\n # Not registration time yet\n elif instance.registration_end:\n now = timezone.now()\n if instance.registration_end < now:\n return _(\"Registrations have been closed\")\n elif instance.registration_end <= now + timedelta(days=2):\n return _(\"You can register {at_time}\").format(\n at_time=naturaltime(instance.registration_end)\n )\n else:\n return _(\"You can register on {date}\").format(\n date=date(instance.registration_end)\n )\n\n\nclass UnpublishedEventsCalenderJSSerializer(CalenderJSSerializer):\n \"\"\"See CalenderJSSerializer, customised classes.\"\"\"\n\n class Meta(CalenderJSSerializer.Meta):\n model = Event\n\n def _class_names(self, instance):\n return [\"unpublished-event\"]\n\n def _url(self, instance):\n return reverse(\"admin:events_event_details\", kwargs={\"pk\": instance.id})\n\n def _registration_info(self, instance):\n return \"Unpublished event\"\n", "path": "website/events/api/calendarjs/serializers.py"}], "after_files": [{"content": "from datetime import timedelta\n\nfrom django.contrib.humanize.templatetags.humanize import naturaltime\nfrom django.template.defaultfilters import date\nfrom django.utils import timezone\nfrom 
django.utils.translation import gettext_lazy as _\nfrom rest_framework.reverse import reverse\n\nfrom events import services\nfrom events.models import Event\nfrom thaliawebsite.api.calendarjs.serializers import CalenderJSSerializer\n\n\nclass EventsCalenderJSSerializer(CalenderJSSerializer):\n class Meta(CalenderJSSerializer.Meta):\n model = Event\n\n def _url(self, instance):\n return reverse(\"events:event\", kwargs={\"pk\": instance.id})\n\n def _class_names(self, instance):\n if self.context[\"member\"] and services.is_user_registered(\n self.context[\"member\"], instance\n ):\n if services.user_registration_pending(self.context[\"member\"], instance):\n return [\"regular-event-pending-registration\"]\n else:\n return [\"regular-event-has-registration\"]\n elif (not instance.registration_required) or instance.registration_allowed:\n return [\"regular-event-registration-open\"]\n else:\n # I think this handles the case that registration is needed, but not yet possible\n return [\"regular-event-registration-closed\"]\n\n def _registration_info(self, instance: Event):\n # If registered in some way\n if self.context[\"member\"] and services.is_user_registered(\n self.context[\"member\"], instance\n ):\n queue_pos = services.user_registration_pending(\n self.context[\"member\"], instance\n )\n # In waiting list\n if type(queue_pos) is int:\n return _(\"In waiting list at position {queue_pos}\").format(\n queue_pos=queue_pos\n )\n # Actually registered\n else:\n return _(\"You are registered for this event\")\n # Optional registration possible\n elif instance.optional_registration_allowed:\n return _(\"Registering for this event is optional\")\n # No places left\n elif instance.reached_participants_limit():\n return _(\"You can put yourself on the waiting list for this event\")\n # Registration still possible\n elif instance.registration_allowed:\n return _(\"You can register for this event\")\n # Not registration time yet\n elif instance.registration_end:\n now = timezone.now()\n if instance.registration_end < now:\n return _(\"Registrations have been closed\")\n elif instance.registration_start <= now + timedelta(days=2):\n return _(\"Registrations open {at_time}\").format(\n at_time=naturaltime(instance.registration_start)\n )\n else:\n return _(\"Registrations open {date}\").format(\n date=date(instance.registration_start)\n )\n\n\nclass UnpublishedEventsCalenderJSSerializer(CalenderJSSerializer):\n \"\"\"See CalenderJSSerializer, customised classes.\"\"\"\n\n class Meta(CalenderJSSerializer.Meta):\n model = Event\n\n def _class_names(self, instance):\n return [\"unpublished-event\"]\n\n def _url(self, instance):\n return reverse(\"admin:events_event_details\", kwargs={\"pk\": instance.id})\n\n def _registration_info(self, instance):\n return \"Unpublished event\"\n", "path": "website/events/api/calendarjs/serializers.py"}]} | 1,358 | 306 |
gh_patches_debug_43117 | rasdani/github-patches | git_diff | spyder-ide__spyder-7902 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
4.0beta1 is offered as an update when using a development version and a virtualenv
We need to avoid showing beta versions when checking for updates
--- END ISSUE ---
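For illustration, the kind of filtering the issue asks for, reusing `is_stable_version` from the worker module below (the version strings are made up):
```python
from spyder.utils.programs import is_stable_version

releases = ["3.3.6", "4.0.0b1", "4.0.0"]
version = "3.3.6"

if is_stable_version(version):
    # A user on a stable release should never be offered a beta or rc build.
    releases = [r for r in releases if is_stable_version(r)]

print(releases)  # ["3.3.6", "4.0.0"]
```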
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spyder/workers/updates.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright © Spyder Project Contributors
4 # Licensed under the terms of the MIT License
5 # (see spyder/__init__.py for details)
6
7 # Standard library imports
8 import json
9 import os
10 import ssl
11 import sys
12
13 # Third party imports
14 from qtpy.QtCore import QObject, Signal
15
16 # Local imports
17 from spyder import __version__
18 from spyder.config.base import _
19 from spyder.py3compat import PY3
20 from spyder.config.utils import is_anaconda
21 from spyder.utils.programs import check_version, is_stable_version
22
23
24 if PY3:
25 from urllib.request import urlopen
26 from urllib.error import URLError, HTTPError
27 else:
28 from urllib2 import urlopen, URLError, HTTPError
29
30
31 class WorkerUpdates(QObject):
32 """
33 Worker that checks for releases using either the Anaconda
34 default channels or the Github Releases page without
35 blocking the Spyder user interface, in case of connection
36 issues.
37 """
38 sig_ready = Signal()
39
40 def __init__(self, parent, startup, version=""):
41 QObject.__init__(self)
42 self._parent = parent
43 self.error = None
44 self.latest_release = None
45 self.startup = startup
46 self.version = version
47
48 def check_update_available(self, version, releases, github=False):
49 """Checks if there is an update available.
50
51 It takes as parameters the current version of Spyder and a list of
52 valid cleaned releases in chronological order.
53 Example: ['2.3.2', '2.3.3' ...] or with github ['2.3.4', '2.3.3' ...]
54 """
55 if is_stable_version(version):
56 # Remove non stable versions from the list
57 releases = [r for r in releases if is_stable_version(r)]
58
59 if github:
60 latest_release = releases[0]
61 else:
62 latest_release = releases[-1]
63
64 if version.endswith('dev'):
65 return (False, latest_release)
66
67 return (check_version(version, latest_release, '<'), latest_release)
68
69 def start(self):
70 """Main method of the WorkerUpdates worker"""
71 if is_anaconda():
72 self.url = 'https://repo.anaconda.com/pkgs/main'
73 if os.name == 'nt':
74 self.url += '/win-64/repodata.json'
75 elif sys.platform == 'darwin':
76 self.url += '/osx-64/repodata.json'
77 else:
78 self.url += '/linux-64/repodata.json'
79 else:
80 self.url = ('https://api.github.com/repos/'
81 'spyder-ide/spyder/releases')
82 self.update_available = False
83 self.latest_release = __version__
84
85 error_msg = None
86
87 try:
88 if hasattr(ssl, '_create_unverified_context'):
89 # Fix for issue # 2685 [Works only with Python >=2.7.9]
90 # More info: https://www.python.org/dev/peps/pep-0476/#opting-out
91 context = ssl._create_unverified_context()
92 page = urlopen(self.url, context=context)
93 else:
94 page = urlopen(self.url)
95 try:
96 data = page.read()
97
98 # Needed step for python3 compatibility
99 if not isinstance(data, str):
100 data = data.decode()
101
102 data = json.loads(data)
103 if not self.version:
104 self.version = __version__
105
106 if is_anaconda():
107 releases = []
108 for item in data['packages']:
109 if 'spyder' in item and 'spyder-kernels' not in item:
110 releases.append(item.split('-')[1])
111 result = self.check_update_available(self.version,
112 releases)
113 else:
114 releases = [item['tag_name'].replace('v', '')
115 for item in data]
116 result = self.check_update_available(self.version,
117 releases,
118 github=True)
119 self.update_available, self.latest_release = result
120 except Exception:
121 error_msg = _('Unable to retrieve information.')
122 except HTTPError:
123 error_msg = _('Unable to retrieve information.')
124 except URLError:
125 error_msg = _('Unable to connect to the internet. <br><br>Make '
126 'sure the connection is working properly.')
127 except Exception:
128 error_msg = _('Unable to check for updates.')
129
130 # Don't show dialog when starting up spyder and an error occur
131 if not (self.startup and error_msg is not None):
132 self.error = error_msg
133 self.sig_ready.emit()
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/spyder/workers/updates.py b/spyder/workers/updates.py
--- a/spyder/workers/updates.py
+++ b/spyder/workers/updates.py
@@ -7,6 +7,7 @@
# Standard library imports
import json
import os
+import re
import ssl
import sys
@@ -37,34 +38,41 @@
"""
sig_ready = Signal()
- def __init__(self, parent, startup, version=""):
+ def __init__(self, parent, startup, version="", releases=None):
QObject.__init__(self)
self._parent = parent
self.error = None
self.latest_release = None
self.startup = startup
- self.version = version
+ self.releases = releases
- def check_update_available(self, version, releases, github=False):
+ if not version:
+ self.version = __version__
+ else:
+ self.version = version
+
+ def check_update_available(self):
"""Checks if there is an update available.
It takes as parameters the current version of Spyder and a list of
valid cleaned releases in chronological order.
Example: ['2.3.2', '2.3.3' ...] or with github ['2.3.4', '2.3.3' ...]
"""
- if is_stable_version(version):
- # Remove non stable versions from the list
- releases = [r for r in releases if is_stable_version(r)]
+ # Don't perform any check for development versions
+ if 'dev' in self.version:
+ return (False, latest_release)
- if github:
- latest_release = releases[0]
+ # Filter releases
+ if is_stable_version(self.version):
+ releases = [r for r in self.releases if is_stable_version(r)]
else:
- latest_release = releases[-1]
+ releases = [r for r in self.releases
+ if not is_stable_version(r) or r in self.version]
- if version.endswith('dev'):
- return (False, latest_release)
+ latest_release = releases[-1]
- return (check_version(version, latest_release, '<'), latest_release)
+ return (check_version(self.version, latest_release, '<'),
+ latest_release)
def start(self):
"""Main method of the WorkerUpdates worker"""
@@ -98,24 +106,23 @@
# Needed step for python3 compatibility
if not isinstance(data, str):
data = data.decode()
-
data = json.loads(data)
- if not self.version:
- self.version = __version__
if is_anaconda():
- releases = []
- for item in data['packages']:
- if 'spyder' in item and 'spyder-kernels' not in item:
- releases.append(item.split('-')[1])
- result = self.check_update_available(self.version,
- releases)
+ if self.releases is None:
+ self.releases = []
+ for item in data['packages']:
+ if ('spyder' in item and
+ not re.search(r'spyder-[a-zA-Z]', item)):
+ self.releases.append(item.split('-')[1])
+ result = self.check_update_available()
else:
- releases = [item['tag_name'].replace('v', '')
- for item in data]
- result = self.check_update_available(self.version,
- releases,
- github=True)
+ if self.releases is None:
+ self.releases = [item['tag_name'].replace('v', '')
+ for item in data]
+ self.releases = list(reversed(self.releases))
+
+ result = self.check_update_available()
self.update_available, self.latest_release = result
except Exception:
error_msg = _('Unable to retrieve information.')
| {"golden_diff": "diff --git a/spyder/workers/updates.py b/spyder/workers/updates.py\n--- a/spyder/workers/updates.py\n+++ b/spyder/workers/updates.py\n@@ -7,6 +7,7 @@\n # Standard library imports\n import json\n import os\n+import re\n import ssl\n import sys\n \n@@ -37,34 +38,41 @@\n \"\"\"\n sig_ready = Signal()\n \n- def __init__(self, parent, startup, version=\"\"):\n+ def __init__(self, parent, startup, version=\"\", releases=None):\n QObject.__init__(self)\n self._parent = parent\n self.error = None\n self.latest_release = None\n self.startup = startup\n- self.version = version\n+ self.releases = releases\n \n- def check_update_available(self, version, releases, github=False):\n+ if not version:\n+ self.version = __version__\n+ else:\n+ self.version = version\n+\n+ def check_update_available(self):\n \"\"\"Checks if there is an update available.\n \n It takes as parameters the current version of Spyder and a list of\n valid cleaned releases in chronological order.\n Example: ['2.3.2', '2.3.3' ...] or with github ['2.3.4', '2.3.3' ...]\n \"\"\"\n- if is_stable_version(version):\n- # Remove non stable versions from the list\n- releases = [r for r in releases if is_stable_version(r)]\n+ # Don't perform any check for development versions\n+ if 'dev' in self.version:\n+ return (False, latest_release)\n \n- if github:\n- latest_release = releases[0]\n+ # Filter releases\n+ if is_stable_version(self.version):\n+ releases = [r for r in self.releases if is_stable_version(r)]\n else:\n- latest_release = releases[-1]\n+ releases = [r for r in self.releases\n+ if not is_stable_version(r) or r in self.version]\n \n- if version.endswith('dev'):\n- return (False, latest_release)\n+ latest_release = releases[-1]\n \n- return (check_version(version, latest_release, '<'), latest_release)\n+ return (check_version(self.version, latest_release, '<'),\n+ latest_release)\n \n def start(self):\n \"\"\"Main method of the WorkerUpdates worker\"\"\"\n@@ -98,24 +106,23 @@\n # Needed step for python3 compatibility\n if not isinstance(data, str):\n data = data.decode()\n-\n data = json.loads(data)\n- if not self.version:\n- self.version = __version__\n \n if is_anaconda():\n- releases = []\n- for item in data['packages']:\n- if 'spyder' in item and 'spyder-kernels' not in item:\n- releases.append(item.split('-')[1])\n- result = self.check_update_available(self.version,\n- releases)\n+ if self.releases is None:\n+ self.releases = []\n+ for item in data['packages']:\n+ if ('spyder' in item and\n+ not re.search(r'spyder-[a-zA-Z]', item)):\n+ self.releases.append(item.split('-')[1])\n+ result = self.check_update_available()\n else:\n- releases = [item['tag_name'].replace('v', '')\n- for item in data]\n- result = self.check_update_available(self.version,\n- releases,\n- github=True)\n+ if self.releases is None:\n+ self.releases = [item['tag_name'].replace('v', '')\n+ for item in data]\n+ self.releases = list(reversed(self.releases))\n+\n+ result = self.check_update_available()\n self.update_available, self.latest_release = result\n except Exception:\n error_msg = _('Unable to retrieve information.')\n", "issue": "4.0beta1 is offered as an update when using a development version and a virtualenv\nWe need to avoid showing beta versions when checking for updates\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n# Standard library imports\nimport json\nimport os\nimport 
ssl\nimport sys\n\n# Third party imports\nfrom qtpy.QtCore import QObject, Signal\n\n# Local imports\nfrom spyder import __version__\nfrom spyder.config.base import _\nfrom spyder.py3compat import PY3\nfrom spyder.config.utils import is_anaconda\nfrom spyder.utils.programs import check_version, is_stable_version\n\n\nif PY3:\n from urllib.request import urlopen\n from urllib.error import URLError, HTTPError\nelse:\n from urllib2 import urlopen, URLError, HTTPError\n\n\nclass WorkerUpdates(QObject):\n \"\"\"\n Worker that checks for releases using either the Anaconda\n default channels or the Github Releases page without\n blocking the Spyder user interface, in case of connection\n issues.\n \"\"\"\n sig_ready = Signal()\n\n def __init__(self, parent, startup, version=\"\"):\n QObject.__init__(self)\n self._parent = parent\n self.error = None\n self.latest_release = None\n self.startup = startup\n self.version = version\n\n def check_update_available(self, version, releases, github=False):\n \"\"\"Checks if there is an update available.\n\n It takes as parameters the current version of Spyder and a list of\n valid cleaned releases in chronological order.\n Example: ['2.3.2', '2.3.3' ...] or with github ['2.3.4', '2.3.3' ...]\n \"\"\"\n if is_stable_version(version):\n # Remove non stable versions from the list\n releases = [r for r in releases if is_stable_version(r)]\n\n if github:\n latest_release = releases[0]\n else:\n latest_release = releases[-1]\n\n if version.endswith('dev'):\n return (False, latest_release)\n\n return (check_version(version, latest_release, '<'), latest_release)\n\n def start(self):\n \"\"\"Main method of the WorkerUpdates worker\"\"\"\n if is_anaconda():\n self.url = 'https://repo.anaconda.com/pkgs/main'\n if os.name == 'nt':\n self.url += '/win-64/repodata.json'\n elif sys.platform == 'darwin':\n self.url += '/osx-64/repodata.json'\n else:\n self.url += '/linux-64/repodata.json'\n else:\n self.url = ('https://api.github.com/repos/'\n 'spyder-ide/spyder/releases')\n self.update_available = False\n self.latest_release = __version__\n\n error_msg = None\n\n try:\n if hasattr(ssl, '_create_unverified_context'):\n # Fix for issue # 2685 [Works only with Python >=2.7.9]\n # More info: https://www.python.org/dev/peps/pep-0476/#opting-out\n context = ssl._create_unverified_context()\n page = urlopen(self.url, context=context)\n else:\n page = urlopen(self.url)\n try:\n data = page.read()\n\n # Needed step for python3 compatibility\n if not isinstance(data, str):\n data = data.decode()\n\n data = json.loads(data)\n if not self.version:\n self.version = __version__\n\n if is_anaconda():\n releases = []\n for item in data['packages']:\n if 'spyder' in item and 'spyder-kernels' not in item:\n releases.append(item.split('-')[1])\n result = self.check_update_available(self.version,\n releases)\n else:\n releases = [item['tag_name'].replace('v', '')\n for item in data]\n result = self.check_update_available(self.version,\n releases,\n github=True)\n self.update_available, self.latest_release = result\n except Exception:\n error_msg = _('Unable to retrieve information.')\n except HTTPError:\n error_msg = _('Unable to retrieve information.')\n except URLError:\n error_msg = _('Unable to connect to the internet. 
<br><br>Make '\n 'sure the connection is working properly.')\n except Exception:\n error_msg = _('Unable to check for updates.')\n\n # Don't show dialog when starting up spyder and an error occur\n if not (self.startup and error_msg is not None):\n self.error = error_msg\n self.sig_ready.emit()\n", "path": "spyder/workers/updates.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n# Standard library imports\nimport json\nimport os\nimport re\nimport ssl\nimport sys\n\n# Third party imports\nfrom qtpy.QtCore import QObject, Signal\n\n# Local imports\nfrom spyder import __version__\nfrom spyder.config.base import _\nfrom spyder.py3compat import PY3\nfrom spyder.config.utils import is_anaconda\nfrom spyder.utils.programs import check_version, is_stable_version\n\n\nif PY3:\n from urllib.request import urlopen\n from urllib.error import URLError, HTTPError\nelse:\n from urllib2 import urlopen, URLError, HTTPError\n\n\nclass WorkerUpdates(QObject):\n \"\"\"\n Worker that checks for releases using either the Anaconda\n default channels or the Github Releases page without\n blocking the Spyder user interface, in case of connection\n issues.\n \"\"\"\n sig_ready = Signal()\n\n def __init__(self, parent, startup, version=\"\", releases=None):\n QObject.__init__(self)\n self._parent = parent\n self.error = None\n self.latest_release = None\n self.startup = startup\n self.releases = releases\n\n if not version:\n self.version = __version__\n else:\n self.version = version\n\n def check_update_available(self):\n \"\"\"Checks if there is an update available.\n\n It takes as parameters the current version of Spyder and a list of\n valid cleaned releases in chronological order.\n Example: ['2.3.2', '2.3.3' ...] 
or with github ['2.3.4', '2.3.3' ...]\n \"\"\"\n # Don't perform any check for development versions\n if 'dev' in self.version:\n return (False, latest_release)\n\n # Filter releases\n if is_stable_version(self.version):\n releases = [r for r in self.releases if is_stable_version(r)]\n else:\n releases = [r for r in self.releases\n if not is_stable_version(r) or r in self.version]\n\n latest_release = releases[-1]\n\n return (check_version(self.version, latest_release, '<'),\n latest_release)\n\n def start(self):\n \"\"\"Main method of the WorkerUpdates worker\"\"\"\n if is_anaconda():\n self.url = 'https://repo.anaconda.com/pkgs/main'\n if os.name == 'nt':\n self.url += '/win-64/repodata.json'\n elif sys.platform == 'darwin':\n self.url += '/osx-64/repodata.json'\n else:\n self.url += '/linux-64/repodata.json'\n else:\n self.url = ('https://api.github.com/repos/'\n 'spyder-ide/spyder/releases')\n self.update_available = False\n self.latest_release = __version__\n\n error_msg = None\n\n try:\n if hasattr(ssl, '_create_unverified_context'):\n # Fix for issue # 2685 [Works only with Python >=2.7.9]\n # More info: https://www.python.org/dev/peps/pep-0476/#opting-out\n context = ssl._create_unverified_context()\n page = urlopen(self.url, context=context)\n else:\n page = urlopen(self.url)\n try:\n data = page.read()\n\n # Needed step for python3 compatibility\n if not isinstance(data, str):\n data = data.decode()\n data = json.loads(data)\n\n if is_anaconda():\n if self.releases is None:\n self.releases = []\n for item in data['packages']:\n if ('spyder' in item and\n not re.search(r'spyder-[a-zA-Z]', item)):\n self.releases.append(item.split('-')[1])\n result = self.check_update_available()\n else:\n if self.releases is None:\n self.releases = [item['tag_name'].replace('v', '')\n for item in data]\n self.releases = list(reversed(self.releases))\n\n result = self.check_update_available()\n self.update_available, self.latest_release = result\n except Exception:\n error_msg = _('Unable to retrieve information.')\n except HTTPError:\n error_msg = _('Unable to retrieve information.')\n except URLError:\n error_msg = _('Unable to connect to the internet. <br><br>Make '\n 'sure the connection is working properly.')\n except Exception:\n error_msg = _('Unable to check for updates.')\n\n # Don't show dialog when starting up spyder and an error occur\n if not (self.startup and error_msg is not None):\n self.error = error_msg\n self.sig_ready.emit()\n", "path": "spyder/workers/updates.py"}]} | 1,582 | 868 |
gh_patches_debug_28096 | rasdani/github-patches | git_diff | open-mmlab__mmpretrain-149 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug report
The attribute **meta_keys** is referenced in `Collect.__repr__` but never set in `__init__`.
```python
@PIPELINES.register_module()
class Collect(object):
"""
Collect data from the loader relevant to the specific task.
This is usually the last stage of the data loader pipeline. Typically keys
is set to some subset of "img" and "gt_label".
"""
def __init__(self, keys):
self.keys = keys
def __call__(self, results):
data = {}
for key in self.keys:
data[key] = results[key]
return data
def __repr__(self):
return self.__class__.__name__ + \
f'(keys={self.keys}, meta_keys={self.meta_keys})'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmcls/datasets/pipelines/formating.py`
Content:
```
1 from collections.abc import Sequence
2
3 import mmcv
4 import numpy as np
5 import torch
6 from PIL import Image
7
8 from ..builder import PIPELINES
9
10
11 def to_tensor(data):
12 """Convert objects of various python types to :obj:`torch.Tensor`.
13
14 Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`,
15 :class:`Sequence`, :class:`int` and :class:`float`.
16 """
17 if isinstance(data, torch.Tensor):
18 return data
19 elif isinstance(data, np.ndarray):
20 return torch.from_numpy(data)
21 elif isinstance(data, Sequence) and not mmcv.is_str(data):
22 return torch.tensor(data)
23 elif isinstance(data, int):
24 return torch.LongTensor([data])
25 elif isinstance(data, float):
26 return torch.FloatTensor([data])
27 else:
28 raise TypeError(
29 f'Type {type(data)} cannot be converted to tensor.'
30 'Supported types are: `numpy.ndarray`, `torch.Tensor`, '
31 '`Sequence`, `int` and `float`')
32
33
34 @PIPELINES.register_module()
35 class ToTensor(object):
36
37 def __init__(self, keys):
38 self.keys = keys
39
40 def __call__(self, results):
41 for key in self.keys:
42 results[key] = to_tensor(results[key])
43 return results
44
45 def __repr__(self):
46 return self.__class__.__name__ + f'(keys={self.keys})'
47
48
49 @PIPELINES.register_module()
50 class ImageToTensor(object):
51
52 def __init__(self, keys):
53 self.keys = keys
54
55 def __call__(self, results):
56 for key in self.keys:
57 img = results[key]
58 if len(img.shape) < 3:
59 img = np.expand_dims(img, -1)
60 results[key] = to_tensor(img.transpose(2, 0, 1))
61 return results
62
63 def __repr__(self):
64 return self.__class__.__name__ + f'(keys={self.keys})'
65
66
67 @PIPELINES.register_module()
68 class Transpose(object):
69
70 def __init__(self, keys, order):
71 self.keys = keys
72 self.order = order
73
74 def __call__(self, results):
75 for key in self.keys:
76 results[key] = results[key].transpose(self.order)
77 return results
78
79 def __repr__(self):
80 return self.__class__.__name__ + \
81 f'(keys={self.keys}, order={self.order})'
82
83
84 @PIPELINES.register_module()
85 class ToPIL(object):
86
87 def __init__(self):
88 pass
89
90 def __call__(self, results):
91 results['img'] = Image.fromarray(results['img'])
92 return results
93
94
95 @PIPELINES.register_module()
96 class ToNumpy(object):
97
98 def __init__(self):
99 pass
100
101 def __call__(self, results):
102 results['img'] = np.array(results['img'], dtype=np.float32)
103 return results
104
105
106 @PIPELINES.register_module()
107 class Collect(object):
108 """
109 Collect data from the loader relevant to the specific task.
110
111 This is usually the last stage of the data loader pipeline. Typically keys
112 is set to some subset of "img" and "gt_label".
113 """
114
115 def __init__(self, keys):
116 self.keys = keys
117
118 def __call__(self, results):
119 data = {}
120 for key in self.keys:
121 data[key] = results[key]
122 return data
123
124 def __repr__(self):
125 return self.__class__.__name__ + \
126 f'(keys={self.keys}, meta_keys={self.meta_keys})'
127
128
129 @PIPELINES.register_module()
130 class WrapFieldsToLists(object):
131 """Wrap fields of the data dictionary into lists for evaluation.
132
133 This class can be used as a last step of a test or validation
134 pipeline for single image evaluation or inference.
135
136 Example:
137 >>> test_pipeline = [
138 >>> dict(type='LoadImageFromFile'),
139 >>> dict(type='Normalize',
140 mean=[123.675, 116.28, 103.53],
141 std=[58.395, 57.12, 57.375],
142 to_rgb=True),
143 >>> dict(type='ImageToTensor', keys=['img']),
144 >>> dict(type='Collect', keys=['img']),
145 >>> dict(type='WrapIntoLists')
146 >>> ]
147 """
148
149 def __call__(self, results):
150 # Wrap dict fields into lists
151 for key, val in results.items():
152 results[key] = [val]
153 return results
154
155 def __repr__(self):
156 return f'{self.__class__.__name__}()'
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmcls/datasets/pipelines/formating.py b/mmcls/datasets/pipelines/formating.py
--- a/mmcls/datasets/pipelines/formating.py
+++ b/mmcls/datasets/pipelines/formating.py
@@ -3,6 +3,7 @@
import mmcv
import numpy as np
import torch
+from mmcv.parallel import DataContainer as DC
from PIL import Image
from ..builder import PIPELINES
@@ -110,13 +111,34 @@
This is usually the last stage of the data loader pipeline. Typically keys
is set to some subset of "img" and "gt_label".
+
+ Args:
+ keys (Sequence[str]): Keys of results to be collected in ``data``.
+ meta_keys (Sequence[str], optional): Meta keys to be converted to
+ ``mmcv.DataContainer`` and collected in ``data[img_metas]``.
+ Default: ``('filename', 'ori_shape', 'img_shape', 'flip',
+ 'flip_direction', 'img_norm_cfg')``
+
+ Returns:
+ dict: The result dict contains the following keys
+ - keys in``self.keys``
+ - ``img_metas`` if avaliable
"""
- def __init__(self, keys):
+ def __init__(self,
+ keys,
+ meta_keys=('filename', 'ori_shape', 'img_shape', 'flip',
+ 'flip_direction', 'img_norm_cfg')):
self.keys = keys
+ self.meta_keys = meta_keys
def __call__(self, results):
data = {}
+ img_meta = {}
+ for key in self.meta_keys:
+ if key in results:
+ img_meta[key] = results[key]
+ data['img_metas'] = DC(img_meta, cpu_only=True)
for key in self.keys:
data[key] = results[key]
return data
| {"golden_diff": "diff --git a/mmcls/datasets/pipelines/formating.py b/mmcls/datasets/pipelines/formating.py\n--- a/mmcls/datasets/pipelines/formating.py\n+++ b/mmcls/datasets/pipelines/formating.py\n@@ -3,6 +3,7 @@\n import mmcv\n import numpy as np\n import torch\n+from mmcv.parallel import DataContainer as DC\n from PIL import Image\n \n from ..builder import PIPELINES\n@@ -110,13 +111,34 @@\n \n This is usually the last stage of the data loader pipeline. Typically keys\n is set to some subset of \"img\" and \"gt_label\".\n+\n+ Args:\n+ keys (Sequence[str]): Keys of results to be collected in ``data``.\n+ meta_keys (Sequence[str], optional): Meta keys to be converted to\n+ ``mmcv.DataContainer`` and collected in ``data[img_metas]``.\n+ Default: ``('filename', 'ori_shape', 'img_shape', 'flip',\n+ 'flip_direction', 'img_norm_cfg')``\n+\n+ Returns:\n+ dict: The result dict contains the following keys\n+ - keys in``self.keys``\n+ - ``img_metas`` if avaliable\n \"\"\"\n \n- def __init__(self, keys):\n+ def __init__(self,\n+ keys,\n+ meta_keys=('filename', 'ori_shape', 'img_shape', 'flip',\n+ 'flip_direction', 'img_norm_cfg')):\n self.keys = keys\n+ self.meta_keys = meta_keys\n \n def __call__(self, results):\n data = {}\n+ img_meta = {}\n+ for key in self.meta_keys:\n+ if key in results:\n+ img_meta[key] = results[key]\n+ data['img_metas'] = DC(img_meta, cpu_only=True)\n for key in self.keys:\n data[key] = results[key]\n return data\n", "issue": "bug report\nThe attribute **meta_keys** is lost in Collect.\r\n```python\r\[email protected]_module()\r\nclass Collect(object):\r\n \"\"\"\r\n Collect data from the loader relevant to the specific task.\r\n This is usually the last stage of the data loader pipeline. Typically keys\r\n is set to some subset of \"img\" and \"gt_label\".\r\n \"\"\"\r\n\r\n def __init__(self, keys):\r\n self.keys = keys\r\n\r\n def __call__(self, results):\r\n data = {}\r\n for key in self.keys:\r\n data[key] = results[key]\r\n return data\r\n\r\n def __repr__(self):\r\n return self.__class__.__name__ + \\\r\n f'(keys={self.keys}, meta_keys={self.meta_keys})'\r\n```\n", "before_files": [{"content": "from collections.abc import Sequence\n\nimport mmcv\nimport numpy as np\nimport torch\nfrom PIL import Image\n\nfrom ..builder import PIPELINES\n\n\ndef to_tensor(data):\n \"\"\"Convert objects of various python types to :obj:`torch.Tensor`.\n\n Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`,\n :class:`Sequence`, :class:`int` and :class:`float`.\n \"\"\"\n if isinstance(data, torch.Tensor):\n return data\n elif isinstance(data, np.ndarray):\n return torch.from_numpy(data)\n elif isinstance(data, Sequence) and not mmcv.is_str(data):\n return torch.tensor(data)\n elif isinstance(data, int):\n return torch.LongTensor([data])\n elif isinstance(data, float):\n return torch.FloatTensor([data])\n else:\n raise TypeError(\n f'Type {type(data)} cannot be converted to tensor.'\n 'Supported types are: `numpy.ndarray`, `torch.Tensor`, '\n '`Sequence`, `int` and `float`')\n\n\[email protected]_module()\nclass ToTensor(object):\n\n def __init__(self, keys):\n self.keys = keys\n\n def __call__(self, results):\n for key in self.keys:\n results[key] = to_tensor(results[key])\n return results\n\n def __repr__(self):\n return self.__class__.__name__ + f'(keys={self.keys})'\n\n\[email protected]_module()\nclass ImageToTensor(object):\n\n def __init__(self, keys):\n self.keys = keys\n\n def __call__(self, results):\n for key in self.keys:\n img = results[key]\n if 
len(img.shape) < 3:\n img = np.expand_dims(img, -1)\n results[key] = to_tensor(img.transpose(2, 0, 1))\n return results\n\n def __repr__(self):\n return self.__class__.__name__ + f'(keys={self.keys})'\n\n\[email protected]_module()\nclass Transpose(object):\n\n def __init__(self, keys, order):\n self.keys = keys\n self.order = order\n\n def __call__(self, results):\n for key in self.keys:\n results[key] = results[key].transpose(self.order)\n return results\n\n def __repr__(self):\n return self.__class__.__name__ + \\\n f'(keys={self.keys}, order={self.order})'\n\n\[email protected]_module()\nclass ToPIL(object):\n\n def __init__(self):\n pass\n\n def __call__(self, results):\n results['img'] = Image.fromarray(results['img'])\n return results\n\n\[email protected]_module()\nclass ToNumpy(object):\n\n def __init__(self):\n pass\n\n def __call__(self, results):\n results['img'] = np.array(results['img'], dtype=np.float32)\n return results\n\n\[email protected]_module()\nclass Collect(object):\n \"\"\"\n Collect data from the loader relevant to the specific task.\n\n This is usually the last stage of the data loader pipeline. Typically keys\n is set to some subset of \"img\" and \"gt_label\".\n \"\"\"\n\n def __init__(self, keys):\n self.keys = keys\n\n def __call__(self, results):\n data = {}\n for key in self.keys:\n data[key] = results[key]\n return data\n\n def __repr__(self):\n return self.__class__.__name__ + \\\n f'(keys={self.keys}, meta_keys={self.meta_keys})'\n\n\[email protected]_module()\nclass WrapFieldsToLists(object):\n \"\"\"Wrap fields of the data dictionary into lists for evaluation.\n\n This class can be used as a last step of a test or validation\n pipeline for single image evaluation or inference.\n\n Example:\n >>> test_pipeline = [\n >>> dict(type='LoadImageFromFile'),\n >>> dict(type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n >>> dict(type='ImageToTensor', keys=['img']),\n >>> dict(type='Collect', keys=['img']),\n >>> dict(type='WrapIntoLists')\n >>> ]\n \"\"\"\n\n def __call__(self, results):\n # Wrap dict fields into lists\n for key, val in results.items():\n results[key] = [val]\n return results\n\n def __repr__(self):\n return f'{self.__class__.__name__}()'\n", "path": "mmcls/datasets/pipelines/formating.py"}], "after_files": [{"content": "from collections.abc import Sequence\n\nimport mmcv\nimport numpy as np\nimport torch\nfrom mmcv.parallel import DataContainer as DC\nfrom PIL import Image\n\nfrom ..builder import PIPELINES\n\n\ndef to_tensor(data):\n \"\"\"Convert objects of various python types to :obj:`torch.Tensor`.\n\n Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`,\n :class:`Sequence`, :class:`int` and :class:`float`.\n \"\"\"\n if isinstance(data, torch.Tensor):\n return data\n elif isinstance(data, np.ndarray):\n return torch.from_numpy(data)\n elif isinstance(data, Sequence) and not mmcv.is_str(data):\n return torch.tensor(data)\n elif isinstance(data, int):\n return torch.LongTensor([data])\n elif isinstance(data, float):\n return torch.FloatTensor([data])\n else:\n raise TypeError(\n f'Type {type(data)} cannot be converted to tensor.'\n 'Supported types are: `numpy.ndarray`, `torch.Tensor`, '\n '`Sequence`, `int` and `float`')\n\n\[email protected]_module()\nclass ToTensor(object):\n\n def __init__(self, keys):\n self.keys = keys\n\n def __call__(self, results):\n for key in self.keys:\n results[key] = to_tensor(results[key])\n return results\n\n def __repr__(self):\n return 
self.__class__.__name__ + f'(keys={self.keys})'\n\n\[email protected]_module()\nclass ImageToTensor(object):\n\n def __init__(self, keys):\n self.keys = keys\n\n def __call__(self, results):\n for key in self.keys:\n img = results[key]\n if len(img.shape) < 3:\n img = np.expand_dims(img, -1)\n results[key] = to_tensor(img.transpose(2, 0, 1))\n return results\n\n def __repr__(self):\n return self.__class__.__name__ + f'(keys={self.keys})'\n\n\[email protected]_module()\nclass Transpose(object):\n\n def __init__(self, keys, order):\n self.keys = keys\n self.order = order\n\n def __call__(self, results):\n for key in self.keys:\n results[key] = results[key].transpose(self.order)\n return results\n\n def __repr__(self):\n return self.__class__.__name__ + \\\n f'(keys={self.keys}, order={self.order})'\n\n\[email protected]_module()\nclass ToPIL(object):\n\n def __init__(self):\n pass\n\n def __call__(self, results):\n results['img'] = Image.fromarray(results['img'])\n return results\n\n\[email protected]_module()\nclass ToNumpy(object):\n\n def __init__(self):\n pass\n\n def __call__(self, results):\n results['img'] = np.array(results['img'], dtype=np.float32)\n return results\n\n\[email protected]_module()\nclass Collect(object):\n \"\"\"\n Collect data from the loader relevant to the specific task.\n\n This is usually the last stage of the data loader pipeline. Typically keys\n is set to some subset of \"img\" and \"gt_label\".\n\n Args:\n keys (Sequence[str]): Keys of results to be collected in ``data``.\n meta_keys (Sequence[str], optional): Meta keys to be converted to\n ``mmcv.DataContainer`` and collected in ``data[img_metas]``.\n Default: ``('filename', 'ori_shape', 'img_shape', 'flip',\n 'flip_direction', 'img_norm_cfg')``\n\n Returns:\n dict: The result dict contains the following keys\n - keys in``self.keys``\n - ``img_metas`` if avaliable\n \"\"\"\n\n def __init__(self,\n keys,\n meta_keys=('filename', 'ori_shape', 'img_shape', 'flip',\n 'flip_direction', 'img_norm_cfg')):\n self.keys = keys\n self.meta_keys = meta_keys\n\n def __call__(self, results):\n data = {}\n img_meta = {}\n for key in self.meta_keys:\n if key in results:\n img_meta[key] = results[key]\n data['img_metas'] = DC(img_meta, cpu_only=True)\n for key in self.keys:\n data[key] = results[key]\n return data\n\n def __repr__(self):\n return self.__class__.__name__ + \\\n f'(keys={self.keys}, meta_keys={self.meta_keys})'\n\n\[email protected]_module()\nclass WrapFieldsToLists(object):\n \"\"\"Wrap fields of the data dictionary into lists for evaluation.\n\n This class can be used as a last step of a test or validation\n pipeline for single image evaluation or inference.\n\n Example:\n >>> test_pipeline = [\n >>> dict(type='LoadImageFromFile'),\n >>> dict(type='Normalize',\n mean=[123.675, 116.28, 103.53],\n std=[58.395, 57.12, 57.375],\n to_rgb=True),\n >>> dict(type='ImageToTensor', keys=['img']),\n >>> dict(type='Collect', keys=['img']),\n >>> dict(type='WrapIntoLists')\n >>> ]\n \"\"\"\n\n def __call__(self, results):\n # Wrap dict fields into lists\n for key, val in results.items():\n results[key] = [val]\n return results\n\n def __repr__(self):\n return f'{self.__class__.__name__}()'\n", "path": "mmcls/datasets/pipelines/formating.py"}]} | 1,809 | 421 |
gh_patches_debug_28586 | rasdani/github-patches | git_diff | uccser__cs-unplugged-764 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Provide clearer entry points for new visitors on website
Landing page interface redesign
Current:
https://cs-unplugged-dev.appspot.com/en/
Proposed new design:

**Redesign based on older current landing page.
The main changes are the jumbotron banner, nav bar and footer ( issue #695 )
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/general/urls.py`
Content:
```
1 """URL routing for the general application."""
2
3 from django.conf.urls import url
4
5 from . import views
6
7 urlpatterns = [
8 url(
9 r"^$",
10 views.GeneralIndexView.as_view(),
11 name="home"
12 ),
13 url(
14 r"^about/$",
15 views.GeneralAboutView.as_view(),
16 name="about"
17 ),
18 url(
19 r"^computational-thinking/$",
20 views.ComputationalThinkingView.as_view(),
21 name="computational_thinking"
22 ),
23 url(
24 r"^contact/$",
25 views.GeneralContactView.as_view(),
26 name="contact"
27 ),
28 url(
29 r"^people/$",
30 views.GeneralPeopleView.as_view(),
31 name="people"
32 ),
33 url(
34 r"^principles/$",
35 views.GeneralPrinciplesView.as_view(),
36 name="principles"
37 ),
38 ]
39
```
Path: `csunplugged/general/views.py`
Content:
```
1 """Views for the general application."""
2
3 from django.views.generic import TemplateView
4 from django.http import HttpResponse
5
6
7 class GeneralIndexView(TemplateView):
8 """View for the homepage that renders from a template."""
9
10 template_name = "general/index.html"
11
12
13 class GeneralAboutView(TemplateView):
14 """View for the about page that renders from a template."""
15
16 template_name = "general/about.html"
17
18
19 class GeneralContactView(TemplateView):
20 """View for the contact page that renders from a template."""
21
22 template_name = "general/contact.html"
23
24
25 class GeneralPeopleView(TemplateView):
26 """View for the people page that renders from a template."""
27
28 template_name = "general/people.html"
29
30
31 class GeneralPrinciplesView(TemplateView):
32 """View for the princples page that renders from a template."""
33
34 template_name = "general/principles.html"
35
36
37 class ComputationalThinkingView(TemplateView):
38 """View for the Computational Thinking page that renders from a template."""
39
40 template_name = "general/computational-thinking.html"
41
42
43 def health_check(request):
44 """Return heath check response for Google App Engine.
45
46 Returns a 200 HTTP response for Google App Engine to detect the system
47 is running.
48 """
49 return HttpResponse(status=200)
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/csunplugged/general/urls.py b/csunplugged/general/urls.py
--- a/csunplugged/general/urls.py
+++ b/csunplugged/general/urls.py
@@ -15,11 +15,21 @@
views.GeneralAboutView.as_view(),
name="about"
),
+ url(
+ r"^what-is-computer-science/$",
+ views.WhatIsCSView.as_view(),
+ name="what_is_cs"
+ ),
url(
r"^computational-thinking/$",
views.ComputationalThinkingView.as_view(),
name="computational_thinking"
),
+ url(
+ r"^how-do-i-teach-cs-unplugged/$",
+ views.HowDoITeachCSUnpluggedView.as_view(),
+ name="how_do_i_teach_cs_unplugged"
+ ),
url(
r"^contact/$",
views.GeneralContactView.as_view(),
diff --git a/csunplugged/general/views.py b/csunplugged/general/views.py
--- a/csunplugged/general/views.py
+++ b/csunplugged/general/views.py
@@ -34,12 +34,24 @@
template_name = "general/principles.html"
+class WhatIsCSView(TemplateView):
+ """View for the 'What is Computer Science?' page that renders from a template."""
+
+ template_name = "general/what-is-computer-science.html"
+
+
class ComputationalThinkingView(TemplateView):
"""View for the Computational Thinking page that renders from a template."""
template_name = "general/computational-thinking.html"
+class HowDoITeachCSUnpluggedView(TemplateView):
+ """View for the 'How do I teach CS Unplugged?' page that renders from a template."""
+
+ template_name = "general/how-do-i-teach-cs-unplugged.html"
+
+
def health_check(request):
"""Return heath check response for Google App Engine.
| {"golden_diff": "diff --git a/csunplugged/general/urls.py b/csunplugged/general/urls.py\n--- a/csunplugged/general/urls.py\n+++ b/csunplugged/general/urls.py\n@@ -15,11 +15,21 @@\n views.GeneralAboutView.as_view(),\n name=\"about\"\n ),\n+ url(\n+ r\"^what-is-computer-science/$\",\n+ views.WhatIsCSView.as_view(),\n+ name=\"what_is_cs\"\n+ ),\n url(\n r\"^computational-thinking/$\",\n views.ComputationalThinkingView.as_view(),\n name=\"computational_thinking\"\n ),\n+ url(\n+ r\"^how-do-i-teach-cs-unplugged/$\",\n+ views.HowDoITeachCSUnpluggedView.as_view(),\n+ name=\"how_do_i_teach_cs_unplugged\"\n+ ),\n url(\n r\"^contact/$\",\n views.GeneralContactView.as_view(),\ndiff --git a/csunplugged/general/views.py b/csunplugged/general/views.py\n--- a/csunplugged/general/views.py\n+++ b/csunplugged/general/views.py\n@@ -34,12 +34,24 @@\n template_name = \"general/principles.html\"\n \n \n+class WhatIsCSView(TemplateView):\n+ \"\"\"View for the 'What is Computer Science?' page that renders from a template.\"\"\"\n+\n+ template_name = \"general/what-is-computer-science.html\"\n+\n+\n class ComputationalThinkingView(TemplateView):\n \"\"\"View for the Computational Thinking page that renders from a template.\"\"\"\n \n template_name = \"general/computational-thinking.html\"\n \n \n+class HowDoITeachCSUnpluggedView(TemplateView):\n+ \"\"\"View for the 'How do I teach CS Unplugged?' page that renders from a template.\"\"\"\n+\n+ template_name = \"general/how-do-i-teach-cs-unplugged.html\"\n+\n+\n def health_check(request):\n \"\"\"Return heath check response for Google App Engine.\n", "issue": "Provide clearer entry points for new visitors on website\n\nLanding page interface redesign\nCurrent:\r\nhttps://cs-unplugged-dev.appspot.com/en/\r\n\r\nProposed new design:\r\n\r\n\r\n**Redesign based on older current landing page.\r\n\r\nThe main changes are the jumbotron banner, nav bar and footer ( issue #695 )\n", "before_files": [{"content": "\"\"\"URL routing for the general application.\"\"\"\n\nfrom django.conf.urls import url\n\nfrom . 
import views\n\nurlpatterns = [\n url(\n r\"^$\",\n views.GeneralIndexView.as_view(),\n name=\"home\"\n ),\n url(\n r\"^about/$\",\n views.GeneralAboutView.as_view(),\n name=\"about\"\n ),\n url(\n r\"^computational-thinking/$\",\n views.ComputationalThinkingView.as_view(),\n name=\"computational_thinking\"\n ),\n url(\n r\"^contact/$\",\n views.GeneralContactView.as_view(),\n name=\"contact\"\n ),\n url(\n r\"^people/$\",\n views.GeneralPeopleView.as_view(),\n name=\"people\"\n ),\n url(\n r\"^principles/$\",\n views.GeneralPrinciplesView.as_view(),\n name=\"principles\"\n ),\n]\n", "path": "csunplugged/general/urls.py"}, {"content": "\"\"\"Views for the general application.\"\"\"\n\nfrom django.views.generic import TemplateView\nfrom django.http import HttpResponse\n\n\nclass GeneralIndexView(TemplateView):\n \"\"\"View for the homepage that renders from a template.\"\"\"\n\n template_name = \"general/index.html\"\n\n\nclass GeneralAboutView(TemplateView):\n \"\"\"View for the about page that renders from a template.\"\"\"\n\n template_name = \"general/about.html\"\n\n\nclass GeneralContactView(TemplateView):\n \"\"\"View for the contact page that renders from a template.\"\"\"\n\n template_name = \"general/contact.html\"\n\n\nclass GeneralPeopleView(TemplateView):\n \"\"\"View for the people page that renders from a template.\"\"\"\n\n template_name = \"general/people.html\"\n\n\nclass GeneralPrinciplesView(TemplateView):\n \"\"\"View for the princples page that renders from a template.\"\"\"\n\n template_name = \"general/principles.html\"\n\n\nclass ComputationalThinkingView(TemplateView):\n \"\"\"View for the Computational Thinking page that renders from a template.\"\"\"\n\n template_name = \"general/computational-thinking.html\"\n\n\ndef health_check(request):\n \"\"\"Return heath check response for Google App Engine.\n\n Returns a 200 HTTP response for Google App Engine to detect the system\n is running.\n \"\"\"\n return HttpResponse(status=200)\n", "path": "csunplugged/general/views.py"}], "after_files": [{"content": "\"\"\"URL routing for the general application.\"\"\"\n\nfrom django.conf.urls import url\n\nfrom . 
import views\n\nurlpatterns = [\n url(\n r\"^$\",\n views.GeneralIndexView.as_view(),\n name=\"home\"\n ),\n url(\n r\"^about/$\",\n views.GeneralAboutView.as_view(),\n name=\"about\"\n ),\n url(\n r\"^what-is-computer-science/$\",\n views.WhatIsCSView.as_view(),\n name=\"what_is_cs\"\n ),\n url(\n r\"^computational-thinking/$\",\n views.ComputationalThinkingView.as_view(),\n name=\"computational_thinking\"\n ),\n url(\n r\"^how-do-i-teach-cs-unplugged/$\",\n views.HowDoITeachCSUnpluggedView.as_view(),\n name=\"how_do_i_teach_cs_unplugged\"\n ),\n url(\n r\"^contact/$\",\n views.GeneralContactView.as_view(),\n name=\"contact\"\n ),\n url(\n r\"^people/$\",\n views.GeneralPeopleView.as_view(),\n name=\"people\"\n ),\n url(\n r\"^principles/$\",\n views.GeneralPrinciplesView.as_view(),\n name=\"principles\"\n ),\n]\n", "path": "csunplugged/general/urls.py"}, {"content": "\"\"\"Views for the general application.\"\"\"\n\nfrom django.views.generic import TemplateView\nfrom django.http import HttpResponse\n\n\nclass GeneralIndexView(TemplateView):\n \"\"\"View for the homepage that renders from a template.\"\"\"\n\n template_name = \"general/index.html\"\n\n\nclass GeneralAboutView(TemplateView):\n \"\"\"View for the about page that renders from a template.\"\"\"\n\n template_name = \"general/about.html\"\n\n\nclass GeneralContactView(TemplateView):\n \"\"\"View for the contact page that renders from a template.\"\"\"\n\n template_name = \"general/contact.html\"\n\n\nclass GeneralPeopleView(TemplateView):\n \"\"\"View for the people page that renders from a template.\"\"\"\n\n template_name = \"general/people.html\"\n\n\nclass GeneralPrinciplesView(TemplateView):\n \"\"\"View for the princples page that renders from a template.\"\"\"\n\n template_name = \"general/principles.html\"\n\n\nclass WhatIsCSView(TemplateView):\n \"\"\"View for the 'What is Computer Science?' page that renders from a template.\"\"\"\n\n template_name = \"general/what-is-computer-science.html\"\n\n\nclass ComputationalThinkingView(TemplateView):\n \"\"\"View for the Computational Thinking page that renders from a template.\"\"\"\n\n template_name = \"general/computational-thinking.html\"\n\n\nclass HowDoITeachCSUnpluggedView(TemplateView):\n \"\"\"View for the 'How do I teach CS Unplugged?' page that renders from a template.\"\"\"\n\n template_name = \"general/how-do-i-teach-cs-unplugged.html\"\n\n\ndef health_check(request):\n \"\"\"Return heath check response for Google App Engine.\n\n Returns a 200 HTTP response for Google App Engine to detect the system\n is running.\n \"\"\"\n return HttpResponse(status=200)\n", "path": "csunplugged/general/views.py"}]} | 1,027 | 437 |
gh_patches_debug_16392 | rasdani/github-patches | git_diff | cal-itp__benefits-864 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve checks in `oauth` `authorize` view
We ran into a case while deploying #862 where our `auth_provider` model is configured with a `scope`, but the response from the Auth server doesn't actually contain the scope (probably still being set up).
The `oauth` `authorize` function doesn't handle this case and raises an `AttributeError`
```
File "/home/calitp/app/benefits/oauth/views.py", line 70, in authorize
claim_flag = (userinfo.get(verifier_claim) if userinfo else "false").lower() == "true"
AttributeError: 'NoneType' object has no attribute 'lower'
```
Let's add a check and tests for this so that the logs for this are more helpful.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/oauth/views.py`
Content:
```
1 import logging
2
3 from django.shortcuts import redirect
4 from django.urls import reverse
5 from django.utils.decorators import decorator_from_middleware
6
7 from benefits.core import session
8 from benefits.core.middleware import VerifierSessionRequired
9 from . import analytics, redirects
10 from .client import oauth
11
12
13 logger = logging.getLogger(__name__)
14
15
16 ROUTE_AUTH = "oauth:authorize"
17 ROUTE_START = "eligibility:start"
18 ROUTE_CONFIRM = "eligibility:confirm"
19 ROUTE_UNVERIFIED = "eligibility:unverified"
20 ROUTE_POST_LOGOUT = "oauth:post_logout"
21
22
23 @decorator_from_middleware(VerifierSessionRequired)
24 def login(request):
25 """View implementing OIDC authorize_redirect."""
26 verifier = session.verifier(request)
27 oauth_client = oauth.create_client(verifier.auth_provider.client_name)
28
29 if not oauth_client:
30 raise Exception(f"oauth_client not registered: {verifier.auth_provider.client_name}")
31
32 route = reverse(ROUTE_AUTH)
33 redirect_uri = redirects.generate_redirect_uri(request, route)
34
35 logger.debug(f"OAuth authorize_redirect with redirect_uri: {redirect_uri}")
36
37 analytics.started_sign_in(request)
38
39 return oauth_client.authorize_redirect(request, redirect_uri)
40
41
42 @decorator_from_middleware(VerifierSessionRequired)
43 def authorize(request):
44 """View implementing OIDC token authorization."""
45 verifier = session.verifier(request)
46 oauth_client = oauth.create_client(verifier.auth_provider.client_name)
47
48 if not oauth_client:
49 raise Exception(f"oauth_client not registered: {verifier.auth_provider.client_name}")
50
51 logger.debug("Attempting to authorize OAuth access token")
52 token = oauth_client.authorize_access_token(request)
53
54 if token is None:
55 logger.warning("Could not authorize OAuth access token")
56 return redirect(ROUTE_START)
57
58 logger.debug("OAuth access token authorized")
59
60 # We store the id_token in the user's session. This is the minimal amount of information needed later to log the user out.
61 id_token = token["id_token"]
62
63 # We store the returned claim in case it can be used later in eligibility verification.
64 verifier_claim = verifier.auth_provider.claim
65 stored_claim = None
66
67 if verifier_claim:
68 userinfo = token.get("userinfo")
69 # the claim comes back in userinfo like { "claim": "True" | "False" }
70 claim_flag = (userinfo.get(verifier_claim) if userinfo else "false").lower() == "true"
71 # if userinfo contains our claim and the flag is true, store the *claim*
72 stored_claim = verifier_claim if claim_flag else None
73
74 session.update(request, oauth_token=id_token, oauth_claim=stored_claim)
75
76 analytics.finished_sign_in(request)
77
78 return redirect(ROUTE_CONFIRM)
79
80
81 def cancel(request):
82 """View implementing cancellation of OIDC authorization."""
83
84 analytics.canceled_sign_in(request)
85
86 return redirect(ROUTE_UNVERIFIED)
87
88
89 @decorator_from_middleware(VerifierSessionRequired)
90 def logout(request):
91 """View implementing OIDC and application sign out."""
92 verifier = session.verifier(request)
93 oauth_client = oauth.create_client(verifier.auth_provider.client_name)
94
95 if not oauth_client:
96 raise Exception(f"oauth_client not registered: {verifier.auth_provider.client_name}")
97
98 analytics.started_sign_out(request)
99
100 # overwrite the oauth session token, the user is signed out of the app
101 token = session.oauth_token(request)
102 session.logout(request)
103
104 route = reverse(ROUTE_POST_LOGOUT)
105 redirect_uri = redirects.generate_redirect_uri(request, route)
106
107 logger.debug(f"OAuth end_session_endpoint with redirect_uri: {redirect_uri}")
108
109 # send the user through the end_session_endpoint, redirecting back to
110 # the post_logout route
111 return redirects.deauthorize_redirect(oauth_client, token, redirect_uri)
112
113
114 def post_logout(request):
115 """View routes the user to their origin after sign out."""
116
117 analytics.finished_sign_out(request)
118
119 origin = session.origin(request)
120 return redirect(origin)
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/benefits/oauth/views.py b/benefits/oauth/views.py
--- a/benefits/oauth/views.py
+++ b/benefits/oauth/views.py
@@ -66,10 +66,15 @@
if verifier_claim:
userinfo = token.get("userinfo")
- # the claim comes back in userinfo like { "claim": "True" | "False" }
- claim_flag = (userinfo.get(verifier_claim) if userinfo else "false").lower() == "true"
- # if userinfo contains our claim and the flag is true, store the *claim*
- stored_claim = verifier_claim if claim_flag else None
+
+ if userinfo:
+ claim_value = userinfo.get(verifier_claim)
+ # the claim comes back in userinfo like { "claim": "True" | "False" }
+ if claim_value is None:
+ logger.warning(f"userinfo did not contain: {verifier_claim}")
+ elif claim_value.lower() == "true":
+ # if userinfo contains our claim and the flag is true, store the *claim*
+ stored_claim = verifier_claim
session.update(request, oauth_token=id_token, oauth_claim=stored_claim)
| {"golden_diff": "diff --git a/benefits/oauth/views.py b/benefits/oauth/views.py\n--- a/benefits/oauth/views.py\n+++ b/benefits/oauth/views.py\n@@ -66,10 +66,15 @@\n \n if verifier_claim:\n userinfo = token.get(\"userinfo\")\n- # the claim comes back in userinfo like { \"claim\": \"True\" | \"False\" }\n- claim_flag = (userinfo.get(verifier_claim) if userinfo else \"false\").lower() == \"true\"\n- # if userinfo contains our claim and the flag is true, store the *claim*\n- stored_claim = verifier_claim if claim_flag else None\n+\n+ if userinfo:\n+ claim_value = userinfo.get(verifier_claim)\n+ # the claim comes back in userinfo like { \"claim\": \"True\" | \"False\" }\n+ if claim_value is None:\n+ logger.warning(f\"userinfo did not contain: {verifier_claim}\")\n+ elif claim_value.lower() == \"true\":\n+ # if userinfo contains our claim and the flag is true, store the *claim*\n+ stored_claim = verifier_claim\n \n session.update(request, oauth_token=id_token, oauth_claim=stored_claim)\n", "issue": "Improve checks in `oauth` `authorize` view\nWe ran into a case while deploying #862 where our `auth_provider` model is configured with a `scope`, but the response from the Auth server doesn't actually contain the scope (probably still being set up).\r\n\r\nThe `oauth` `authorize` function doesn't handle this case and raises an `AttributeError`\r\n```\r\n File \"/home/calitp/app/benefits/oauth/views.py\", line 70, in authorize\r\n claim_flag = (userinfo.get(verifier_claim) if userinfo else \"false\").lower() == \"true\"\r\nAttributeError: 'NoneType' object has no attribute 'lower'\r\n```\r\n\r\nLet's add a check and tests for this so that the logs for this are more helpful.\n", "before_files": [{"content": "import logging\n\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\n\nfrom benefits.core import session\nfrom benefits.core.middleware import VerifierSessionRequired\nfrom . 
import analytics, redirects\nfrom .client import oauth\n\n\nlogger = logging.getLogger(__name__)\n\n\nROUTE_AUTH = \"oauth:authorize\"\nROUTE_START = \"eligibility:start\"\nROUTE_CONFIRM = \"eligibility:confirm\"\nROUTE_UNVERIFIED = \"eligibility:unverified\"\nROUTE_POST_LOGOUT = \"oauth:post_logout\"\n\n\n@decorator_from_middleware(VerifierSessionRequired)\ndef login(request):\n \"\"\"View implementing OIDC authorize_redirect.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n route = reverse(ROUTE_AUTH)\n redirect_uri = redirects.generate_redirect_uri(request, route)\n\n logger.debug(f\"OAuth authorize_redirect with redirect_uri: {redirect_uri}\")\n\n analytics.started_sign_in(request)\n\n return oauth_client.authorize_redirect(request, redirect_uri)\n\n\n@decorator_from_middleware(VerifierSessionRequired)\ndef authorize(request):\n \"\"\"View implementing OIDC token authorization.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n logger.debug(\"Attempting to authorize OAuth access token\")\n token = oauth_client.authorize_access_token(request)\n\n if token is None:\n logger.warning(\"Could not authorize OAuth access token\")\n return redirect(ROUTE_START)\n\n logger.debug(\"OAuth access token authorized\")\n\n # We store the id_token in the user's session. This is the minimal amount of information needed later to log the user out.\n id_token = token[\"id_token\"]\n\n # We store the returned claim in case it can be used later in eligibility verification.\n verifier_claim = verifier.auth_provider.claim\n stored_claim = None\n\n if verifier_claim:\n userinfo = token.get(\"userinfo\")\n # the claim comes back in userinfo like { \"claim\": \"True\" | \"False\" }\n claim_flag = (userinfo.get(verifier_claim) if userinfo else \"false\").lower() == \"true\"\n # if userinfo contains our claim and the flag is true, store the *claim*\n stored_claim = verifier_claim if claim_flag else None\n\n session.update(request, oauth_token=id_token, oauth_claim=stored_claim)\n\n analytics.finished_sign_in(request)\n\n return redirect(ROUTE_CONFIRM)\n\n\ndef cancel(request):\n \"\"\"View implementing cancellation of OIDC authorization.\"\"\"\n\n analytics.canceled_sign_in(request)\n\n return redirect(ROUTE_UNVERIFIED)\n\n\n@decorator_from_middleware(VerifierSessionRequired)\ndef logout(request):\n \"\"\"View implementing OIDC and application sign out.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n analytics.started_sign_out(request)\n\n # overwrite the oauth session token, the user is signed out of the app\n token = session.oauth_token(request)\n session.logout(request)\n\n route = reverse(ROUTE_POST_LOGOUT)\n redirect_uri = redirects.generate_redirect_uri(request, route)\n\n logger.debug(f\"OAuth end_session_endpoint with redirect_uri: {redirect_uri}\")\n\n # send the user through the end_session_endpoint, redirecting back to\n # the post_logout route\n return redirects.deauthorize_redirect(oauth_client, token, redirect_uri)\n\n\ndef post_logout(request):\n 
\"\"\"View routes the user to their origin after sign out.\"\"\"\n\n analytics.finished_sign_out(request)\n\n origin = session.origin(request)\n return redirect(origin)\n", "path": "benefits/oauth/views.py"}], "after_files": [{"content": "import logging\n\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\n\nfrom benefits.core import session\nfrom benefits.core.middleware import VerifierSessionRequired\nfrom . import analytics, redirects\nfrom .client import oauth\n\n\nlogger = logging.getLogger(__name__)\n\n\nROUTE_AUTH = \"oauth:authorize\"\nROUTE_START = \"eligibility:start\"\nROUTE_CONFIRM = \"eligibility:confirm\"\nROUTE_UNVERIFIED = \"eligibility:unverified\"\nROUTE_POST_LOGOUT = \"oauth:post_logout\"\n\n\n@decorator_from_middleware(VerifierSessionRequired)\ndef login(request):\n \"\"\"View implementing OIDC authorize_redirect.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n route = reverse(ROUTE_AUTH)\n redirect_uri = redirects.generate_redirect_uri(request, route)\n\n logger.debug(f\"OAuth authorize_redirect with redirect_uri: {redirect_uri}\")\n\n analytics.started_sign_in(request)\n\n return oauth_client.authorize_redirect(request, redirect_uri)\n\n\n@decorator_from_middleware(VerifierSessionRequired)\ndef authorize(request):\n \"\"\"View implementing OIDC token authorization.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n logger.debug(\"Attempting to authorize OAuth access token\")\n token = oauth_client.authorize_access_token(request)\n\n if token is None:\n logger.warning(\"Could not authorize OAuth access token\")\n return redirect(ROUTE_START)\n\n logger.debug(\"OAuth access token authorized\")\n\n # We store the id_token in the user's session. 
This is the minimal amount of information needed later to log the user out.\n id_token = token[\"id_token\"]\n\n # We store the returned claim in case it can be used later in eligibility verification.\n verifier_claim = verifier.auth_provider.claim\n stored_claim = None\n\n if verifier_claim:\n userinfo = token.get(\"userinfo\")\n\n if userinfo:\n claim_value = userinfo.get(verifier_claim)\n # the claim comes back in userinfo like { \"claim\": \"True\" | \"False\" }\n if claim_value is None:\n logger.warning(f\"userinfo did not contain: {verifier_claim}\")\n elif claim_value.lower() == \"true\":\n # if userinfo contains our claim and the flag is true, store the *claim*\n stored_claim = verifier_claim\n\n session.update(request, oauth_token=id_token, oauth_claim=stored_claim)\n\n analytics.finished_sign_in(request)\n\n return redirect(ROUTE_CONFIRM)\n\n\ndef cancel(request):\n \"\"\"View implementing cancellation of OIDC authorization.\"\"\"\n\n analytics.canceled_sign_in(request)\n\n return redirect(ROUTE_UNVERIFIED)\n\n\n@decorator_from_middleware(VerifierSessionRequired)\ndef logout(request):\n \"\"\"View implementing OIDC and application sign out.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n analytics.started_sign_out(request)\n\n # overwrite the oauth session token, the user is signed out of the app\n token = session.oauth_token(request)\n session.logout(request)\n\n route = reverse(ROUTE_POST_LOGOUT)\n redirect_uri = redirects.generate_redirect_uri(request, route)\n\n logger.debug(f\"OAuth end_session_endpoint with redirect_uri: {redirect_uri}\")\n\n # send the user through the end_session_endpoint, redirecting back to\n # the post_logout route\n return redirects.deauthorize_redirect(oauth_client, token, redirect_uri)\n\n\ndef post_logout(request):\n \"\"\"View routes the user to their origin after sign out.\"\"\"\n\n analytics.finished_sign_out(request)\n\n origin = session.origin(request)\n return redirect(origin)\n", "path": "benefits/oauth/views.py"}]} | 1,522 | 259 |
gh_patches_debug_32597 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-2401 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add version command/flag to the Insights CLI
Add `version` command and `--version` flag/arg to print version info from `insights.get_nvr()`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `insights/command_parser.py`
Content:
```
1 #!/usr/bin/env python
2 """
3 Command Parser module
4 ---------------------
5 Implements the ``insights`` command line. Each function is the first
6 argument followed by the function specific arguments. See USAGE text
7 below.
8 """
9 from __future__ import print_function
10
11 import argparse
12 import sys
13
14 USAGE = """insights <command> [<args>]
15 Available commands:
16 cat Execute a spec and show the output
17 collect Collect all specs against the client and create an Insights archive.
18 inspect Execute component and shell out to ipython for evaluation.
19 info View info and docs for Insights Core components.
20 ocpshell Interactive evaluation of archives, directories, or individual yaml files.
21 run Run insights-core against host or an archive.
22 """
23
24
25 class InsightsCli(object):
26 """
27 Class to implement the cli module.
28 Each command is called as a method of this class and all
29 arg parsing is performed in the separate module that
30 actually implements the command. the args "insights command"
31 are not passed to the submodule.
32 """
33
34 def __init__(self):
35 parser = argparse.ArgumentParser(
36 description="Insights Core command line execution",
37 usage=USAGE)
38 parser.add_argument('command', help='Insights Core command to run')
39 args = parser.parse_args(sys.argv[1:2])
40 if not hasattr(self, args.command):
41 print('Unrecognized command')
42 parser.print_help()
43 sys.exit(1)
44 # remove the sub-command arg from sys.argv
45 sys.argv.pop(1)
46 # Use dispatch pattern to execute command
47 getattr(self, args.command)()
48
49 def cat(self):
50 from .tools.cat import main as cat_main
51 cat_main()
52
53 def collect(self):
54 from .collect import main as collect_main
55 collect_main()
56
57 def info(self):
58 from .tools.query import main as query_main
59 query_main()
60
61 def inspect(self):
62 from .tools.insights_inspect import main as inspect_main
63 inspect_main()
64
65 def ocpshell(self):
66 from .ocpshell import main as ocpshell_main
67 ocpshell_main()
68
69 def run(self):
70 from insights import run
71 if "" not in sys.path:
72 sys.path.insert(0, "")
73 run(print_summary=True)
74
75
76 def fix_arg_dashes():
77
78 en_dash = '\u2013'
79 em_dash = '\u2014'
80
81 # replace unicode (en dash and em dash) dashes from argument definitions that may have been copy
82 # and pasted from another source
83 i = 1
84 for a in sys.argv[1:]:
85 first = list(a)
86 first[0] = first[0].replace(em_dash, "--").replace(en_dash, "-")
87 sys.argv[i] = "".join(first)
88 i += 1
89
90
91 def main():
92 fix_arg_dashes()
93 try:
94 InsightsCli()
95 except SystemExit:
96 raise
97 except BaseException as ex:
98 print(ex)
99
100
101 if __name__ == "__main__":
102 main()
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/insights/command_parser.py b/insights/command_parser.py
--- a/insights/command_parser.py
+++ b/insights/command_parser.py
@@ -19,6 +19,7 @@
info View info and docs for Insights Core components.
ocpshell Interactive evaluation of archives, directories, or individual yaml files.
run Run insights-core against host or an archive.
+ version Show Insights Core version information and exit.
"""
@@ -36,6 +37,9 @@
description="Insights Core command line execution",
usage=USAGE)
parser.add_argument('command', help='Insights Core command to run')
+ parser.add_argument('--version', action='store_true', help='show Insights Core version information and exit')
+ if self._parse_version_arg():
+ self.version()
args = parser.parse_args(sys.argv[1:2])
if not hasattr(self, args.command):
print('Unrecognized command')
@@ -46,6 +50,12 @@
# Use dispatch pattern to execute command
getattr(self, args.command)()
+ def _parse_version_arg(self):
+ """
+ Manually check for version argument/flag in cases when command is not provided.
+ """
+ return '--version' in sys.argv[1:3]
+
def cat(self):
from .tools.cat import main as cat_main
cat_main()
@@ -72,6 +82,14 @@
sys.path.insert(0, "")
run(print_summary=True)
+ def version(self):
+ """
+ Print version information (NVR) and exit.
+ """
+ from insights import get_nvr
+ print(get_nvr())
+ sys.exit()
+
def fix_arg_dashes():
| {"golden_diff": "diff --git a/insights/command_parser.py b/insights/command_parser.py\n--- a/insights/command_parser.py\n+++ b/insights/command_parser.py\n@@ -19,6 +19,7 @@\n info View info and docs for Insights Core components.\n ocpshell Interactive evaluation of archives, directories, or individual yaml files.\n run Run insights-core against host or an archive.\n+ version Show Insights Core version information and exit.\n \"\"\"\n \n \n@@ -36,6 +37,9 @@\n description=\"Insights Core command line execution\",\n usage=USAGE)\n parser.add_argument('command', help='Insights Core command to run')\n+ parser.add_argument('--version', action='store_true', help='show Insights Core version information and exit')\n+ if self._parse_version_arg():\n+ self.version()\n args = parser.parse_args(sys.argv[1:2])\n if not hasattr(self, args.command):\n print('Unrecognized command')\n@@ -46,6 +50,12 @@\n # Use dispatch pattern to execute command\n getattr(self, args.command)()\n \n+ def _parse_version_arg(self):\n+ \"\"\"\n+ Manually check for version argument/flag in cases when command is not provided.\n+ \"\"\"\n+ return '--version' in sys.argv[1:3]\n+\n def cat(self):\n from .tools.cat import main as cat_main\n cat_main()\n@@ -72,6 +82,14 @@\n sys.path.insert(0, \"\")\n run(print_summary=True)\n \n+ def version(self):\n+ \"\"\"\n+ Print version information (NVR) and exit.\n+ \"\"\"\n+ from insights import get_nvr\n+ print(get_nvr())\n+ sys.exit()\n+\n \n def fix_arg_dashes():\n", "issue": "Add version command/flag to the Insights CLI \nAdd `version` command and `--version` flag/arg to print version info from `insights.get_nvr()`. \n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nCommand Parser module\n---------------------\nImplements the ``insights`` command line. Each function is the first\nargument followed by the function specific arguments. See USAGE text\nbelow.\n\"\"\"\nfrom __future__ import print_function\n\nimport argparse\nimport sys\n\nUSAGE = \"\"\"insights <command> [<args>]\nAvailable commands:\n cat Execute a spec and show the output\n collect Collect all specs against the client and create an Insights archive.\n inspect Execute component and shell out to ipython for evaluation.\n info View info and docs for Insights Core components.\n ocpshell Interactive evaluation of archives, directories, or individual yaml files.\n run Run insights-core against host or an archive.\n\"\"\"\n\n\nclass InsightsCli(object):\n \"\"\"\n Class to implement the cli module.\n Each command is called as a method of this class and all\n arg parsing is performed in the separate module that\n actually implements the command. 
the args \"insights command\"\n are not passed to the submodule.\n \"\"\"\n\n def __init__(self):\n parser = argparse.ArgumentParser(\n description=\"Insights Core command line execution\",\n usage=USAGE)\n parser.add_argument('command', help='Insights Core command to run')\n args = parser.parse_args(sys.argv[1:2])\n if not hasattr(self, args.command):\n print('Unrecognized command')\n parser.print_help()\n sys.exit(1)\n # remove the sub-command arg from sys.argv\n sys.argv.pop(1)\n # Use dispatch pattern to execute command\n getattr(self, args.command)()\n\n def cat(self):\n from .tools.cat import main as cat_main\n cat_main()\n\n def collect(self):\n from .collect import main as collect_main\n collect_main()\n\n def info(self):\n from .tools.query import main as query_main\n query_main()\n\n def inspect(self):\n from .tools.insights_inspect import main as inspect_main\n inspect_main()\n\n def ocpshell(self):\n from .ocpshell import main as ocpshell_main\n ocpshell_main()\n\n def run(self):\n from insights import run\n if \"\" not in sys.path:\n sys.path.insert(0, \"\")\n run(print_summary=True)\n\n\ndef fix_arg_dashes():\n\n en_dash = '\\u2013'\n em_dash = '\\u2014'\n\n # replace unicode (en dash and em dash) dashes from argument definitions that may have been copy\n # and pasted from another source\n i = 1\n for a in sys.argv[1:]:\n first = list(a)\n first[0] = first[0].replace(em_dash, \"--\").replace(en_dash, \"-\")\n sys.argv[i] = \"\".join(first)\n i += 1\n\n\ndef main():\n fix_arg_dashes()\n try:\n InsightsCli()\n except SystemExit:\n raise\n except BaseException as ex:\n print(ex)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "insights/command_parser.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nCommand Parser module\n---------------------\nImplements the ``insights`` command line. Each function is the first\nargument followed by the function specific arguments. See USAGE text\nbelow.\n\"\"\"\nfrom __future__ import print_function\n\nimport argparse\nimport sys\n\nUSAGE = \"\"\"insights <command> [<args>]\nAvailable commands:\n cat Execute a spec and show the output\n collect Collect all specs against the client and create an Insights archive.\n inspect Execute component and shell out to ipython for evaluation.\n info View info and docs for Insights Core components.\n ocpshell Interactive evaluation of archives, directories, or individual yaml files.\n run Run insights-core against host or an archive.\n version Show Insights Core version information and exit.\n\"\"\"\n\n\nclass InsightsCli(object):\n \"\"\"\n Class to implement the cli module.\n Each command is called as a method of this class and all\n arg parsing is performed in the separate module that\n actually implements the command. 
the args \"insights command\"\n are not passed to the submodule.\n \"\"\"\n\n def __init__(self):\n parser = argparse.ArgumentParser(\n description=\"Insights Core command line execution\",\n usage=USAGE)\n parser.add_argument('command', help='Insights Core command to run')\n parser.add_argument('--version', action='store_true', help='show Insights Core version information and exit')\n if self._parse_version_arg():\n self.version()\n args = parser.parse_args(sys.argv[1:2])\n if not hasattr(self, args.command):\n print('Unrecognized command')\n parser.print_help()\n sys.exit(1)\n # remove the sub-command arg from sys.argv\n sys.argv.pop(1)\n # Use dispatch pattern to execute command\n getattr(self, args.command)()\n\n def _parse_version_arg(self):\n \"\"\"\n Manually check for version argument/flag in cases when command is not provided.\n \"\"\"\n return '--version' in sys.argv[1:3]\n\n def cat(self):\n from .tools.cat import main as cat_main\n cat_main()\n\n def collect(self):\n from .collect import main as collect_main\n collect_main()\n\n def info(self):\n from .tools.query import main as query_main\n query_main()\n\n def inspect(self):\n from .tools.insights_inspect import main as inspect_main\n inspect_main()\n\n def ocpshell(self):\n from .ocpshell import main as ocpshell_main\n ocpshell_main()\n\n def run(self):\n from insights import run\n if \"\" not in sys.path:\n sys.path.insert(0, \"\")\n run(print_summary=True)\n\n def version(self):\n \"\"\"\n Print version information (NVR) and exit.\n \"\"\"\n from insights import get_nvr\n print(get_nvr())\n sys.exit()\n\n\ndef fix_arg_dashes():\n\n en_dash = '\\u2013'\n em_dash = '\\u2014'\n\n # replace unicode (en dash and em dash) dashes from argument definitions that may have been copy\n # and pasted from another source\n i = 1\n for a in sys.argv[1:]:\n first = list(a)\n first[0] = first[0].replace(em_dash, \"--\").replace(en_dash, \"-\")\n sys.argv[i] = \"\".join(first)\n i += 1\n\n\ndef main():\n fix_arg_dashes()\n try:\n InsightsCli()\n except SystemExit:\n raise\n except BaseException as ex:\n print(ex)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "insights/command_parser.py"}]} | 1,161 | 392 |
gh_patches_debug_865 | rasdani/github-patches | git_diff | kubeflow__pipelines-1666 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`pip install kfp` does not install CLI
**What happened:**
```
$ virtualenv .venv
...
$ pip install kfp==0.1.23
...
$ kfp
Traceback (most recent call last):
File "/private/tmp/.venv/bin/kfp", line 6, in <module>
from kfp.__main__ import main
File "/private/tmp/.venv/lib/python3.7/site-packages/kfp/__main__.py", line 15, in <module>
from .cli.cli import main
ModuleNotFoundError: No module named 'kfp.cli'
```
**What did you expect to happen:**
To run the CLI.
**Anything else you would like to add:**
I could be confused about what is expected to be available after installing the kfp package from pip: setup.py mentions an entry point named kfp in
https://github.com/kubeflow/pipelines/blob/812ca7f8836c47039c3b1f3daf23e68fbcee1a92/sdk/python/setup.py#L74
but `__main__.py` imports a `kfp.cli` package: https://github.com/kubeflow/pipelines/blob/812ca7f8836c47039c3b1f3daf23e68fbcee1a92/sdk/python/kfp/__main__.py#L15
which is not included in the distribution https://github.com/kubeflow/pipelines/blob/812ca7f8836c47039c3b1f3daf23e68fbcee1a92/sdk/python/setup.py#L46-L54
--- END ISSUE ---
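For quick confirmation, here is a minimal reproduction sketch; it assumes a clean environment with kfp==0.1.23 installed, mirroring the traceback above.

```python
# Minimal reproduction sketch (assumes kfp==0.1.23 in a fresh virtualenv):
# the `kfp` console script imports kfp.cli, which is absent from the wheel.
import importlib

try:
    importlib.import_module("kfp.cli")
except ModuleNotFoundError as err:
    print(err)  # No module named 'kfp.cli'
```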
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from setuptools import setup
16
17 NAME = 'kfp'
18 VERSION = '0.1.24'
19
20 REQUIRES = [
21 'urllib3>=1.15,<1.25', #Fixing the version conflict with the "requests" package
22 'six >= 1.10',
23 'certifi',
24 'python-dateutil',
25 'PyYAML',
26 'google-cloud-storage>=1.13.0',
27 'kubernetes>=8.0.0, <=9.0.0',
28 'PyJWT>=1.6.4',
29 'cryptography>=2.4.2',
30 'google-auth>=1.6.1',
31 'requests_toolbelt>=0.8.0',
32 'cloudpickle',
33 'kfp-server-api >= 0.1.18, < 0.1.19', #Update the upper version whenever a new version of the kfp-server-api package is released. Update the lower version when there is a breaking change in kfp-server-api.
34 'argo-models == 2.2.1a', #2.2.1a is equivalent to argo 2.2.1
35 'jsonschema >= 3.0.1',
36 'tabulate == 0.8.3',
37 'click == 7.0'
38 ]
39
40 setup(
41 name=NAME,
42 version=VERSION,
43 description='KubeFlow Pipelines SDK',
44 author='google',
45 install_requires=REQUIRES,
46 packages=[
47 'kfp',
48 'kfp.compiler',
49 'kfp.components',
50 'kfp.components.structures',
51 'kfp.components.structures.kubernetes',
52 'kfp.dsl',
53 'kfp.notebook',
54 ],
55 classifiers=[
56 'Intended Audience :: Developers',
57 'Intended Audience :: Education',
58 'Intended Audience :: Science/Research',
59 'License :: OSI Approved :: Apache Software License',
60 'Programming Language :: Python :: 3',
61 'Programming Language :: Python :: 3.5',
62 'Programming Language :: Python :: 3.6',
63 'Programming Language :: Python :: 3.7',
64 'Topic :: Scientific/Engineering',
65 'Topic :: Scientific/Engineering :: Artificial Intelligence',
66 'Topic :: Software Development',
67 'Topic :: Software Development :: Libraries',
68 'Topic :: Software Development :: Libraries :: Python Modules',
69 ],
70 python_requires='>=3.5.3',
71 include_package_data=True,
72 entry_points={'console_scripts': [
73 'dsl-compile = kfp.compiler.main:main',
74 'kfp=kfp.__main__:main']})
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -45,6 +45,7 @@
install_requires=REQUIRES,
packages=[
'kfp',
+ 'kfp.cli',
'kfp.compiler',
'kfp.components',
'kfp.components.structures',
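As a side note, setuptools can also discover subpackages automatically, which would prevent this class of omission entirely; the sketch below is an illustrative alternative, not part of the golden patch.

```python
# Hypothetical alternative to the hand-maintained packages list: automatic
# discovery picks up kfp.cli along with every other kfp subpackage.
from setuptools import find_packages, setup

setup(
    name="kfp",
    packages=find_packages(include=["kfp", "kfp.*"]),
    entry_points={"console_scripts": ["kfp=kfp.__main__:main"]},
)
```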
| {"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -45,6 +45,7 @@\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n+ 'kfp.cli',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n", "issue": "`pip install kfp` does not install CLI\n**What happened:**\r\n\r\n```\r\n$ virtualenv .venv\r\n...\r\n$ pip install kfp==0.1.23\r\n...\r\n$ kfp\r\nTraceback (most recent call last):\r\n File \"/private/tmp/.venv/bin/kfp\", line 6, in <module>\r\n from kfp.__main__ import main\r\n File \"/private/tmp/.venv/lib/python3.7/site-packages/kfp/__main__.py\", line 15, in <module>\r\n from .cli.cli import main\r\nModuleNotFoundError: No module named 'kfp.cli'\r\n```\r\n\r\n**What did you expect to happen:**\r\nTo run the CLI.\r\n\r\n**Anything else you would like to add:**\r\nI could be confused about what is expected to be available after installing the kfp package from pip - setup.py mentions an entrypoint named kfp in \r\n\r\nhttps://github.com/kubeflow/pipelines/blob/812ca7f8836c47039c3b1f3daf23e68fbcee1a92/sdk/python/setup.py#L74\r\n\r\nbut main.py imports a `kfp.cli` package https://github.com/kubeflow/pipelines/blob/812ca7f8836c47039c3b1f3daf23e68fbcee1a92/sdk/python/kfp/__main__.py#L15\r\n\r\nwhich is not included in the distribution https://github.com/kubeflow/pipelines/blob/812ca7f8836c47039c3b1f3daf23e68fbcee1a92/sdk/python/setup.py#L46-L54\r\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup\n\nNAME = 'kfp'\nVERSION = '0.1.24'\n\nREQUIRES = [\n 'urllib3>=1.15,<1.25', #Fixing the version conflict with the \"requests\" package\n 'six >= 1.10',\n 'certifi',\n 'python-dateutil',\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <=9.0.0',\n 'PyJWT>=1.6.4',\n 'cryptography>=2.4.2',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle',\n 'kfp-server-api >= 0.1.18, < 0.1.19', #Update the upper version whenever a new version of the kfp-server-api package is released. 
Update the lower version when there is a breaking change in kfp-server-api.\n 'argo-models == 2.2.1a', #2.2.1a is equivalent to argo 2.2.1\n 'jsonschema >= 3.0.1',\n 'tabulate == 0.8.3',\n 'click == 7.0'\n]\n\nsetup(\n name=NAME,\n version=VERSION,\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.dsl',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main',\n 'kfp=kfp.__main__:main']})\n", "path": "sdk/python/setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup\n\nNAME = 'kfp'\nVERSION = '0.1.24'\n\nREQUIRES = [\n 'urllib3>=1.15,<1.25', #Fixing the version conflict with the \"requests\" package\n 'six >= 1.10',\n 'certifi',\n 'python-dateutil',\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <=9.0.0',\n 'PyJWT>=1.6.4',\n 'cryptography>=2.4.2',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle',\n 'kfp-server-api >= 0.1.18, < 0.1.19', #Update the upper version whenever a new version of the kfp-server-api package is released. 
Update the lower version when there is a breaking change in kfp-server-api.\n 'argo-models == 2.2.1a', #2.2.1a is equivalent to argo 2.2.1\n 'jsonschema >= 3.0.1',\n 'tabulate == 0.8.3',\n 'click == 7.0'\n]\n\nsetup(\n name=NAME,\n version=VERSION,\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.dsl',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main',\n 'kfp=kfp.__main__:main']})\n", "path": "sdk/python/setup.py"}]} | 1,474 | 82 |
gh_patches_debug_48074 | rasdani/github-patches | git_diff | OpenMined__PySyft-1792 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Modify documentation generation code to use napoleon
Napoleon (https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) allows us to use Google-style docstrings with Sphinx. This issue enables #1784.
--- END ISSUE ---
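For context, this is the kind of Google-style docstring that napoleon turns into reStructuredText for autodoc; the function below is an illustrative example, not code from the PySyft tree.

```python
def add_pointers(a, b):
    """Add two pointer tensors.

    Args:
        a: The first pointer tensor.
        b: The second pointer tensor.

    Returns:
        A pointer to the remote sum of ``a`` and ``b``.
    """
```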
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # PySyft documentation build configuration file, created by
5 # sphinx-quickstart on Sat Jun 2 20:20:03 2018.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19
20 #
21 import os
22 import sys
23
24 sys.path.insert(0, os.path.abspath("../.."))
25
26
27 # -- General configuration ------------------------------------------------
28
29 # If your documentation needs a minimal Sphinx version, state it here.
30 #
31 # needs_sphinx = '1.0'
32
33 # Add any Sphinx extension module names here, as strings. They can be
34 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
35 # ones.
36 extensions = [
37 "sphinx.ext.autodoc",
38 "sphinx.ext.todo",
39 "sphinx.ext.coverage",
40 "sphinx.ext.mathjax",
41 "sphinx.ext.viewcode",
42 "sphinx.ext.githubpages",
43 "sphinx.ext.autosummary",
44 ]
45
46 # Add any paths that contain templates here, relative to this directory.
47 templates_path = ["_templates"]
48
49 # The suffix(es) of source filenames.
50 # You can specify multiple suffix as a list of string:
51 #
52 # source_suffix = ['.rst', '.md']
53 source_suffix = ".rst"
54
55 # The master toctree document.
56 master_doc = "index"
57
58 # General information about the project.
59 project = "PySyft"
60 copyright = "2018, OpenMined Contributors"
61 author = "Andrew Trask"
62
63 # The version info for the project you're documenting, acts as replacement for
64 # |version| and |release|, also used in various other places throughout the
65 # built documents.
66 #
67 # The short X.Y version.
68 version = "0.1"
69 # The full version, including alpha/beta/rc tags.
70 release = "0.1"
71
72 # The language for content autogenerated by Sphinx. Refer to documentation
73 # for a list of supported languages.
74 #
75 # This is also used if you do content translation via gettext catalogs.
76 # Usually you set "language" from the command line for these cases.
77 language = "python"
78
79 # List of patterns, relative to source directory, that match files and
80 # directories to ignore when looking for source files.
81 # This patterns also effect to html_static_path and html_extra_path
82 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
83
84 # The name of the Pygments (syntax highlighting) style to use.
85 pygments_style = "sphinx"
86
87 # If true, `todo` and `todoList` produce output, else they produce nothing.
88 todo_include_todos = True
89
90 autosummary_generate = True
91
92
93 # -- Options for HTML output ----------------------------------------------
94
95 # The theme to use for HTML and HTML Help pages. See the documentation for
96 # a list of builtin themes.
97 #
98 html_theme = "sphinx_rtd_theme"
99 html_logo = "_static/img/logo.png"
100
101 # Theme options are theme-specific and customize the look and feel of a theme
102 # further. For a list of options available for each theme, see the
103 # documentation.
104 #
105 # html_theme_options = {}
106
107 # Add any paths that contain custom static files (such as style sheets) here,
108 # relative to this directory. They are copied after the builtin static files,
109 # so a file named "default.css" will overwrite the builtin "default.css".
110 html_static_path = ["_static"]
111
112
113 # -- Options for HTMLHelp output ------------------------------------------
114
115 # Output file base name for HTML help builder.
116 htmlhelp_basename = "PySyftdoc"
117
118 html_context = {"css_files": ["_static/css/PySyft_docs.css"]}
119
120 favicon = "_static/img/logo.png"
121
122
123 # -- Options for LaTeX output ---------------------------------------------
124
125 latex_elements = {
126 # The paper size ('letterpaper' or 'a4paper').
127 #
128 # 'papersize': 'letterpaper',
129 # The font size ('10pt', '11pt' or '12pt').
130 #
131 # 'pointsize': '10pt',
132 # Additional stuff for the LaTeX preamble.
133 #
134 # 'preamble': '',
135 # Latex figure (float) alignment
136 #
137 # 'figure_align': 'htbp',
138 }
139
140 # Grouping the document tree into LaTeX files. List of tuples
141 # (source start file, target name, title,
142 # author, documentclass [howto, manual, or own class]).
143 latex_documents = [(master_doc, "PySyft.tex", "PySyft Documentation", "Andrew Trask", "manual")]
144
145
146 # -- Options for manual page output ---------------------------------------
147
148 # One entry per manual page. List of tuples
149 # (source start file, name, description, authors, manual section).
150 man_pages = [(master_doc, "pysyft", "PySyft Documentation", [author], 1)]
151
152
153 # -- Options for Texinfo output -------------------------------------------
154
155 # Grouping the document tree into Texinfo files. List of tuples
156 # (source start file, target name, title, author,
157 # dir menu entry, description, category)
158 texinfo_documents = [
159 (
160 master_doc,
161 "PySyft",
162 "PySyft Documentation",
163 author,
164 "PySyft",
165 "One line description of project.",
166 "Miscellaneous",
167 )
168 ]
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -35,12 +35,13 @@
# ones.
extensions = [
"sphinx.ext.autodoc",
- "sphinx.ext.todo",
+ "sphinx.ext.autosummary",
"sphinx.ext.coverage",
+ "sphinx.ext.githubpages",
"sphinx.ext.mathjax",
+ "sphinx.ext.napoleon",
+ "sphinx.ext.todo",
"sphinx.ext.viewcode",
- "sphinx.ext.githubpages",
- "sphinx.ext.autosummary",
]
# Add any paths that contain templates here, relative to this directory.
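Beyond enabling the extension, napoleon exposes per-project flags; the values below are illustrative and are not part of the golden patch.

```python
# Optional napoleon settings (shown for illustration; see the
# sphinx.ext.napoleon documentation for the full list).
napoleon_google_docstring = True   # parse Google-style docstrings
napoleon_numpy_docstring = False   # NumPy style is not needed here
napoleon_use_param = True
napoleon_use_rtype = True
```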
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -35,12 +35,13 @@\n # ones.\n extensions = [\n \"sphinx.ext.autodoc\",\n- \"sphinx.ext.todo\",\n+ \"sphinx.ext.autosummary\",\n \"sphinx.ext.coverage\",\n+ \"sphinx.ext.githubpages\",\n \"sphinx.ext.mathjax\",\n+ \"sphinx.ext.napoleon\",\n+ \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n- \"sphinx.ext.githubpages\",\n- \"sphinx.ext.autosummary\",\n ]\n \n # Add any paths that contain templates here, relative to this directory.\n", "issue": "Modify documentation generation code to use napoleon\nNapoleon https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html allows for us to use google style documentation with sphinx. This issue enables #1784 \n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# PySyft documentation build configuration file, created by\n# sphinx-quickstart on Sat Jun 2 20:20:03 2018.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\n#\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath(\"../..\"))\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.autosummary\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"PySyft\"\ncopyright = \"2018, OpenMined Contributors\"\nauthor = \"Andrew Trask\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"0.1\"\n# The full version, including alpha/beta/rc tags.\nrelease = \"0.1\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = \"python\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\nautosummary_generate = True\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_logo = \"_static/img/logo.png\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"PySyftdoc\"\n\nhtml_context = {\"css_files\": [\"_static/css/PySyft_docs.css\"]}\n\nfavicon = \"_static/img/logo.png\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [(master_doc, \"PySyft.tex\", \"PySyft Documentation\", \"Andrew Trask\", \"manual\")]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"pysyft\", \"PySyft Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"PySyft\",\n \"PySyft Documentation\",\n author,\n \"PySyft\",\n \"One line description of project.\",\n \"Miscellaneous\",\n )\n]\n", "path": "docs/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# PySyft documentation build configuration file, created by\n# sphinx-quickstart on Sat Jun 2 20:20:03 2018.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\n#\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath(\"../..\"))\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"PySyft\"\ncopyright = \"2018, OpenMined Contributors\"\nauthor = \"Andrew Trask\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"0.1\"\n# The full version, including alpha/beta/rc tags.\nrelease = \"0.1\"\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = \"python\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\nautosummary_generate = True\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_logo = \"_static/img/logo.png\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"PySyftdoc\"\n\nhtml_context = {\"css_files\": [\"_static/css/PySyft_docs.css\"]}\n\nfavicon = \"_static/img/logo.png\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [(master_doc, \"PySyft.tex\", \"PySyft Documentation\", \"Andrew Trask\", \"manual\")]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"pysyft\", \"PySyft Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"PySyft\",\n \"PySyft Documentation\",\n author,\n \"PySyft\",\n \"One line description of project.\",\n \"Miscellaneous\",\n )\n]\n", "path": "docs/conf.py"}]} | 1,952 | 159 |
gh_patches_debug_3094 | rasdani/github-patches | git_diff | streamlit__streamlit-2611 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
st.expander cuts off sliders
# Summary
An expander in the sidebar cuts off sliders and potentially other widgets
# Steps to reproduce
Run the following code
```
import streamlit as st
sidebar_expander = st.sidebar.beta_expander("Bad sliders underneath!")
with sidebar_expander:
st.slider("Bad layout slider 1", 0, 100, value=0)
st.slider("Bad layout slider 2", 0, 100, value=(0,100))
st.sidebar.slider("Good layout slider")
```
## Expected behavior:
Sliders appear nicely under the expander
## Actual behavior:
Slider ends are cut off under the expander (see picture)

## Is this a regression?
No
# Debug info
- Streamlit version: 0.69.1
- Python version: 3.8.2
- Using PipEnv
- OS version: Windows 10
- Browser version: Firefox and Chrome, recent versions
# Additional information
Might this also happen in containers outside of the sidebar? I have not checked, though.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `e2e/scripts/st_expander.py`
Content:
```
1 # Copyright 2018-2020 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 container = st.beta_container()
18 container.write("I cannot collapse")
19
20 expander = st.beta_expander("Collapse me!", expanded=True)
21 expander.write("I can collapse")
22
23 collapsed = st.beta_expander("Expand me!")
24 collapsed.write("I am already collapsed")
25
26 sidebar = st.sidebar.beta_expander("Expand me!")
27 sidebar.write("I am in the sidebar")
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/e2e/scripts/st_expander.py b/e2e/scripts/st_expander.py
--- a/e2e/scripts/st_expander.py
+++ b/e2e/scripts/st_expander.py
@@ -19,6 +19,8 @@
expander = st.beta_expander("Collapse me!", expanded=True)
expander.write("I can collapse")
+expander.slider("I don't get cut off")
+expander.button("I'm also not cut off (while focused)")
collapsed = st.beta_expander("Expand me!")
collapsed.write("I am already collapsed")
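For a quick manual check of the fix, the sketch below can be saved as a script and launched with `streamlit run`; it assumes a build that includes this patch.

```python
# Manual layout check: both slider handles should render fully inside the
# sidebar expander once the fix is in place.
import streamlit as st

with st.sidebar.beta_expander("Slider layout check", expanded=True):
    st.slider("single-value slider", 0, 100, value=0)
    st.slider("range slider", 0, 100, value=(0, 100))
```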
| {"golden_diff": "diff --git a/e2e/scripts/st_expander.py b/e2e/scripts/st_expander.py\n--- a/e2e/scripts/st_expander.py\n+++ b/e2e/scripts/st_expander.py\n@@ -19,6 +19,8 @@\n \n expander = st.beta_expander(\"Collapse me!\", expanded=True)\n expander.write(\"I can collapse\")\n+expander.slider(\"I don't get cut off\")\n+expander.button(\"I'm also not cut off (while focused)\")\n \n collapsed = st.beta_expander(\"Expand me!\")\n collapsed.write(\"I am already collapsed\")\n", "issue": "st.expander cuts off sliders\n# Summary\r\n\r\nExpander in sidebar cuts off sliders and potentially other stuff\r\n\r\n# Steps to reproduce\r\nRun the following code\r\n```\r\nimport streamlit as st\r\n\r\nsidebar_expander = st.sidebar.beta_expander(\"Bad sliders underneath!\")\r\nwith sidebar_expander:\r\n st.slider(\"Bad layout slider 1\", 0, 100, value=0)\r\n st.slider(\"Bad layout slider 2\", 0, 100, value=(0,100))\r\n\r\nst.sidebar.slider(\"Good layout slider\")\r\n```\r\n## Expected behavior:\r\n\r\nSliders appear nicely under the expander\r\n\r\n## Actual behavior:\r\n\r\nSliders ends are cut off under the expander (see picture)\r\n\r\n\r\n\r\n\r\n## Is this a regression?\r\n\r\nNo\r\n\r\n# Debug info\r\n\r\n- Streamlit version: 0.69.1\r\n- Python version: 3.8.2\r\n- Using PipEnv\r\n- OS version: Windows 10\r\n- Browser version: Firefox and Chrome, recent versions\r\n\r\n# Additional information\r\n\r\nMay also happen in containers outside of sidebar? Have not checked it though..\n", "before_files": [{"content": "# Copyright 2018-2020 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\ncontainer = st.beta_container()\ncontainer.write(\"I cannot collapse\")\n\nexpander = st.beta_expander(\"Collapse me!\", expanded=True)\nexpander.write(\"I can collapse\")\n\ncollapsed = st.beta_expander(\"Expand me!\")\ncollapsed.write(\"I am already collapsed\")\n\nsidebar = st.sidebar.beta_expander(\"Expand me!\")\nsidebar.write(\"I am in the sidebar\")\n", "path": "e2e/scripts/st_expander.py"}], "after_files": [{"content": "# Copyright 2018-2020 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\ncontainer = st.beta_container()\ncontainer.write(\"I cannot collapse\")\n\nexpander = st.beta_expander(\"Collapse me!\", expanded=True)\nexpander.write(\"I can collapse\")\nexpander.slider(\"I don't get cut off\")\nexpander.button(\"I'm also not cut off (while focused)\")\n\ncollapsed = st.beta_expander(\"Expand me!\")\ncollapsed.write(\"I am 
already collapsed\")\n\nsidebar = st.sidebar.beta_expander(\"Expand me!\")\nsidebar.write(\"I am in the sidebar\")\n", "path": "e2e/scripts/st_expander.py"}]} | 821 | 125 |
gh_patches_debug_57166 | rasdani/github-patches | git_diff | unionai-oss__pandera-69 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
update readme on release of next version
suggest replacing
```**Supports:** python 2.7, 3.5, 3.6```
with:
```[](https://pypi.python.org/pypi/pandera/)```
when the new metadata is available on PyPI.
It will show as 'missing' until PyPI is updated:
[](https://pypi.python.org/pypi/pandera/)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 with open('README.md') as f:
4 long_description = f.read()
5
6 setup(
7 name="pandera",
8 version="0.1.3",
9 author="Niels Bantilan",
10 author_email="[email protected]",
11 description='A light-weight and flexible validation package for pandas data structures.',
12 long_description=long_description,
13 long_description_content_type='text/markdown',
14 url="https://github.com/cosmicBboy/pandera",
15 project_urls={
16 'Documentation': 'https://pandera.readthedocs.io',
17 'Issue Tracker': 'https://github.com/cosmicBboy/pandera/issues'
18 },
19 keywords=["pandas", "validation", "data-structures"],
20 license="MIT",
21 packages=[
22 "pandera",
23 ],
24 install_requires=[
25 "enum34 ; python_version<'3.4'",
26 "numpy >= 1.9.0",
27 "pandas >= 0.23.0",
28 "wrapt",
29 "scipy ; python_version<'2.7'",
30 ],
31 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
32 platforms='any',
33 classifiers=[
34 'Development Status :: 5 - Production/Stable',
35 'Operating System :: OS Independent',
36 'Intended Audience :: Science/Research',
37 'Programming Language :: Python',
38 'Programming Language :: Python :: 3',
39 'Programming Language :: Python :: 3.5',
40 'Programming Language :: Python :: 3.6',
41 'Programming Language :: Python :: 3.7',
42 'Topic :: Scientific/Engineering'
43 ],
44
45 )
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -35,6 +35,7 @@
'Operating System :: OS Independent',
'Intended Audience :: Science/Research',
'Programming Language :: Python',
+ 'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -35,6 +35,7 @@\n 'Operating System :: OS Independent',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python',\n+ 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n", "issue": "update readme on release of next version\nsuggest replacing\r\n\r\n```**Supports:** python 2.7, 3.5, 3.6```\r\n\r\nwith:\r\n\r\n```[](https://pypi.python.org/pypi/pandera/)```\r\n\r\nwhen the new metadata is available on pypi.\r\n\r\nIt will be 'missing' before pypi is updated:\r\n[](https://pypi.python.org/pypi/pandera/)\n", "before_files": [{"content": "from setuptools import setup\n\nwith open('README.md') as f:\n long_description = f.read()\n\nsetup(\n name=\"pandera\",\n version=\"0.1.3\",\n author=\"Niels Bantilan\",\n author_email=\"[email protected]\",\n description='A light-weight and flexible validation package for pandas data structures.',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url=\"https://github.com/cosmicBboy/pandera\",\n project_urls={\n 'Documentation': 'https://pandera.readthedocs.io',\n 'Issue Tracker': 'https://github.com/cosmicBboy/pandera/issues'\n },\n keywords=[\"pandas\", \"validation\", \"data-structures\"],\n license=\"MIT\",\n packages=[\n \"pandera\",\n ],\n install_requires=[\n \"enum34 ; python_version<'3.4'\",\n \"numpy >= 1.9.0\",\n \"pandas >= 0.23.0\",\n \"wrapt\",\n \"scipy ; python_version<'2.7'\",\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n platforms='any',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Operating System :: OS Independent',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering'\n ],\n\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nwith open('README.md') as f:\n long_description = f.read()\n\nsetup(\n name=\"pandera\",\n version=\"0.1.3\",\n author=\"Niels Bantilan\",\n author_email=\"[email protected]\",\n description='A light-weight and flexible validation package for pandas data structures.',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url=\"https://github.com/cosmicBboy/pandera\",\n project_urls={\n 'Documentation': 'https://pandera.readthedocs.io',\n 'Issue Tracker': 'https://github.com/cosmicBboy/pandera/issues'\n },\n keywords=[\"pandas\", \"validation\", \"data-structures\"],\n license=\"MIT\",\n packages=[\n \"pandera\",\n ],\n install_requires=[\n \"enum34 ; python_version<'3.4'\",\n \"numpy >= 1.9.0\",\n \"pandas >= 0.23.0\",\n \"wrapt\",\n \"scipy ; python_version<'2.7'\",\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n platforms='any',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Operating System :: OS Independent',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering'\n ],\n\n)\n", "path": "setup.py"}]} | 
844 | 104 |
gh_patches_debug_61695 | rasdani/github-patches | git_diff | mdn__kuma-7256 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"Report a content problem" gets wrong title
**Summary**
You end up on https://github.com/mdn/sprints/issues/new?template=issue-template.md&projects=mdn/sprints/2&labels=user-report&title=%2Fen-US
That prefills the title only to `/en-US`.
**Steps To Reproduce (STR)**
1. Go to https://developer.mozilla.org/en-US/docs/Web/JavaScript
2. Click "Report a content problem" in the "Feedback" menu
**Actual behavior**
GitHub issue form title is just `/en-US`
**Expected behavior**
`/en-US/docs/Web/JavaScript`
**Additional context**
Should it be more than just the document URI?
--- END ISSUE ---
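For illustration, here is a hedged sketch of how the prefilled link could carry the full document path; the helper name and call site are hypothetical, not kuma's actual frontend code.

```python
# Hypothetical helper: prefill the GitHub issue title with the full document
# path (e.g. "/en-US/docs/Web/JavaScript") instead of only the locale prefix.
from urllib.parse import urlencode

def content_problem_url(document_path):
    query = urlencode({
        "template": "issue-template.md",
        "labels": "user-report",
        "title": document_path,
    })
    return f"https://github.com/mdn/sprints/issues/new?{query}"
```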
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kuma/wiki/templatetags/ssr.py`
Content:
```
1 import json
2 import os
3 from functools import lru_cache
4
5 import requests
6 import requests.exceptions
7 from django.conf import settings
8 from django_jinja import library
9
10
11 @lru_cache()
12 def get_localization_data(locale):
13 """
14 Read the frontend string catalog for the specified locale, parse
15 it as JSON, and return the resulting dict. The returned values
16 are cached so that we don't have to read files all the time.
17 """
18 path = os.path.join(settings.BASE_DIR, "static", "jsi18n", locale, "react.json")
19 with open(path, "r") as f:
20 return json.load(f)
21
22
23 @library.global_function
24 def render_react(component_name, locale, url, document_data, ssr=True):
25 """
26 Render a script tag to define the data and any other HTML tags needed
27 to enable the display of a React-based UI. By default, this does
28 server side rendering, falling back to client-side rendering if
29 the SSR attempt fails. Pass False as the second argument to do
30 client-side rendering unconditionally.
31
32 Note that we are not defining a generic Jinja template tag here.
33 The code in this file is specific to Kuma's React-based UI.
34 """
35 localization_data = get_localization_data(locale)
36
37 data = {
38 "locale": locale,
39 "stringCatalog": localization_data["catalog"],
40 "pluralExpression": localization_data["plural"],
41 "url": url,
42 "documentData": document_data,
43 }
44
45 if ssr:
46 return server_side_render(component_name, data)
47 else:
48 return client_side_render(component_name, data)
49
50
51 def _render(component_name, html, script, needs_serialization=False):
52 """A utility function used by both client side and server side rendering.
53 Returns a string that includes the specified HTML and a serialized
54 form of the state dict, in the format expected by the client-side code
55 in kuma/javascript/src/index.jsx.
56 """
57 if needs_serialization:
58 assert isinstance(script, dict), type(script)
59 script = json.dumps(script).replace("</", "<\\/")
60 else:
61 script = "JSON.parse({})".format(script)
62
63 return (
64 '<div id="react-container" data-component-name="{}">{}</div>\n'
65 "<script>window._react_data = {};</script>\n"
66 ).format(component_name, html, script)
67
68
69 def client_side_render(component_name, data):
70 """
71 Output an empty <div> and a script with complete state so that
72 the UI can be rendered on the client-side.
73 """
74 return _render(component_name, "", data, needs_serialization=True)
75
76
77 def server_side_render(component_name, data):
78 """
79 Pre-render the React UI to HTML and output it in a <div>, and then
80 also pass the necessary serialized state in a <script> so that
81 React on the client side can sync itself with the pre-rendred HTML.
82
83 If any exceptions are thrown during the server-side rendering, we
84 fall back to client-side rendering instead.
85 """
86 url = "{}/{}".format(settings.SSR_URL, component_name)
87 timeout = settings.SSR_TIMEOUT
88 # Try server side rendering
89 try:
90 # POST the document data as JSON to the SSR server and we
91 # should get HTML text (encoded as plain text) in the body
92 # of the response
93 response = requests.post(
94 url,
95 headers={"Content-Type": "application/json"},
96 data=json.dumps(data).encode("utf8"),
97 timeout=timeout,
98 )
99
100 # Even though we've got fully rendered HTML now, we still need to
101 # send the document data along with it so that React can sync its
102 # state on the client side with what is in the HTML. When rendering
103 # a document page, the data includes long strings of HTML that
104 # we can get away without duplicating. So as an optimization when
105 # component_name is "document", we're going to make a copy of the
106 # data (because the original belongs to our caller) and delete those
107 # strings from the copy.
108 #
109 # WARNING: This optimization can save 20kb in data transfer
110 # for typical pages, but it requires us to be very careful on
111 # the frontend. If any components render conditionally based on
112 # the state of bodyHTML, tocHTML or quickLinkHTML, then they will
113 # render differently on the client than during SSR, and the hydrate
114 # will not just work cleanly, and those components will re-render
115 # with empty strings. This has already caused Bug 1558308, and
116 # I've commented it out because the benefit in file size doesn't
117 # seem worth the risk of client-side bugs.
118 #
119 # As an alternative, it ought to be possible to extract the HTML
120 # strings from the SSR'ed document and rebuild the document object
121 # on the client right before we call hydrate(). So if you uncomment
122 # the lines below, you should also edit kuma/javascript/src/index.jsx
123 # to extract the HTML from the document as well.
124 #
125 # if component_name == 'document':
126 # data = data.copy()
127 # data['documentData'] = data['documentData'].copy()
128 # data['documentData'].update(bodyHTML='',
129 # tocHTML='',
130 # quickLinksHTML='')
131 response.raise_for_status()
132 result = response.json()
133 return _render(component_name, result["html"], result["script"])
134
135 except requests.exceptions.RequestException as exception:
136 print(f"{exception.__class__} error contacting SSR server.")
137 print("Falling back to client side rendering.")
138 return client_side_render(component_name, data)
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kuma/wiki/templatetags/ssr.py b/kuma/wiki/templatetags/ssr.py
--- a/kuma/wiki/templatetags/ssr.py
+++ b/kuma/wiki/templatetags/ssr.py
@@ -41,7 +41,6 @@
"url": url,
"documentData": document_data,
}
-
if ssr:
return server_side_render(component_name, data)
else:
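As a rough sanity check of the surrounding fallback logic, the sketch below forces the SSR request to fail; it assumes Django settings such as SSR_URL are configured, so it is illustrative rather than a test from the repo.

```python
# When the SSR service is unreachable, server_side_render should degrade to
# client_side_render, whose output still carries the component marker.
from unittest import mock
import requests

with mock.patch("requests.post", side_effect=requests.exceptions.ConnectionError("ssr down")):
    html = server_side_render("document", {"locale": "en-US", "url": "/en-US", "documentData": None})

assert 'data-component-name="document"' in html  # client-side fallback was used
```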
| {"golden_diff": "diff --git a/kuma/wiki/templatetags/ssr.py b/kuma/wiki/templatetags/ssr.py\n--- a/kuma/wiki/templatetags/ssr.py\n+++ b/kuma/wiki/templatetags/ssr.py\n@@ -41,7 +41,6 @@\n \"url\": url,\n \"documentData\": document_data,\n }\n-\n if ssr:\n return server_side_render(component_name, data)\n else:\n", "issue": "\"Report a content problem\" gets wrong title\n**Summary**\r\nYou end up on https://github.com/mdn/sprints/issues/new?template=issue-template.md&projects=mdn/sprints/2&labels=user-report&title=%2Fen-US\r\n\r\nThat prefills the title only to `/en-US`.\r\n\r\n\r\n**Steps To Reproduce (STR)**\r\n\r\n1. Go to https://developer.mozilla.org/en-US/docs/Web/JavaScript\r\n2. Click \"Report a content problem\" in the \"Feedback\" menu\r\n\r\n\r\n**Actual behavior**\r\nGitHub issue form title is just `/en-US`\r\n\r\n\r\n**Expected behavior**\r\n`/en-US/docs/Web/JavaScript`\r\n\r\n\r\n**Additional context**\r\nShould it be more than just the document URI?\r\n\n", "before_files": [{"content": "import json\nimport os\nfrom functools import lru_cache\n\nimport requests\nimport requests.exceptions\nfrom django.conf import settings\nfrom django_jinja import library\n\n\n@lru_cache()\ndef get_localization_data(locale):\n \"\"\"\n Read the frontend string catalog for the specified locale, parse\n it as JSON, and return the resulting dict. The returned values\n are cached so that we don't have to read files all the time.\n \"\"\"\n path = os.path.join(settings.BASE_DIR, \"static\", \"jsi18n\", locale, \"react.json\")\n with open(path, \"r\") as f:\n return json.load(f)\n\n\[email protected]_function\ndef render_react(component_name, locale, url, document_data, ssr=True):\n \"\"\"\n Render a script tag to define the data and any other HTML tags needed\n to enable the display of a React-based UI. By default, this does\n server side rendering, falling back to client-side rendering if\n the SSR attempt fails. 
Pass False as the second argument to do\n client-side rendering unconditionally.\n\n Note that we are not defining a generic Jinja template tag here.\n The code in this file is specific to Kuma's React-based UI.\n \"\"\"\n localization_data = get_localization_data(locale)\n\n data = {\n \"locale\": locale,\n \"stringCatalog\": localization_data[\"catalog\"],\n \"pluralExpression\": localization_data[\"plural\"],\n \"url\": url,\n \"documentData\": document_data,\n }\n\n if ssr:\n return server_side_render(component_name, data)\n else:\n return client_side_render(component_name, data)\n\n\ndef _render(component_name, html, script, needs_serialization=False):\n \"\"\"A utility function used by both client side and server side rendering.\n Returns a string that includes the specified HTML and a serialized\n form of the state dict, in the format expected by the client-side code\n in kuma/javascript/src/index.jsx.\n \"\"\"\n if needs_serialization:\n assert isinstance(script, dict), type(script)\n script = json.dumps(script).replace(\"</\", \"<\\\\/\")\n else:\n script = \"JSON.parse({})\".format(script)\n\n return (\n '<div id=\"react-container\" data-component-name=\"{}\">{}</div>\\n'\n \"<script>window._react_data = {};</script>\\n\"\n ).format(component_name, html, script)\n\n\ndef client_side_render(component_name, data):\n \"\"\"\n Output an empty <div> and a script with complete state so that\n the UI can be rendered on the client-side.\n \"\"\"\n return _render(component_name, \"\", data, needs_serialization=True)\n\n\ndef server_side_render(component_name, data):\n \"\"\"\n Pre-render the React UI to HTML and output it in a <div>, and then\n also pass the necessary serialized state in a <script> so that\n React on the client side can sync itself with the pre-rendred HTML.\n\n If any exceptions are thrown during the server-side rendering, we\n fall back to client-side rendering instead.\n \"\"\"\n url = \"{}/{}\".format(settings.SSR_URL, component_name)\n timeout = settings.SSR_TIMEOUT\n # Try server side rendering\n try:\n # POST the document data as JSON to the SSR server and we\n # should get HTML text (encoded as plain text) in the body\n # of the response\n response = requests.post(\n url,\n headers={\"Content-Type\": \"application/json\"},\n data=json.dumps(data).encode(\"utf8\"),\n timeout=timeout,\n )\n\n # Even though we've got fully rendered HTML now, we still need to\n # send the document data along with it so that React can sync its\n # state on the client side with what is in the HTML. When rendering\n # a document page, the data includes long strings of HTML that\n # we can get away without duplicating. So as an optimization when\n # component_name is \"document\", we're going to make a copy of the\n # data (because the original belongs to our caller) and delete those\n # strings from the copy.\n #\n # WARNING: This optimization can save 20kb in data transfer\n # for typical pages, but it requires us to be very careful on\n # the frontend. If any components render conditionally based on\n # the state of bodyHTML, tocHTML or quickLinkHTML, then they will\n # render differently on the client than during SSR, and the hydrate\n # will not just work cleanly, and those components will re-render\n # with empty strings. 
This has already caused Bug 1558308, and\n # I've commented it out because the benefit in file size doesn't\n # seem worth the risk of client-side bugs.\n #\n # As an alternative, it ought to be possible to extract the HTML\n # strings from the SSR'ed document and rebuild the document object\n # on the client right before we call hydrate(). So if you uncomment\n # the lines below, you should also edit kuma/javascript/src/index.jsx\n # to extract the HTML from the document as well.\n #\n # if component_name == 'document':\n # data = data.copy()\n # data['documentData'] = data['documentData'].copy()\n # data['documentData'].update(bodyHTML='',\n # tocHTML='',\n # quickLinksHTML='')\n response.raise_for_status()\n result = response.json()\n return _render(component_name, result[\"html\"], result[\"script\"])\n\n except requests.exceptions.RequestException as exception:\n print(f\"{exception.__class__} error contacting SSR server.\")\n print(\"Falling back to client side rendering.\")\n return client_side_render(component_name, data)\n", "path": "kuma/wiki/templatetags/ssr.py"}], "after_files": [{"content": "import json\nimport os\nfrom functools import lru_cache\n\nimport requests\nimport requests.exceptions\nfrom django.conf import settings\nfrom django_jinja import library\n\n\n@lru_cache()\ndef get_localization_data(locale):\n \"\"\"\n Read the frontend string catalog for the specified locale, parse\n it as JSON, and return the resulting dict. The returned values\n are cached so that we don't have to read files all the time.\n \"\"\"\n path = os.path.join(settings.BASE_DIR, \"static\", \"jsi18n\", locale, \"react.json\")\n with open(path, \"r\") as f:\n return json.load(f)\n\n\[email protected]_function\ndef render_react(component_name, locale, url, document_data, ssr=True):\n \"\"\"\n Render a script tag to define the data and any other HTML tags needed\n to enable the display of a React-based UI. By default, this does\n server side rendering, falling back to client-side rendering if\n the SSR attempt fails. 
Pass False as the second argument to do\n client-side rendering unconditionally.\n\n Note that we are not defining a generic Jinja template tag here.\n The code in this file is specific to Kuma's React-based UI.\n \"\"\"\n localization_data = get_localization_data(locale)\n\n data = {\n \"locale\": locale,\n \"stringCatalog\": localization_data[\"catalog\"],\n \"pluralExpression\": localization_data[\"plural\"],\n \"url\": url,\n \"documentData\": document_data,\n }\n if ssr:\n return server_side_render(component_name, data)\n else:\n return client_side_render(component_name, data)\n\n\ndef _render(component_name, html, script, needs_serialization=False):\n \"\"\"A utility function used by both client side and server side rendering.\n Returns a string that includes the specified HTML and a serialized\n form of the state dict, in the format expected by the client-side code\n in kuma/javascript/src/index.jsx.\n \"\"\"\n if needs_serialization:\n assert isinstance(script, dict), type(script)\n script = json.dumps(script).replace(\"</\", \"<\\\\/\")\n else:\n script = \"JSON.parse({})\".format(script)\n\n return (\n '<div id=\"react-container\" data-component-name=\"{}\">{}</div>\\n'\n \"<script>window._react_data = {};</script>\\n\"\n ).format(component_name, html, script)\n\n\ndef client_side_render(component_name, data):\n \"\"\"\n Output an empty <div> and a script with complete state so that\n the UI can be rendered on the client-side.\n \"\"\"\n return _render(component_name, \"\", data, needs_serialization=True)\n\n\ndef server_side_render(component_name, data):\n \"\"\"\n Pre-render the React UI to HTML and output it in a <div>, and then\n also pass the necessary serialized state in a <script> so that\n React on the client side can sync itself with the pre-rendred HTML.\n\n If any exceptions are thrown during the server-side rendering, we\n fall back to client-side rendering instead.\n \"\"\"\n url = \"{}/{}\".format(settings.SSR_URL, component_name)\n timeout = settings.SSR_TIMEOUT\n # Try server side rendering\n try:\n # POST the document data as JSON to the SSR server and we\n # should get HTML text (encoded as plain text) in the body\n # of the response\n response = requests.post(\n url,\n headers={\"Content-Type\": \"application/json\"},\n data=json.dumps(data).encode(\"utf8\"),\n timeout=timeout,\n )\n\n # Even though we've got fully rendered HTML now, we still need to\n # send the document data along with it so that React can sync its\n # state on the client side with what is in the HTML. When rendering\n # a document page, the data includes long strings of HTML that\n # we can get away without duplicating. So as an optimization when\n # component_name is \"document\", we're going to make a copy of the\n # data (because the original belongs to our caller) and delete those\n # strings from the copy.\n #\n # WARNING: This optimization can save 20kb in data transfer\n # for typical pages, but it requires us to be very careful on\n # the frontend. If any components render conditionally based on\n # the state of bodyHTML, tocHTML or quickLinkHTML, then they will\n # render differently on the client than during SSR, and the hydrate\n # will not just work cleanly, and those components will re-render\n # with empty strings. 
This has already caused Bug 1558308, and\n # I've commented it out because the benefit in file size doesn't\n # seem worth the risk of client-side bugs.\n #\n # As an alternative, it ought to be possible to extract the HTML\n # strings from the SSR'ed document and rebuild the document object\n # on the client right before we call hydrate(). So if you uncomment\n # the lines below, you should also edit kuma/javascript/src/index.jsx\n # to extract the HTML from the document as well.\n #\n # if component_name == 'document':\n # data = data.copy()\n # data['documentData'] = data['documentData'].copy()\n # data['documentData'].update(bodyHTML='',\n # tocHTML='',\n # quickLinksHTML='')\n response.raise_for_status()\n result = response.json()\n return _render(component_name, result[\"html\"], result[\"script\"])\n\n except requests.exceptions.RequestException as exception:\n print(f\"{exception.__class__} error contacting SSR server.\")\n print(\"Falling back to client side rendering.\")\n return client_side_render(component_name, data)\n", "path": "kuma/wiki/templatetags/ssr.py"}]} | 1,979 | 103 |
gh_patches_debug_3801 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-167 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Host docs, build on release
Decide where to host generated docs, configure CI to generate (and possibly upload) new docs on each release.
See #6 for CI bootstrapping.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # http://www.sphinx-doc.org/en/master/config
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12
13 import os
14 import sys
15
16 sys.path.insert(0, os.path.abspath("../opentelemetry-api/src/"))
17
18
19 # -- Project information -----------------------------------------------------
20
21 project = "OpenTelemetry"
22 copyright = "2019, OpenTelemetry Authors"
23 author = "OpenTelemetry Authors"
24
25
26 # -- General configuration ---------------------------------------------------
27
28 # Easy automatic cross-references for `code in backticks`
29 default_role = "any"
30
31 # Add any Sphinx extension module names here, as strings. They can be
32 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
33 # ones.
34 extensions = [
35 # API doc generation
36 "sphinx.ext.autodoc",
37 # Support for google-style docstrings
38 "sphinx.ext.napoleon",
39 # Infer types from hints instead of docstrings
40 "sphinx_autodoc_typehints",
41 # Add links to source from generated docs
42 "sphinx.ext.viewcode",
43 # Link to other sphinx docs
44 "sphinx.ext.intersphinx",
45 ]
46
47 intersphinx_mapping = {"python": ("https://docs.python.org/3/", None)}
48
49 # http://www.sphinx-doc.org/en/master/config.html#confval-nitpicky
50 # Sphinx will warn about all references where the target cannot be found.
51 nitpicky = True
52 nitpick_ignore = []
53
54 # Add any paths that contain templates here, relative to this directory.
55 templates_path = ["_templates"]
56
57 # List of patterns, relative to source directory, that match files and
58 # directories to ignore when looking for source files.
59 # This pattern also affects html_static_path and html_extra_path.
60 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
61
62 autodoc_default_options = {
63 "members": True,
64 "undoc-members": True,
65 "show-inheritance": True,
66 "member-order": "bysource",
67 }
68
69 # -- Options for HTML output -------------------------------------------------
70
71 # The theme to use for HTML and HTML Help pages. See the documentation for
72 # a list of builtin themes.
73 #
74 html_theme = "sphinx_rtd_theme"
75
76 # Add any paths that contain custom static files (such as style sheets) here,
77 # relative to this directory. They are copied after the builtin static files,
78 # so a file named "default.css" will overwrite the builtin "default.css".
79 html_static_path = []
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -42,6 +42,9 @@
"sphinx.ext.viewcode",
# Link to other sphinx docs
"sphinx.ext.intersphinx",
+ # Add a .nojekyll file to the generated HTML docs
+ # https://help.github.com/en/articles/files-that-start-with-an-underscore-are-missing
+ "sphinx.ext.githubpages",
]
intersphinx_mapping = {"python": ("https://docs.python.org/3/", None)}
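For context, `sphinx.ext.githubpages` does one small thing: it writes a `.nojekyll` marker into the HTML output so GitHub Pages' Jekyll pipeline stops discarding underscore-prefixed directories such as `_static`. A minimal post-build sanity check (the output path here is an assumption, not part of the patch) might be:
```python
import pathlib

# Hypothetical check after `sphinx-build -b html docs docs/_build/html`:
# the githubpages extension should have dropped a .nojekyll marker.
html_out = pathlib.Path("docs/_build/html")
assert (html_out / ".nojekyll").exists(), "sphinx.ext.githubpages did not run"
```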
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -42,6 +42,9 @@\n \"sphinx.ext.viewcode\",\n # Link to other sphinx docs\n \"sphinx.ext.intersphinx\",\n+ # Add a .nojekyll file to the generated HTML docs\n+ # https://help.github.com/en/articles/files-that-start-with-an-underscore-are-missing\n+ \"sphinx.ext.githubpages\",\n ]\n \n intersphinx_mapping = {\"python\": (\"https://docs.python.org/3/\", None)}\n", "issue": "Host docs, build on release\nDecide where to host generated docs, configure CI to generate (and possibly upload) new docs on each release.\r\n\r\nSee #6 for CI bootstrapping.\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath(\"../opentelemetry-api/src/\"))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"OpenTelemetry\"\ncopyright = \"2019, OpenTelemetry Authors\"\nauthor = \"OpenTelemetry Authors\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Easy automatic cross-references for `code in backticks`\ndefault_role = \"any\"\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n # API doc generation\n \"sphinx.ext.autodoc\",\n # Support for google-style docstrings\n \"sphinx.ext.napoleon\",\n # Infer types from hints instead of docstrings\n \"sphinx_autodoc_typehints\",\n # Add links to source from generated docs\n \"sphinx.ext.viewcode\",\n # Link to other sphinx docs\n \"sphinx.ext.intersphinx\",\n]\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3/\", None)}\n\n# http://www.sphinx-doc.org/en/master/config.html#confval-nitpicky\n# Sphinx will warn about all references where the target cannot be found.\nnitpicky = True\nnitpick_ignore = []\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\nautodoc_default_options = {\n \"members\": True,\n \"undoc-members\": True,\n \"show-inheritance\": True,\n \"member-order\": \"bysource\",\n}\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = []\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath(\"../opentelemetry-api/src/\"))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"OpenTelemetry\"\ncopyright = \"2019, OpenTelemetry Authors\"\nauthor = \"OpenTelemetry Authors\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Easy automatic cross-references for `code in backticks`\ndefault_role = \"any\"\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n # API doc generation\n \"sphinx.ext.autodoc\",\n # Support for google-style docstrings\n \"sphinx.ext.napoleon\",\n # Infer types from hints instead of docstrings\n \"sphinx_autodoc_typehints\",\n # Add links to source from generated docs\n \"sphinx.ext.viewcode\",\n # Link to other sphinx docs\n \"sphinx.ext.intersphinx\",\n # Add a .nojekyll file to the generated HTML docs\n # https://help.github.com/en/articles/files-that-start-with-an-underscore-are-missing\n \"sphinx.ext.githubpages\",\n]\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3/\", None)}\n\n# http://www.sphinx-doc.org/en/master/config.html#confval-nitpicky\n# Sphinx will warn about all references where the target cannot be found.\nnitpicky = True\nnitpick_ignore = []\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\nautodoc_default_options = {\n \"members\": True,\n \"undoc-members\": True,\n \"show-inheritance\": True,\n \"member-order\": \"bysource\",\n}\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = []\n", "path": "docs/conf.py"}]} | 1,057 | 129 |
gh_patches_debug_641 | rasdani/github-patches | git_diff | pex-tool__pex-2219 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.143
On the docket:
+ [x] pex fails to build pycryptodome due to filename too long #2087
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.142"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.142"
+__version__ = "2.1.143"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.142\"\n+__version__ = \"2.1.143\"\n", "issue": "Release 2.1.143\nOn the docket:\r\n+ [x] pex fails to build pycryptodome due to filename too long #2087 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.142\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.143\"\n", "path": "pex/version.py"}]} | 348 | 98 |
gh_patches_debug_21265 | rasdani/github-patches | git_diff | scikit-hep__pyhf-436 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Test Deploy feature is broken
# Description
See https://github.com/FaradayRF/faradayio/issues/35 for a related issue - I think. Failing job on master here: https://travis-ci.org/diana-hep/pyhf/builds/517678508?utm_source=github_status&utm_medium=notification
# Expected Behavior
Expect it to pass.
# Actual Behavior
Observe it failing.
# Checklist
- [x] Run `git fetch` to get the most up to date version of `master`
- [x] Searched through existing Issues to confirm this is not a duplicate issue
- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from setuptools import setup, find_packages
4 from os import path
5 import sys
6
7 this_directory = path.abspath(path.dirname(__file__))
8 if sys.version_info.major < 3:
9 from io import open
10 with open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:
11 long_description = readme_md.read()
12
13 extras_require = {
14 'tensorflow': [
15 'tensorflow~=1.13',
16 'tensorflow-probability~=0.5',
17 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass
18 'setuptools<=39.1.0',
19 ],
20 'torch': ['torch~=1.0'],
21 'mxnet': ['mxnet~=1.0', 'requests~=2.18.4', 'numpy<1.15.0,>=1.8.2'],
22 # 'dask': [
23 # 'dask[array]'
24 # ],
25 'xmlio': ['uproot'],
26 'minuit': ['iminuit'],
27 'develop': [
28 'pyflakes',
29 'pytest~=3.5',
30 'pytest-cov>=2.5.1',
31 'pytest-mock',
32 'pytest-benchmark[histogram]',
33 'pytest-console-scripts',
34 'python-coveralls',
35 'coverage>=4.0', # coveralls
36 'matplotlib',
37 'jupyter',
38 'nbdime',
39 'uproot~=3.3',
40 'papermill~=0.16',
41 'graphviz',
42 'bumpversion',
43 'sphinx',
44 'sphinxcontrib-bibtex',
45 'sphinxcontrib-napoleon',
46 'sphinx_rtd_theme',
47 'nbsphinx',
48 'sphinx-issues',
49 'm2r',
50 'jsonpatch',
51 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now
52 'pre-commit',
53 'black;python_version>="3.6"', # Black is Python3 only
54 'twine',
55 ],
56 }
57 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
58
59 setup(
60 name='pyhf',
61 version='0.0.16',
62 description='(partial) pure python histfactory implementation',
63 long_description=long_description,
64 long_description_content_type='text/markdown',
65 url='https://github.com/diana-hep/pyhf',
66 author='Lukas Heinrich',
67 author_email='[email protected]',
68 license='Apache',
69 keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',
70 classifiers=[
71 "Programming Language :: Python :: 2",
72 "Programming Language :: Python :: 2.7",
73 "Programming Language :: Python :: 3",
74 "Programming Language :: Python :: 3.6",
75 "Programming Language :: Python :: 3.7",
76 ],
77 packages=find_packages(),
78 include_package_data=True,
79 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*",
80 install_requires=[
81 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet
82 'click>=6.0', # for console scripts,
83 'tqdm', # for readxml
84 'six', # for modifiers
85 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6
86 'jsonpatch',
87 ],
88 extras_require=extras_require,
89 entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},
90 dependency_links=[],
91 )
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -56,6 +56,29 @@
}
extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
+
+def _is_test_pypi():
+ """
+ Determine if the Travis CI environment has TESTPYPI_UPLOAD defined and
+ set to true (c.f. .travis.yml)
+
+ The use_scm_version kwarg accepts a callable for the local_scheme
+ configuration parameter with argument "version". This can be replaced
+ with a lambda as the desired version structure is {next_version}.dev{distance}
+ c.f. https://github.com/pypa/setuptools_scm/#importing-in-setuppy
+
+ As the scm versioning is only desired for TestPyPI, for depolyment to PyPI the version
+ controlled through bumpversion is used.
+ """
+ from os import getenv
+
+ return (
+ {'local_scheme': lambda version: ''}
+ if getenv('TESTPYPI_UPLOAD') == 'true'
+ else False
+ )
+
+
setup(
name='pyhf',
version='0.0.16',
@@ -88,4 +111,5 @@
extras_require=extras_require,
entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},
dependency_links=[],
+ use_scm_version=_is_test_pypi(),
)
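As a hedged illustration of why the callable local scheme matters (the version strings below are examples, not values from an actual build): returning an empty local scheme strips the PEP 440 `+local` segment, which PyPI and TestPyPI reject on upload, while leaving the `{next_version}.dev{distance}` part intact:
```python
# Sketch only: mimics roughly what setuptools_scm does with the config above.
def format_version(public, local_scheme):
    local = local_scheme(public)  # the lambda above always returns ''
    return public + (f"+{local}" if local else "")

print(format_version("0.0.17.dev5", lambda v: ""))          # 0.0.17.dev5
print(format_version("0.0.17.dev5", lambda v: "g1a2b3c4"))  # 0.0.17.dev5+g1a2b3c4
```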
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -56,6 +56,29 @@\n }\n extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n \n+\n+def _is_test_pypi():\n+ \"\"\"\n+ Determine if the Travis CI environment has TESTPYPI_UPLOAD defined and\n+ set to true (c.f. .travis.yml)\n+\n+ The use_scm_version kwarg accepts a callable for the local_scheme\n+ configuration parameter with argument \"version\". This can be replaced\n+ with a lambda as the desired version structure is {next_version}.dev{distance}\n+ c.f. https://github.com/pypa/setuptools_scm/#importing-in-setuppy\n+\n+ As the scm versioning is only desired for TestPyPI, for depolyment to PyPI the version\n+ controlled through bumpversion is used.\n+ \"\"\"\n+ from os import getenv\n+\n+ return (\n+ {'local_scheme': lambda version: ''}\n+ if getenv('TESTPYPI_UPLOAD') == 'true'\n+ else False\n+ )\n+\n+\n setup(\n name='pyhf',\n version='0.0.16',\n@@ -88,4 +111,5 @@\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},\n dependency_links=[],\n+ use_scm_version=_is_test_pypi(),\n )\n", "issue": "Test Deploy feature is broken\n# Description\r\n\r\nSee https://github.com/FaradayRF/faradayio/issues/35 for a related issue - I think. Failing job on master here: https://travis-ci.org/diana-hep/pyhf/builds/517678508?utm_source=github_status&utm_medium=notification\r\n\r\n# Expected Behavior\r\n\r\nExpect it to pass.\r\n\r\n# Actual Behavior\r\n\r\nObserve it failing.\r\n\r\n# Checklist\r\n\r\n- [x] Run `git fetch` to get the most up to date version of `master`\r\n- [x] Searched through existing Issues to confirm this is not a duplicate issue\r\n- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\nfrom os import path\nimport sys\n\nthis_directory = path.abspath(path.dirname(__file__))\nif sys.version_info.major < 3:\n from io import open\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:\n long_description = readme_md.read()\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow~=1.13',\n 'tensorflow-probability~=0.5',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n 'torch': ['torch~=1.0'],\n 'mxnet': ['mxnet~=1.0', 'requests~=2.18.4', 'numpy<1.15.0,>=1.8.2'],\n # 'dask': [\n # 'dask[array]'\n # ],\n 'xmlio': ['uproot'],\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'python-coveralls',\n 'coverage>=4.0', # coveralls\n 'matplotlib',\n 'jupyter',\n 'nbdime',\n 'uproot~=3.3',\n 'papermill~=0.16',\n 'graphviz',\n 'bumpversion',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'sphinx-issues',\n 'm2r',\n 'jsonpatch',\n 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now\n 'pre-commit',\n 'black;python_version>=\"3.6\"', # Black is Python3 only\n 'twine',\n ],\n}\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\nsetup(\n name='pyhf',\n version='0.0.16',\n description='(partial) pure python histfactory implementation',\n long_description=long_description,\n 
long_description_content_type='text/markdown',\n url='https://github.com/diana-hep/pyhf',\n author='Lukas Heinrich',\n author_email='[email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(),\n include_package_data=True,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6\n 'jsonpatch',\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},\n dependency_links=[],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\nfrom os import path\nimport sys\n\nthis_directory = path.abspath(path.dirname(__file__))\nif sys.version_info.major < 3:\n from io import open\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:\n long_description = readme_md.read()\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow~=1.13',\n 'tensorflow-probability~=0.5',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n 'torch': ['torch~=1.0'],\n 'mxnet': ['mxnet~=1.0', 'requests~=2.18.4', 'numpy<1.15.0,>=1.8.2'],\n # 'dask': [\n # 'dask[array]'\n # ],\n 'xmlio': ['uproot'],\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'python-coveralls',\n 'coverage>=4.0', # coveralls\n 'matplotlib',\n 'jupyter',\n 'nbdime',\n 'uproot~=3.3',\n 'papermill~=0.16',\n 'graphviz',\n 'bumpversion',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'sphinx-issues',\n 'm2r',\n 'jsonpatch',\n 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now\n 'pre-commit',\n 'black;python_version>=\"3.6\"', # Black is Python3 only\n 'twine',\n ],\n}\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\ndef _is_test_pypi():\n \"\"\"\n Determine if the Travis CI environment has TESTPYPI_UPLOAD defined and\n set to true (c.f. .travis.yml)\n\n The use_scm_version kwarg accepts a callable for the local_scheme\n configuration parameter with argument \"version\". This can be replaced\n with a lambda as the desired version structure is {next_version}.dev{distance}\n c.f. 
https://github.com/pypa/setuptools_scm/#importing-in-setuppy\n\n As the scm versioning is only desired for TestPyPI, for depolyment to PyPI the version\n controlled through bumpversion is used.\n \"\"\"\n from os import getenv\n\n return (\n {'local_scheme': lambda version: ''}\n if getenv('TESTPYPI_UPLOAD') == 'true'\n else False\n )\n\n\nsetup(\n name='pyhf',\n version='0.0.16',\n description='(partial) pure python histfactory implementation',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/diana-hep/pyhf',\n author='Lukas Heinrich',\n author_email='[email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(),\n include_package_data=True,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6\n 'jsonpatch',\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},\n dependency_links=[],\n use_scm_version=_is_test_pypi(),\n)\n", "path": "setup.py"}]} | 1,452 | 326 |
gh_patches_debug_29031 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-1495 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Forbid setting user and chat data attributes
Inevitably users will attempt to do `context.user_data = {'my': 'dict'}` or `context.chat_data = whatever`. This will lead to confusing behaviour.
I propose making those fields properties without a setter, which will raise a more helpful error. Depending on how far we want to go, we can also suggest that the user use `update(other_dict)` if they attempt to set the field.
See https://t.me/pythontelegrambotgroup/222138 onwards.
--- END ISSUE ---
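A minimal sketch of the proposed pattern (simplified, not the library's final code) — the setter exists only to fail loudly and point users at in-place mutation:
```python
class Context:
    def __init__(self):
        self._user_data = {}

    @property
    def user_data(self):
        return self._user_data

    @user_data.setter
    def user_data(self, value):
        # Reassignment would silently detach the shared per-user dict,
        # so fail loudly and suggest mutating it instead.
        raise AttributeError(
            "You can not assign a new value to user_data; "
            "use user_data.update(...) to modify it in place"
        )
```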
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `telegram/ext/callbackcontext.py`
Content:
```
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2018
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains the CallbackContext class."""
20
21 from telegram import Update
22
23
24 class CallbackContext(object):
25 """
26 This is a context object passed to the callback called by :class:`telegram.ext.Handler`
27 or by the :class:`telegram.ext.Dispatcher` in an error handler added by
28 :attr:`telegram.ext.Dispatcher.add_error_handler` or to the callback of a
29 :class:`telegram.ext.Job`.
30
31 Note:
32 :class:`telegram.ext.Dispatcher` will create a single context for an entire update. This
33 means that if you got 2 handlers in different groups and they both get called, they will
34 get passed the same `CallbackContext` object (of course with proper attributes like
35 `.matches` differing). This allows you to add custom attributes in a lower handler group
36 callback, and then subsequently access those attributes in a higher handler group callback.
37 Note that the attributes on `CallbackContext` might change in the future, so make sure to
38 use a fairly unique name for the attributes.
39
40 Warning:
41 Do not combine custom attributes and @run_async. Due to how @run_async works, it will
42 almost certainly execute the callbacks for an update out of order, and the attributes
43 that you think you added will not be present.
44
45 Attributes:
46 chat_data (:obj:`dict`, optional): A dict that can be used to keep any data in. For each
47 update from the same chat it will be the same ``dict``.
48 user_data (:obj:`dict`, optional): A dict that can be used to keep any data in. For each
49 update from the same user it will be the same ``dict``.
50 matches (List[:obj:`re match object`], optional): If the associated update originated from
51 a regex-supported handler or had a :class:`Filters.regex`, this will contain a list of
52 match objects for every pattern where ``re.search(pattern, string)`` returned a match.
53 Note that filters short circuit, so combined regex filters will not always
54 be evaluated.
55 args (List[:obj:`str`], optional): Arguments passed to a command if the associated update
56 is handled by :class:`telegram.ext.CommandHandler`, :class:`telegram.ext.PrefixHandler`
57 or :class:`telegram.ext.StringCommandHandler`. It contains a list of the words in the
58 text after the command, using any whitespace string as a delimiter.
59 error (:class:`telegram.TelegramError`, optional): The Telegram error that was raised.
60 Only present when passed to a error handler registered with
61 :attr:`telegram.ext.Dispatcher.add_error_handler`.
62 job (:class:`telegram.ext.Job`): The job that that originated this callback.
63 Only present when passed to the callback of :class:`telegram.ext.Job`.
64
65 """
66
67 def __init__(self, dispatcher):
68 """
69 Args:
70 dispatcher (:class:`telegram.ext.Dispatcher`):
71 """
72 if not dispatcher.use_context:
73 raise ValueError('CallbackContext should not be used with a non context aware '
74 'dispatcher!')
75 self._dispatcher = dispatcher
76 self.chat_data = None
77 self.user_data = None
78 self.args = None
79 self.matches = None
80 self.error = None
81 self.job = None
82
83 @classmethod
84 def from_error(cls, update, error, dispatcher):
85 self = cls.from_update(update, dispatcher)
86 self.error = error
87 return self
88
89 @classmethod
90 def from_update(cls, update, dispatcher):
91 self = cls(dispatcher)
92 if update is not None and isinstance(update, Update):
93 chat = update.effective_chat
94 user = update.effective_user
95
96 if chat:
97 self.chat_data = dispatcher.chat_data[chat.id]
98 if user:
99 self.user_data = dispatcher.user_data[user.id]
100 return self
101
102 @classmethod
103 def from_job(cls, job, dispatcher):
104 self = cls(dispatcher)
105 self.job = job
106 return self
107
108 def update(self, data):
109 self.__dict__.update(data)
110
111 @property
112 def bot(self):
113 """:class:`telegram.Bot`: The bot associated with this context."""
114 return self._dispatcher.bot
115
116 @property
117 def job_queue(self):
118 """
119 :class:`telegram.ext.JobQueue`: The ``JobQueue`` used by the
120 :class:`telegram.ext.Dispatcher` and (usually) the :class:`telegram.ext.Updater`
121 associated with this context.
122
123 """
124 return self._dispatcher.job_queue
125
126 @property
127 def update_queue(self):
128 """
129 :class:`queue.Queue`: The ``Queue`` instance used by the
130 :class:`telegram.ext.Dispatcher` and (usually) the :class:`telegram.ext.Updater`
131 associated with this context.
132
133 """
134 return self._dispatcher.update_queue
135
136 @property
137 def match(self):
138 """
139 `Regex match type`: The first match from :attr:`matches`.
140 Useful if you are only filtering using a single regex filter.
141 Returns `None` if :attr:`matches` is empty.
142 """
143 try:
144 return self.matches[0] # pylint: disable=unsubscriptable-object
145 except (IndexError, TypeError):
146 return None
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/telegram/ext/callbackcontext.py b/telegram/ext/callbackcontext.py
--- a/telegram/ext/callbackcontext.py
+++ b/telegram/ext/callbackcontext.py
@@ -73,13 +73,31 @@
raise ValueError('CallbackContext should not be used with a non context aware '
'dispatcher!')
self._dispatcher = dispatcher
- self.chat_data = None
- self.user_data = None
+ self._chat_data = None
+ self._user_data = None
self.args = None
self.matches = None
self.error = None
self.job = None
+ @property
+ def chat_data(self):
+ return self._chat_data
+
+ @chat_data.setter
+ def chat_data(self, value):
+ raise AttributeError("You can not assign a new value to chat_data, see "
+ "https://git.io/fjxKe")
+
+ @property
+ def user_data(self):
+ return self._user_data
+
+ @user_data.setter
+ def user_data(self, value):
+ raise AttributeError("You can not assign a new value to user_data, see "
+ "https://git.io/fjxKe")
+
@classmethod
def from_error(cls, update, error, dispatcher):
self = cls.from_update(update, dispatcher)
@@ -94,9 +112,9 @@
user = update.effective_user
if chat:
- self.chat_data = dispatcher.chat_data[chat.id]
+ self._chat_data = dispatcher.chat_data[chat.id]
if user:
- self.user_data = dispatcher.user_data[user.id]
+ self._user_data = dispatcher.user_data[user.id]
return self
@classmethod
| {"golden_diff": "diff --git a/telegram/ext/callbackcontext.py b/telegram/ext/callbackcontext.py\n--- a/telegram/ext/callbackcontext.py\n+++ b/telegram/ext/callbackcontext.py\n@@ -73,13 +73,31 @@\n raise ValueError('CallbackContext should not be used with a non context aware '\n 'dispatcher!')\n self._dispatcher = dispatcher\n- self.chat_data = None\n- self.user_data = None\n+ self._chat_data = None\n+ self._user_data = None\n self.args = None\n self.matches = None\n self.error = None\n self.job = None\n \n+ @property\n+ def chat_data(self):\n+ return self._chat_data\n+\n+ @chat_data.setter\n+ def chat_data(self, value):\n+ raise AttributeError(\"You can not assign a new value to chat_data, see \"\n+ \"https://git.io/fjxKe\")\n+\n+ @property\n+ def user_data(self):\n+ return self._user_data\n+\n+ @user_data.setter\n+ def user_data(self, value):\n+ raise AttributeError(\"You can not assign a new value to user_data, see \"\n+ \"https://git.io/fjxKe\")\n+\n @classmethod\n def from_error(cls, update, error, dispatcher):\n self = cls.from_update(update, dispatcher)\n@@ -94,9 +112,9 @@\n user = update.effective_user\n \n if chat:\n- self.chat_data = dispatcher.chat_data[chat.id]\n+ self._chat_data = dispatcher.chat_data[chat.id]\n if user:\n- self.user_data = dispatcher.user_data[user.id]\n+ self._user_data = dispatcher.user_data[user.id]\n return self\n \n @classmethod\n", "issue": "Forbid setting user and chat data attributes\nInevitably users will attempt to do `context.user_data = {'my': 'dict'}` or `context.chat_data = whatever`. This will lead to confusing behaviour.\r\n\r\nI propose making those fields properties without a setter, which will more helpfully error. Depending on how far we want to go, we can also suggest the user to use `update(other_dict)` if they attempt to set the field.\r\n\r\nSee https://t.me/pythontelegrambotgroup/222138 onwards.\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the CallbackContext class.\"\"\"\n\nfrom telegram import Update\n\n\nclass CallbackContext(object):\n \"\"\"\n This is a context object passed to the callback called by :class:`telegram.ext.Handler`\n or by the :class:`telegram.ext.Dispatcher` in an error handler added by\n :attr:`telegram.ext.Dispatcher.add_error_handler` or to the callback of a\n :class:`telegram.ext.Job`.\n\n Note:\n :class:`telegram.ext.Dispatcher` will create a single context for an entire update. This\n means that if you got 2 handlers in different groups and they both get called, they will\n get passed the same `CallbackContext` object (of course with proper attributes like\n `.matches` differing). 
This allows you to add custom attributes in a lower handler group\n callback, and then subsequently access those attributes in a higher handler group callback.\n Note that the attributes on `CallbackContext` might change in the future, so make sure to\n use a fairly unique name for the attributes.\n\n Warning:\n Do not combine custom attributes and @run_async. Due to how @run_async works, it will\n almost certainly execute the callbacks for an update out of order, and the attributes\n that you think you added will not be present.\n\n Attributes:\n chat_data (:obj:`dict`, optional): A dict that can be used to keep any data in. For each\n update from the same chat it will be the same ``dict``.\n user_data (:obj:`dict`, optional): A dict that can be used to keep any data in. For each\n update from the same user it will be the same ``dict``.\n matches (List[:obj:`re match object`], optional): If the associated update originated from\n a regex-supported handler or had a :class:`Filters.regex`, this will contain a list of\n match objects for every pattern where ``re.search(pattern, string)`` returned a match.\n Note that filters short circuit, so combined regex filters will not always\n be evaluated.\n args (List[:obj:`str`], optional): Arguments passed to a command if the associated update\n is handled by :class:`telegram.ext.CommandHandler`, :class:`telegram.ext.PrefixHandler`\n or :class:`telegram.ext.StringCommandHandler`. It contains a list of the words in the\n text after the command, using any whitespace string as a delimiter.\n error (:class:`telegram.TelegramError`, optional): The Telegram error that was raised.\n Only present when passed to a error handler registered with\n :attr:`telegram.ext.Dispatcher.add_error_handler`.\n job (:class:`telegram.ext.Job`): The job that that originated this callback.\n Only present when passed to the callback of :class:`telegram.ext.Job`.\n\n \"\"\"\n\n def __init__(self, dispatcher):\n \"\"\"\n Args:\n dispatcher (:class:`telegram.ext.Dispatcher`):\n \"\"\"\n if not dispatcher.use_context:\n raise ValueError('CallbackContext should not be used with a non context aware '\n 'dispatcher!')\n self._dispatcher = dispatcher\n self.chat_data = None\n self.user_data = None\n self.args = None\n self.matches = None\n self.error = None\n self.job = None\n\n @classmethod\n def from_error(cls, update, error, dispatcher):\n self = cls.from_update(update, dispatcher)\n self.error = error\n return self\n\n @classmethod\n def from_update(cls, update, dispatcher):\n self = cls(dispatcher)\n if update is not None and isinstance(update, Update):\n chat = update.effective_chat\n user = update.effective_user\n\n if chat:\n self.chat_data = dispatcher.chat_data[chat.id]\n if user:\n self.user_data = dispatcher.user_data[user.id]\n return self\n\n @classmethod\n def from_job(cls, job, dispatcher):\n self = cls(dispatcher)\n self.job = job\n return self\n\n def update(self, data):\n self.__dict__.update(data)\n\n @property\n def bot(self):\n \"\"\":class:`telegram.Bot`: The bot associated with this context.\"\"\"\n return self._dispatcher.bot\n\n @property\n def job_queue(self):\n \"\"\"\n :class:`telegram.ext.JobQueue`: The ``JobQueue`` used by the\n :class:`telegram.ext.Dispatcher` and (usually) the :class:`telegram.ext.Updater`\n associated with this context.\n\n \"\"\"\n return self._dispatcher.job_queue\n\n @property\n def update_queue(self):\n \"\"\"\n :class:`queue.Queue`: The ``Queue`` instance used by the\n :class:`telegram.ext.Dispatcher` and (usually) the 
:class:`telegram.ext.Updater`\n associated with this context.\n\n \"\"\"\n return self._dispatcher.update_queue\n\n @property\n def match(self):\n \"\"\"\n `Regex match type`: The first match from :attr:`matches`.\n Useful if you are only filtering using a single regex filter.\n Returns `None` if :attr:`matches` is empty.\n \"\"\"\n try:\n return self.matches[0] # pylint: disable=unsubscriptable-object\n except (IndexError, TypeError):\n return None\n", "path": "telegram/ext/callbackcontext.py"}], "after_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the CallbackContext class.\"\"\"\n\nfrom telegram import Update\n\n\nclass CallbackContext(object):\n \"\"\"\n This is a context object passed to the callback called by :class:`telegram.ext.Handler`\n or by the :class:`telegram.ext.Dispatcher` in an error handler added by\n :attr:`telegram.ext.Dispatcher.add_error_handler` or to the callback of a\n :class:`telegram.ext.Job`.\n\n Note:\n :class:`telegram.ext.Dispatcher` will create a single context for an entire update. This\n means that if you got 2 handlers in different groups and they both get called, they will\n get passed the same `CallbackContext` object (of course with proper attributes like\n `.matches` differing). This allows you to add custom attributes in a lower handler group\n callback, and then subsequently access those attributes in a higher handler group callback.\n Note that the attributes on `CallbackContext` might change in the future, so make sure to\n use a fairly unique name for the attributes.\n\n Warning:\n Do not combine custom attributes and @run_async. Due to how @run_async works, it will\n almost certainly execute the callbacks for an update out of order, and the attributes\n that you think you added will not be present.\n\n Attributes:\n chat_data (:obj:`dict`, optional): A dict that can be used to keep any data in. For each\n update from the same chat it will be the same ``dict``.\n user_data (:obj:`dict`, optional): A dict that can be used to keep any data in. For each\n update from the same user it will be the same ``dict``.\n matches (List[:obj:`re match object`], optional): If the associated update originated from\n a regex-supported handler or had a :class:`Filters.regex`, this will contain a list of\n match objects for every pattern where ``re.search(pattern, string)`` returned a match.\n Note that filters short circuit, so combined regex filters will not always\n be evaluated.\n args (List[:obj:`str`], optional): Arguments passed to a command if the associated update\n is handled by :class:`telegram.ext.CommandHandler`, :class:`telegram.ext.PrefixHandler`\n or :class:`telegram.ext.StringCommandHandler`. 
It contains a list of the words in the\n text after the command, using any whitespace string as a delimiter.\n error (:class:`telegram.TelegramError`, optional): The Telegram error that was raised.\n Only present when passed to a error handler registered with\n :attr:`telegram.ext.Dispatcher.add_error_handler`.\n job (:class:`telegram.ext.Job`): The job that that originated this callback.\n Only present when passed to the callback of :class:`telegram.ext.Job`.\n\n \"\"\"\n\n def __init__(self, dispatcher):\n \"\"\"\n Args:\n dispatcher (:class:`telegram.ext.Dispatcher`):\n \"\"\"\n if not dispatcher.use_context:\n raise ValueError('CallbackContext should not be used with a non context aware '\n 'dispatcher!')\n self._dispatcher = dispatcher\n self._chat_data = None\n self._user_data = None\n self.args = None\n self.matches = None\n self.error = None\n self.job = None\n\n @property\n def chat_data(self):\n return self._chat_data\n\n @chat_data.setter\n def chat_data(self, value):\n raise AttributeError(\"You can not assign a new value to chat_data, see \"\n \"https://git.io/fjxKe\")\n\n @property\n def user_data(self):\n return self._user_data\n\n @user_data.setter\n def user_data(self, value):\n raise AttributeError(\"You can not assign a new value to user_data, see \"\n \"https://git.io/fjxKe\")\n\n @classmethod\n def from_error(cls, update, error, dispatcher):\n self = cls.from_update(update, dispatcher)\n self.error = error\n return self\n\n @classmethod\n def from_update(cls, update, dispatcher):\n self = cls(dispatcher)\n if update is not None and isinstance(update, Update):\n chat = update.effective_chat\n user = update.effective_user\n\n if chat:\n self._chat_data = dispatcher.chat_data[chat.id]\n if user:\n self._user_data = dispatcher.user_data[user.id]\n return self\n\n @classmethod\n def from_job(cls, job, dispatcher):\n self = cls(dispatcher)\n self.job = job\n return self\n\n def update(self, data):\n self.__dict__.update(data)\n\n @property\n def bot(self):\n \"\"\":class:`telegram.Bot`: The bot associated with this context.\"\"\"\n return self._dispatcher.bot\n\n @property\n def job_queue(self):\n \"\"\"\n :class:`telegram.ext.JobQueue`: The ``JobQueue`` used by the\n :class:`telegram.ext.Dispatcher` and (usually) the :class:`telegram.ext.Updater`\n associated with this context.\n\n \"\"\"\n return self._dispatcher.job_queue\n\n @property\n def update_queue(self):\n \"\"\"\n :class:`queue.Queue`: The ``Queue`` instance used by the\n :class:`telegram.ext.Dispatcher` and (usually) the :class:`telegram.ext.Updater`\n associated with this context.\n\n \"\"\"\n return self._dispatcher.update_queue\n\n @property\n def match(self):\n \"\"\"\n `Regex match type`: The first match from :attr:`matches`.\n Useful if you are only filtering using a single regex filter.\n Returns `None` if :attr:`matches` is empty.\n \"\"\"\n try:\n return self.matches[0] # pylint: disable=unsubscriptable-object\n except (IndexError, TypeError):\n return None\n", "path": "telegram/ext/callbackcontext.py"}]} | 2,020 | 386 |
gh_patches_debug_30825 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2484 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ordering by a non-orderable column (implicitly or explicitly) causes a backend crash
## Bug
Brent [writes](https://github.com/centerofci/mathesar/pull/2253#pullrequestreview-1251718380):
> At one point, you note in a comment that there's a bug whenever we have internal columns of JSON type, and no primary key in a table. This bug is still present for _any_ non-orderable column type if there is no primary key in the table (e.g., if the user attaches to a preexisting DB). It's easy to figure out whether or not a type is orderable in PostgreSQL. See the [docs page for ORDER BY](https://www.postgresql.org/docs/current/queries-order.html#id-1.5.6.9.5.10) and the [docs page for Operator Classes](https://www.postgresql.org/docs/9.5/indexes-opclass.html).
Brent is referring to [this comment](https://github.com/centerofci/mathesar/blob/8c215b2efdd740da0d530b624763cf260e12ca90/db/records/operations/sort.py#L47).
The error message received upon such a crash is: `could not identify an ordering operator for type json` (`json` being the type of the unorderable column in this case).
## Solution
The solution is, as explained by Brent, to make sure that every column we order by (implicitly or explicitly) is orderable.
## Severity
Because of this, a JSON column (which is non-orderable) can make a query/table view unrenderable, but only when the table/query does not have a primary key, which cannot happen in a Mathesar-created table. It can only happen when Mathesar is used to access a pre-existing database.
--- END ISSUE ---
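A minimal sketch (one possible check, assumed here rather than taken from the project's actual fix) of how orderability can be detected: in PostgreSQL a type is sortable exactly when it has a default btree operator class, which can be queried from the catalogs:
```python
from sqlalchemy import text

# 'json' has no default btree operator class, while 'jsonb' does, which is why
# ordering by a json column raises "could not identify an ordering operator".
_ORDERABLE_SQL = text("""
    SELECT EXISTS (
        SELECT 1
        FROM pg_catalog.pg_opclass
        WHERE opcmethod = (SELECT oid FROM pg_catalog.pg_am WHERE amname = 'btree')
          AND opcdefault
          AND opcintype = CAST(:type_name AS regtype)
    )
""")

def is_type_orderable(conn, type_name):
    return conn.execute(_ORDERABLE_SQL, {"type_name": type_name}).scalar()

# is_type_orderable(conn, 'json')  -> False
# is_type_orderable(conn, 'jsonb') -> True
```
Filtering `_build_order_by_all_columns_clause` with such a check would cover any non-orderable type, not just the internal `json` metadata column.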
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `db/records/operations/sort.py`
Content:
```
1 from collections import namedtuple
2 from sqlalchemy import select
3 from db.columns import utils as col_utils
4 from db.records.exceptions import BadSortFormat, SortFieldNotFound
5
6
7 def make_order_by_deterministic(relation, order_by=None):
8 """
9 Makes an order_by deterministic (totally ordering).
10
11 Given a relation, and a `order_by` spec, that defines the ordering to be applied to the
12 relation, returns a new order_by that is the totally ordered (deterministic) version of the
13 input order_by.
14
15 Appending primary key sort guarantees determinism, but if that fails, we revert to ordering by
16 all columns.
17 """
18 if order_by is None:
19 order_by = []
20 order_by = _append_primary_key_sort(relation, order_by)
21 if not order_by:
22 order_by = _build_order_by_all_columns_clause(relation)
23 return order_by
24
25
26 def _append_primary_key_sort(relation, order_by):
27 """
28 Given an order by clause, we can guarantee a deterministic ordering
29 overall by appending a final ordering by primary key if one exists.
30 """
31 pk_cols = col_utils.get_primary_key_column_collection_from_relation(relation)
32 order_by = list(order_by)
33 if pk_cols is not None:
34 order_by += [
35 {'field': col, 'direction': 'asc'}
36 for col
37 in set(pk_cols).intersection(relation.columns)
38 ]
39 return order_by
40
41
42 def _build_order_by_all_columns_clause(relation):
43 """
44 To be used when we have failed to find any other ordering criteria,
45 since ordering by all columns is inherently inefficient.
46
47 Note the filtering out of internal columns. Before applying this fix, psycopg was throwing an error
48 like "could not identify an ordering operator for type json", because we were trying to
49 sort by an internal column like `__mathesar_group_metadata`, which has type `json`, which
50 requires special handling to be sorted. The problem is bypassed by not attempting to sort on
51 internal columns.
52 """
53 return [
54 {'field': col, 'direction': 'asc'}
55 for col
56 in relation.columns
57 if not _is_internal_column(col)
58 ]
59
60
61 def _is_internal_column(col):
62 """
63 Might not be exhaustive, take care.
64 """
65 return col.name == '__mathesar_group_metadata'
66
67
68 def apply_relation_sorting(relation, sort_spec):
69 order_by_list = [
70 _get_sorted_column_obj_from_spec(relation, spec) for spec in sort_spec
71 ]
72 return select(relation).order_by(*order_by_list)
73
74
75 def _get_sorted_column_obj_from_spec(relation, spec):
76 try:
77 sort_spec = _deserialize_sort_spec(spec)
78 except (KeyError, TypeError, AssertionError):
79 raise BadSortFormat
80
81 try:
82 column = col_utils.get_column_obj_from_relation(relation, sort_spec.field)
83 except KeyError as e:
84 raise SortFieldNotFound(e)
85 except AttributeError:
86 raise BadSortFormat
87
88 try:
89 directed_col = _build_directed_column_expr(column, sort_spec)
90 except AttributeError:
91 raise BadSortFormat
92
93 return directed_col
94
95
96 def _deserialize_sort_spec(spec):
97 sort_spec = namedtuple(
98 '_sort_spec',
99 ['field', 'direction', 'nullsfirst', 'nullslast']
100 )(
101 field=spec['field'],
102 direction=spec['direction'],
103 nullsfirst=spec.get('nullsfirst', False),
104 nullslast=spec.get('nullslast', False)
105 )
106 # Since it's not valid to have both nullsfirst and nullslast.
107 assert not sort_spec.nullsfirst or not sort_spec.nullslast
108 return sort_spec
109
110
111 def _build_directed_column_expr(column, sort_spec):
112 directed_col = getattr(column, sort_spec.direction)()
113 if sort_spec.nullsfirst:
114 directed_col = directed_col.nulls_first()
115 elif sort_spec.nullslast:
116 directed_col = directed_col.nulls_last()
117 return directed_col
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/db/records/operations/sort.py b/db/records/operations/sort.py
--- a/db/records/operations/sort.py
+++ b/db/records/operations/sort.py
@@ -44,27 +44,37 @@
To be used when we have failed to find any other ordering criteria,
since ordering by all columns is inherently inefficient.
- Note the filtering out of internal columns. Before applying this fix, psycopg was throwing an error
- like "could not identify an ordering operator for type json", because we were trying to
- sort by an internal column like `__mathesar_group_metadata`, which has type `json`, which
- requires special handling to be sorted. The problem is bypassed by not attempting to sort on
- internal columns.
+ Note the filtering out some columns, namely internal columns and non-orderable columns. See
+ their docstrings for details.
"""
return [
{'field': col, 'direction': 'asc'}
for col
in relation.columns
- if not _is_internal_column(col)
+ if _is_col_orderable(col) and not _is_internal_column(col)
]
def _is_internal_column(col):
"""
+ Columns that Mathesar adds for its own devices and does not expose to the user. We don't want
+ to sort by these.
+
Might not be exhaustive, take care.
"""
return col.name == '__mathesar_group_metadata'
+def _is_col_orderable(col):
+ """
+ Some columns are not orderable (or at least don't have a non-ambiguous way to define order
+ without additional logic). We only want to order by orderably columns.
+ """
+ data_type = col.type
+ non_orderable_type = ['Binary', 'LargeBinary', 'PickleType', 'ARRAY', 'JSON', 'JSONB']
+ return str(data_type) not in non_orderable_type
+
+
def apply_relation_sorting(relation, sort_spec):
order_by_list = [
_get_sorted_column_obj_from_spec(relation, spec) for spec in sort_spec
| {"golden_diff": "diff --git a/db/records/operations/sort.py b/db/records/operations/sort.py\n--- a/db/records/operations/sort.py\n+++ b/db/records/operations/sort.py\n@@ -44,27 +44,37 @@\n To be used when we have failed to find any other ordering criteria,\n since ordering by all columns is inherently inefficient.\n \n- Note the filtering out of internal columns. Before applying this fix, psycopg was throwing an error\n- like \"could not identify an ordering operator for type json\", because we were trying to\n- sort by an internal column like `__mathesar_group_metadata`, which has type `json`, which\n- requires special handling to be sorted. The problem is bypassed by not attempting to sort on\n- internal columns.\n+ Note the filtering out some columns, namely internal columns and non-orderable columns. See\n+ their docstrings for details.\n \"\"\"\n return [\n {'field': col, 'direction': 'asc'}\n for col\n in relation.columns\n- if not _is_internal_column(col)\n+ if _is_col_orderable(col) and not _is_internal_column(col)\n ]\n \n \n def _is_internal_column(col):\n \"\"\"\n+ Columns that Mathesar adds for its own devices and does not expose to the user. We don't want\n+ to sort by these.\n+\n Might not be exhaustive, take care.\n \"\"\"\n return col.name == '__mathesar_group_metadata'\n \n \n+def _is_col_orderable(col):\n+ \"\"\"\n+ Some columns are not orderable (or at least don't have a non-ambiguous way to define order\n+ without additional logic). We only want to order by orderably columns.\n+ \"\"\"\n+ data_type = col.type\n+ non_orderable_type = ['Binary', 'LargeBinary', 'PickleType', 'ARRAY', 'JSON', 'JSONB']\n+ return str(data_type) not in non_orderable_type\n+\n+\n def apply_relation_sorting(relation, sort_spec):\n order_by_list = [\n _get_sorted_column_obj_from_spec(relation, spec) for spec in sort_spec\n", "issue": "Ordering by a non-orderable column (implicitly or explicitly) causes a backend crash\n## Bug\r\n\r\nBrent [writes](https://github.com/centerofci/mathesar/pull/2253#pullrequestreview-1251718380):\r\n\r\n> At one point, you note in a comment that there's a bug whenever we have internal columns of JSON type, and no primary key in a table. This bug is still present for _any_ non-orderable column type if there is no primary key in the table (e.g., if the user attaches to a preexisting DB). It's easy to figure out whether or not a type is orderable in PostgreSQL. See the [docs page for ORDER BY](https://www.postgresql.org/docs/current/queries-order.html#id-1.5.6.9.5.10) and the [docs page for Operator Classes](https://www.postgresql.org/docs/9.5/indexes-opclass.html).\r\n\r\nBrent is referring to [this comment](https://github.com/centerofci/mathesar/blob/8c215b2efdd740da0d530b624763cf260e12ca90/db/records/operations/sort.py#L47).\r\n\r\nThe error message received upon such a crash is: `could not identify an ordering operator for type json` (`json` is the type of the unorderable column in this case.\r\n\r\n## Solution\r\n\r\nThe solution is, as explained by Brent, to make sure that every column we order by (implicitly or explicitly) is orderable.\r\n\r\n## Severity\r\n\r\nBecause of this, a JSON column (it's non-orderable), can make a query/table view unrenderable, but only when the table/query does not have a primary key, which cannot happen in a Mathesar-created table. 
It can only happen when Mathesar is used to access a pre-existing database.\n", "before_files": [{"content": "from collections import namedtuple\nfrom sqlalchemy import select\nfrom db.columns import utils as col_utils\nfrom db.records.exceptions import BadSortFormat, SortFieldNotFound\n\n\ndef make_order_by_deterministic(relation, order_by=None):\n \"\"\"\n Makes an order_by deterministic (totally ordering).\n\n Given a relation, and a `order_by` spec, that defines the ordering to be applied to the\n relation, returns a new order_by that is the totally ordered (deterministic) version of the\n input order_by.\n\n Appending primary key sort guarantees determinism, but if that fails, we revert to ordering by\n all columns.\n \"\"\"\n if order_by is None:\n order_by = []\n order_by = _append_primary_key_sort(relation, order_by)\n if not order_by:\n order_by = _build_order_by_all_columns_clause(relation)\n return order_by\n\n\ndef _append_primary_key_sort(relation, order_by):\n \"\"\"\n Given an order by clause, we can guarantee a deterministic ordering\n overall by appending a final ordering by primary key if one exists.\n \"\"\"\n pk_cols = col_utils.get_primary_key_column_collection_from_relation(relation)\n order_by = list(order_by)\n if pk_cols is not None:\n order_by += [\n {'field': col, 'direction': 'asc'}\n for col\n in set(pk_cols).intersection(relation.columns)\n ]\n return order_by\n\n\ndef _build_order_by_all_columns_clause(relation):\n \"\"\"\n To be used when we have failed to find any other ordering criteria,\n since ordering by all columns is inherently inefficient.\n\n Note the filtering out of internal columns. Before applying this fix, psycopg was throwing an error\n like \"could not identify an ordering operator for type json\", because we were trying to\n sort by an internal column like `__mathesar_group_metadata`, which has type `json`, which\n requires special handling to be sorted. 
The problem is bypassed by not attempting to sort on\n internal columns.\n \"\"\"\n return [\n {'field': col, 'direction': 'asc'}\n for col\n in relation.columns\n if not _is_internal_column(col)\n ]\n\n\ndef _is_internal_column(col):\n \"\"\"\n Might not be exhaustive, take care.\n \"\"\"\n return col.name == '__mathesar_group_metadata'\n\n\ndef apply_relation_sorting(relation, sort_spec):\n order_by_list = [\n _get_sorted_column_obj_from_spec(relation, spec) for spec in sort_spec\n ]\n return select(relation).order_by(*order_by_list)\n\n\ndef _get_sorted_column_obj_from_spec(relation, spec):\n try:\n sort_spec = _deserialize_sort_spec(spec)\n except (KeyError, TypeError, AssertionError):\n raise BadSortFormat\n\n try:\n column = col_utils.get_column_obj_from_relation(relation, sort_spec.field)\n except KeyError as e:\n raise SortFieldNotFound(e)\n except AttributeError:\n raise BadSortFormat\n\n try:\n directed_col = _build_directed_column_expr(column, sort_spec)\n except AttributeError:\n raise BadSortFormat\n\n return directed_col\n\n\ndef _deserialize_sort_spec(spec):\n sort_spec = namedtuple(\n '_sort_spec',\n ['field', 'direction', 'nullsfirst', 'nullslast']\n )(\n field=spec['field'],\n direction=spec['direction'],\n nullsfirst=spec.get('nullsfirst', False),\n nullslast=spec.get('nullslast', False)\n )\n # Since it's not valid to have both nullsfirst and nullslast.\n assert not sort_spec.nullsfirst or not sort_spec.nullslast\n return sort_spec\n\n\ndef _build_directed_column_expr(column, sort_spec):\n directed_col = getattr(column, sort_spec.direction)()\n if sort_spec.nullsfirst:\n directed_col = directed_col.nulls_first()\n elif sort_spec.nullslast:\n directed_col = directed_col.nulls_last()\n return directed_col\n", "path": "db/records/operations/sort.py"}], "after_files": [{"content": "from collections import namedtuple\nfrom sqlalchemy import select\nfrom db.columns import utils as col_utils\nfrom db.records.exceptions import BadSortFormat, SortFieldNotFound\n\n\ndef make_order_by_deterministic(relation, order_by=None):\n \"\"\"\n Makes an order_by deterministic (totally ordering).\n\n Given a relation, and a `order_by` spec, that defines the ordering to be applied to the\n relation, returns a new order_by that is the totally ordered (deterministic) version of the\n input order_by.\n\n Appending primary key sort guarantees determinism, but if that fails, we revert to ordering by\n all columns.\n \"\"\"\n if order_by is None:\n order_by = []\n order_by = _append_primary_key_sort(relation, order_by)\n if not order_by:\n order_by = _build_order_by_all_columns_clause(relation)\n return order_by\n\n\ndef _append_primary_key_sort(relation, order_by):\n \"\"\"\n Given an order by clause, we can guarantee a deterministic ordering\n overall by appending a final ordering by primary key if one exists.\n \"\"\"\n pk_cols = col_utils.get_primary_key_column_collection_from_relation(relation)\n order_by = list(order_by)\n if pk_cols is not None:\n order_by += [\n {'field': col, 'direction': 'asc'}\n for col\n in set(pk_cols).intersection(relation.columns)\n ]\n return order_by\n\n\ndef _build_order_by_all_columns_clause(relation):\n \"\"\"\n To be used when we have failed to find any other ordering criteria,\n since ordering by all columns is inherently inefficient.\n\n Note the filtering out some columns, namely internal columns and non-orderable columns. 
See\n their docstrings for details.\n \"\"\"\n return [\n {'field': col, 'direction': 'asc'}\n for col\n in relation.columns\n if _is_col_orderable(col) and not _is_internal_column(col)\n ]\n\n\ndef _is_internal_column(col):\n \"\"\"\n Columns that Mathesar adds for its own devices and does not expose to the user. We don't want\n to sort by these.\n\n Might not be exhaustive, take care.\n \"\"\"\n return col.name == '__mathesar_group_metadata'\n\n\ndef _is_col_orderable(col):\n \"\"\"\n Some columns are not orderable (or at least don't have a non-ambiguous way to define order\n without additional logic). We only want to order by orderably columns.\n \"\"\"\n data_type = col.type\n non_orderable_type = ['Binary', 'LargeBinary', 'PickleType', 'ARRAY', 'JSON', 'JSONB']\n return str(data_type) not in non_orderable_type\n\n\ndef apply_relation_sorting(relation, sort_spec):\n order_by_list = [\n _get_sorted_column_obj_from_spec(relation, spec) for spec in sort_spec\n ]\n return select(relation).order_by(*order_by_list)\n\n\ndef _get_sorted_column_obj_from_spec(relation, spec):\n try:\n sort_spec = _deserialize_sort_spec(spec)\n except (KeyError, TypeError, AssertionError):\n raise BadSortFormat\n\n try:\n column = col_utils.get_column_obj_from_relation(relation, sort_spec.field)\n except KeyError as e:\n raise SortFieldNotFound(e)\n except AttributeError:\n raise BadSortFormat\n\n try:\n directed_col = _build_directed_column_expr(column, sort_spec)\n except AttributeError:\n raise BadSortFormat\n\n return directed_col\n\n\ndef _deserialize_sort_spec(spec):\n sort_spec = namedtuple(\n '_sort_spec',\n ['field', 'direction', 'nullsfirst', 'nullslast']\n )(\n field=spec['field'],\n direction=spec['direction'],\n nullsfirst=spec.get('nullsfirst', False),\n nullslast=spec.get('nullslast', False)\n )\n # Since it's not valid to have both nullsfirst and nullslast.\n assert not sort_spec.nullsfirst or not sort_spec.nullslast\n return sort_spec\n\n\ndef _build_directed_column_expr(column, sort_spec):\n directed_col = getattr(column, sort_spec.direction)()\n if sort_spec.nullsfirst:\n directed_col = directed_col.nulls_first()\n elif sort_spec.nullslast:\n directed_col = directed_col.nulls_last()\n return directed_col\n", "path": "db/records/operations/sort.py"}]} | 1,791 | 467 |
gh_patches_debug_3357 | rasdani/github-patches | git_diff | nipy__nipype-1574 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Build documentation fails if dipy is not installed
--- END ISSUE ---
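The failing pattern is an optional-dependency guard that is too loud when the package is absent, which presumably matters because the documentation build only needs to import the interface modules. A reduced sketch of the pattern (names shortened; not the actual nipype code):
```python
import logging

log = logging.getLogger("interface")

try:
    import dipy  # noqa: F401
    HAVE_DIPY = True
except ImportError:
    HAVE_DIPY = False


class DipyBase:
    def __init__(self):
        if not HAVE_DIPY:
            # Warn rather than error (or raise) so that importing and
            # instantiating the class stays harmless for doc builds.
            log.warning("dipy was not found")
```
The actual fix, visible in the diff below, is exactly this downgrade from `IFLOGGER.error` to `IFLOGGER.warn`.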
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nipype/interfaces/dipy/base.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """ Base interfaces for dipy """
3 import os.path as op
4 import numpy as np
5 from nipype.interfaces.base import (traits, File, isdefined,
6 BaseInterface, BaseInterfaceInputSpec)
7 from ... import logging
8
9 IFLOGGER = logging.getLogger('interface')
10
11 HAVE_DIPY = True
12 try:
13 import dipy
14 except ImportError:
15 HAVE_DIPY = False
16
17
18 def no_dipy():
19 """ Check if dipy is available """
20 global HAVE_DIPY
21 return not HAVE_DIPY
22
23
24 def dipy_version():
25 """ Check dipy version """
26 if no_dipy():
27 return None
28
29 return dipy.__version__
30
31
32 class DipyBaseInterface(BaseInterface):
33
34 """
35 A base interface for py:mod:`dipy` computations
36 """
37 def __init__(self, **inputs):
38 if no_dipy():
39 IFLOGGER.error('dipy was not found')
40 # raise ImportError('dipy was not found')
41 super(DipyBaseInterface, self).__init__(**inputs)
42
43
44 class DipyBaseInterfaceInputSpec(BaseInterfaceInputSpec):
45 in_file = File(exists=True, mandatory=True, desc=('input diffusion data'))
46 in_bval = File(exists=True, mandatory=True, desc=('input b-values table'))
47 in_bvec = File(exists=True, mandatory=True, desc=('input b-vectors table'))
48 b0_thres = traits.Int(700, usedefault=True, desc=('b0 threshold'))
49 out_prefix = traits.Str(desc=('output prefix for file names'))
50
51
52 class DipyDiffusionInterface(DipyBaseInterface):
53
54 """
55 A base interface for py:mod:`dipy` computations
56 """
57 input_spec = DipyBaseInterfaceInputSpec
58
59 def _get_gradient_table(self):
60 bval = np.loadtxt(self.inputs.in_bval)
61 bvec = np.loadtxt(self.inputs.in_bvec).T
62 from dipy.core.gradients import gradient_table
63 gtab = gradient_table(bval, bvec)
64
65 gtab.b0_threshold = self.inputs.b0_thres
66 return gtab
67
68 def _gen_filename(self, name, ext=None):
69 fname, fext = op.splitext(op.basename(self.inputs.in_file))
70 if fext == '.gz':
71 fname, fext2 = op.splitext(fname)
72 fext = fext2 + fext
73
74 if not isdefined(self.inputs.out_prefix):
75 out_prefix = op.abspath(fname)
76 else:
77 out_prefix = self.inputs.out_prefix
78
79 if ext is None:
80 ext = fext
81
82 return out_prefix + '_' + name + ext
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nipype/interfaces/dipy/base.py b/nipype/interfaces/dipy/base.py
--- a/nipype/interfaces/dipy/base.py
+++ b/nipype/interfaces/dipy/base.py
@@ -36,7 +36,7 @@
"""
def __init__(self, **inputs):
if no_dipy():
- IFLOGGER.error('dipy was not found')
+ IFLOGGER.warn('dipy was not found')
# raise ImportError('dipy was not found')
super(DipyBaseInterface, self).__init__(**inputs)
| {"golden_diff": "diff --git a/nipype/interfaces/dipy/base.py b/nipype/interfaces/dipy/base.py\n--- a/nipype/interfaces/dipy/base.py\n+++ b/nipype/interfaces/dipy/base.py\n@@ -36,7 +36,7 @@\n \"\"\"\n def __init__(self, **inputs):\n if no_dipy():\n- IFLOGGER.error('dipy was not found')\n+ IFLOGGER.warn('dipy was not found')\n # raise ImportError('dipy was not found')\n super(DipyBaseInterface, self).__init__(**inputs)\n", "issue": "[BUG] Build documentation fails if dipy is not installed\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\" Base interfaces for dipy \"\"\"\nimport os.path as op\nimport numpy as np\nfrom nipype.interfaces.base import (traits, File, isdefined,\n BaseInterface, BaseInterfaceInputSpec)\nfrom ... import logging\n\nIFLOGGER = logging.getLogger('interface')\n\nHAVE_DIPY = True\ntry:\n import dipy\nexcept ImportError:\n HAVE_DIPY = False\n\n\ndef no_dipy():\n \"\"\" Check if dipy is available \"\"\"\n global HAVE_DIPY\n return not HAVE_DIPY\n\n\ndef dipy_version():\n \"\"\" Check dipy version \"\"\"\n if no_dipy():\n return None\n\n return dipy.__version__\n\n\nclass DipyBaseInterface(BaseInterface):\n\n \"\"\"\n A base interface for py:mod:`dipy` computations\n \"\"\"\n def __init__(self, **inputs):\n if no_dipy():\n IFLOGGER.error('dipy was not found')\n # raise ImportError('dipy was not found')\n super(DipyBaseInterface, self).__init__(**inputs)\n\n\nclass DipyBaseInterfaceInputSpec(BaseInterfaceInputSpec):\n in_file = File(exists=True, mandatory=True, desc=('input diffusion data'))\n in_bval = File(exists=True, mandatory=True, desc=('input b-values table'))\n in_bvec = File(exists=True, mandatory=True, desc=('input b-vectors table'))\n b0_thres = traits.Int(700, usedefault=True, desc=('b0 threshold'))\n out_prefix = traits.Str(desc=('output prefix for file names'))\n\n\nclass DipyDiffusionInterface(DipyBaseInterface):\n\n \"\"\"\n A base interface for py:mod:`dipy` computations\n \"\"\"\n input_spec = DipyBaseInterfaceInputSpec\n\n def _get_gradient_table(self):\n bval = np.loadtxt(self.inputs.in_bval)\n bvec = np.loadtxt(self.inputs.in_bvec).T\n from dipy.core.gradients import gradient_table\n gtab = gradient_table(bval, bvec)\n\n gtab.b0_threshold = self.inputs.b0_thres\n return gtab\n\n def _gen_filename(self, name, ext=None):\n fname, fext = op.splitext(op.basename(self.inputs.in_file))\n if fext == '.gz':\n fname, fext2 = op.splitext(fname)\n fext = fext2 + fext\n\n if not isdefined(self.inputs.out_prefix):\n out_prefix = op.abspath(fname)\n else:\n out_prefix = self.inputs.out_prefix\n\n if ext is None:\n ext = fext\n\n return out_prefix + '_' + name + ext\n", "path": "nipype/interfaces/dipy/base.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\" Base interfaces for dipy \"\"\"\nimport os.path as op\nimport numpy as np\nfrom nipype.interfaces.base import (traits, File, isdefined,\n BaseInterface, BaseInterfaceInputSpec)\nfrom ... 
import logging\n\nIFLOGGER = logging.getLogger('interface')\n\nHAVE_DIPY = True\ntry:\n import dipy\nexcept ImportError:\n HAVE_DIPY = False\n\n\ndef no_dipy():\n \"\"\" Check if dipy is available \"\"\"\n global HAVE_DIPY\n return not HAVE_DIPY\n\n\ndef dipy_version():\n \"\"\" Check dipy version \"\"\"\n if no_dipy():\n return None\n\n return dipy.__version__\n\n\nclass DipyBaseInterface(BaseInterface):\n\n \"\"\"\n A base interface for py:mod:`dipy` computations\n \"\"\"\n def __init__(self, **inputs):\n if no_dipy():\n IFLOGGER.warn('dipy was not found')\n # raise ImportError('dipy was not found')\n super(DipyBaseInterface, self).__init__(**inputs)\n\n\nclass DipyBaseInterfaceInputSpec(BaseInterfaceInputSpec):\n in_file = File(exists=True, mandatory=True, desc=('input diffusion data'))\n in_bval = File(exists=True, mandatory=True, desc=('input b-values table'))\n in_bvec = File(exists=True, mandatory=True, desc=('input b-vectors table'))\n b0_thres = traits.Int(700, usedefault=True, desc=('b0 threshold'))\n out_prefix = traits.Str(desc=('output prefix for file names'))\n\n\nclass DipyDiffusionInterface(DipyBaseInterface):\n\n \"\"\"\n A base interface for py:mod:`dipy` computations\n \"\"\"\n input_spec = DipyBaseInterfaceInputSpec\n\n def _get_gradient_table(self):\n bval = np.loadtxt(self.inputs.in_bval)\n bvec = np.loadtxt(self.inputs.in_bvec).T\n from dipy.core.gradients import gradient_table\n gtab = gradient_table(bval, bvec)\n\n gtab.b0_threshold = self.inputs.b0_thres\n return gtab\n\n def _gen_filename(self, name, ext=None):\n fname, fext = op.splitext(op.basename(self.inputs.in_file))\n if fext == '.gz':\n fname, fext2 = op.splitext(fname)\n fext = fext2 + fext\n\n if not isdefined(self.inputs.out_prefix):\n out_prefix = op.abspath(fname)\n else:\n out_prefix = self.inputs.out_prefix\n\n if ext is None:\n ext = fext\n\n return out_prefix + '_' + name + ext\n", "path": "nipype/interfaces/dipy/base.py"}]} | 1,005 | 123 |
gh_patches_debug_60523 | rasdani/github-patches | git_diff | streamlit__streamlit-2811 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sliders should show current value [regression] [Baseweb]
# Summary
(via @tvst: )
Our sliders right now require you to hover in order to see the selected value. This makes it really hard to understand what the user selected. I reported this before, but I just spent some time debugging my app thinking it was broken because I was reading the wrong slider value. Frustrating.
I understand this is the new behavior of sliders in Base Web, but we have alternatives:
**1. Roll back Base Web to a previous version**
This is the preferable solution in order to get this fix out ASAP. Even if we decide it's only a temporary solution.
2. Try to find a solution using the latest Base Web
3. Copy/paste the old Baseweb slider into our own repo and modify it there. Their slider is based on another library, btw (I forget which), so maybe we should just use that library directly instead?
## Is this a regression?
yes
# Debug info
- Streamlit version: 0.75-special
Allow hiding tracebacks
Currently, when a Streamlit app throws an exception, we print the traceback to the browser. This isn't necessarily the right thing to do for all apps; we should allow this to be configurable.
Maybe a `[client] showTracebacks = false` option? And presumably, if tracebacks are disabled, we should filter them at the server level, so that the client never even receives the string, in case the user is worried about leaking internal app details.
(Related discussion here: https://discuss.streamlit.io/t/dont-show-users-tracebacks/1746)
--- END ISSUE ---
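For the second report (hiding tracebacks), the server-side filtering it proposes could look roughly like the sketch below. `show_tracebacks` stands in for the suggested `[client] showTracebacks` option, which is a proposal in the issue, not an existing Streamlit config key:
```python
import traceback


def exception_payload(exc, show_tracebacks):
    """Build the error text that is sent to the browser.

    `show_tracebacks` mirrors the hypothetical `client.showTracebacks`
    option proposed in the issue above.
    """
    if show_tracebacks:
        return "".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__)
        )
    # Filter at the server level so the traceback string never even
    # reaches the client.
    return f"{type(exc).__name__}: details hidden by server configuration"
```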
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `e2e/scripts/st_columns.py`
Content:
```
1 # Copyright 2018-2021 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 CAT_IMAGE = "https://images.unsplash.com/photo-1552933529-e359b2477252?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=950&q=80"
18
19 if st.button("Layout should not shift when this is pressed"):
20 st.write("Pressed!")
21
22 # Same-width columns
23 c1, c2, c3 = st.beta_columns(3)
24 c1.image(CAT_IMAGE, use_column_width=True)
25 c2.image(CAT_IMAGE, use_column_width=True)
26 c3.image(CAT_IMAGE, use_column_width=True)
27
28
29 # Variable-width columns
30 for c in st.beta_columns((1, 2, 4, 8)):
31 c.image(CAT_IMAGE, use_column_width=True)
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/e2e/scripts/st_columns.py b/e2e/scripts/st_columns.py
--- a/e2e/scripts/st_columns.py
+++ b/e2e/scripts/st_columns.py
@@ -27,5 +27,5 @@
# Variable-width columns
-for c in st.beta_columns((1, 2, 4, 8)):
+for c in st.beta_columns((1, 2, 3, 4)):
c.image(CAT_IMAGE, use_column_width=True)
| {"golden_diff": "diff --git a/e2e/scripts/st_columns.py b/e2e/scripts/st_columns.py\n--- a/e2e/scripts/st_columns.py\n+++ b/e2e/scripts/st_columns.py\n@@ -27,5 +27,5 @@\n \n \n # Variable-width columns\n-for c in st.beta_columns((1, 2, 4, 8)):\n+for c in st.beta_columns((1, 2, 3, 4)):\n c.image(CAT_IMAGE, use_column_width=True)\n", "issue": "Sliders should show current value [regression] [Baseweb]\n# Summary\r\n\r\n(via @tvst: )\r\n\r\nOur sliders right now require you to hover in order to see the selected value. This makes it really hard to understand what the user selected. I reported this before, but I just spent some time debugging my app thinking it was broken because I was reading the wrong slider value. Frustrating.\r\n\r\nI understand this is the new behavior of sliders in Base Web, but we have alternatives:\r\n\r\n**1. Roll back Base Web to a previous version**\r\n\r\n This is the preferable solution in order to get this fix out ASAP. Even if we decide it's only a temporary solution.\r\n\r\n2. Try to find a solution using the latest Base Web\r\n\r\n3. Copy/paste the old Baseweb slider into our own repo and modify it there. Their slider is based on another library, btw (I forget which), so maybe we should just use that library directly instead?\r\n\r\n\r\n## Is this a regression?\r\n\r\nyes \r\n\r\n# Debug info\r\n\r\n- Streamlit version: 0.75-special\nAllow hiding tracebacks\nCurrently, when a Streamlit app throws an exception, we print the traceback to the browser. This isn't necessarily the right thing to do for all apps; we should allow this to be configurable.\r\n\r\nMaybe a `[client] showTracebacks = false` option? And presumably, if tracebacks are disabled, we should filter them at the server level, so that the client never even receives the string, in case the user is worried about leaking internal app details.\r\n\r\n(Related discussion here: https://discuss.streamlit.io/t/dont-show-users-tracebacks/1746)\n", "before_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nCAT_IMAGE = \"https://images.unsplash.com/photo-1552933529-e359b2477252?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=950&q=80\"\n\nif st.button(\"Layout should not shift when this is pressed\"):\n st.write(\"Pressed!\")\n\n# Same-width columns\nc1, c2, c3 = st.beta_columns(3)\nc1.image(CAT_IMAGE, use_column_width=True)\nc2.image(CAT_IMAGE, use_column_width=True)\nc3.image(CAT_IMAGE, use_column_width=True)\n\n\n# Variable-width columns\nfor c in st.beta_columns((1, 2, 4, 8)):\n c.image(CAT_IMAGE, use_column_width=True)\n", "path": "e2e/scripts/st_columns.py"}], "after_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to 
in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nCAT_IMAGE = \"https://images.unsplash.com/photo-1552933529-e359b2477252?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=950&q=80\"\n\nif st.button(\"Layout should not shift when this is pressed\"):\n st.write(\"Pressed!\")\n\n# Same-width columns\nc1, c2, c3 = st.beta_columns(3)\nc1.image(CAT_IMAGE, use_column_width=True)\nc2.image(CAT_IMAGE, use_column_width=True)\nc3.image(CAT_IMAGE, use_column_width=True)\n\n\n# Variable-width columns\nfor c in st.beta_columns((1, 2, 3, 4)):\n c.image(CAT_IMAGE, use_column_width=True)\n", "path": "e2e/scripts/st_columns.py"}]} | 1,005 | 107 |
gh_patches_debug_11279 | rasdani/github-patches | git_diff | abey79__vpype-440 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Replace cKDTree by KDTree
see note here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html#scipy.spatial.cKDTree
--- END ISSUE ---
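Per the SciPy note linked above, `cKDTree` is functionally identical to `KDTree` (since SciPy 1.6 the plain class is backed by the C implementation), so the change is a drop-in import swap. A quick sanity check, assuming SciPy and NumPy are installed:
```python
import numpy as np
from scipy.spatial import KDTree  # drop-in replacement for cKDTree

points = np.random.rand(100, 2)
tree = KDTree(points)

# Same query API as cKDTree: distances and indices of the 3 nearest points.
dists, idxs = tree.query([0.5, 0.5], k=3)
print(dists, idxs)
```
The diff below applies the same swap inside `vpype/line_index.py`.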
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vpype/line_index.py`
Content:
```
1 from __future__ import annotations
2
3 import logging
4 from typing import Iterable
5
6 import numpy as np
7 from scipy.spatial import cKDTree as KDTree
8
9 # REMINDER: anything added here must be added to docs/api.rst
10 __all__ = ["LineIndex"]
11
12
13 class LineIndex:
14 """Wrapper to scipy.spatial.cKDTree to facilitate systematic processing of a line
15 collection.
16
17 Implementation note: we use the `available` bool array because deleting stuff from the
18 index is costly.
19 """
20
21 def __init__(self, lines: Iterable[np.ndarray], reverse: bool = False):
22 self.lines = [line for line in lines if len(line) > 0]
23 self.reverse = reverse
24 self._make_index()
25
26 def _make_index(self) -> None:
27 logging.info(f"LineIndex: creating index for {len(self.lines)} lines")
28 self.available = np.ones(shape=len(self.lines), dtype=bool)
29
30 # create rtree index
31 self.index = KDTree(
32 np.array([(line[0].real, line[0].imag) for line in self.lines]).reshape(-1, 2)
33 )
34
35 # create reverse index
36 if self.reverse:
37 self.rindex = KDTree(
38 np.array([(line[-1].real, line[-1].imag) for line in self.lines]).reshape(
39 -1, 2
40 )
41 )
42
43 def _reindex(self) -> None:
44 self.lines = [line for idx, line in enumerate(self.lines) if self.available[idx]]
45 self._make_index()
46
47 def __len__(self) -> int:
48 return np.count_nonzero(self.available)
49
50 def __getitem__(self, item):
51 return self.lines[item]
52
53 def pop_front(self) -> np.ndarray:
54 if len(self) == 0:
55 raise RuntimeError
56 idx = int(np.argmax(self.available))
57 self.available[idx] = False
58 return self.lines[idx]
59
60 def pop(self, idx: int) -> np.ndarray | None:
61 if not self.available[idx]:
62 return None
63 self.available[idx] = False
64 return self.lines[idx]
65
66 def find_nearest_within(self, p: complex, max_dist: float) -> tuple[int | None, bool]:
67 """Find the closest line, assuming a maximum admissible distance.
68 Returns a tuple of (idx, reverse), where `idx` may be None if nothing is found.
69 `reverse` indicates whether or not a line ending has been matched instead of a start.
70 False is always returned if index was created with `reverse=False`.s
71 """
72
73 ridx = None
74 rdist: float | None = 0.0
75
76 while True:
77 reindex, idx, dist = self._find_nearest_within_in_index(p, max_dist, self.index)
78 if reindex:
79 self._reindex()
80 continue
81
82 if self.reverse:
83 reindex, ridx, rdist = self._find_nearest_within_in_index(
84 p, max_dist, self.rindex
85 )
86 if reindex:
87 self._reindex()
88 continue
89 break
90
91 if self.reverse:
92 if idx is None and ridx is None:
93 return None, False
94 elif idx is not None and ridx is None:
95 return idx, False
96 elif idx is None and ridx is not None:
97 return ridx, True
98 elif rdist < dist: # type: ignore
99 return ridx, True
100 else:
101 return idx, False
102 else:
103 return idx, False
104
105 def _find_nearest_within_in_index(
106 self, p: complex, max_dist: float, index: KDTree
107 ) -> tuple[bool, int | None, float | None]:
108 """Find nearest in specific index. Return (reindex, idx, dist) tuple, where
109 reindex indicates if a reindex is needed.
110 """
111
112 # For performance reason, we query only a max of k candidates. In the special case
113 # where all distances are not inf and none are available, we might have more than k
114 # suitable candidate, so we reindex and loop. Otherwise, we check the query results
115 # for availability and not inf and return anything found
116 dists, idxs = index.query((p.real, p.imag), k=50, distance_upper_bound=max_dist)
117 dists = np.array(dists)
118
119 not_inf = ~np.isinf(dists)
120 if np.all(not_inf) and np.all(~self.available[idxs[not_inf]]):
121 return True, None, 0
122
123 candidates = self.available[idxs[not_inf]]
124
125 if np.any(candidates):
126 idx = np.argmax(candidates)
127 return False, idxs[not_inf][idx], dists[not_inf][idx]
128 else:
129 return False, None, 0
130
131 # noinspection PyUnboundLocalVariable
132 def find_nearest(self, p: complex) -> tuple[int, bool]:
133 while True:
134 idx, dist = self._find_nearest_in_index(p, self.index)
135 if self.reverse:
136 ridx, rdist = self._find_nearest_in_index(p, self.rindex)
137
138 if ridx is not None and idx is not None:
139 break
140 elif idx is not None:
141 break
142 self._reindex()
143
144 if self.reverse:
145 if rdist < dist:
146 return ridx, True # type: ignore
147 else:
148 return idx, False
149 else:
150 return idx, False
151
152 def _find_nearest_in_index(self, p: complex, index: KDTree) -> tuple[int | None, float]:
153 """Check the N nearest lines, hopefully find one that is active."""
154
155 dists, idxs = index.query((p.real, p.imag), k=100)
156 for dist, idx in zip(dists, idxs):
157 if ~np.isinf(dist) and self.available[idx]:
158 return idx, dist
159
160 return None, 0.0
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vpype/line_index.py b/vpype/line_index.py
--- a/vpype/line_index.py
+++ b/vpype/line_index.py
@@ -4,14 +4,14 @@
from typing import Iterable
import numpy as np
-from scipy.spatial import cKDTree as KDTree
+from scipy.spatial import KDTree
# REMINDER: anything added here must be added to docs/api.rst
__all__ = ["LineIndex"]
class LineIndex:
- """Wrapper to scipy.spatial.cKDTree to facilitate systematic processing of a line
+ """Wrapper to scipy.spatial.KDTree to facilitate systematic processing of a line
collection.
Implementation note: we use the `available` bool array because deleting stuff from the
| {"golden_diff": "diff --git a/vpype/line_index.py b/vpype/line_index.py\n--- a/vpype/line_index.py\n+++ b/vpype/line_index.py\n@@ -4,14 +4,14 @@\n from typing import Iterable\n \n import numpy as np\n-from scipy.spatial import cKDTree as KDTree\n+from scipy.spatial import KDTree\n \n # REMINDER: anything added here must be added to docs/api.rst\n __all__ = [\"LineIndex\"]\n \n \n class LineIndex:\n- \"\"\"Wrapper to scipy.spatial.cKDTree to facilitate systematic processing of a line\n+ \"\"\"Wrapper to scipy.spatial.KDTree to facilitate systematic processing of a line\n collection.\n \n Implementation note: we use the `available` bool array because deleting stuff from the\n", "issue": "Replace cKDTree by KDTree\nsee note here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html#scipy.spatial.cKDTree\n", "before_files": [{"content": "from __future__ import annotations\n\nimport logging\nfrom typing import Iterable\n\nimport numpy as np\nfrom scipy.spatial import cKDTree as KDTree\n\n# REMINDER: anything added here must be added to docs/api.rst\n__all__ = [\"LineIndex\"]\n\n\nclass LineIndex:\n \"\"\"Wrapper to scipy.spatial.cKDTree to facilitate systematic processing of a line\n collection.\n\n Implementation note: we use the `available` bool array because deleting stuff from the\n index is costly.\n \"\"\"\n\n def __init__(self, lines: Iterable[np.ndarray], reverse: bool = False):\n self.lines = [line for line in lines if len(line) > 0]\n self.reverse = reverse\n self._make_index()\n\n def _make_index(self) -> None:\n logging.info(f\"LineIndex: creating index for {len(self.lines)} lines\")\n self.available = np.ones(shape=len(self.lines), dtype=bool)\n\n # create rtree index\n self.index = KDTree(\n np.array([(line[0].real, line[0].imag) for line in self.lines]).reshape(-1, 2)\n )\n\n # create reverse index\n if self.reverse:\n self.rindex = KDTree(\n np.array([(line[-1].real, line[-1].imag) for line in self.lines]).reshape(\n -1, 2\n )\n )\n\n def _reindex(self) -> None:\n self.lines = [line for idx, line in enumerate(self.lines) if self.available[idx]]\n self._make_index()\n\n def __len__(self) -> int:\n return np.count_nonzero(self.available)\n\n def __getitem__(self, item):\n return self.lines[item]\n\n def pop_front(self) -> np.ndarray:\n if len(self) == 0:\n raise RuntimeError\n idx = int(np.argmax(self.available))\n self.available[idx] = False\n return self.lines[idx]\n\n def pop(self, idx: int) -> np.ndarray | None:\n if not self.available[idx]:\n return None\n self.available[idx] = False\n return self.lines[idx]\n\n def find_nearest_within(self, p: complex, max_dist: float) -> tuple[int | None, bool]:\n \"\"\"Find the closest line, assuming a maximum admissible distance.\n Returns a tuple of (idx, reverse), where `idx` may be None if nothing is found.\n `reverse` indicates whether or not a line ending has been matched instead of a start.\n False is always returned if index was created with `reverse=False`.s\n \"\"\"\n\n ridx = None\n rdist: float | None = 0.0\n\n while True:\n reindex, idx, dist = self._find_nearest_within_in_index(p, max_dist, self.index)\n if reindex:\n self._reindex()\n continue\n\n if self.reverse:\n reindex, ridx, rdist = self._find_nearest_within_in_index(\n p, max_dist, self.rindex\n )\n if reindex:\n self._reindex()\n continue\n break\n\n if self.reverse:\n if idx is None and ridx is None:\n return None, False\n elif idx is not None and ridx is None:\n return idx, False\n elif idx is None and ridx is not None:\n return ridx, 
True\n elif rdist < dist: # type: ignore\n return ridx, True\n else:\n return idx, False\n else:\n return idx, False\n\n def _find_nearest_within_in_index(\n self, p: complex, max_dist: float, index: KDTree\n ) -> tuple[bool, int | None, float | None]:\n \"\"\"Find nearest in specific index. Return (reindex, idx, dist) tuple, where\n reindex indicates if a reindex is needed.\n \"\"\"\n\n # For performance reason, we query only a max of k candidates. In the special case\n # where all distances are not inf and none are available, we might have more than k\n # suitable candidate, so we reindex and loop. Otherwise, we check the query results\n # for availability and not inf and return anything found\n dists, idxs = index.query((p.real, p.imag), k=50, distance_upper_bound=max_dist)\n dists = np.array(dists)\n\n not_inf = ~np.isinf(dists)\n if np.all(not_inf) and np.all(~self.available[idxs[not_inf]]):\n return True, None, 0\n\n candidates = self.available[idxs[not_inf]]\n\n if np.any(candidates):\n idx = np.argmax(candidates)\n return False, idxs[not_inf][idx], dists[not_inf][idx]\n else:\n return False, None, 0\n\n # noinspection PyUnboundLocalVariable\n def find_nearest(self, p: complex) -> tuple[int, bool]:\n while True:\n idx, dist = self._find_nearest_in_index(p, self.index)\n if self.reverse:\n ridx, rdist = self._find_nearest_in_index(p, self.rindex)\n\n if ridx is not None and idx is not None:\n break\n elif idx is not None:\n break\n self._reindex()\n\n if self.reverse:\n if rdist < dist:\n return ridx, True # type: ignore\n else:\n return idx, False\n else:\n return idx, False\n\n def _find_nearest_in_index(self, p: complex, index: KDTree) -> tuple[int | None, float]:\n \"\"\"Check the N nearest lines, hopefully find one that is active.\"\"\"\n\n dists, idxs = index.query((p.real, p.imag), k=100)\n for dist, idx in zip(dists, idxs):\n if ~np.isinf(dist) and self.available[idx]:\n return idx, dist\n\n return None, 0.0\n", "path": "vpype/line_index.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport logging\nfrom typing import Iterable\n\nimport numpy as np\nfrom scipy.spatial import KDTree\n\n# REMINDER: anything added here must be added to docs/api.rst\n__all__ = [\"LineIndex\"]\n\n\nclass LineIndex:\n \"\"\"Wrapper to scipy.spatial.KDTree to facilitate systematic processing of a line\n collection.\n\n Implementation note: we use the `available` bool array because deleting stuff from the\n index is costly.\n \"\"\"\n\n def __init__(self, lines: Iterable[np.ndarray], reverse: bool = False):\n self.lines = [line for line in lines if len(line) > 0]\n self.reverse = reverse\n self._make_index()\n\n def _make_index(self) -> None:\n logging.info(f\"LineIndex: creating index for {len(self.lines)} lines\")\n self.available = np.ones(shape=len(self.lines), dtype=bool)\n\n # create rtree index\n self.index = KDTree(\n np.array([(line[0].real, line[0].imag) for line in self.lines]).reshape(-1, 2)\n )\n\n # create reverse index\n if self.reverse:\n self.rindex = KDTree(\n np.array([(line[-1].real, line[-1].imag) for line in self.lines]).reshape(\n -1, 2\n )\n )\n\n def _reindex(self) -> None:\n self.lines = [line for idx, line in enumerate(self.lines) if self.available[idx]]\n self._make_index()\n\n def __len__(self) -> int:\n return np.count_nonzero(self.available)\n\n def __getitem__(self, item):\n return self.lines[item]\n\n def pop_front(self) -> np.ndarray:\n if len(self) == 0:\n raise RuntimeError\n idx = int(np.argmax(self.available))\n self.available[idx] = 
False\n return self.lines[idx]\n\n def pop(self, idx: int) -> np.ndarray | None:\n if not self.available[idx]:\n return None\n self.available[idx] = False\n return self.lines[idx]\n\n def find_nearest_within(self, p: complex, max_dist: float) -> tuple[int | None, bool]:\n \"\"\"Find the closest line, assuming a maximum admissible distance.\n Returns a tuple of (idx, reverse), where `idx` may be None if nothing is found.\n `reverse` indicates whether or not a line ending has been matched instead of a start.\n False is always returned if index was created with `reverse=False`.s\n \"\"\"\n\n ridx = None\n rdist: float | None = 0.0\n\n while True:\n reindex, idx, dist = self._find_nearest_within_in_index(p, max_dist, self.index)\n if reindex:\n self._reindex()\n continue\n\n if self.reverse:\n reindex, ridx, rdist = self._find_nearest_within_in_index(\n p, max_dist, self.rindex\n )\n if reindex:\n self._reindex()\n continue\n break\n\n if self.reverse:\n if idx is None and ridx is None:\n return None, False\n elif idx is not None and ridx is None:\n return idx, False\n elif idx is None and ridx is not None:\n return ridx, True\n elif rdist < dist: # type: ignore\n return ridx, True\n else:\n return idx, False\n else:\n return idx, False\n\n def _find_nearest_within_in_index(\n self, p: complex, max_dist: float, index: KDTree\n ) -> tuple[bool, int | None, float | None]:\n \"\"\"Find nearest in specific index. Return (reindex, idx, dist) tuple, where\n reindex indicates if a reindex is needed.\n \"\"\"\n\n # For performance reason, we query only a max of k candidates. In the special case\n # where all distances are not inf and none are available, we might have more than k\n # suitable candidate, so we reindex and loop. Otherwise, we check the query results\n # for availability and not inf and return anything found\n dists, idxs = index.query((p.real, p.imag), k=50, distance_upper_bound=max_dist)\n dists = np.array(dists)\n\n not_inf = ~np.isinf(dists)\n if np.all(not_inf) and np.all(~self.available[idxs[not_inf]]):\n return True, None, 0\n\n candidates = self.available[idxs[not_inf]]\n\n if np.any(candidates):\n idx = np.argmax(candidates)\n return False, idxs[not_inf][idx], dists[not_inf][idx]\n else:\n return False, None, 0\n\n # noinspection PyUnboundLocalVariable\n def find_nearest(self, p: complex) -> tuple[int, bool]:\n while True:\n idx, dist = self._find_nearest_in_index(p, self.index)\n if self.reverse:\n ridx, rdist = self._find_nearest_in_index(p, self.rindex)\n\n if ridx is not None and idx is not None:\n break\n elif idx is not None:\n break\n self._reindex()\n\n if self.reverse:\n if rdist < dist:\n return ridx, True # type: ignore\n else:\n return idx, False\n else:\n return idx, False\n\n def _find_nearest_in_index(self, p: complex, index: KDTree) -> tuple[int | None, float]:\n \"\"\"Check the N nearest lines, hopefully find one that is active.\"\"\"\n\n dists, idxs = index.query((p.real, p.imag), k=100)\n for dist, idx in zip(dists, idxs):\n if ~np.isinf(dist) and self.available[idx]:\n return idx, dist\n\n return None, 0.0\n", "path": "vpype/line_index.py"}]} | 1,997 | 168 |
gh_patches_debug_32004 | rasdani/github-patches | git_diff | OCA__bank-payment-18 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
many transactions not recognized
After importing a dutch bank statements file from the ING, I got the following screen:

Many transactions are not processed because the correct bank account number is not recognized.
The system is complaining about "Statements found for unknown account 68xxx" and "Statements found for account 06xxxxxx, but no default journal was defined".
This is a little strange, because I have set up "Default Import Settings for Bank Accounts"
in the settings of the accounting section.
I also put a default journal in there.
The information I entered there is the IBAN of my account, together with the national account number (with the leading zero). I also tried it without the leading zero in the national account, but this doesn't make any difference at all.
Why is OpenERP mixing up the bank account number (with/without the leading zero) and not matching the national number?
--- END ISSUE ---
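The mismatch described above is a leading-zero problem: the same national (BBAN) account number can be stored with or without leading zeroes, so an exact string comparison fails. A sketch of the normalisation the reporter expects (helper names are illustrative, not from the module):
```python
def _normalize(acc_number):
    # Ignore spaces and leading zeroes so '0123 456' matches '123456'.
    return (acc_number or "").replace(" ", "").lstrip("0")


def same_domestic_account(a, b):
    na, nb = _normalize(a), _normalize(b)
    return bool(na) and na == nb
```
The shipped fix pushes the same idea into SQL with `TRIM(LEADING '0' FROM acc_number)`, as the diff below shows.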
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `account_banking/res_partner_bank.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 ##############################################################################
3 #
4 # Copyright (C) 2009 EduSense BV (<http://www.edusense.nl>).
5 # (C) 2011 - 2014 Therp BV (<http://therp.nl>).
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Affero General Public License as
9 # published by the Free Software Foundation, either version 3 of the
10 # License, or (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Affero General Public License for more details.
16 #
17 # You should have received a copy of the GNU Affero General Public License
18 # along with this program. If not, see <http://www.gnu.org/licenses/>.
19 #
20 ##############################################################################
21 from openerp.osv import orm
22 from openerp.addons.account_banking import sepa
23
24
25 class ResPartnerBank(orm.Model):
26 _inherit = 'res.partner.bank'
27
28 def online_account_info(
29 self, cr, uid, country_code, acc_number, context=None):
30 """
31 API hook for legacy online lookup of account info,
32 to be removed in OpenERP 8.0.
33 """
34 return False
35
36 def search(self, cr, uid, args, *rest, **kwargs):
37 """
38 When a complete IBAN is searched, also search for its BBAN
39 if we have the domestic column. Disregard spaces
40 when comparing IBANs.
41 """
42
43 def is_term(arg):
44 '''Flag an arg as term or otherwise'''
45 return isinstance(arg, (list, tuple)) and len(arg) == 3
46
47 def extended_filter_term(term):
48 '''
49 Extend the search criteria in term when appropriate.
50 '''
51 result = [term]
52 extra_terms = []
53 if term[0].lower() == 'acc_number' and term[1] in ('=', '=='):
54 iban = sepa.IBAN(term[2])
55 if iban.valid:
56 # Disregard spaces when comparing IBANs
57 cr.execute(
58 """
59 SELECT id FROM res_partner_bank
60 WHERE replace(acc_number, ' ', '') = %s
61 """, (term[2].replace(' ', ''),))
62 ids = [row[0] for row in cr.fetchall()]
63 result = [('id', 'in', ids)]
64
65 if 'acc_number_domestic' in self._columns:
66 bban = iban.localized_BBAN
67 # Prevent empty search filters
68 if bban:
69 extra_terms.append(
70 ('acc_number_domestic', term[1], bban))
71 for extra_term in extra_terms:
72 result = ['|'] + result + [extra_term]
73 return result
74
75 def extended_search_expression(args):
76 '''
77 Extend the search expression in args when appropriate.
78 The expression itself is in reverse polish notation, so recursion
79 is not needed.
80 '''
81 if not args:
82 return []
83
84 result = []
85 if is_term(args[0]) and len(args) > 1:
86 # Classic filter, implicit '&'
87 result += ['&']
88
89 for arg in args:
90 if is_term(arg):
91 result += extended_filter_term(arg)
92 else:
93 result += arg
94 return result
95
96 # Extend search filter
97 newargs = extended_search_expression(args)
98
99 # Original search
100 return super(ResPartnerBank, self).search(
101 cr, uid, newargs, *rest, **kwargs)
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/account_banking/res_partner_bank.py b/account_banking/res_partner_bank.py
--- a/account_banking/res_partner_bank.py
+++ b/account_banking/res_partner_bank.py
@@ -44,6 +44,27 @@
'''Flag an arg as term or otherwise'''
return isinstance(arg, (list, tuple)) and len(arg) == 3
+ def search_domestic(account):
+ """
+ Performing raw sql search for full account numbers,
+ ignoring leading zeroes.
+ """
+ account = (account or '').lstrip('0')
+ if not account:
+ return ('id', '=', 0)
+ query = """
+ SELECT id FROM res_partner_bank
+ WHERE (
+ STATE != 'iban'
+ AND TRIM(LEADING '0' FROM acc_number) = %(account)s)
+ """
+ if 'acc_number_domestic' in self._columns:
+ query += """
+ OR TRIM(LEADING '0' FROM acc_number_domestic)
+ = %(account)s"""
+ cr.execute(query, {'account': account})
+ return ('id', 'in', [row[0] for row in cr.fetchall()])
+
def extended_filter_term(term):
'''
Extend the search criteria in term when appropriate.
@@ -62,12 +83,11 @@
ids = [row[0] for row in cr.fetchall()]
result = [('id', 'in', ids)]
- if 'acc_number_domestic' in self._columns:
- bban = iban.localized_BBAN
- # Prevent empty search filters
- if bban:
- extra_terms.append(
- ('acc_number_domestic', term[1], bban))
+ bban = iban.localized_BBAN
+ if bban:
+ extra_terms.append(search_domestic(bban))
+ else:
+ result = [search_domestic(term[2])]
for extra_term in extra_terms:
result = ['|'] + result + [extra_term]
return result
| {"golden_diff": "diff --git a/account_banking/res_partner_bank.py b/account_banking/res_partner_bank.py\n--- a/account_banking/res_partner_bank.py\n+++ b/account_banking/res_partner_bank.py\n@@ -44,6 +44,27 @@\n '''Flag an arg as term or otherwise'''\n return isinstance(arg, (list, tuple)) and len(arg) == 3\n \n+ def search_domestic(account):\n+ \"\"\"\n+ Performing raw sql search for full account numbers,\n+ ignoring leading zeroes.\n+ \"\"\"\n+ account = (account or '').lstrip('0')\n+ if not account:\n+ return ('id', '=', 0)\n+ query = \"\"\"\n+ SELECT id FROM res_partner_bank\n+ WHERE (\n+ STATE != 'iban'\n+ AND TRIM(LEADING '0' FROM acc_number) = %(account)s)\n+ \"\"\"\n+ if 'acc_number_domestic' in self._columns:\n+ query += \"\"\"\n+ OR TRIM(LEADING '0' FROM acc_number_domestic)\n+ = %(account)s\"\"\"\n+ cr.execute(query, {'account': account})\n+ return ('id', 'in', [row[0] for row in cr.fetchall()])\n+\n def extended_filter_term(term):\n '''\n Extend the search criteria in term when appropriate.\n@@ -62,12 +83,11 @@\n ids = [row[0] for row in cr.fetchall()]\n result = [('id', 'in', ids)]\n \n- if 'acc_number_domestic' in self._columns:\n- bban = iban.localized_BBAN\n- # Prevent empty search filters\n- if bban:\n- extra_terms.append(\n- ('acc_number_domestic', term[1], bban))\n+ bban = iban.localized_BBAN\n+ if bban:\n+ extra_terms.append(search_domestic(bban))\n+ else:\n+ result = [search_domestic(term[2])]\n for extra_term in extra_terms:\n result = ['|'] + result + [extra_term]\n return result\n", "issue": "many transactions not regonized\nAfter importing a dutch bank statements file from the ING, I got the following screen:\n\n\n\nMany transactions are not processed because of not recognizing the correct bankaccount number. \n\nThe system is complaining about \"Statements found for unknown account 68xxx\" and \"Statements found for account 06xxxxxx, but no default journal was defined.\n\nThis is a little strange, because I have setup \"Default Import Settings for Bank Accounts\"\nin the settings of the accounting section.\n\nI also put a default journal in there.\n\nMy information entered there is the IBAN number of my account, together with the national account (with leading zero). I also tried without the leading zero in the national account, but this doesn't make any difference at all.\n\nWhy is openerp mixing up the number of the bankaccount. (with/without zero) and not matching the national number.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n##############################################################################\n#\n# Copyright (C) 2009 EduSense BV (<http://www.edusense.nl>).\n# (C) 2011 - 2014 Therp BV (<http://therp.nl>).\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as\n# published by the Free Software Foundation, either version 3 of the\n# License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n##############################################################################\nfrom openerp.osv import orm\nfrom openerp.addons.account_banking import sepa\n\n\nclass ResPartnerBank(orm.Model):\n _inherit = 'res.partner.bank'\n\n def online_account_info(\n self, cr, uid, country_code, acc_number, context=None):\n \"\"\"\n API hook for legacy online lookup of account info,\n to be removed in OpenERP 8.0.\n \"\"\"\n return False\n\n def search(self, cr, uid, args, *rest, **kwargs):\n \"\"\"\n When a complete IBAN is searched, also search for its BBAN\n if we have the domestic column. Disregard spaces\n when comparing IBANs.\n \"\"\"\n\n def is_term(arg):\n '''Flag an arg as term or otherwise'''\n return isinstance(arg, (list, tuple)) and len(arg) == 3\n\n def extended_filter_term(term):\n '''\n Extend the search criteria in term when appropriate.\n '''\n result = [term]\n extra_terms = []\n if term[0].lower() == 'acc_number' and term[1] in ('=', '=='):\n iban = sepa.IBAN(term[2])\n if iban.valid:\n # Disregard spaces when comparing IBANs\n cr.execute(\n \"\"\"\n SELECT id FROM res_partner_bank\n WHERE replace(acc_number, ' ', '') = %s\n \"\"\", (term[2].replace(' ', ''),))\n ids = [row[0] for row in cr.fetchall()]\n result = [('id', 'in', ids)]\n\n if 'acc_number_domestic' in self._columns:\n bban = iban.localized_BBAN\n # Prevent empty search filters\n if bban:\n extra_terms.append(\n ('acc_number_domestic', term[1], bban))\n for extra_term in extra_terms:\n result = ['|'] + result + [extra_term]\n return result\n\n def extended_search_expression(args):\n '''\n Extend the search expression in args when appropriate.\n The expression itself is in reverse polish notation, so recursion\n is not needed.\n '''\n if not args:\n return []\n\n result = []\n if is_term(args[0]) and len(args) > 1:\n # Classic filter, implicit '&'\n result += ['&']\n\n for arg in args:\n if is_term(arg):\n result += extended_filter_term(arg)\n else:\n result += arg\n return result\n\n # Extend search filter\n newargs = extended_search_expression(args)\n\n # Original search\n return super(ResPartnerBank, self).search(\n cr, uid, newargs, *rest, **kwargs)\n", "path": "account_banking/res_partner_bank.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n##############################################################################\n#\n# Copyright (C) 2009 EduSense BV (<http://www.edusense.nl>).\n# (C) 2011 - 2014 Therp BV (<http://therp.nl>).\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as\n# published by the Free Software Foundation, either version 3 of the\n# License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n##############################################################################\nfrom openerp.osv import orm\nfrom openerp.addons.account_banking import sepa\n\n\nclass ResPartnerBank(orm.Model):\n _inherit = 'res.partner.bank'\n\n def online_account_info(\n self, cr, uid, country_code, acc_number, context=None):\n \"\"\"\n API hook for legacy online lookup of account info,\n to be removed in OpenERP 8.0.\n \"\"\"\n return False\n\n def search(self, cr, uid, args, *rest, **kwargs):\n \"\"\"\n When a complete IBAN is searched, also search for its BBAN\n if we have the domestic column. Disregard spaces\n when comparing IBANs.\n \"\"\"\n\n def is_term(arg):\n '''Flag an arg as term or otherwise'''\n return isinstance(arg, (list, tuple)) and len(arg) == 3\n\n def search_domestic(account):\n \"\"\"\n Performing raw sql search for full account numbers,\n ignoring leading zeroes.\n \"\"\"\n account = (account or '').lstrip('0')\n if not account:\n return ('id', '=', 0)\n query = \"\"\"\n SELECT id FROM res_partner_bank\n WHERE (\n STATE != 'iban'\n AND TRIM(LEADING '0' FROM acc_number) = %(account)s)\n \"\"\"\n if 'acc_number_domestic' in self._columns:\n query += \"\"\"\n OR TRIM(LEADING '0' FROM acc_number_domestic)\n = %(account)s\"\"\"\n cr.execute(query, {'account': account})\n return ('id', 'in', [row[0] for row in cr.fetchall()])\n\n def extended_filter_term(term):\n '''\n Extend the search criteria in term when appropriate.\n '''\n result = [term]\n extra_terms = []\n if term[0].lower() == 'acc_number' and term[1] in ('=', '=='):\n iban = sepa.IBAN(term[2])\n if iban.valid:\n # Disregard spaces when comparing IBANs\n cr.execute(\n \"\"\"\n SELECT id FROM res_partner_bank\n WHERE replace(acc_number, ' ', '') = %s\n \"\"\", (term[2].replace(' ', ''),))\n ids = [row[0] for row in cr.fetchall()]\n result = [('id', 'in', ids)]\n\n bban = iban.localized_BBAN\n if bban:\n extra_terms.append(search_domestic(bban))\n else:\n result = [search_domestic(term[2])]\n for extra_term in extra_terms:\n result = ['|'] + result + [extra_term]\n return result\n\n def extended_search_expression(args):\n '''\n Extend the search expression in args when appropriate.\n The expression itself is in reverse polish notation, so recursion\n is not needed.\n '''\n if not args:\n return []\n\n result = []\n if is_term(args[0]) and len(args) > 1:\n # Classic filter, implicit '&'\n result += ['&']\n\n for arg in args:\n if is_term(arg):\n result += extended_filter_term(arg)\n else:\n result += arg\n return result\n\n # Extend search filter\n newargs = extended_search_expression(args)\n\n # Original search\n return super(ResPartnerBank, self).search(\n cr, uid, newargs, *rest, **kwargs)\n", "path": "account_banking/res_partner_bank.py"}]} | 1,504 | 452 |
gh_patches_debug_31740 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2808 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider arby is broken
During the global build at 2021-05-26-14-42-23, spider **arby** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/arby.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/arby.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/arby.geojson))
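As an aside — an inference from the "0 features, 0 errors" symptom, not something stated in the report — a spider that finishes cleanly while yielding nothing usually means its entry page moved or its selectors no longer match. A hypothetical first check (plain `requests`, outside scrapy; the URL is taken from the spider listed below) would be:
```python
# Hypothetical sanity check: does the spider's start URL still resolve?
import requests

resp = requests.get("https://locations.arbys.com/browse/", allow_redirects=False)
print(resp.status_code)  # a 301/404 here would explain zero scraped items
```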
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/arbys.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import json
3 import re
4 import scrapy
5 from locations.items import GeojsonPointItem
6
7
8 class ArbysSpider(scrapy.Spider):
9
10 name = "arby"
11 item_attributes = { 'brand': "Arby's", 'brand_wikidata': "Q630866" }
12 allowed_domains = ["locations.arbys.com"]
13 download_delay = 0.2
14 start_urls = (
15 'https://locations.arbys.com/browse/',
16 )
17
18 def get_store_info(self, response):
19 data = response.xpath('//script[@type="application/ld+json"]/text()').extract_first()
20 if data:
21 data = json.loads(data)[0]
22
23 properties = {
24 # store name is after the pipe, e.g. Fast Food Drive-Thru
25 # Restaurants | Arby's 8437
26 'ref': data["name"].rsplit("|", 1)[-1].strip(),
27 'name': data["name"],
28 'addr_full': data["address"]["streetAddress"].strip(),
29 'city': data["address"]["addressLocality"].strip(),
30 'state': data["address"]["addressRegion"],
31 'postcode': data["address"]["postalCode"],
32 'phone': data.get("telephone", None),
33 'lat': float(data["geo"]["latitude"]),
34 'lon': float(data["geo"]["longitude"]),
35 'website': response.url,
36 'opening_hours': data["openingHours"],
37 }
38
39 yield GeojsonPointItem(**properties)
40
41 def parse_store(self, response):
42 city_stores = response.xpath('//a[@class="location-name ga-link"]/@href').extract()
43 for city_store in city_stores:
44 yield scrapy.Request(
45 response.urljoin(city_store),
46 callback=self.get_store_info
47 )
48
49 def parse_state(self, response):
50
51 cities = response.xpath('//a[@class="ga-link"]/@href').extract()
52 for city in cities:
53 yield scrapy.Request(
54 response.urljoin(city),
55 callback=self.parse_store
56 )
57
58 def parse(self, response):
59 states = response.xpath('//a[@class="ga-link"]/@href').extract()
60
61 for state in states:
62 yield scrapy.Request(
63 response.urljoin(state),
64 callback=self.parse_state
65 )
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/arbys.py b/locations/spiders/arbys.py
--- a/locations/spiders/arbys.py
+++ b/locations/spiders/arbys.py
@@ -12,24 +12,31 @@
allowed_domains = ["locations.arbys.com"]
download_delay = 0.2
start_urls = (
- 'https://locations.arbys.com/browse/',
+ 'https://locations.arbys.com/',
)
def get_store_info(self, response):
data = response.xpath('//script[@type="application/ld+json"]/text()').extract_first()
if data:
- data = json.loads(data)[0]
+ try:
+ data = json.loads(data)[0]
+ except json.JSONDecodeError:
+ # Unescaped " on two pages
+ lines = data.split('\n')
+ i = 2 + next(i for (i, line) in enumerate(lines) if "mainContentOfPage" in line)
+ lines[i] = '"text": ""}'
+ data = '\n'.join(lines)
+ data = json.loads(data)[0]
+
properties = {
- # store name is after the pipe, e.g. Fast Food Drive-Thru
- # Restaurants | Arby's 8437
- 'ref': data["name"].rsplit("|", 1)[-1].strip(),
+ 'ref': response.css('div.store-id::text').get().split(': ')[-1],
'name': data["name"],
'addr_full': data["address"]["streetAddress"].strip(),
'city': data["address"]["addressLocality"].strip(),
'state': data["address"]["addressRegion"],
'postcode': data["address"]["postalCode"],
- 'phone': data.get("telephone", None),
+ 'phone': data["address"]["telephone"],
'lat': float(data["geo"]["latitude"]),
'lon': float(data["geo"]["longitude"]),
'website': response.url,
| {"golden_diff": "diff --git a/locations/spiders/arbys.py b/locations/spiders/arbys.py\n--- a/locations/spiders/arbys.py\n+++ b/locations/spiders/arbys.py\n@@ -12,24 +12,31 @@\n allowed_domains = [\"locations.arbys.com\"]\n download_delay = 0.2\n start_urls = (\n- 'https://locations.arbys.com/browse/',\n+ 'https://locations.arbys.com/',\n )\n \n def get_store_info(self, response):\n data = response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first()\n if data:\n- data = json.loads(data)[0]\n+ try:\n+ data = json.loads(data)[0]\n+ except json.JSONDecodeError:\n+ # Unescaped \" on two pages\n+ lines = data.split('\\n')\n+ i = 2 + next(i for (i, line) in enumerate(lines) if \"mainContentOfPage\" in line)\n+ lines[i] = '\"text\": \"\"}'\n+ data = '\\n'.join(lines)\n+ data = json.loads(data)[0]\n+\n \n properties = {\n- # store name is after the pipe, e.g. Fast Food Drive-Thru\n- # Restaurants | Arby's 8437\n- 'ref': data[\"name\"].rsplit(\"|\", 1)[-1].strip(),\n+ 'ref': response.css('div.store-id::text').get().split(': ')[-1],\n 'name': data[\"name\"],\n 'addr_full': data[\"address\"][\"streetAddress\"].strip(),\n 'city': data[\"address\"][\"addressLocality\"].strip(),\n 'state': data[\"address\"][\"addressRegion\"],\n 'postcode': data[\"address\"][\"postalCode\"],\n- 'phone': data.get(\"telephone\", None),\n+ 'phone': data[\"address\"][\"telephone\"],\n 'lat': float(data[\"geo\"][\"latitude\"]),\n 'lon': float(data[\"geo\"][\"longitude\"]),\n 'website': response.url,\n", "issue": "Spider arby is broken\nDuring the global build at 2021-05-26-14-42-23, spider **arby** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/arby.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/arby.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/arby.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport re\nimport scrapy\nfrom locations.items import GeojsonPointItem\n\n\nclass ArbysSpider(scrapy.Spider):\n\n name = \"arby\"\n item_attributes = { 'brand': \"Arby's\", 'brand_wikidata': \"Q630866\" }\n allowed_domains = [\"locations.arbys.com\"]\n download_delay = 0.2\n start_urls = (\n 'https://locations.arbys.com/browse/',\n )\n\n def get_store_info(self, response):\n data = response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first()\n if data:\n data = json.loads(data)[0]\n\n properties = {\n # store name is after the pipe, e.g. 
Fast Food Drive-Thru\n # Restaurants | Arby's 8437\n 'ref': data[\"name\"].rsplit(\"|\", 1)[-1].strip(),\n 'name': data[\"name\"],\n 'addr_full': data[\"address\"][\"streetAddress\"].strip(),\n 'city': data[\"address\"][\"addressLocality\"].strip(),\n 'state': data[\"address\"][\"addressRegion\"],\n 'postcode': data[\"address\"][\"postalCode\"],\n 'phone': data.get(\"telephone\", None),\n 'lat': float(data[\"geo\"][\"latitude\"]),\n 'lon': float(data[\"geo\"][\"longitude\"]),\n 'website': response.url,\n 'opening_hours': data[\"openingHours\"],\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse_store(self, response):\n city_stores = response.xpath('//a[@class=\"location-name ga-link\"]/@href').extract()\n for city_store in city_stores:\n yield scrapy.Request(\n response.urljoin(city_store),\n callback=self.get_store_info\n )\n\n def parse_state(self, response):\n\n cities = response.xpath('//a[@class=\"ga-link\"]/@href').extract()\n for city in cities:\n yield scrapy.Request(\n response.urljoin(city),\n callback=self.parse_store\n )\n\n def parse(self, response):\n states = response.xpath('//a[@class=\"ga-link\"]/@href').extract()\n\n for state in states:\n yield scrapy.Request(\n response.urljoin(state),\n callback=self.parse_state\n )\n", "path": "locations/spiders/arbys.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport re\nimport scrapy\nfrom locations.items import GeojsonPointItem\n\n\nclass ArbysSpider(scrapy.Spider):\n\n name = \"arby\"\n item_attributes = { 'brand': \"Arby's\", 'brand_wikidata': \"Q630866\" }\n allowed_domains = [\"locations.arbys.com\"]\n download_delay = 0.2\n start_urls = (\n 'https://locations.arbys.com/',\n )\n\n def get_store_info(self, response):\n data = response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first()\n if data:\n try:\n data = json.loads(data)[0]\n except json.JSONDecodeError:\n # Unescaped \" on two pages\n lines = data.split('\\n')\n i = 2 + next(i for (i, line) in enumerate(lines) if \"mainContentOfPage\" in line)\n lines[i] = '\"text\": \"\"}'\n data = '\\n'.join(lines)\n data = json.loads(data)[0]\n\n\n properties = {\n 'ref': response.css('div.store-id::text').get().split(': ')[-1],\n 'name': data[\"name\"],\n 'addr_full': data[\"address\"][\"streetAddress\"].strip(),\n 'city': data[\"address\"][\"addressLocality\"].strip(),\n 'state': data[\"address\"][\"addressRegion\"],\n 'postcode': data[\"address\"][\"postalCode\"],\n 'phone': data[\"address\"][\"telephone\"],\n 'lat': float(data[\"geo\"][\"latitude\"]),\n 'lon': float(data[\"geo\"][\"longitude\"]),\n 'website': response.url,\n 'opening_hours': data[\"openingHours\"],\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse_store(self, response):\n city_stores = response.xpath('//a[@class=\"location-name ga-link\"]/@href').extract()\n for city_store in city_stores:\n yield scrapy.Request(\n response.urljoin(city_store),\n callback=self.get_store_info\n )\n\n def parse_state(self, response):\n\n cities = response.xpath('//a[@class=\"ga-link\"]/@href').extract()\n for city in cities:\n yield scrapy.Request(\n response.urljoin(city),\n callback=self.parse_store\n )\n\n def parse(self, response):\n states = response.xpath('//a[@class=\"ga-link\"]/@href').extract()\n\n for state in states:\n yield scrapy.Request(\n response.urljoin(state),\n callback=self.parse_state\n )\n", "path": "locations/spiders/arbys.py"}]} | 1,062 | 446 |
gh_patches_debug_33993 | rasdani/github-patches | git_diff | mozilla__kitsune-3175 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve performance of _get_creator_counts util function
`kitsune.community.utils._get_creator_counts` util function is DB-heavy and takes a lot of time to execute. Evaluate its usefulness and provide a way to optimize the query and/or cache the results. 
This issue is related to the degraded performance SUMO experienced on Fri March 30th ([NR Error](https://rpm.newrelic.com/accounts/1299394/applications/45097089/downtime/34422892))
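For illustration only — a minimal sketch of one possible mitigation, not necessarily the change adopted in this repository — the per-user `user.profile` attribute access inside the loop issues one extra query per row; assuming a standard Django one-to-one `Profile` relation, `select_related` folds that into the initial query, and only the requested page needs to be evaluated:
```python
from datetime import datetime

def _get_creator_counts_sketch(query, count, page):
    # Hypothetical variant: JOIN the profile table up front so the loop
    # below does not trigger one additional query per user.
    total = query.count()
    start, end = (page - 1) * count, page * count
    now = datetime.now()
    results = []
    for user in query.select_related('profile')[start:end]:
        last = user.profile.last_contribution_date
        results.append({
            'count': user.query_count,
            'term': user.id,
            'user': {
                'id': user.id,
                'username': user.username,
                'last_contribution_date': last,
                'days_since_last_activity': (now - last) if last else None,
            },
        })
    return results, total
```
The callers already cache their final results for 15 minutes, so any deeper fix mostly needs to cut the query count on the cold path.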
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kitsune/community/utils.py`
Content:
```
1 import hashlib
2
3 from datetime import datetime, date, timedelta
4 from django.conf import settings
5 from django.core.cache import cache
6 from django.db.models import Count, F
7
8 from kitsune.products.models import Product
9 from kitsune.questions.models import Answer
10 from kitsune.users.models import User
11 from kitsune.users.templatetags.jinja_helpers import profile_avatar
12 from kitsune.wiki.models import Revision
13
14
15 def top_contributors_questions(start=None, end=None, locale=None, product=None,
16 count=10, page=1, use_cache=True):
17 """Get the top Support Forum contributors."""
18 if use_cache:
19 cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)
20 cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()
21 cache_key = 'top_contributors_questions_{}'.format(cache_key)
22 cached = cache.get(cache_key, None)
23 if cached:
24 return cached
25
26 answers = (Answer.objects
27 .exclude(is_spam=True)
28 .exclude(question__is_spam=True)
29 # Adding answer to your own question, isn't a contribution.
30 .exclude(creator_id=F('question__creator_id')))
31
32 if start is None:
33 # By default we go back 90 days.
34 start = date.today() - timedelta(days=90)
35 answers = answers.filter(created__gte=start)
36 if end:
37 # If no end is specified, we don't need to filter by it.
38 answers = answers.filter(created__lt=end)
39 if locale:
40 answers = answers.filter(question__locale=locale)
41 if product:
42 if isinstance(product, Product):
43 product = product.slug
44 answers = answers.filter(question__product__slug=product)
45
46 users = (User.objects
47 .filter(answers__in=answers)
48 .annotate(query_count=Count('answers'))
49 .order_by('-query_count'))
50 counts = _get_creator_counts(users, count, page)
51
52 if use_cache:
53 cache.set(cache_key, counts, 60*15) # 15 minutes
54 return counts
55
56
57 def top_contributors_kb(start=None, end=None, product=None, count=10, page=1, use_cache=True):
58 """Get the top KB editors (locale='en-US')."""
59 return top_contributors_l10n(
60 start, end, settings.WIKI_DEFAULT_LANGUAGE, product, count, use_cache)
61
62
63 def top_contributors_l10n(start=None, end=None, locale=None, product=None,
64 count=10, page=1, use_cache=True):
65 """Get the top l10n contributors for the KB."""
66 if use_cache:
67 cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)
68 cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()
69 cache_key = u'top_contributors_l10n_{}'.format(cache_key)
70 cached = cache.get(cache_key, None)
71 if cached:
72 return cached
73
74 # Get the user ids and contribution count of the top contributors.
75 revisions = Revision.objects.all()
76 if locale is None:
77 # If there is no locale specified, exclude en-US only. The rest are
78 # l10n.
79 revisions = revisions.exclude(document__locale=settings.WIKI_DEFAULT_LANGUAGE)
80 if start is None:
81 # By default we go back 90 days.
82 start = date.today() - timedelta(days=90)
83 revisions = revisions.filter(created__gte=start)
84 if end:
85 # If no end is specified, we don't need to filter by it.
86 revisions = revisions.filter(created__lt=end)
87 if locale:
88 revisions = revisions.filter(document__locale=locale)
89 if product:
90 if isinstance(product, Product):
91 product = product.slug
92 revisions = revisions.filter(document__products__slug=product)
93
94 users = (User.objects
95 .filter(created_revisions__in=revisions)
96 .annotate(query_count=Count('created_revisions'))
97 .order_by('-query_count'))
98 counts = _get_creator_counts(users, count, page)
99
100 if use_cache:
101 cache.set(cache_key, counts, 60*15) # 15 minutes
102 return counts
103
104
105 def top_contributors_aoa(start=None, end=None, locale=None, count=10, page=1, use_cache=True):
106 """Get the top Army of Awesome contributors."""
107 # AoA is deprecated, return 0 until we remove all related code.
108 return ([], 0)
109
110
111 def _get_creator_counts(query, count, page):
112 total = query.count()
113 results = []
114 now = datetime.now()
115 for user in query[((page - 1) * count):(page * count)]:
116 last_contribution_date = user.profile.last_contribution_date
117 days_since_last_activity = None
118 if last_contribution_date:
119 days_since_last_activity = now - last_contribution_date
120
121 data = {
122 'count': user.query_count,
123 'term': user.id,
124 'user': {
125 'id': user.id,
126 'username': user.username,
127 'display_name': user.profile.display_name,
128 'avatar': profile_avatar(user, size=120),
129 'twitter_usernames': user.profile.twitter_usernames,
130 'last_contribution_date': last_contribution_date,
131 'days_since_last_activity': days_since_last_activity,
132 }
133 }
134 results.append(data)
135
136 return (results, total)
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kitsune/community/utils.py b/kitsune/community/utils.py
--- a/kitsune/community/utils.py
+++ b/kitsune/community/utils.py
@@ -7,8 +7,7 @@
from kitsune.products.models import Product
from kitsune.questions.models import Answer
-from kitsune.users.models import User
-from kitsune.users.templatetags.jinja_helpers import profile_avatar
+from kitsune.users.models import User, UserMappingType
from kitsune.wiki.models import Revision
@@ -110,27 +109,36 @@
def _get_creator_counts(query, count, page):
total = query.count()
+
+ start = (page - 1) * count
+ end = page * count
+ query_data = query.values('id', 'query_count')[start:end]
+
+ query_data = {obj['id']: obj['query_count'] for obj in query_data}
+
+ users_data = (UserMappingType.search().filter(id__in=query_data.keys())
+ .values_dict('id', 'username', 'display_name',
+ 'avatar', 'twitter_usernames',
+ 'last_contribution_date')[:count])
+
+ users_data = UserMappingType.reshape(users_data)
+
results = []
now = datetime.now()
- for user in query[((page - 1) * count):(page * count)]:
- last_contribution_date = user.profile.last_contribution_date
- days_since_last_activity = None
- if last_contribution_date:
- days_since_last_activity = now - last_contribution_date
+
+ for u_data in users_data:
+ user_id = u_data.get('id')
+ last_contribution_date = u_data.get('last_contribution_date', None)
+
+ u_data['days_since_last_activity'] = ((now - last_contribution_date).days
+ if last_contribution_date else None)
data = {
- 'count': user.query_count,
- 'term': user.id,
- 'user': {
- 'id': user.id,
- 'username': user.username,
- 'display_name': user.profile.display_name,
- 'avatar': profile_avatar(user, size=120),
- 'twitter_usernames': user.profile.twitter_usernames,
- 'last_contribution_date': last_contribution_date,
- 'days_since_last_activity': days_since_last_activity,
- }
+ 'count': query_data.get(user_id),
+ 'term': user_id,
+ 'user': u_data
}
+
results.append(data)
- return (results, total)
+ return results, total
| {"golden_diff": "diff --git a/kitsune/community/utils.py b/kitsune/community/utils.py\n--- a/kitsune/community/utils.py\n+++ b/kitsune/community/utils.py\n@@ -7,8 +7,7 @@\n \n from kitsune.products.models import Product\n from kitsune.questions.models import Answer\n-from kitsune.users.models import User\n-from kitsune.users.templatetags.jinja_helpers import profile_avatar\n+from kitsune.users.models import User, UserMappingType\n from kitsune.wiki.models import Revision\n \n \n@@ -110,27 +109,36 @@\n \n def _get_creator_counts(query, count, page):\n total = query.count()\n+\n+ start = (page - 1) * count\n+ end = page * count\n+ query_data = query.values('id', 'query_count')[start:end]\n+\n+ query_data = {obj['id']: obj['query_count'] for obj in query_data}\n+\n+ users_data = (UserMappingType.search().filter(id__in=query_data.keys())\n+ .values_dict('id', 'username', 'display_name',\n+ 'avatar', 'twitter_usernames',\n+ 'last_contribution_date')[:count])\n+\n+ users_data = UserMappingType.reshape(users_data)\n+\n results = []\n now = datetime.now()\n- for user in query[((page - 1) * count):(page * count)]:\n- last_contribution_date = user.profile.last_contribution_date\n- days_since_last_activity = None\n- if last_contribution_date:\n- days_since_last_activity = now - last_contribution_date\n+\n+ for u_data in users_data:\n+ user_id = u_data.get('id')\n+ last_contribution_date = u_data.get('last_contribution_date', None)\n+\n+ u_data['days_since_last_activity'] = ((now - last_contribution_date).days\n+ if last_contribution_date else None)\n \n data = {\n- 'count': user.query_count,\n- 'term': user.id,\n- 'user': {\n- 'id': user.id,\n- 'username': user.username,\n- 'display_name': user.profile.display_name,\n- 'avatar': profile_avatar(user, size=120),\n- 'twitter_usernames': user.profile.twitter_usernames,\n- 'last_contribution_date': last_contribution_date,\n- 'days_since_last_activity': days_since_last_activity,\n- }\n+ 'count': query_data.get(user_id),\n+ 'term': user_id,\n+ 'user': u_data\n }\n+\n results.append(data)\n \n- return (results, total)\n+ return results, total\n", "issue": "Improve performance of _get_creator_counts util function\n`kitsune.community.utils._get_creator_counts` until function is DB heavy and takes a lot of time to execute. Evaluate its usefulness and provide a way to optimize the query and/or cache the results. 
\r\n\r\nThis issue is related to the degraded performance SUMO experienced on Fri March 30th ([NR Error](https://rpm.newrelic.com/accounts/1299394/applications/45097089/downtime/34422892))\n", "before_files": [{"content": "import hashlib\n\nfrom datetime import datetime, date, timedelta\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.db.models import Count, F\n\nfrom kitsune.products.models import Product\nfrom kitsune.questions.models import Answer\nfrom kitsune.users.models import User\nfrom kitsune.users.templatetags.jinja_helpers import profile_avatar\nfrom kitsune.wiki.models import Revision\n\n\ndef top_contributors_questions(start=None, end=None, locale=None, product=None,\n count=10, page=1, use_cache=True):\n \"\"\"Get the top Support Forum contributors.\"\"\"\n if use_cache:\n cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)\n cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()\n cache_key = 'top_contributors_questions_{}'.format(cache_key)\n cached = cache.get(cache_key, None)\n if cached:\n return cached\n\n answers = (Answer.objects\n .exclude(is_spam=True)\n .exclude(question__is_spam=True)\n # Adding answer to your own question, isn't a contribution.\n .exclude(creator_id=F('question__creator_id')))\n\n if start is None:\n # By default we go back 90 days.\n start = date.today() - timedelta(days=90)\n answers = answers.filter(created__gte=start)\n if end:\n # If no end is specified, we don't need to filter by it.\n answers = answers.filter(created__lt=end)\n if locale:\n answers = answers.filter(question__locale=locale)\n if product:\n if isinstance(product, Product):\n product = product.slug\n answers = answers.filter(question__product__slug=product)\n\n users = (User.objects\n .filter(answers__in=answers)\n .annotate(query_count=Count('answers'))\n .order_by('-query_count'))\n counts = _get_creator_counts(users, count, page)\n\n if use_cache:\n cache.set(cache_key, counts, 60*15) # 15 minutes\n return counts\n\n\ndef top_contributors_kb(start=None, end=None, product=None, count=10, page=1, use_cache=True):\n \"\"\"Get the top KB editors (locale='en-US').\"\"\"\n return top_contributors_l10n(\n start, end, settings.WIKI_DEFAULT_LANGUAGE, product, count, use_cache)\n\n\ndef top_contributors_l10n(start=None, end=None, locale=None, product=None,\n count=10, page=1, use_cache=True):\n \"\"\"Get the top l10n contributors for the KB.\"\"\"\n if use_cache:\n cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)\n cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()\n cache_key = u'top_contributors_l10n_{}'.format(cache_key)\n cached = cache.get(cache_key, None)\n if cached:\n return cached\n\n # Get the user ids and contribution count of the top contributors.\n revisions = Revision.objects.all()\n if locale is None:\n # If there is no locale specified, exclude en-US only. 
The rest are\n # l10n.\n revisions = revisions.exclude(document__locale=settings.WIKI_DEFAULT_LANGUAGE)\n if start is None:\n # By default we go back 90 days.\n start = date.today() - timedelta(days=90)\n revisions = revisions.filter(created__gte=start)\n if end:\n # If no end is specified, we don't need to filter by it.\n revisions = revisions.filter(created__lt=end)\n if locale:\n revisions = revisions.filter(document__locale=locale)\n if product:\n if isinstance(product, Product):\n product = product.slug\n revisions = revisions.filter(document__products__slug=product)\n\n users = (User.objects\n .filter(created_revisions__in=revisions)\n .annotate(query_count=Count('created_revisions'))\n .order_by('-query_count'))\n counts = _get_creator_counts(users, count, page)\n\n if use_cache:\n cache.set(cache_key, counts, 60*15) # 15 minutes\n return counts\n\n\ndef top_contributors_aoa(start=None, end=None, locale=None, count=10, page=1, use_cache=True):\n \"\"\"Get the top Army of Awesome contributors.\"\"\"\n # AoA is deprecated, return 0 until we remove all related code.\n return ([], 0)\n\n\ndef _get_creator_counts(query, count, page):\n total = query.count()\n results = []\n now = datetime.now()\n for user in query[((page - 1) * count):(page * count)]:\n last_contribution_date = user.profile.last_contribution_date\n days_since_last_activity = None\n if last_contribution_date:\n days_since_last_activity = now - last_contribution_date\n\n data = {\n 'count': user.query_count,\n 'term': user.id,\n 'user': {\n 'id': user.id,\n 'username': user.username,\n 'display_name': user.profile.display_name,\n 'avatar': profile_avatar(user, size=120),\n 'twitter_usernames': user.profile.twitter_usernames,\n 'last_contribution_date': last_contribution_date,\n 'days_since_last_activity': days_since_last_activity,\n }\n }\n results.append(data)\n\n return (results, total)\n", "path": "kitsune/community/utils.py"}], "after_files": [{"content": "import hashlib\n\nfrom datetime import datetime, date, timedelta\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.db.models import Count, F\n\nfrom kitsune.products.models import Product\nfrom kitsune.questions.models import Answer\nfrom kitsune.users.models import User, UserMappingType\nfrom kitsune.wiki.models import Revision\n\n\ndef top_contributors_questions(start=None, end=None, locale=None, product=None,\n count=10, page=1, use_cache=True):\n \"\"\"Get the top Support Forum contributors.\"\"\"\n if use_cache:\n cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)\n cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()\n cache_key = 'top_contributors_questions_{}'.format(cache_key)\n cached = cache.get(cache_key, None)\n if cached:\n return cached\n\n answers = (Answer.objects\n .exclude(is_spam=True)\n .exclude(question__is_spam=True)\n # Adding answer to your own question, isn't a contribution.\n .exclude(creator_id=F('question__creator_id')))\n\n if start is None:\n # By default we go back 90 days.\n start = date.today() - timedelta(days=90)\n answers = answers.filter(created__gte=start)\n if end:\n # If no end is specified, we don't need to filter by it.\n answers = answers.filter(created__lt=end)\n if locale:\n answers = answers.filter(question__locale=locale)\n if product:\n if isinstance(product, Product):\n product = product.slug\n answers = answers.filter(question__product__slug=product)\n\n users = (User.objects\n .filter(answers__in=answers)\n 
.annotate(query_count=Count('answers'))\n .order_by('-query_count'))\n counts = _get_creator_counts(users, count, page)\n\n if use_cache:\n cache.set(cache_key, counts, 60*15) # 15 minutes\n return counts\n\n\ndef top_contributors_kb(start=None, end=None, product=None, count=10, page=1, use_cache=True):\n \"\"\"Get the top KB editors (locale='en-US').\"\"\"\n return top_contributors_l10n(\n start, end, settings.WIKI_DEFAULT_LANGUAGE, product, count, use_cache)\n\n\ndef top_contributors_l10n(start=None, end=None, locale=None, product=None,\n count=10, page=1, use_cache=True):\n \"\"\"Get the top l10n contributors for the KB.\"\"\"\n if use_cache:\n cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)\n cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()\n cache_key = u'top_contributors_l10n_{}'.format(cache_key)\n cached = cache.get(cache_key, None)\n if cached:\n return cached\n\n # Get the user ids and contribution count of the top contributors.\n revisions = Revision.objects.all()\n if locale is None:\n # If there is no locale specified, exclude en-US only. The rest are\n # l10n.\n revisions = revisions.exclude(document__locale=settings.WIKI_DEFAULT_LANGUAGE)\n if start is None:\n # By default we go back 90 days.\n start = date.today() - timedelta(days=90)\n revisions = revisions.filter(created__gte=start)\n if end:\n # If no end is specified, we don't need to filter by it.\n revisions = revisions.filter(created__lt=end)\n if locale:\n revisions = revisions.filter(document__locale=locale)\n if product:\n if isinstance(product, Product):\n product = product.slug\n revisions = revisions.filter(document__products__slug=product)\n\n users = (User.objects\n .filter(created_revisions__in=revisions)\n .annotate(query_count=Count('created_revisions'))\n .order_by('-query_count'))\n counts = _get_creator_counts(users, count, page)\n\n if use_cache:\n cache.set(cache_key, counts, 60*15) # 15 minutes\n return counts\n\n\ndef top_contributors_aoa(start=None, end=None, locale=None, count=10, page=1, use_cache=True):\n \"\"\"Get the top Army of Awesome contributors.\"\"\"\n # AoA is deprecated, return 0 until we remove all related code.\n return ([], 0)\n\n\ndef _get_creator_counts(query, count, page):\n total = query.count()\n\n start = (page - 1) * count\n end = page * count\n query_data = query.values('id', 'query_count')[start:end]\n\n query_data = {obj['id']: obj['query_count'] for obj in query_data}\n\n users_data = (UserMappingType.search().filter(id__in=query_data.keys())\n .values_dict('id', 'username', 'display_name',\n 'avatar', 'twitter_usernames',\n 'last_contribution_date')[:count])\n\n users_data = UserMappingType.reshape(users_data)\n\n results = []\n now = datetime.now()\n\n for u_data in users_data:\n user_id = u_data.get('id')\n last_contribution_date = u_data.get('last_contribution_date', None)\n\n u_data['days_since_last_activity'] = ((now - last_contribution_date).days\n if last_contribution_date else None)\n\n data = {\n 'count': query_data.get(user_id),\n 'term': user_id,\n 'user': u_data\n }\n\n results.append(data)\n\n return results, total\n", "path": "kitsune/community/utils.py"}]} | 1,881 | 584 |
gh_patches_debug_55589 | rasdani/github-patches | git_diff | wagtail__wagtail-8800 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
patternlibrary_override_tags breaks all non-development installations
#8665 added a `patternlibrary_override_tags` tag library to wagtail.admin, which depends on the django-pattern-library package listed in our testing_extras dependencies. However, this tag library will be loaded on all environments that have `wagtail.admin` in their INSTALLED_APPS, so any Wagtail installation that doesn't include testing_extras is currently broken.
As a non-frontend dev, I don't know what this tag library is for, and there's no documentation for me to find out. Deleting it and uninstalling django-pattern-library doesn't break any tests, so it seems to me that it doesn't really belong in the testing dependencies. (If it really is a testing dependency, I'd suggest that moving patternlibrary_override_tags.py into one of the test apps in wagtail/test/ would let it do whatever it's meant to do without breaking non-testing environments.)
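For illustration — a hypothetical sketch, not the change ultimately made here — the usual way to keep an optional, dev-only dependency from breaking production imports is to guard it:
```python
# Hypothetical guard: only register the override when the optional
# django-pattern-library package is actually installed.
try:
    from pattern_library.monkey_utils import override_tag
except ImportError:  # pattern-library is a testing-only extra
    override_tag = None

if override_tag is not None:
    from wagtail.admin.templatetags.wagtailadmin_tags import register
    override_tag(register, name="test_page_is_public")
```
That said, moving the module into a test-only app, as suggested above, avoids the conditional entirely.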
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/admin/templatetags/patternlibrary_override_tags.py`
Content:
```
1 from pattern_library.monkey_utils import override_tag
2
3 from wagtail.admin.templatetags.wagtailadmin_tags import register
4
5 override_tag(register, name="test_page_is_public")
6
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/admin/templatetags/patternlibrary_override_tags.py b/wagtail/admin/templatetags/patternlibrary_override_tags.py
deleted file mode 100644
--- a/wagtail/admin/templatetags/patternlibrary_override_tags.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from pattern_library.monkey_utils import override_tag
-
-from wagtail.admin.templatetags.wagtailadmin_tags import register
-
-override_tag(register, name="test_page_is_public")
| {"golden_diff": "diff --git a/wagtail/admin/templatetags/patternlibrary_override_tags.py b/wagtail/admin/templatetags/patternlibrary_override_tags.py\ndeleted file mode 100644\n--- a/wagtail/admin/templatetags/patternlibrary_override_tags.py\n+++ /dev/null\n@@ -1,5 +0,0 @@\n-from pattern_library.monkey_utils import override_tag\n-\n-from wagtail.admin.templatetags.wagtailadmin_tags import register\n-\n-override_tag(register, name=\"test_page_is_public\")\n", "issue": "patternlibrary_override_tags breaks all non-development installations\n#8665 added a `patternlibrary_override_tags` tag library to wagtail.admin, which depends on the django-pattern-library package listed in our testing_extras dependencies. However, this tag library will be loaded on all environments that have `wagtail.admin` in their INSTALLED_APPS, so any Wagtail installation that doesn't include testing_extras is currently broken.\r\n\r\nAs a non-frontend dev, I don't know what this tag library is for, and there's no documentation for me to find out. Deleting it and uninstalling django-pattern-library doesn't break any tests, so it seems to me that it doesn't really belong in the testing dependencies. (If it really is a testing dependency, I'd suggest that moving patternlibrary_override_tags.py into one of the test apps in wagtail/test/ would let it do whatever it's meant to do without breaking non-testing environments.)\n", "before_files": [{"content": "from pattern_library.monkey_utils import override_tag\n\nfrom wagtail.admin.templatetags.wagtailadmin_tags import register\n\noverride_tag(register, name=\"test_page_is_public\")\n", "path": "wagtail/admin/templatetags/patternlibrary_override_tags.py"}], "after_files": [{"content": null, "path": "wagtail/admin/templatetags/patternlibrary_override_tags.py"}]} | 507 | 121 |
gh_patches_debug_17682 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-3868 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mypy invalid syntax error with python 2
### Which version of dd-trace-py are you using?
v1.2.0
### Which version of pip are you using?
22.1.2
### Which version of the libraries are you using?
```
ddtrace==1.2.0
mypy[python2]==0.961
```
### How can we reproduce your problem?
1. Clone https://gist.github.com/majorgreys/720201c4bdd33131e4c231207ef716ce
2. Run `docker run --rm -it (docker build .)`
### What is the result that you get?
```
usr/local/lib/python3.9/site-packages/ddtrace/contrib/redis/asyncio_patch.py:12: error: invalid syntax
Found 1 error in 1 file (errors prevented further checking)
```
### What is the result that you expected?
```
Success: no issues found in 1 source file
```
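For context — an inference from the traceback, not something stated in the report: in Python 2 mode, mypy parses every module it can follow, so a module containing Python 3-only `async def` syntax fails to parse even when its import sits behind a runtime version check that mypy cannot see through. A sketch of a guard using `six` (whose `PY2`/`PY3` names mypy's reachability analysis recognizes by name, as far as I can tell):
```python
# Hypothetical sketch: six.PY3 is treated as a version check by mypy's
# reachability analysis, so the import below is skipped entirely when
# type-checking as Python 2.
from six import PY3
import redis

if PY3 and redis.VERSION >= (4, 2, 0):
    from redis.asyncio.client import Redis  # Python 3-only module tree
```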
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/redis/patch.py`
Content:
```
1 import redis
2
3 from ddtrace import config
4 from ddtrace.vendor import wrapt
5
6 from ...internal.utils.formats import stringify_cache_args
7 from ...pin import Pin
8 from ..trace_utils import unwrap
9 from .util import _trace_redis_cmd
10 from .util import _trace_redis_execute_pipeline
11
12
13 config._add("redis", dict(_default_service="redis"))
14
15
16 def patch():
17 """Patch the instrumented methods
18
19 This duplicated doesn't look nice. The nicer alternative is to use an ObjectProxy on top
20 of Redis and StrictRedis. However, it means that any "import redis.Redis" won't be instrumented.
21 """
22 if getattr(redis, "_datadog_patch", False):
23 return
24 setattr(redis, "_datadog_patch", True)
25
26 _w = wrapt.wrap_function_wrapper
27
28 if redis.VERSION < (3, 0, 0):
29 _w("redis", "StrictRedis.execute_command", traced_execute_command)
30 _w("redis", "StrictRedis.pipeline", traced_pipeline)
31 _w("redis", "Redis.pipeline", traced_pipeline)
32 _w("redis.client", "BasePipeline.execute", traced_execute_pipeline)
33 _w("redis.client", "BasePipeline.immediate_execute_command", traced_execute_command)
34 else:
35 _w("redis", "Redis.execute_command", traced_execute_command)
36 _w("redis", "Redis.pipeline", traced_pipeline)
37 _w("redis.client", "Pipeline.execute", traced_execute_pipeline)
38 _w("redis.client", "Pipeline.immediate_execute_command", traced_execute_command)
39 if redis.VERSION >= (4, 2, 0):
40 from .asyncio_patch import traced_async_execute_command
41 from .asyncio_patch import traced_async_execute_pipeline
42
43 _w("redis.asyncio.client", "Redis.execute_command", traced_async_execute_command)
44 _w("redis.asyncio.client", "Redis.pipeline", traced_pipeline)
45 _w("redis.asyncio.client", "Pipeline.execute", traced_async_execute_pipeline)
46 _w("redis.asyncio.client", "Pipeline.immediate_execute_command", traced_async_execute_command)
47 Pin(service=None).onto(redis.asyncio.Redis)
48 Pin(service=None).onto(redis.StrictRedis)
49
50
51 def unpatch():
52 if getattr(redis, "_datadog_patch", False):
53 setattr(redis, "_datadog_patch", False)
54
55 if redis.VERSION < (3, 0, 0):
56 unwrap(redis.StrictRedis, "execute_command")
57 unwrap(redis.StrictRedis, "pipeline")
58 unwrap(redis.Redis, "pipeline")
59 unwrap(redis.client.BasePipeline, "execute")
60 unwrap(redis.client.BasePipeline, "immediate_execute_command")
61 else:
62 unwrap(redis.Redis, "execute_command")
63 unwrap(redis.Redis, "pipeline")
64 unwrap(redis.client.Pipeline, "execute")
65 unwrap(redis.client.Pipeline, "immediate_execute_command")
66 if redis.VERSION >= (4, 2, 0):
67 unwrap(redis.asyncio.client.Redis, "execute_command")
68 unwrap(redis.asyncio.client.Redis, "pipeline")
69 unwrap(redis.asyncio.client.Pipeline, "execute")
70 unwrap(redis.asyncio.client.Pipeline, "immediate_execute_command")
71
72
73 #
74 # tracing functions
75 #
76 def traced_execute_command(func, instance, args, kwargs):
77 pin = Pin.get_from(instance)
78 if not pin or not pin.enabled():
79 return func(*args, **kwargs)
80
81 with _trace_redis_cmd(pin, config.redis, instance, args):
82 return func(*args, **kwargs)
83
84
85 def traced_pipeline(func, instance, args, kwargs):
86 pipeline = func(*args, **kwargs)
87 pin = Pin.get_from(instance)
88 if pin:
89 pin.onto(pipeline)
90 return pipeline
91
92
93 def traced_execute_pipeline(func, instance, args, kwargs):
94 pin = Pin.get_from(instance)
95 if not pin or not pin.enabled():
96 return func(*args, **kwargs)
97
98 cmds = [stringify_cache_args(c) for c, _ in instance.command_stack]
99 resource = "\n".join(cmds)
100 with _trace_redis_execute_pipeline(pin, config.redis, resource, instance):
101 return func(*args, **kwargs)
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/contrib/redis/patch.py b/ddtrace/contrib/redis/patch.py
--- a/ddtrace/contrib/redis/patch.py
+++ b/ddtrace/contrib/redis/patch.py
@@ -1,4 +1,5 @@
import redis
+from six import PY3
from ddtrace import config
from ddtrace.vendor import wrapt
@@ -36,7 +37,8 @@
_w("redis", "Redis.pipeline", traced_pipeline)
_w("redis.client", "Pipeline.execute", traced_execute_pipeline)
_w("redis.client", "Pipeline.immediate_execute_command", traced_execute_command)
- if redis.VERSION >= (4, 2, 0):
+ # Avoid mypy invalid syntax errors when parsing Python 2 files
+ if PY3 and redis.VERSION >= (4, 2, 0):
from .asyncio_patch import traced_async_execute_command
from .asyncio_patch import traced_async_execute_pipeline
| {"golden_diff": "diff --git a/ddtrace/contrib/redis/patch.py b/ddtrace/contrib/redis/patch.py\n--- a/ddtrace/contrib/redis/patch.py\n+++ b/ddtrace/contrib/redis/patch.py\n@@ -1,4 +1,5 @@\n import redis\n+from six import PY3\n \n from ddtrace import config\n from ddtrace.vendor import wrapt\n@@ -36,7 +37,8 @@\n _w(\"redis\", \"Redis.pipeline\", traced_pipeline)\n _w(\"redis.client\", \"Pipeline.execute\", traced_execute_pipeline)\n _w(\"redis.client\", \"Pipeline.immediate_execute_command\", traced_execute_command)\n- if redis.VERSION >= (4, 2, 0):\n+ # Avoid mypy invalid syntax errors when parsing Python 2 files\n+ if PY3 and redis.VERSION >= (4, 2, 0):\n from .asyncio_patch import traced_async_execute_command\n from .asyncio_patch import traced_async_execute_pipeline\n", "issue": "mypy invalid syntax error with python 2\n### Which version of dd-trace-py are you using?\r\n\r\nv1.2.0\r\n\r\n### Which version of pip are you using?\r\n\r\n22.1.2\r\n\r\n### Which version of the libraries are you using?\r\n\r\n```\r\nddtrace==1.2.0\r\nmypy[python2]==0.961\r\n```\r\n\r\n### How can we reproduce your problem?\r\n\r\n1. Clone https://gist.github.com/majorgreys/720201c4bdd33131e4c231207ef716ce\r\n2. Run `docker run --rm -it (docker build .)`\r\n\r\n### What is the result that you get?\r\n\r\n```\r\nusr/local/lib/python3.9/site-packages/ddtrace/contrib/redis/asyncio_patch.py:12: error: invalid syntax\r\nFound 1 error in 1 file (errors prevented further checking)\r\n```\r\n\r\n### What is the result that you expected?\r\n\r\n```\r\nSuccess: no issues found in 1 source file\r\n```\r\n\n", "before_files": [{"content": "import redis\n\nfrom ddtrace import config\nfrom ddtrace.vendor import wrapt\n\nfrom ...internal.utils.formats import stringify_cache_args\nfrom ...pin import Pin\nfrom ..trace_utils import unwrap\nfrom .util import _trace_redis_cmd\nfrom .util import _trace_redis_execute_pipeline\n\n\nconfig._add(\"redis\", dict(_default_service=\"redis\"))\n\n\ndef patch():\n \"\"\"Patch the instrumented methods\n\n This duplicated doesn't look nice. The nicer alternative is to use an ObjectProxy on top\n of Redis and StrictRedis. 
However, it means that any \"import redis.Redis\" won't be instrumented.\n \"\"\"\n if getattr(redis, \"_datadog_patch\", False):\n return\n setattr(redis, \"_datadog_patch\", True)\n\n _w = wrapt.wrap_function_wrapper\n\n if redis.VERSION < (3, 0, 0):\n _w(\"redis\", \"StrictRedis.execute_command\", traced_execute_command)\n _w(\"redis\", \"StrictRedis.pipeline\", traced_pipeline)\n _w(\"redis\", \"Redis.pipeline\", traced_pipeline)\n _w(\"redis.client\", \"BasePipeline.execute\", traced_execute_pipeline)\n _w(\"redis.client\", \"BasePipeline.immediate_execute_command\", traced_execute_command)\n else:\n _w(\"redis\", \"Redis.execute_command\", traced_execute_command)\n _w(\"redis\", \"Redis.pipeline\", traced_pipeline)\n _w(\"redis.client\", \"Pipeline.execute\", traced_execute_pipeline)\n _w(\"redis.client\", \"Pipeline.immediate_execute_command\", traced_execute_command)\n if redis.VERSION >= (4, 2, 0):\n from .asyncio_patch import traced_async_execute_command\n from .asyncio_patch import traced_async_execute_pipeline\n\n _w(\"redis.asyncio.client\", \"Redis.execute_command\", traced_async_execute_command)\n _w(\"redis.asyncio.client\", \"Redis.pipeline\", traced_pipeline)\n _w(\"redis.asyncio.client\", \"Pipeline.execute\", traced_async_execute_pipeline)\n _w(\"redis.asyncio.client\", \"Pipeline.immediate_execute_command\", traced_async_execute_command)\n Pin(service=None).onto(redis.asyncio.Redis)\n Pin(service=None).onto(redis.StrictRedis)\n\n\ndef unpatch():\n if getattr(redis, \"_datadog_patch\", False):\n setattr(redis, \"_datadog_patch\", False)\n\n if redis.VERSION < (3, 0, 0):\n unwrap(redis.StrictRedis, \"execute_command\")\n unwrap(redis.StrictRedis, \"pipeline\")\n unwrap(redis.Redis, \"pipeline\")\n unwrap(redis.client.BasePipeline, \"execute\")\n unwrap(redis.client.BasePipeline, \"immediate_execute_command\")\n else:\n unwrap(redis.Redis, \"execute_command\")\n unwrap(redis.Redis, \"pipeline\")\n unwrap(redis.client.Pipeline, \"execute\")\n unwrap(redis.client.Pipeline, \"immediate_execute_command\")\n if redis.VERSION >= (4, 2, 0):\n unwrap(redis.asyncio.client.Redis, \"execute_command\")\n unwrap(redis.asyncio.client.Redis, \"pipeline\")\n unwrap(redis.asyncio.client.Pipeline, \"execute\")\n unwrap(redis.asyncio.client.Pipeline, \"immediate_execute_command\")\n\n\n#\n# tracing functions\n#\ndef traced_execute_command(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return func(*args, **kwargs)\n\n with _trace_redis_cmd(pin, config.redis, instance, args):\n return func(*args, **kwargs)\n\n\ndef traced_pipeline(func, instance, args, kwargs):\n pipeline = func(*args, **kwargs)\n pin = Pin.get_from(instance)\n if pin:\n pin.onto(pipeline)\n return pipeline\n\n\ndef traced_execute_pipeline(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return func(*args, **kwargs)\n\n cmds = [stringify_cache_args(c) for c, _ in instance.command_stack]\n resource = \"\\n\".join(cmds)\n with _trace_redis_execute_pipeline(pin, config.redis, resource, instance):\n return func(*args, **kwargs)\n", "path": "ddtrace/contrib/redis/patch.py"}], "after_files": [{"content": "import redis\nfrom six import PY3\n\nfrom ddtrace import config\nfrom ddtrace.vendor import wrapt\n\nfrom ...internal.utils.formats import stringify_cache_args\nfrom ...pin import Pin\nfrom ..trace_utils import unwrap\nfrom .util import _trace_redis_cmd\nfrom .util import _trace_redis_execute_pipeline\n\n\nconfig._add(\"redis\", 
dict(_default_service=\"redis\"))\n\n\ndef patch():\n \"\"\"Patch the instrumented methods\n\n This duplicated doesn't look nice. The nicer alternative is to use an ObjectProxy on top\n of Redis and StrictRedis. However, it means that any \"import redis.Redis\" won't be instrumented.\n \"\"\"\n if getattr(redis, \"_datadog_patch\", False):\n return\n setattr(redis, \"_datadog_patch\", True)\n\n _w = wrapt.wrap_function_wrapper\n\n if redis.VERSION < (3, 0, 0):\n _w(\"redis\", \"StrictRedis.execute_command\", traced_execute_command)\n _w(\"redis\", \"StrictRedis.pipeline\", traced_pipeline)\n _w(\"redis\", \"Redis.pipeline\", traced_pipeline)\n _w(\"redis.client\", \"BasePipeline.execute\", traced_execute_pipeline)\n _w(\"redis.client\", \"BasePipeline.immediate_execute_command\", traced_execute_command)\n else:\n _w(\"redis\", \"Redis.execute_command\", traced_execute_command)\n _w(\"redis\", \"Redis.pipeline\", traced_pipeline)\n _w(\"redis.client\", \"Pipeline.execute\", traced_execute_pipeline)\n _w(\"redis.client\", \"Pipeline.immediate_execute_command\", traced_execute_command)\n # Avoid mypy invalid syntax errors when parsing Python 2 files\n if PY3 and redis.VERSION >= (4, 2, 0):\n from .asyncio_patch import traced_async_execute_command\n from .asyncio_patch import traced_async_execute_pipeline\n\n _w(\"redis.asyncio.client\", \"Redis.execute_command\", traced_async_execute_command)\n _w(\"redis.asyncio.client\", \"Redis.pipeline\", traced_pipeline)\n _w(\"redis.asyncio.client\", \"Pipeline.execute\", traced_async_execute_pipeline)\n _w(\"redis.asyncio.client\", \"Pipeline.immediate_execute_command\", traced_async_execute_command)\n Pin(service=None).onto(redis.asyncio.Redis)\n Pin(service=None).onto(redis.StrictRedis)\n\n\ndef unpatch():\n if getattr(redis, \"_datadog_patch\", False):\n setattr(redis, \"_datadog_patch\", False)\n\n if redis.VERSION < (3, 0, 0):\n unwrap(redis.StrictRedis, \"execute_command\")\n unwrap(redis.StrictRedis, \"pipeline\")\n unwrap(redis.Redis, \"pipeline\")\n unwrap(redis.client.BasePipeline, \"execute\")\n unwrap(redis.client.BasePipeline, \"immediate_execute_command\")\n else:\n unwrap(redis.Redis, \"execute_command\")\n unwrap(redis.Redis, \"pipeline\")\n unwrap(redis.client.Pipeline, \"execute\")\n unwrap(redis.client.Pipeline, \"immediate_execute_command\")\n if redis.VERSION >= (4, 2, 0):\n unwrap(redis.asyncio.client.Redis, \"execute_command\")\n unwrap(redis.asyncio.client.Redis, \"pipeline\")\n unwrap(redis.asyncio.client.Pipeline, \"execute\")\n unwrap(redis.asyncio.client.Pipeline, \"immediate_execute_command\")\n\n\n#\n# tracing functions\n#\ndef traced_execute_command(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return func(*args, **kwargs)\n\n with _trace_redis_cmd(pin, config.redis, instance, args):\n return func(*args, **kwargs)\n\n\ndef traced_pipeline(func, instance, args, kwargs):\n pipeline = func(*args, **kwargs)\n pin = Pin.get_from(instance)\n if pin:\n pin.onto(pipeline)\n return pipeline\n\n\ndef traced_execute_pipeline(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return func(*args, **kwargs)\n\n cmds = [stringify_cache_args(c) for c, _ in instance.command_stack]\n resource = \"\\n\".join(cmds)\n with _trace_redis_execute_pipeline(pin, config.redis, resource, instance):\n return func(*args, **kwargs)\n", "path": "ddtrace/contrib/redis/patch.py"}]} | 1,578 | 213 |
gh_patches_debug_14674 | rasdani/github-patches | git_diff | beeware__toga-1626 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Android: `paths.app` doesn't work in event handlers
**Describe the bug**
On Android, `paths.app` returns `/` when called outside of the `startup` method, e.g. in an event handler. This is because the `__main__` module has finished importing, and it no longer exists in `sys.modules` under that name.
This doesn't happen on Windows, because it runs the entire main loop within the top level of the `__main__` module. The other desktop platforms are probably the same.
But on Android, the main loop is already running and we have to integrate into it, so importing the `__main__` module does not block. From inspection of the code, it looks like iOS is similar, so it might have the same problem.
The fallback to `/` makes no sense on Android anyway, so it should be removed.
Reported by @kakos-k9 on Discord.
**Test app**
```python
import toga
from toga.style import Pack
from toga.style.pack import COLUMN, ROW
class Hello(toga.App):
def startup(self):
button = toga.Button("Print", on_press=self.on_press)
self.on_press(button)
self.main_window = toga.MainWindow()
self.main_window.content = toga.Box(
style=Pack(direction=COLUMN),
children=[button]
)
self.main_window.show()
def on_press(self, button):
import traceback
for name in ["app", "data", "toga"]:
print(f"{name}={getattr(self.paths, name)}")
def main():
return Hello()
```
**Environment:**
- Operating System: Android
- Python version: 3.8
- Software versions:
- Toga: 9a292f8e26
--- END ISSUE ---
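A minimal sketch of the failure mode described in the issue above — caching the location while `__main__` is still importing sidesteps the later lookup (names are illustrative, not Toga internals):
```python
import sys
from pathlib import Path

# At import time the running module is still registered as "__main__",
# so this lookup succeeds:
APP_DIR = Path(sys.modules["__main__"].__file__).parent

def on_press(widget):
    # Repeating the sys.modules["__main__"] lookup here can fail on Android
    # once the import has finished (per the report), so return the cached value.
    return APP_DIR
```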
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/android/toga_android/paths.py`
Content:
```
1 import sys
2 from pathlib import Path
3
4 import toga
5 from toga import App
6
7
8 class Paths:
9 # Allow instantiating Path object via the factory
10 Path = Path
11
12 @property
13 def __context(self):
14 return App.app._impl.native.getApplicationContext()
15
16 @property
17 def app(self):
18 try:
19 return Path(sys.modules["__main__"].__file__).parent
20 except KeyError:
21 # If we're running in test conditions,
22 # there is no __main__ module.
23 return Path.cwd()
24 except AttributeError:
25 # If we're running at an interactive prompt,
26 # the __main__ module isn't file-based.
27 return Path.cwd()
28
29 @property
30 def data(self):
31 return Path(self.__context.getFilesDir().getPath())
32
33 @property
34 def cache(self):
35 return Path(self.__context.getCacheDir().getPath())
36
37 @property
38 def logs(self):
39 return self.data
40
41 @property
42 def toga(self):
43 """Return a path to a Toga resources
44 """
45 return Path(toga.__file__).parent
46
47
48 paths = Paths()
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/android/toga_android/paths.py b/src/android/toga_android/paths.py
--- a/src/android/toga_android/paths.py
+++ b/src/android/toga_android/paths.py
@@ -13,18 +13,13 @@
def __context(self):
return App.app._impl.native.getApplicationContext()
+ def __init__(self):
+ # On Android, __main__ only exists during app startup, so cache its location now.
+ self._app = Path(sys.modules["__main__"].__file__).parent
+
@property
def app(self):
- try:
- return Path(sys.modules["__main__"].__file__).parent
- except KeyError:
- # If we're running in test conditions,
- # there is no __main__ module.
- return Path.cwd()
- except AttributeError:
- # If we're running at an interactive prompt,
- # the __main__ module isn't file-based.
- return Path.cwd()
+ return self._app
@property
def data(self):
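With the patch above, the app directory is resolved once, when `Paths` is constructed during startup, so later calls from event handlers return a stable value — e.g. (assuming the patched class):
```python
paths = Paths()   # caches Path(sys.modules["__main__"].__file__).parent immediately
paths.app         # same result during startup and in any later event handler
```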
| {"golden_diff": "diff --git a/src/android/toga_android/paths.py b/src/android/toga_android/paths.py\n--- a/src/android/toga_android/paths.py\n+++ b/src/android/toga_android/paths.py\n@@ -13,18 +13,13 @@\n def __context(self):\n return App.app._impl.native.getApplicationContext()\n \n+ def __init__(self):\n+ # On Android, __main__ only exists during app startup, so cache its location now.\n+ self._app = Path(sys.modules[\"__main__\"].__file__).parent\n+\n @property\n def app(self):\n- try:\n- return Path(sys.modules[\"__main__\"].__file__).parent\n- except KeyError:\n- # If we're running in test conditions,\n- # there is no __main__ module.\n- return Path.cwd()\n- except AttributeError:\n- # If we're running at an interactive prompt,\n- # the __main__ module isn't file-based.\n- return Path.cwd()\n+ return self._app\n \n @property\n def data(self):\n", "issue": "Android: `paths.app` doesn't work in event handlers\n**Describe the bug**\r\n\r\nOn Android, `paths.app` returns `/` when called outside of the `startup` method, e.g. in an event handler. This is because the `__main__` module has finished importing, and it no longer exists in `sys.modules` under that name.\r\n\r\nThis doesn't happen on Windows, because it runs the entire main loop within the top level of the `__main__` module. The other desktop platforms are probably the same.\r\n\r\nBut on Android, the main loop is already running and we have to integrate into it, so importing the `__main__` module does not block. From inspection of the code, it looks like iOS is similar, so it might have the same problem.\r\n\r\nThe fallback to `/` makes no sense on Android anyway, so it should be removed.\r\n\r\nReported by @kakos-k9 on Discord.\r\n\r\n**Test app**\r\n```python\r\nimport toga\r\nfrom toga.style import Pack\r\nfrom toga.style.pack import COLUMN, ROW\r\n\r\nclass Hello(toga.App):\r\n\r\n def startup(self):\r\n button = toga.Button(\"Print\", on_press=self.on_press)\r\n self.on_press(button)\r\n\r\n self.main_window = toga.MainWindow()\r\n self.main_window.content = toga.Box(\r\n style=Pack(direction=COLUMN),\r\n children=[button]\r\n )\r\n self.main_window.show()\r\n\r\n def on_press(self, button):\r\n import traceback\r\n for name in [\"app\", \"data\", \"toga\"]:\r\n print(f\"{name}={getattr(self.paths, name)}\")\r\n\r\n\r\ndef main():\r\n return Hello()\r\n```\r\n\r\n**Environment:**\r\n - Operating System: Android\r\n - Python version: 3.8\r\n - Software versions:\r\n - Toga: 9a292f8e26\r\n\n", "before_files": [{"content": "import sys\nfrom pathlib import Path\n\nimport toga\nfrom toga import App\n\n\nclass Paths:\n # Allow instantiating Path object via the factory\n Path = Path\n\n @property\n def __context(self):\n return App.app._impl.native.getApplicationContext()\n\n @property\n def app(self):\n try:\n return Path(sys.modules[\"__main__\"].__file__).parent\n except KeyError:\n # If we're running in test conditions,\n # there is no __main__ module.\n return Path.cwd()\n except AttributeError:\n # If we're running at an interactive prompt,\n # the __main__ module isn't file-based.\n return Path.cwd()\n\n @property\n def data(self):\n return Path(self.__context.getFilesDir().getPath())\n\n @property\n def cache(self):\n return Path(self.__context.getCacheDir().getPath())\n\n @property\n def logs(self):\n return self.data\n\n @property\n def toga(self):\n \"\"\"Return a path to a Toga resources\n \"\"\"\n return Path(toga.__file__).parent\n\n\npaths = Paths()\n", "path": "src/android/toga_android/paths.py"}], "after_files": 
[{"content": "import sys\nfrom pathlib import Path\n\nimport toga\nfrom toga import App\n\n\nclass Paths:\n # Allow instantiating Path object via the factory\n Path = Path\n\n @property\n def __context(self):\n return App.app._impl.native.getApplicationContext()\n\n def __init__(self):\n # On Android, __main__ only exists during app startup, so cache its location now.\n self._app = Path(sys.modules[\"__main__\"].__file__).parent\n\n @property\n def app(self):\n return self._app\n\n @property\n def data(self):\n return Path(self.__context.getFilesDir().getPath())\n\n @property\n def cache(self):\n return Path(self.__context.getCacheDir().getPath())\n\n @property\n def logs(self):\n return self.data\n\n @property\n def toga(self):\n \"\"\"Return a path to a Toga resources\n \"\"\"\n return Path(toga.__file__).parent\n\n\npaths = Paths()\n", "path": "src/android/toga_android/paths.py"}]} | 986 | 238 |
gh_patches_debug_43832 | rasdani/github-patches | git_diff | weecology__retriever-401 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reserved keyword 'order' not being properly replaced
`clean_column_name()` in `table.py` should replace the column name `order` with `sporder`. It does not appear to be doing this in #286.
--- END ISSUE ---
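For context on why a rename like this can silently fail: the cleaning code applies keyword rewrites as substring replacements, which is fragile, whereas an exact-match lookup only touches whole column names (plain Python, for illustration):
```python
"recorder".replace("order", "sporder")            # -> 'recsporder' (corrupts the name)
"longitude".replace("long", "lon")                # -> 'lonitude'
{"order": "sporder"}.get("recorder", "recorder")  # -> 'recorder' (left intact)
```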
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/table.py`
Content:
```
1 from retriever.lib.cleanup import *
2 import csv
3 import StringIO
4
5
6 class Table:
7 """Information about a database table."""
8 def __init__(self, name, **kwargs):
9 self.name = name
10 self.pk = True
11 self.contains_pk = False
12 self.delimiter = None
13 self.header_rows = 1
14 self.column_names_row = 1
15 self.fixed_width = False
16 self.cleanup = Cleanup()
17 self.record_id = 0
18 self.columns = []
19 self.replace_columns = []
20 self.escape_single_quotes=True
21 self.escape_double_quotes=True
22 for key, item in kwargs.items():
23 setattr(self, key, item[0] if isinstance(item, tuple) else item)
24
25 def auto_get_columns(self, header):
26 """Gets the column names from the header row
27
28 Identifies the column names from the header row.
29 Replaces database keywords with alternatives.
30 Replaces special characters and spaces.
31
32 """
33 if self.fixed_width:
34 column_names = self.extract_values(header)
35 else:
36 # Get column names from header row
37 values = self.split_on_delimiter(header)
38 column_names = [name.strip() for name in values]
39
40 columns = map(lambda x: self.clean_column_name(x), column_names)
41 column_values = {x:[] for x in columns if x}
42
43 return [[x, None] for x in columns if x], column_values
44
45 def clean_column_name(self, column_name):
46 '''Makes sure a column name is formatted correctly by removing reserved
47 words, symbols, numbers, etc.'''
48 column_name = column_name.lower()
49 replace_columns = {old.lower(): new.lower()
50 for old, new in self.replace_columns}
51 column_name = replace_columns.get(column_name, column_name)
52 replace = [
53 ("%", "percent"),
54 ("&", "and"),
55 ("\xb0", "degrees"),
56 ("group", "grp"),
57 ("order", "sporder"),
58 ("check", "checked"),
59 ("references", "refs"),
60 ("long", "lon"),
61 ("date", "record_date"),
62 ("?", ""),
63 ]
64 replace += [(x, '') for x in (")", "\n", "\r", '"', "'")]
65 replace += [(x, '_') for x in (" ", "(", "/", ".", "-")]
66 column_name = reduce(lambda x, y: x.replace(*y), replace, column_name)
67
68 while "__" in column_name:
69 column_name = column_name.replace("__", "_")
70 column_name = column_name.lstrip("0123456789_").rstrip("_")
71
72 return column_name
73
74 def split_on_delimiter(self, line):
75 dialect = csv.excel
76 dialect.escapechar = "\\"
77 r = csv.reader([line], dialect=dialect, delimiter=self.delimiter)
78 return r.next()
79
80 def combine_on_delimiter(self, line_as_list):
81 """Combine a list of values into a line of csv data"""
82 dialect = csv.excel
83 dialect.escapechar = "\\"
84 writer_file = StringIO.StringIO()
85 writer = csv.writer(writer_file, dialect=dialect, delimiter=self.delimiter)
86 writer.writerow(line_as_list)
87 return writer_file.getvalue()
88
89 def values_from_line(self, line):
90 linevalues = []
91 if (self.pk and self.contains_pk == False):
92 column = 0
93 else:
94 column = -1
95
96 for value in self.extract_values(line):
97 column += 1
98 try:
99 this_column = self.columns[column][1][0]
100
101 # If data type is "skip" ignore the value
102 if this_column == "skip":
103 pass
104 elif this_column == "combine":
105 # If "combine" append value to end of previous column
106 linevalues[-1] += " " + value
107 else:
108 # Otherwise, add new value
109 linevalues.append(value)
110 except:
111 # too many values for columns; ignore
112 pass
113
114 return linevalues
115
116 def extract_values(self, line):
117 """Given a line of data, this function returns a list of the individual
118 data values."""
119 if self.fixed_width:
120 pos = 0
121 values = []
122 for width in self.fixed_width:
123 values.append(line[pos:pos+width].strip())
124 pos += width
125 return values
126 else:
127 return self.split_on_delimiter(line)
128
129 def get_insert_columns(self, join=True):
130 """Gets a set of column names for insert statements."""
131 columns = ""
132 for item in self.columns:
133 thistype = item[1][0]
134 if ((thistype != "skip") and (thistype !="combine") and
135 (self.contains_pk == True or thistype[0:3] != "pk-")):
136 columns += item[0] + ", "
137 columns = columns.rstrip(', ')
138 if join:
139 return columns
140 else:
141 return columns.lstrip("(").rstrip(")").split(", ")
142
143 def get_column_datatypes(self):
144 """Gets a set of column names for insert statements."""
145 columns = []
146 for item in self.get_insert_columns(False):
147 for column in self.columns:
148 if item == column[0]:
149 columns.append(column[1][0])
150 return columns
151
152
```
--- END FILES ---
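One observation before localizing: cleaning only runs inside `auto_get_columns`, so columns supplied directly to the `Table` constructor never pass through `clean_column_name` — which would let `order` leak into SQL unchanged (an assumption consistent with the diff below):
```python
t = Table("plots", columns=[("order", ("char",))])  # script-defined column, no header row
t.get_insert_columns()  # -> 'order': the reserved word is never rewritten before the patch
```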
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/table.py b/lib/table.py
--- a/lib/table.py
+++ b/lib/table.py
@@ -1,7 +1,8 @@
-from retriever.lib.cleanup import *
import csv
import StringIO
+from retriever.lib.cleanup import *
+
class Table:
"""Information about a database table."""
@@ -19,6 +20,7 @@
self.replace_columns = []
self.escape_single_quotes=True
self.escape_double_quotes=True
+ self.cleaned_columns = False
for key, item in kwargs.items():
setattr(self, key, item[0] if isinstance(item, tuple) else item)
@@ -39,36 +41,53 @@
columns = map(lambda x: self.clean_column_name(x), column_names)
column_values = {x:[] for x in columns if x}
-
+ self.cleaned_columns = True
return [[x, None] for x in columns if x], column_values
def clean_column_name(self, column_name):
- '''Makes sure a column name is formatted correctly by removing reserved
- words, symbols, numbers, etc.'''
- column_name = column_name.lower()
+ """Clean column names using the expected sql guidelines
+
+ remove leading whitespaces, replace sql key words, etc..
+ """
+ column_name = column_name.lower().strip()
replace_columns = {old.lower(): new.lower()
for old, new in self.replace_columns}
- column_name = replace_columns.get(column_name, column_name)
+
+ column_name = replace_columns.get(column_name, column_name).strip()
replace = [
- ("%", "percent"),
- ("&", "and"),
- ("\xb0", "degrees"),
- ("group", "grp"),
- ("order", "sporder"),
- ("check", "checked"),
- ("references", "refs"),
- ("long", "lon"),
- ("date", "record_date"),
- ("?", ""),
- ]
+ ("%", "percent"),
+ ("&", "and"),
+ ("\xb0", "degrees"),
+ ("?", ""),
+ ]
replace += [(x, '') for x in (")", "\n", "\r", '"', "'")]
- replace += [(x, '_') for x in (" ", "(", "/", ".", "-")]
+ replace += [(x, '_') for x in (" ", "(", "/", ".", "-", "*", ":")]
column_name = reduce(lambda x, y: x.replace(*y), replace, column_name)
-
while "__" in column_name:
column_name = column_name.replace("__", "_")
column_name = column_name.lstrip("0123456789_").rstrip("_")
-
+ replace_dict = {
+ "group": "grp",
+ "order": "ordered",
+ "check": "checked",
+ "references": "refs",
+ "long": "lon",
+ "column": "columns",
+ "cursor": "cursors",
+ "delete": "deleted",
+ "insert": "inserted",
+ "join": "joins",
+ "select": "selects",
+ "table": "tables",
+ "update": "updates",
+ "date": "record_date"
+ }
+ for x in (")", "\n", "\r", '"', "'"):
+ replace_dict[x] = ''
+ for x in (" ", "(", "/", ".", "-"):
+ replace_dict[x] = '_'
+ if column_name in replace_dict:
+ column_name = replace_dict[column_name]
return column_name
def split_on_delimiter(self, line):
@@ -129,6 +148,11 @@
def get_insert_columns(self, join=True):
"""Gets a set of column names for insert statements."""
columns = ""
+ if not self.cleaned_columns:
+ column_names = list(self.columns)
+ self.columns[:] = []
+ self.columns = [(self.clean_column_name(name[0]), name[1]) for name in column_names]
+ self.cleaned_columns = True
for item in self.columns:
thistype = item[1][0]
if ((thistype != "skip") and (thistype !="combine") and
@@ -148,4 +172,3 @@
if item == column[0]:
columns.append(column[1][0])
return columns
-
| {"golden_diff": "diff --git a/lib/table.py b/lib/table.py\n--- a/lib/table.py\n+++ b/lib/table.py\n@@ -1,7 +1,8 @@\n-from retriever.lib.cleanup import *\n import csv\n import StringIO\n \n+from retriever.lib.cleanup import *\n+\n \n class Table:\n \"\"\"Information about a database table.\"\"\"\n@@ -19,6 +20,7 @@\n self.replace_columns = []\n self.escape_single_quotes=True\n self.escape_double_quotes=True\n+ self.cleaned_columns = False\n for key, item in kwargs.items():\n setattr(self, key, item[0] if isinstance(item, tuple) else item)\n \n@@ -39,36 +41,53 @@\n \n columns = map(lambda x: self.clean_column_name(x), column_names)\n column_values = {x:[] for x in columns if x}\n-\n+ self.cleaned_columns = True\n return [[x, None] for x in columns if x], column_values\n \n def clean_column_name(self, column_name):\n- '''Makes sure a column name is formatted correctly by removing reserved\n- words, symbols, numbers, etc.'''\n- column_name = column_name.lower()\n+ \"\"\"Clean column names using the expected sql guidelines\n+\n+ remove leading whitespaces, replace sql key words, etc..\n+ \"\"\"\n+ column_name = column_name.lower().strip()\n replace_columns = {old.lower(): new.lower()\n for old, new in self.replace_columns}\n- column_name = replace_columns.get(column_name, column_name)\n+\n+ column_name = replace_columns.get(column_name, column_name).strip()\n replace = [\n- (\"%\", \"percent\"),\n- (\"&\", \"and\"),\n- (\"\\xb0\", \"degrees\"),\n- (\"group\", \"grp\"),\n- (\"order\", \"sporder\"),\n- (\"check\", \"checked\"),\n- (\"references\", \"refs\"),\n- (\"long\", \"lon\"),\n- (\"date\", \"record_date\"),\n- (\"?\", \"\"),\n- ]\n+ (\"%\", \"percent\"),\n+ (\"&\", \"and\"),\n+ (\"\\xb0\", \"degrees\"),\n+ (\"?\", \"\"),\n+ ]\n replace += [(x, '') for x in (\")\", \"\\n\", \"\\r\", '\"', \"'\")]\n- replace += [(x, '_') for x in (\" \", \"(\", \"/\", \".\", \"-\")]\n+ replace += [(x, '_') for x in (\" \", \"(\", \"/\", \".\", \"-\", \"*\", \":\")]\n column_name = reduce(lambda x, y: x.replace(*y), replace, column_name)\n-\n while \"__\" in column_name:\n column_name = column_name.replace(\"__\", \"_\")\n column_name = column_name.lstrip(\"0123456789_\").rstrip(\"_\")\n-\n+ replace_dict = {\n+ \"group\": \"grp\",\n+ \"order\": \"ordered\",\n+ \"check\": \"checked\",\n+ \"references\": \"refs\",\n+ \"long\": \"lon\",\n+ \"column\": \"columns\",\n+ \"cursor\": \"cursors\",\n+ \"delete\": \"deleted\",\n+ \"insert\": \"inserted\",\n+ \"join\": \"joins\",\n+ \"select\": \"selects\",\n+ \"table\": \"tables\",\n+ \"update\": \"updates\",\n+ \"date\": \"record_date\"\n+ }\n+ for x in (\")\", \"\\n\", \"\\r\", '\"', \"'\"):\n+ replace_dict[x] = ''\n+ for x in (\" \", \"(\", \"/\", \".\", \"-\"):\n+ replace_dict[x] = '_'\n+ if column_name in replace_dict:\n+ column_name = replace_dict[column_name]\n return column_name\n \n def split_on_delimiter(self, line):\n@@ -129,6 +148,11 @@\n def get_insert_columns(self, join=True):\n \"\"\"Gets a set of column names for insert statements.\"\"\"\n columns = \"\"\n+ if not self.cleaned_columns:\n+ column_names = list(self.columns)\n+ self.columns[:] = []\n+ self.columns = [(self.clean_column_name(name[0]), name[1]) for name in column_names]\n+ self.cleaned_columns = True\n for item in self.columns:\n thistype = item[1][0]\n if ((thistype != \"skip\") and (thistype !=\"combine\") and\n@@ -148,4 +172,3 @@\n if item == column[0]:\n columns.append(column[1][0])\n return columns\n-\n", "issue": "Reserved keyword 'order' not being properly 
replaced\n`clean_column_name()` in `table.py` should replace the column name `order` with `sporder`. It does not appear to be doing this in #286.\n\n", "before_files": [{"content": "from retriever.lib.cleanup import *\nimport csv\nimport StringIO\n\n\nclass Table:\n \"\"\"Information about a database table.\"\"\"\n def __init__(self, name, **kwargs):\n self.name = name\n self.pk = True\n self.contains_pk = False\n self.delimiter = None\n self.header_rows = 1\n self.column_names_row = 1\n self.fixed_width = False\n self.cleanup = Cleanup()\n self.record_id = 0\n self.columns = []\n self.replace_columns = []\n self.escape_single_quotes=True\n self.escape_double_quotes=True\n for key, item in kwargs.items():\n setattr(self, key, item[0] if isinstance(item, tuple) else item)\n\n def auto_get_columns(self, header):\n \"\"\"Gets the column names from the header row\n\n Identifies the column names from the header row.\n Replaces database keywords with alternatives.\n Replaces special characters and spaces.\n\n \"\"\"\n if self.fixed_width:\n column_names = self.extract_values(header)\n else:\n # Get column names from header row\n values = self.split_on_delimiter(header)\n column_names = [name.strip() for name in values]\n\n columns = map(lambda x: self.clean_column_name(x), column_names)\n column_values = {x:[] for x in columns if x}\n\n return [[x, None] for x in columns if x], column_values\n\n def clean_column_name(self, column_name):\n '''Makes sure a column name is formatted correctly by removing reserved\n words, symbols, numbers, etc.'''\n column_name = column_name.lower()\n replace_columns = {old.lower(): new.lower()\n for old, new in self.replace_columns}\n column_name = replace_columns.get(column_name, column_name)\n replace = [\n (\"%\", \"percent\"),\n (\"&\", \"and\"),\n (\"\\xb0\", \"degrees\"),\n (\"group\", \"grp\"),\n (\"order\", \"sporder\"),\n (\"check\", \"checked\"),\n (\"references\", \"refs\"),\n (\"long\", \"lon\"),\n (\"date\", \"record_date\"),\n (\"?\", \"\"),\n ]\n replace += [(x, '') for x in (\")\", \"\\n\", \"\\r\", '\"', \"'\")]\n replace += [(x, '_') for x in (\" \", \"(\", \"/\", \".\", \"-\")]\n column_name = reduce(lambda x, y: x.replace(*y), replace, column_name)\n\n while \"__\" in column_name:\n column_name = column_name.replace(\"__\", \"_\")\n column_name = column_name.lstrip(\"0123456789_\").rstrip(\"_\")\n\n return column_name\n\n def split_on_delimiter(self, line):\n dialect = csv.excel\n dialect.escapechar = \"\\\\\"\n r = csv.reader([line], dialect=dialect, delimiter=self.delimiter)\n return r.next()\n\n def combine_on_delimiter(self, line_as_list):\n \"\"\"Combine a list of values into a line of csv data\"\"\"\n dialect = csv.excel\n dialect.escapechar = \"\\\\\"\n writer_file = StringIO.StringIO()\n writer = csv.writer(writer_file, dialect=dialect, delimiter=self.delimiter)\n writer.writerow(line_as_list)\n return writer_file.getvalue()\n\n def values_from_line(self, line):\n linevalues = []\n if (self.pk and self.contains_pk == False):\n column = 0\n else:\n column = -1\n\n for value in self.extract_values(line):\n column += 1\n try:\n this_column = self.columns[column][1][0]\n\n # If data type is \"skip\" ignore the value\n if this_column == \"skip\":\n pass\n elif this_column == \"combine\":\n # If \"combine\" append value to end of previous column\n linevalues[-1] += \" \" + value\n else:\n # Otherwise, add new value\n linevalues.append(value)\n except:\n # too many values for columns; ignore\n pass\n\n return linevalues\n\n def 
extract_values(self, line):\n \"\"\"Given a line of data, this function returns a list of the individual\n data values.\"\"\"\n if self.fixed_width:\n pos = 0\n values = []\n for width in self.fixed_width:\n values.append(line[pos:pos+width].strip())\n pos += width\n return values\n else:\n return self.split_on_delimiter(line)\n\n def get_insert_columns(self, join=True):\n \"\"\"Gets a set of column names for insert statements.\"\"\"\n columns = \"\"\n for item in self.columns:\n thistype = item[1][0]\n if ((thistype != \"skip\") and (thistype !=\"combine\") and\n (self.contains_pk == True or thistype[0:3] != \"pk-\")):\n columns += item[0] + \", \"\n columns = columns.rstrip(', ')\n if join:\n return columns\n else:\n return columns.lstrip(\"(\").rstrip(\")\").split(\", \")\n\n def get_column_datatypes(self):\n \"\"\"Gets a set of column names for insert statements.\"\"\"\n columns = []\n for item in self.get_insert_columns(False):\n for column in self.columns:\n if item == column[0]:\n columns.append(column[1][0])\n return columns\n\n", "path": "lib/table.py"}], "after_files": [{"content": "import csv\nimport StringIO\n\nfrom retriever.lib.cleanup import *\n\n\nclass Table:\n \"\"\"Information about a database table.\"\"\"\n def __init__(self, name, **kwargs):\n self.name = name\n self.pk = True\n self.contains_pk = False\n self.delimiter = None\n self.header_rows = 1\n self.column_names_row = 1\n self.fixed_width = False\n self.cleanup = Cleanup()\n self.record_id = 0\n self.columns = []\n self.replace_columns = []\n self.escape_single_quotes=True\n self.escape_double_quotes=True\n self.cleaned_columns = False\n for key, item in kwargs.items():\n setattr(self, key, item[0] if isinstance(item, tuple) else item)\n\n def auto_get_columns(self, header):\n \"\"\"Gets the column names from the header row\n\n Identifies the column names from the header row.\n Replaces database keywords with alternatives.\n Replaces special characters and spaces.\n\n \"\"\"\n if self.fixed_width:\n column_names = self.extract_values(header)\n else:\n # Get column names from header row\n values = self.split_on_delimiter(header)\n column_names = [name.strip() for name in values]\n\n columns = map(lambda x: self.clean_column_name(x), column_names)\n column_values = {x:[] for x in columns if x}\n self.cleaned_columns = True\n return [[x, None] for x in columns if x], column_values\n\n def clean_column_name(self, column_name):\n \"\"\"Clean column names using the expected sql guidelines\n\n remove leading whitespaces, replace sql key words, etc..\n \"\"\"\n column_name = column_name.lower().strip()\n replace_columns = {old.lower(): new.lower()\n for old, new in self.replace_columns}\n\n column_name = replace_columns.get(column_name, column_name).strip()\n replace = [\n (\"%\", \"percent\"),\n (\"&\", \"and\"),\n (\"\\xb0\", \"degrees\"),\n (\"?\", \"\"),\n ]\n replace += [(x, '') for x in (\")\", \"\\n\", \"\\r\", '\"', \"'\")]\n replace += [(x, '_') for x in (\" \", \"(\", \"/\", \".\", \"-\", \"*\", \":\")]\n column_name = reduce(lambda x, y: x.replace(*y), replace, column_name)\n while \"__\" in column_name:\n column_name = column_name.replace(\"__\", \"_\")\n column_name = column_name.lstrip(\"0123456789_\").rstrip(\"_\")\n replace_dict = {\n \"group\": \"grp\",\n \"order\": \"ordered\",\n \"check\": \"checked\",\n \"references\": \"refs\",\n \"long\": \"lon\",\n \"column\": \"columns\",\n \"cursor\": \"cursors\",\n \"delete\": \"deleted\",\n \"insert\": \"inserted\",\n \"join\": \"joins\",\n \"select\": 
\"selects\",\n \"table\": \"tables\",\n \"update\": \"updates\",\n \"date\": \"record_date\"\n }\n for x in (\")\", \"\\n\", \"\\r\", '\"', \"'\"):\n replace_dict[x] = ''\n for x in (\" \", \"(\", \"/\", \".\", \"-\"):\n replace_dict[x] = '_'\n if column_name in replace_dict:\n column_name = replace_dict[column_name]\n return column_name\n\n def split_on_delimiter(self, line):\n dialect = csv.excel\n dialect.escapechar = \"\\\\\"\n r = csv.reader([line], dialect=dialect, delimiter=self.delimiter)\n return r.next()\n\n def combine_on_delimiter(self, line_as_list):\n \"\"\"Combine a list of values into a line of csv data\"\"\"\n dialect = csv.excel\n dialect.escapechar = \"\\\\\"\n writer_file = StringIO.StringIO()\n writer = csv.writer(writer_file, dialect=dialect, delimiter=self.delimiter)\n writer.writerow(line_as_list)\n return writer_file.getvalue()\n\n def values_from_line(self, line):\n linevalues = []\n if (self.pk and self.contains_pk == False):\n column = 0\n else:\n column = -1\n\n for value in self.extract_values(line):\n column += 1\n try:\n this_column = self.columns[column][1][0]\n\n # If data type is \"skip\" ignore the value\n if this_column == \"skip\":\n pass\n elif this_column == \"combine\":\n # If \"combine\" append value to end of previous column\n linevalues[-1] += \" \" + value\n else:\n # Otherwise, add new value\n linevalues.append(value)\n except:\n # too many values for columns; ignore\n pass\n\n return linevalues\n\n def extract_values(self, line):\n \"\"\"Given a line of data, this function returns a list of the individual\n data values.\"\"\"\n if self.fixed_width:\n pos = 0\n values = []\n for width in self.fixed_width:\n values.append(line[pos:pos+width].strip())\n pos += width\n return values\n else:\n return self.split_on_delimiter(line)\n\n def get_insert_columns(self, join=True):\n \"\"\"Gets a set of column names for insert statements.\"\"\"\n columns = \"\"\n if not self.cleaned_columns:\n column_names = list(self.columns)\n self.columns[:] = []\n self.columns = [(self.clean_column_name(name[0]), name[1]) for name in column_names]\n self.cleaned_columns = True\n for item in self.columns:\n thistype = item[1][0]\n if ((thistype != \"skip\") and (thistype !=\"combine\") and\n (self.contains_pk == True or thistype[0:3] != \"pk-\")):\n columns += item[0] + \", \"\n columns = columns.rstrip(', ')\n if join:\n return columns\n else:\n return columns.lstrip(\"(\").rstrip(\")\").split(\", \")\n\n def get_column_datatypes(self):\n \"\"\"Gets a set of column names for insert statements.\"\"\"\n columns = []\n for item in self.get_insert_columns(False):\n for column in self.columns:\n if item == column[0]:\n columns.append(column[1][0])\n return columns\n", "path": "lib/table.py"}]} | 1,800 | 974 |
gh_patches_debug_2178 | rasdani/github-patches | git_diff | rucio__rucio-1028 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move conveyor transfers to third_party_copy_operation
Motivation
----------
At the moment, `read` is used for the source and `write` for the destination; `third_party_copy` should be used for both.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/rucio/vcsversion.py`
Content:
```
1
2 '''
3 This file is automatically generated; Do not edit it. :)
4 '''
5 VERSION_INFO = {
6 'final': True,
7 'version': '1.15.3',
8 'branch_nick': 'patch-0-Rucio_1_15_3_preparation',
9 'revision_id': 'cd14416223d0b81a940312cb180a07778f85f1f8',
10 'revno': 6357
11 }
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/rucio/vcsversion.py b/lib/rucio/vcsversion.py
--- a/lib/rucio/vcsversion.py
+++ b/lib/rucio/vcsversion.py
@@ -4,8 +4,8 @@
'''
VERSION_INFO = {
'final': True,
- 'version': '1.15.3',
- 'branch_nick': 'patch-0-Rucio_1_15_3_preparation',
- 'revision_id': 'cd14416223d0b81a940312cb180a07778f85f1f8',
- 'revno': 6357
+ 'version': '1.15.4',
+ 'branch_nick': 'patch-0-Rucio_1_15_4_preparation',
+ 'revision_id': 'c2972be03297dc73a65d35bc9ffab1516efffebb',
+ 'revno': 6402
}
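The committed fix only advances the generated version metadata; a consumer of the module simply observes the new values (illustrative snippet, not from the Rucio tree):
```python
from rucio.vcsversion import VERSION_INFO

assert VERSION_INFO["version"] == "1.15.4"
assert VERSION_INFO["final"] is True
```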
| {"golden_diff": "diff --git a/lib/rucio/vcsversion.py b/lib/rucio/vcsversion.py\n--- a/lib/rucio/vcsversion.py\n+++ b/lib/rucio/vcsversion.py\n@@ -4,8 +4,8 @@\n '''\n VERSION_INFO = {\n 'final': True,\n- 'version': '1.15.3',\n- 'branch_nick': 'patch-0-Rucio_1_15_3_preparation',\n- 'revision_id': 'cd14416223d0b81a940312cb180a07778f85f1f8',\n- 'revno': 6357\n+ 'version': '1.15.4',\n+ 'branch_nick': 'patch-0-Rucio_1_15_4_preparation',\n+ 'revision_id': 'c2972be03297dc73a65d35bc9ffab1516efffebb',\n+ 'revno': 6402\n }\n", "issue": "Move conveyor transfers to third_party_copy_operation\nMotivation\r\n----------\r\nAt the moment read is used for source and write is used for destination, for both third_party_copy should be used.\n", "before_files": [{"content": "\n'''\nThis file is automatically generated; Do not edit it. :)\n'''\nVERSION_INFO = {\n 'final': True,\n 'version': '1.15.3',\n 'branch_nick': 'patch-0-Rucio_1_15_3_preparation',\n 'revision_id': 'cd14416223d0b81a940312cb180a07778f85f1f8',\n 'revno': 6357\n}\n", "path": "lib/rucio/vcsversion.py"}], "after_files": [{"content": "\n'''\nThis file is automatically generated; Do not edit it. :)\n'''\nVERSION_INFO = {\n 'final': True,\n 'version': '1.15.4',\n 'branch_nick': 'patch-0-Rucio_1_15_4_preparation',\n 'revision_id': 'c2972be03297dc73a65d35bc9ffab1516efffebb',\n 'revno': 6402\n}\n", "path": "lib/rucio/vcsversion.py"}]} | 430 | 247 |
gh_patches_debug_16727 | rasdani/github-patches | git_diff | learningequality__kolibri-6191 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
recipients column in coach tools is not accurate
### Observed behavior
recipients column always indicates assignment to the entire class, even when that's not true:

### Expected behavior
recipients column should be accurate
### User-facing consequences
confusion
### Errors and logs
none observed
### Steps to reproduce
create a lesson and assign it to a group
### Context
0.13.0 beta 3
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/core/lessons/viewsets.py`
Content:
```
1 import json
2 from functools import partial
3 from itertools import chain
4
5 from django.db import connection
6 from django.db.models import CharField
7 from django_filters.rest_framework import DjangoFilterBackend
8
9 from .serializers import LessonSerializer
10 from kolibri.core.api import ValuesViewset
11 from kolibri.core.auth.api import KolibriAuthPermissions
12 from kolibri.core.auth.api import KolibriAuthPermissionsFilter
13 from kolibri.core.lessons.models import Lesson
14 from kolibri.core.lessons.models import LessonAssignment
15 from kolibri.core.query import ArrayAgg
16 from kolibri.core.query import GroupConcat
17 from kolibri.core.query import process_uuid_aggregate
18
19
20 def _ensure_raw_dict(d):
21 if hasattr(d, "dict"):
22 d = d.dict()
23 return dict(d)
24
25
26 class LessonPermissions(KolibriAuthPermissions):
27 # Overrides the default validator to sanitize the Lesson POST Payload
28 # before validation
29 def validator(self, request, view, datum):
30 model = view.get_serializer_class().Meta.model
31 validated_data = view.get_serializer().to_internal_value(
32 _ensure_raw_dict(datum)
33 )
34 # Cannot have create assignments without creating the Lesson first,
35 # so this doesn't try to validate the Lesson with a non-empty lesson_assignments list
36 validated_data.pop("lesson_assignments")
37 return request.user.can_create(model, validated_data)
38
39
40 def _map_lesson_classroom(item):
41 return {
42 "id": item.pop("collection__id"),
43 "name": item.pop("collection__name"),
44 "parent": item.pop("collection__parent_id"),
45 }
46
47
48 class LessonViewset(ValuesViewset):
49 serializer_class = LessonSerializer
50 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)
51 filter_fields = ("collection",)
52 permission_classes = (LessonPermissions,)
53 queryset = Lesson.objects.all().order_by("-date_created")
54
55 values = (
56 "id",
57 "title",
58 "description",
59 "resources",
60 "is_active",
61 "collection", # classroom
62 "collection__id",
63 "collection__name",
64 "collection__parent_id",
65 "created_by",
66 "assignment_ids",
67 )
68
69 field_map = {
70 "classroom": _map_lesson_classroom,
71 "resources": lambda x: json.loads(x["resources"]),
72 "assignment_ids": partial(process_uuid_aggregate, key="assignment_ids"),
73 }
74
75 def consolidate(self, items):
76 assignment_ids = []
77 for item in items:
78 assignment_ids.extend(item["assignment_ids"])
79 assignments = LessonAssignment.objects.filter(id__in=assignment_ids)
80 if connection.vendor == "postgresql" and ArrayAgg is not None:
81 assignments = assignments.annotate(
82 learner_ids=ArrayAgg("collection__membership__user__id"),
83 )
84
85 def _process_item(item):
86 item["learner_ids"] = map(
87 lambda x: x.hex, filter(lambda x: x, item["learner_ids"])
88 )
89 return item
90
91 else:
92 assignments = assignments.values("id").annotate(
93 learner_ids=GroupConcat(
94 "collection__membership__user__id", output_field=CharField(),
95 ),
96 )
97
98 def _process_item(item):
99 item["learner_ids"] = (
100 item["learner_ids"].split(",") if item["learner_ids"] else []
101 )
102 return item
103
104 assignments = {
105 a["id"]: _process_item(a)
106 for a in assignments.values(
107 "id", "collection", "collection__kind", "learner_ids", "assigned_by"
108 )
109 }
110 for item in items:
111 item_ids = item.pop("assignment_ids")
112 item["lesson_assignments"] = [assignments[a] for a in item_ids]
113 item["learner_ids"] = list(
114 set(
115 chain.from_iterable(
116 a.pop("learner_ids") for a in item["lesson_assignments"]
117 )
118 )
119 )
120 return items
121
122 def annotate_queryset(self, queryset):
123 if connection.vendor == "postgresql" and ArrayAgg is not None:
124 return queryset.annotate(assignment_ids=ArrayAgg("lesson_assignments__id"),)
125 return queryset.values("id").annotate(
126 assignment_ids=GroupConcat(
127 "lesson_assignments__id", output_field=CharField(),
128 ),
129 )
130
```
--- END FILES ---
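Worth noticing before reading the patch: the assignment dicts take their keys verbatim from `.values(...)`, so each one carries `collection__kind` (double underscore). A frontend that reads `collection_kind` instead would miss the key and could fall back to a class-wide label — an assumption consistent with the diff below:
```python
assignment = {"id": "abc", "collection__kind": "learnergroup"}
assignment.get("collection_kind")  # -> None: the expected key is never populated
```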
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kolibri/core/lessons/viewsets.py b/kolibri/core/lessons/viewsets.py
--- a/kolibri/core/lessons/viewsets.py
+++ b/kolibri/core/lessons/viewsets.py
@@ -4,6 +4,7 @@
from django.db import connection
from django.db.models import CharField
+from django.db.models import F
from django_filters.rest_framework import DjangoFilterBackend
from .serializers import LessonSerializer
@@ -104,7 +105,11 @@
assignments = {
a["id"]: _process_item(a)
for a in assignments.values(
- "id", "collection", "collection__kind", "learner_ids", "assigned_by"
+ "id",
+ "collection",
+ "learner_ids",
+ "assigned_by",
+ collection_kind=F("collection__kind"),
)
}
for item in items:
| {"golden_diff": "diff --git a/kolibri/core/lessons/viewsets.py b/kolibri/core/lessons/viewsets.py\n--- a/kolibri/core/lessons/viewsets.py\n+++ b/kolibri/core/lessons/viewsets.py\n@@ -4,6 +4,7 @@\n \n from django.db import connection\n from django.db.models import CharField\n+from django.db.models import F\n from django_filters.rest_framework import DjangoFilterBackend\n \n from .serializers import LessonSerializer\n@@ -104,7 +105,11 @@\n assignments = {\n a[\"id\"]: _process_item(a)\n for a in assignments.values(\n- \"id\", \"collection\", \"collection__kind\", \"learner_ids\", \"assigned_by\"\n+ \"id\",\n+ \"collection\",\n+ \"learner_ids\",\n+ \"assigned_by\",\n+ collection_kind=F(\"collection__kind\"),\n )\n }\n for item in items:\n", "issue": "recipients column in coach tools in not accurate\n\r\n### Observed behavior\r\n\r\nrecipients column always indicates assignment to the entire class, even when that's not true:\r\n\r\n\r\n\r\n\r\n### Expected behavior\r\n\r\nrecipients column should be accurate\r\n\r\n### User-facing consequences\r\n\r\nconfusion\r\n\r\n### Errors and logs\r\n\r\nnone observed\r\n\r\n### Steps to reproduce\r\n\r\ncreate a lesson and assign it to a group\r\n\r\n### Context\r\n\r\n0.13.0 beta 3\r\n\n", "before_files": [{"content": "import json\nfrom functools import partial\nfrom itertools import chain\n\nfrom django.db import connection\nfrom django.db.models import CharField\nfrom django_filters.rest_framework import DjangoFilterBackend\n\nfrom .serializers import LessonSerializer\nfrom kolibri.core.api import ValuesViewset\nfrom kolibri.core.auth.api import KolibriAuthPermissions\nfrom kolibri.core.auth.api import KolibriAuthPermissionsFilter\nfrom kolibri.core.lessons.models import Lesson\nfrom kolibri.core.lessons.models import LessonAssignment\nfrom kolibri.core.query import ArrayAgg\nfrom kolibri.core.query import GroupConcat\nfrom kolibri.core.query import process_uuid_aggregate\n\n\ndef _ensure_raw_dict(d):\n if hasattr(d, \"dict\"):\n d = d.dict()\n return dict(d)\n\n\nclass LessonPermissions(KolibriAuthPermissions):\n # Overrides the default validator to sanitize the Lesson POST Payload\n # before validation\n def validator(self, request, view, datum):\n model = view.get_serializer_class().Meta.model\n validated_data = view.get_serializer().to_internal_value(\n _ensure_raw_dict(datum)\n )\n # Cannot have create assignments without creating the Lesson first,\n # so this doesn't try to validate the Lesson with a non-empty lesson_assignments list\n validated_data.pop(\"lesson_assignments\")\n return request.user.can_create(model, validated_data)\n\n\ndef _map_lesson_classroom(item):\n return {\n \"id\": item.pop(\"collection__id\"),\n \"name\": item.pop(\"collection__name\"),\n \"parent\": item.pop(\"collection__parent_id\"),\n }\n\n\nclass LessonViewset(ValuesViewset):\n serializer_class = LessonSerializer\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n filter_fields = (\"collection\",)\n permission_classes = (LessonPermissions,)\n queryset = Lesson.objects.all().order_by(\"-date_created\")\n\n values = (\n \"id\",\n \"title\",\n \"description\",\n \"resources\",\n \"is_active\",\n \"collection\", # classroom\n \"collection__id\",\n \"collection__name\",\n \"collection__parent_id\",\n \"created_by\",\n \"assignment_ids\",\n )\n\n field_map = {\n \"classroom\": _map_lesson_classroom,\n \"resources\": lambda x: json.loads(x[\"resources\"]),\n \"assignment_ids\": partial(process_uuid_aggregate, key=\"assignment_ids\"),\n }\n\n def 
consolidate(self, items):\n assignment_ids = []\n for item in items:\n assignment_ids.extend(item[\"assignment_ids\"])\n assignments = LessonAssignment.objects.filter(id__in=assignment_ids)\n if connection.vendor == \"postgresql\" and ArrayAgg is not None:\n assignments = assignments.annotate(\n learner_ids=ArrayAgg(\"collection__membership__user__id\"),\n )\n\n def _process_item(item):\n item[\"learner_ids\"] = map(\n lambda x: x.hex, filter(lambda x: x, item[\"learner_ids\"])\n )\n return item\n\n else:\n assignments = assignments.values(\"id\").annotate(\n learner_ids=GroupConcat(\n \"collection__membership__user__id\", output_field=CharField(),\n ),\n )\n\n def _process_item(item):\n item[\"learner_ids\"] = (\n item[\"learner_ids\"].split(\",\") if item[\"learner_ids\"] else []\n )\n return item\n\n assignments = {\n a[\"id\"]: _process_item(a)\n for a in assignments.values(\n \"id\", \"collection\", \"collection__kind\", \"learner_ids\", \"assigned_by\"\n )\n }\n for item in items:\n item_ids = item.pop(\"assignment_ids\")\n item[\"lesson_assignments\"] = [assignments[a] for a in item_ids]\n item[\"learner_ids\"] = list(\n set(\n chain.from_iterable(\n a.pop(\"learner_ids\") for a in item[\"lesson_assignments\"]\n )\n )\n )\n return items\n\n def annotate_queryset(self, queryset):\n if connection.vendor == \"postgresql\" and ArrayAgg is not None:\n return queryset.annotate(assignment_ids=ArrayAgg(\"lesson_assignments__id\"),)\n return queryset.values(\"id\").annotate(\n assignment_ids=GroupConcat(\n \"lesson_assignments__id\", output_field=CharField(),\n ),\n )\n", "path": "kolibri/core/lessons/viewsets.py"}], "after_files": [{"content": "import json\nfrom functools import partial\nfrom itertools import chain\n\nfrom django.db import connection\nfrom django.db.models import CharField\nfrom django.db.models import F\nfrom django_filters.rest_framework import DjangoFilterBackend\n\nfrom .serializers import LessonSerializer\nfrom kolibri.core.api import ValuesViewset\nfrom kolibri.core.auth.api import KolibriAuthPermissions\nfrom kolibri.core.auth.api import KolibriAuthPermissionsFilter\nfrom kolibri.core.lessons.models import Lesson\nfrom kolibri.core.lessons.models import LessonAssignment\nfrom kolibri.core.query import ArrayAgg\nfrom kolibri.core.query import GroupConcat\nfrom kolibri.core.query import process_uuid_aggregate\n\n\ndef _ensure_raw_dict(d):\n if hasattr(d, \"dict\"):\n d = d.dict()\n return dict(d)\n\n\nclass LessonPermissions(KolibriAuthPermissions):\n # Overrides the default validator to sanitize the Lesson POST Payload\n # before validation\n def validator(self, request, view, datum):\n model = view.get_serializer_class().Meta.model\n validated_data = view.get_serializer().to_internal_value(\n _ensure_raw_dict(datum)\n )\n # Cannot have create assignments without creating the Lesson first,\n # so this doesn't try to validate the Lesson with a non-empty lesson_assignments list\n validated_data.pop(\"lesson_assignments\")\n return request.user.can_create(model, validated_data)\n\n\ndef _map_lesson_classroom(item):\n return {\n \"id\": item.pop(\"collection__id\"),\n \"name\": item.pop(\"collection__name\"),\n \"parent\": item.pop(\"collection__parent_id\"),\n }\n\n\nclass LessonViewset(ValuesViewset):\n serializer_class = LessonSerializer\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n filter_fields = (\"collection\",)\n permission_classes = (LessonPermissions,)\n queryset = Lesson.objects.all().order_by(\"-date_created\")\n\n values = (\n 
\"id\",\n \"title\",\n \"description\",\n \"resources\",\n \"is_active\",\n \"collection\", # classroom\n \"collection__id\",\n \"collection__name\",\n \"collection__parent_id\",\n \"created_by\",\n \"assignment_ids\",\n )\n\n field_map = {\n \"classroom\": _map_lesson_classroom,\n \"resources\": lambda x: json.loads(x[\"resources\"]),\n \"assignment_ids\": partial(process_uuid_aggregate, key=\"assignment_ids\"),\n }\n\n def consolidate(self, items):\n assignment_ids = []\n for item in items:\n assignment_ids.extend(item[\"assignment_ids\"])\n assignments = LessonAssignment.objects.filter(id__in=assignment_ids)\n if connection.vendor == \"postgresql\" and ArrayAgg is not None:\n assignments = assignments.annotate(\n learner_ids=ArrayAgg(\"collection__membership__user__id\"),\n )\n\n def _process_item(item):\n item[\"learner_ids\"] = map(\n lambda x: x.hex, filter(lambda x: x, item[\"learner_ids\"])\n )\n return item\n\n else:\n assignments = assignments.values(\"id\").annotate(\n learner_ids=GroupConcat(\n \"collection__membership__user__id\", output_field=CharField(),\n ),\n )\n\n def _process_item(item):\n item[\"learner_ids\"] = (\n item[\"learner_ids\"].split(\",\") if item[\"learner_ids\"] else []\n )\n return item\n\n assignments = {\n a[\"id\"]: _process_item(a)\n for a in assignments.values(\n \"id\",\n \"collection\",\n \"learner_ids\",\n \"assigned_by\",\n collection_kind=F(\"collection__kind\"),\n )\n }\n for item in items:\n item_ids = item.pop(\"assignment_ids\")\n item[\"lesson_assignments\"] = [assignments[a] for a in item_ids]\n item[\"learner_ids\"] = list(\n set(\n chain.from_iterable(\n a.pop(\"learner_ids\") for a in item[\"lesson_assignments\"]\n )\n )\n )\n return items\n\n def annotate_queryset(self, queryset):\n if connection.vendor == \"postgresql\" and ArrayAgg is not None:\n return queryset.annotate(assignment_ids=ArrayAgg(\"lesson_assignments__id\"),)\n return queryset.values(\"id\").annotate(\n assignment_ids=GroupConcat(\n \"lesson_assignments__id\", output_field=CharField(),\n ),\n )\n", "path": "kolibri/core/lessons/viewsets.py"}]} | 1,614 | 198 |
gh_patches_debug_26403 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-2188 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set version for develop branch.
According to [PEP 440](https://www.python.org/dev/peps/pep-0440/#developmental-releases), we should set version `X.YaN.devM`.
--- END ISSUE ---
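For reference, `"develop"` is not a valid PEP 440 identifier, while a developmental pre-release such as `0.2.0rc3.dev0` sorts before the release it targets; with the `packaging` library (assumed available):
```python
from packaging.version import Version

Version("0.2.0rc3.dev0") < Version("0.2.0rc3")  # True: .devN precedes the rc it targets
Version("develop")  # raises InvalidVersion with current packaging releases
```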
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup_preprocessing.py`
Content:
```
1 # Copyright 2020 The ElasticDL Authors. All rights reserved.
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from setuptools import find_packages, setup
15
16 with open("elasticdl_preprocessing/requirements.txt") as f:
17 required_deps = f.read().splitlines()
18
19 extras = {}
20 with open("elasticdl_preprocessing/requirements-dev.txt") as f:
21 extras["develop"] = f.read().splitlines()
22
23 setup(
24 name="elasticdl_preprocessing",
25 version="develop",
26 description="A feature preprocessing library.",
27 long_description="This is an extension of the native Keras Preprocessing"
28 " Layers and Feature Column API from TensorFlow. We can develop our model"
29 " using the native high-level API from TensorFlow and our library."
30 " We can train this model using native TensorFlow or ElasticDL.",
31 long_description_content_type="text/markdown",
32 author="Ant Financial",
33 url="https://elasticdl.org",
34 install_requires=required_deps,
35 extras_require=extras,
36 python_requires=">=3.5",
37 packages=find_packages(
38 include=["elasticdl_preprocessing*"], exclude=["*test*"]
39 ),
40 package_data={"": ["requirements.txt"]},
41 )
42
```
Path: `setup.py`
Content:
```
1 # Copyright 2020 The ElasticDL Authors. All rights reserved.
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from setuptools import find_packages, setup
15
16 with open("elasticdl/requirements.txt") as f:
17 required_deps = f.read().splitlines()
18 required_deps.append("elasticdl_client")
19 required_deps.append("elasticdl_preprocessing")
20
21 extras = {}
22 with open("elasticdl/requirements-dev.txt") as f:
23 extras["develop"] = f.read().splitlines()
24
25 setup(
26 name="elasticdl",
27 version="develop",
28 description="A Kubernetes-native Deep Learning Framework",
29 long_description="ElasticDL is a Kubernetes-native deep learning framework"
30 " built on top of TensorFlow 2.0 that supports"
31 " fault-tolerance and elastic scheduling.",
32 long_description_content_type="text/markdown",
33 author="Ant Financial",
34 url="https://elasticdl.org",
35 install_requires=required_deps,
36 extras_require=extras,
37 python_requires=">=3.5",
38 packages=find_packages(
39 exclude=[
40 "*test*",
41 "elasticdl_client*",
42 "elasticdl_preprocessing*",
43 "model_zoo*",
44 ]
45 ),
46 package_data={
47 "": [
48 "proto/*.proto",
49 "docker/*",
50 "Makefile",
51 "requirements.txt",
52 "go/bin/elasticdl_ps",
53 "go/pkg/kernel/capi/*",
54 ]
55 },
56 )
57
```
Path: `setup_client.py`
Content:
```
1 # Copyright 2020 The ElasticDL Authors. All rights reserved.
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from setuptools import find_packages, setup
15
16 with open("elasticdl_client/requirements.txt") as f:
17 required_deps = f.read().splitlines()
18
19 setup(
20 name="elasticdl_client",
21 version="develop",
22 description="The client command line tool for ElasticDL.",
23 long_description="ElasticDL Client is the client command line tool for"
24 " ElasticDL. Users can use it to submit distributed ElasticDL jobs to"
25 " a Kubernetes cluster. It also provides an easy way to build and push"
26 " Docker images for distributed ElasticDL jobs.",
27 long_description_content_type="text/markdown",
28 author="Ant Financial",
29 url="https://elasticdl.org",
30 install_requires=required_deps,
31 python_requires=">=3.5",
32 packages=find_packages(include=["elasticdl_client*"]),
33 package_data={"": ["requirements.txt"]},
34 entry_points={"console_scripts": ["elasticdl=elasticdl_client.main:main"]},
35 )
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,7 +24,7 @@
setup(
name="elasticdl",
- version="develop",
+ version="0.2.0rc3.dev0",
description="A Kubernetes-native Deep Learning Framework",
long_description="ElasticDL is a Kubernetes-native deep learning framework"
" built on top of TensorFlow 2.0 that supports"
diff --git a/setup_client.py b/setup_client.py
--- a/setup_client.py
+++ b/setup_client.py
@@ -18,7 +18,7 @@
setup(
name="elasticdl_client",
- version="develop",
+ version="0.2.0rc3.dev0",
description="The client command line tool for ElasticDL.",
long_description="ElasticDL Client is the client command line tool for"
" ElasticDL. Users can use it to submit distributed ElasticDL jobs to"
diff --git a/setup_preprocessing.py b/setup_preprocessing.py
--- a/setup_preprocessing.py
+++ b/setup_preprocessing.py
@@ -22,7 +22,7 @@
setup(
name="elasticdl_preprocessing",
- version="develop",
+ version="0.2.0rc3.dev0",
description="A feature preprocessing library.",
long_description="This is an extension of the native Keras Preprocessing"
" Layers and Feature Column API from TensorFlow. We can develop our model"
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,7 +24,7 @@\n \n setup(\n name=\"elasticdl\",\n- version=\"develop\",\n+ version=\"0.2.0rc3.dev0\",\n description=\"A Kubernetes-native Deep Learning Framework\",\n long_description=\"ElasticDL is a Kubernetes-native deep learning framework\"\n \" built on top of TensorFlow 2.0 that supports\"\ndiff --git a/setup_client.py b/setup_client.py\n--- a/setup_client.py\n+++ b/setup_client.py\n@@ -18,7 +18,7 @@\n \n setup(\n name=\"elasticdl_client\",\n- version=\"develop\",\n+ version=\"0.2.0rc3.dev0\",\n description=\"The client command line tool for ElasticDL.\",\n long_description=\"ElasticDL Client is the client command line tool for\"\n \" ElasticDL. Users can use it to submit distributed ElasticDL jobs to\"\ndiff --git a/setup_preprocessing.py b/setup_preprocessing.py\n--- a/setup_preprocessing.py\n+++ b/setup_preprocessing.py\n@@ -22,7 +22,7 @@\n \n setup(\n name=\"elasticdl_preprocessing\",\n- version=\"develop\",\n+ version=\"0.2.0rc3.dev0\",\n description=\"A feature preprocessing library.\",\n long_description=\"This is an extension of the native Keras Preprocessing\"\n \" Layers and Feature Column API from TensorFlow. We can develop our model\"\n", "issue": "Set version for develop branch.\nAccording to [PEP 440](https://www.python.org/dev/peps/pep-0440/#developmental-releases), we should set version `X.YaN.devM`.\n", "before_files": [{"content": "# Copyright 2020 The ElasticDL Authors. All rights reserved.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import find_packages, setup\n\nwith open(\"elasticdl_preprocessing/requirements.txt\") as f:\n required_deps = f.read().splitlines()\n\nextras = {}\nwith open(\"elasticdl_preprocessing/requirements-dev.txt\") as f:\n extras[\"develop\"] = f.read().splitlines()\n\nsetup(\n name=\"elasticdl_preprocessing\",\n version=\"develop\",\n description=\"A feature preprocessing library.\",\n long_description=\"This is an extension of the native Keras Preprocessing\"\n \" Layers and Feature Column API from TensorFlow. We can develop our model\"\n \" using the native high-level API from TensorFlow and our library.\"\n \" We can train this model using native TensorFlow or ElasticDL.\",\n long_description_content_type=\"text/markdown\",\n author=\"Ant Financial\",\n url=\"https://elasticdl.org\",\n install_requires=required_deps,\n extras_require=extras,\n python_requires=\">=3.5\",\n packages=find_packages(\n include=[\"elasticdl_preprocessing*\"], exclude=[\"*test*\"]\n ),\n package_data={\"\": [\"requirements.txt\"]},\n)\n", "path": "setup_preprocessing.py"}, {"content": "# Copyright 2020 The ElasticDL Authors. 
All rights reserved.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import find_packages, setup\n\nwith open(\"elasticdl/requirements.txt\") as f:\n required_deps = f.read().splitlines()\nrequired_deps.append(\"elasticdl_client\")\nrequired_deps.append(\"elasticdl_preprocessing\")\n\nextras = {}\nwith open(\"elasticdl/requirements-dev.txt\") as f:\n extras[\"develop\"] = f.read().splitlines()\n\nsetup(\n name=\"elasticdl\",\n version=\"develop\",\n description=\"A Kubernetes-native Deep Learning Framework\",\n long_description=\"ElasticDL is a Kubernetes-native deep learning framework\"\n \" built on top of TensorFlow 2.0 that supports\"\n \" fault-tolerance and elastic scheduling.\",\n long_description_content_type=\"text/markdown\",\n author=\"Ant Financial\",\n url=\"https://elasticdl.org\",\n install_requires=required_deps,\n extras_require=extras,\n python_requires=\">=3.5\",\n packages=find_packages(\n exclude=[\n \"*test*\",\n \"elasticdl_client*\",\n \"elasticdl_preprocessing*\",\n \"model_zoo*\",\n ]\n ),\n package_data={\n \"\": [\n \"proto/*.proto\",\n \"docker/*\",\n \"Makefile\",\n \"requirements.txt\",\n \"go/bin/elasticdl_ps\",\n \"go/pkg/kernel/capi/*\",\n ]\n },\n)\n", "path": "setup.py"}, {"content": "# Copyright 2020 The ElasticDL Authors. All rights reserved.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import find_packages, setup\n\nwith open(\"elasticdl_client/requirements.txt\") as f:\n required_deps = f.read().splitlines()\n\nsetup(\n name=\"elasticdl_client\",\n version=\"develop\",\n description=\"The client command line tool for ElasticDL.\",\n long_description=\"ElasticDL Client is the client command line tool for\"\n \" ElasticDL. Users can use it to submit distributed ElasticDL jobs to\"\n \" a Kubernetes cluster. It also provides an easy way to build and push\"\n \" Docker images for distributed ElasticDL jobs.\",\n long_description_content_type=\"text/markdown\",\n author=\"Ant Financial\",\n url=\"https://elasticdl.org\",\n install_requires=required_deps,\n python_requires=\">=3.5\",\n packages=find_packages(include=[\"elasticdl_client*\"]),\n package_data={\"\": [\"requirements.txt\"]},\n entry_points={\"console_scripts\": [\"elasticdl=elasticdl_client.main:main\"]},\n)\n", "path": "setup_client.py"}], "after_files": [{"content": "# Copyright 2020 The ElasticDL Authors. 
All rights reserved.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import find_packages, setup\n\nwith open(\"elasticdl_preprocessing/requirements.txt\") as f:\n required_deps = f.read().splitlines()\n\nextras = {}\nwith open(\"elasticdl_preprocessing/requirements-dev.txt\") as f:\n extras[\"develop\"] = f.read().splitlines()\n\nsetup(\n name=\"elasticdl_preprocessing\",\n version=\"0.2.0rc3.dev0\",\n description=\"A feature preprocessing library.\",\n long_description=\"This is an extension of the native Keras Preprocessing\"\n \" Layers and Feature Column API from TensorFlow. We can develop our model\"\n \" using the native high-level API from TensorFlow and our library.\"\n \" We can train this model using native TensorFlow or ElasticDL.\",\n long_description_content_type=\"text/markdown\",\n author=\"Ant Financial\",\n url=\"https://elasticdl.org\",\n install_requires=required_deps,\n extras_require=extras,\n python_requires=\">=3.5\",\n packages=find_packages(\n include=[\"elasticdl_preprocessing*\"], exclude=[\"*test*\"]\n ),\n package_data={\"\": [\"requirements.txt\"]},\n)\n", "path": "setup_preprocessing.py"}, {"content": "# Copyright 2020 The ElasticDL Authors. All rights reserved.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import find_packages, setup\n\nwith open(\"elasticdl/requirements.txt\") as f:\n required_deps = f.read().splitlines()\nrequired_deps.append(\"elasticdl_client\")\nrequired_deps.append(\"elasticdl_preprocessing\")\n\nextras = {}\nwith open(\"elasticdl/requirements-dev.txt\") as f:\n extras[\"develop\"] = f.read().splitlines()\n\nsetup(\n name=\"elasticdl\",\n version=\"0.2.0rc3.dev0\",\n description=\"A Kubernetes-native Deep Learning Framework\",\n long_description=\"ElasticDL is a Kubernetes-native deep learning framework\"\n \" built on top of TensorFlow 2.0 that supports\"\n \" fault-tolerance and elastic scheduling.\",\n long_description_content_type=\"text/markdown\",\n author=\"Ant Financial\",\n url=\"https://elasticdl.org\",\n install_requires=required_deps,\n extras_require=extras,\n python_requires=\">=3.5\",\n packages=find_packages(\n exclude=[\n \"*test*\",\n \"elasticdl_client*\",\n \"elasticdl_preprocessing*\",\n \"model_zoo*\",\n ]\n ),\n package_data={\n \"\": [\n \"proto/*.proto\",\n \"docker/*\",\n \"Makefile\",\n \"requirements.txt\",\n \"go/bin/elasticdl_ps\",\n \"go/pkg/kernel/capi/*\",\n ]\n },\n)\n", "path": "setup.py"}, {"content": "# Copyright 2020 The ElasticDL Authors. 
All rights reserved.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import find_packages, setup\n\nwith open(\"elasticdl_client/requirements.txt\") as f:\n required_deps = f.read().splitlines()\n\nsetup(\n name=\"elasticdl_client\",\n version=\"0.2.0rc3.dev0\",\n description=\"The client command line tool for ElasticDL.\",\n long_description=\"ElasticDL Client is the client command line tool for\"\n \" ElasticDL. Users can use it to submit distributed ElasticDL jobs to\"\n \" a Kubernetes cluster. It also provides an easy way to build and push\"\n \" Docker images for distributed ElasticDL jobs.\",\n long_description_content_type=\"text/markdown\",\n author=\"Ant Financial\",\n url=\"https://elasticdl.org\",\n install_requires=required_deps,\n python_requires=\">=3.5\",\n packages=find_packages(include=[\"elasticdl_client*\"]),\n package_data={\"\": [\"requirements.txt\"]},\n entry_points={\"console_scripts\": [\"elasticdl=elasticdl_client.main:main\"]},\n)\n", "path": "setup_client.py"}]} | 1,689 | 322 |
gh_patches_debug_1874 | rasdani/github-patches | git_diff | cloudtools__troposphere-120 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update metadata to include property keys
Within the CloudFormation metadata object for Auto Scaling launch configurations, it'd be nice to be able to rely on the validators within the template to sanity-check the allowed keys within the 'config' dictionary:
```
"Resources": {
"MyInstance": {
"Type": "AWS::EC2::Instance",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"packages" : {
:
},
"groups" : {
:
},
"users" : {
:
},
"sources" : {
:
},
"files" : {
:
},
"commands" : {
:
},
"services" : {
:
}
}
}
},
"Properties": {
:
}
}
}
```
Currently the implementation of `troposphere.cloudformation.InitConfig` only contains one key, which is 'files'.
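As an illustrative sketch of the desired behaviour (assuming `InitConfig.props` is extended with the keys listed above), the property validator would then accept all of the documented sections:

```python
from troposphere import cloudformation

# accepted once 'packages', 'services', etc. are declared in InitConfig.props
conf = cloudformation.InitConfig(
    packages={"yum": {"httpd": []}},
    files={"/tmp/setup.sh": {"content": "echo hi", "mode": "000755"}},
    services={"sysvinit": {"httpd": {"enabled": "true"}}},
)
init = cloudformation.Init({"config": conf})
```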
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/cloudformation.py`
Content:
```
1 # Copyright (c) 2013, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSHelperFn, AWSObject, AWSProperty, Ref
7 from .validators import integer
8
9
10 class Stack(AWSObject):
11 type = "AWS::CloudFormation::Stack"
12
13 props = {
14 'TemplateURL': (basestring, True),
15 'TimeoutInMinutes': (integer, False),
16 'Parameters': (dict, False),
17 }
18
19
20 class WaitCondition(AWSObject):
21 type = "AWS::CloudFormation::WaitCondition"
22
23 props = {
24 'Count': (integer, False),
25 'Handle': (Ref, True),
26 'Timeout': (integer, True),
27 }
28
29
30 class WaitConditionHandle(AWSObject):
31 type = "AWS::CloudFormation::WaitConditionHandle"
32
33 props = {}
34
35
36 class InitFileContext(AWSHelperFn):
37 def __init__(self, data):
38 self.data = data
39
40 def JSONrepr(self):
41 return self.data
42
43
44 class InitFile(AWSProperty):
45 props = {
46 'content': (basestring, True),
47 'mode': (basestring, False),
48 'owner': (basestring, False),
49 'group': (basestring, False),
50 'context': (InitFileContext, False)
51 }
52
53
54 class InitFiles(AWSHelperFn):
55 def __init__(self, data):
56 self.validate(data)
57 self.data = data
58
59 def validate(self, data):
60 for k in data:
61 if not isinstance(data[k], InitFile):
62 raise ValueError("File '" + k + "' must be of type InitFile")
63
64 def JSONrepr(self):
65 return self.data
66
67
68 class InitConfig(AWSProperty):
69 props = {
70 'files': (dict, False)
71 }
72
73
74 class Init(AWSHelperFn):
75 def __init__(self, data):
76 self.validate(data)
77 self.data = {"AWS::CloudFormation::Init": data}
78
79 def validate(self, data):
80 if 'config' not in data:
81 raise ValueError('config property is required')
82 if not isinstance(data['config'], InitConfig):
83 raise ValueError(
84 'config property must be of type autoscaling.InitConfig'
85 )
86
87 def JSONrepr(self):
88 return self.data
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/cloudformation.py b/troposphere/cloudformation.py
--- a/troposphere/cloudformation.py
+++ b/troposphere/cloudformation.py
@@ -67,7 +67,13 @@
class InitConfig(AWSProperty):
props = {
- 'files': (dict, False)
+ 'groups': (dict, False),
+ 'users': (dict, False),
+ 'sources': (dict, False),
+ 'packages': (dict, False),
+ 'files': (dict, False),
+ 'commands': (dict, False),
+ 'services': (dict, False)
}
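For context on why extending `props` is enough: troposphere's `AWSProperty` base class validates assigned attribute names against `props`, so once the seven keys above are declared, anything else fails fast. A tiny sketch (with a deliberately misspelled, hypothetical key):

```python
from troposphere import cloudformation

try:
    cloudformation.InitConfig(pakages={"yum": {"httpd": []}})  # typo on purpose
except AttributeError as exc:
    print(exc)
```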
| {"golden_diff": "diff --git a/troposphere/cloudformation.py b/troposphere/cloudformation.py\n--- a/troposphere/cloudformation.py\n+++ b/troposphere/cloudformation.py\n@@ -67,7 +67,13 @@\n \n class InitConfig(AWSProperty):\n props = {\n- 'files': (dict, False)\n+ 'groups': (dict, False),\n+ 'users': (dict, False),\n+ 'sources': (dict, False),\n+ 'packages': (dict, False),\n+ 'files': (dict, False),\n+ 'commands': (dict, False),\n+ 'services': (dict, False)\n }\n", "issue": "Update metadata to include property keys\nWithin the CloudFormation metadata object for auto scaling launch configurations, it'd be nice to be able to rely on the validators within the template to sanity check the allowed keys within the 'config' dictionary: \n\n```\n\"Resources\": {\n \"MyInstance\": {\n \"Type\": \"AWS::EC2::Instance\",\n \"Metadata\" : {\n \"AWS::CloudFormation::Init\" : {\n \"config\" : {\n \"packages\" : {\n :\n },\n \"groups\" : {\n :\n },\n \"users\" : {\n :\n },\n \"sources\" : {\n :\n },\n \"files\" : {\n :\n },\n \"commands\" : {\n :\n },\n \"services\" : {\n :\n }\n }\n }\n },\n \"Properties\": {\n :\n }\n }\n}\n```\n\nCurrently the implementation of troposphere.cloudformation.InitConfig only contains one key which is 'files'\n\n", "before_files": [{"content": "# Copyright (c) 2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSHelperFn, AWSObject, AWSProperty, Ref\nfrom .validators import integer\n\n\nclass Stack(AWSObject):\n type = \"AWS::CloudFormation::Stack\"\n\n props = {\n 'TemplateURL': (basestring, True),\n 'TimeoutInMinutes': (integer, False),\n 'Parameters': (dict, False),\n }\n\n\nclass WaitCondition(AWSObject):\n type = \"AWS::CloudFormation::WaitCondition\"\n\n props = {\n 'Count': (integer, False),\n 'Handle': (Ref, True),\n 'Timeout': (integer, True),\n }\n\n\nclass WaitConditionHandle(AWSObject):\n type = \"AWS::CloudFormation::WaitConditionHandle\"\n\n props = {}\n\n\nclass InitFileContext(AWSHelperFn):\n def __init__(self, data):\n self.data = data\n\n def JSONrepr(self):\n return self.data\n\n\nclass InitFile(AWSProperty):\n props = {\n 'content': (basestring, True),\n 'mode': (basestring, False),\n 'owner': (basestring, False),\n 'group': (basestring, False),\n 'context': (InitFileContext, False)\n }\n\n\nclass InitFiles(AWSHelperFn):\n def __init__(self, data):\n self.validate(data)\n self.data = data\n\n def validate(self, data):\n for k in data:\n if not isinstance(data[k], InitFile):\n raise ValueError(\"File '\" + k + \"' must be of type InitFile\")\n\n def JSONrepr(self):\n return self.data\n\n\nclass InitConfig(AWSProperty):\n props = {\n 'files': (dict, False)\n }\n\n\nclass Init(AWSHelperFn):\n def __init__(self, data):\n self.validate(data)\n self.data = {\"AWS::CloudFormation::Init\": data}\n\n def validate(self, data):\n if 'config' not in data:\n raise ValueError('config property is required')\n if not isinstance(data['config'], InitConfig):\n raise ValueError(\n 'config property must be of type autoscaling.InitConfig'\n )\n\n def JSONrepr(self):\n return self.data\n", "path": "troposphere/cloudformation.py"}], "after_files": [{"content": "# Copyright (c) 2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . 
import AWSHelperFn, AWSObject, AWSProperty, Ref\nfrom .validators import integer\n\n\nclass Stack(AWSObject):\n type = \"AWS::CloudFormation::Stack\"\n\n props = {\n 'TemplateURL': (basestring, True),\n 'TimeoutInMinutes': (integer, False),\n 'Parameters': (dict, False),\n }\n\n\nclass WaitCondition(AWSObject):\n type = \"AWS::CloudFormation::WaitCondition\"\n\n props = {\n 'Count': (integer, False),\n 'Handle': (Ref, True),\n 'Timeout': (integer, True),\n }\n\n\nclass WaitConditionHandle(AWSObject):\n type = \"AWS::CloudFormation::WaitConditionHandle\"\n\n props = {}\n\n\nclass InitFileContext(AWSHelperFn):\n def __init__(self, data):\n self.data = data\n\n def JSONrepr(self):\n return self.data\n\n\nclass InitFile(AWSProperty):\n props = {\n 'content': (basestring, True),\n 'mode': (basestring, False),\n 'owner': (basestring, False),\n 'group': (basestring, False),\n 'context': (InitFileContext, False)\n }\n\n\nclass InitFiles(AWSHelperFn):\n def __init__(self, data):\n self.validate(data)\n self.data = data\n\n def validate(self, data):\n for k in data:\n if not isinstance(data[k], InitFile):\n raise ValueError(\"File '\" + k + \"' must be of type InitFile\")\n\n def JSONrepr(self):\n return self.data\n\n\nclass InitConfig(AWSProperty):\n props = {\n 'groups': (dict, False),\n 'users': (dict, False),\n 'sources': (dict, False),\n 'packages': (dict, False),\n 'files': (dict, False),\n 'commands': (dict, False),\n 'services': (dict, False)\n }\n\n\nclass Init(AWSHelperFn):\n def __init__(self, data):\n self.validate(data)\n self.data = {\"AWS::CloudFormation::Init\": data}\n\n def validate(self, data):\n if 'config' not in data:\n raise ValueError('config property is required')\n if not isinstance(data['config'], InitConfig):\n raise ValueError(\n 'config property must be of type autoscaling.InitConfig'\n )\n\n def JSONrepr(self):\n return self.data\n", "path": "troposphere/cloudformation.py"}]} | 1,144 | 143 |
gh_patches_debug_21385 | rasdani/github-patches | git_diff | joke2k__faker-529 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Decrease binary dummy length
`fake.binary(length=1048576)` generates binary data that is far too large, and it is useless at docs build time.
I tested two cases: 1048576 and 1024.
``` console
(faker) [item4@item4-mbp docs]$ echo "1048576"
1048576
(faker) [item4@item4-mbp docs]$ time make html
sphinx-build -b html -d _build/doctrees . _build/html
Running Sphinx v1.4.8
loading pickled environment... done
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 61 source files that are out of date
updating environment: 0 added, 61 changed, 0 removed
reading sources... [100%] providers/faker.providers.user_agent
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] providers/faker.providers.user_agent
generating indices... genindex
writing additional pages... search
copying static files... WARNING: html_static_path entry '/Users/item4/Projects/faker/docs/_static' does not exist
done
copying extra files... done
dumping search index in English (code: en) ... done
dumping object inventory... done
build succeeded, 1 warning.
Build finished. The HTML pages are in _build/html.
real 3m24.875s
user 3m3.222s
sys 0m10.986s
(faker) [item4@item4-mbp docs]$ echo "1024"
1024
(faker) [item4@item4-mbp docs]$ time make html
sphinx-build -b html -d _build/doctrees . _build/html
Running Sphinx v1.4.8
loading pickled environment... done
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 61 source files that are out of date
updating environment: 0 added, 61 changed, 0 removed
reading sources... [100%] providers/faker.providers.user_agent
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] providers/faker.providers.user_agent
generating indices... genindex
writing additional pages... search
copying static files... WARNING: html_static_path entry '/Users/item4/Projects/faker/docs/_static' does not exist
done
copying extra files... done
dumping search index in English (code: en) ... done
dumping object inventory... done
build succeeded, 1 warning.
Build finished. The HTML pages are in _build/html.
real 0m46.086s
user 0m43.772s
sys 0m1.479s
```
Do the docs really need a 1048576-length binary?
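For a rough sense of the cost difference, a minimal timing sketch (absolute numbers will vary by machine):

```python
import timeit

from faker import Faker

fake = Faker()
for length in (1024, 1048576):
    seconds = timeit.timeit(lambda: fake.binary(length=length), number=10)
    print(length, round(seconds, 3))
```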
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/documentor.py`
Content:
```
1 # coding=utf-8
2
3 from __future__ import unicode_literals
4
5 import inspect
6
7 from faker import utils
8
9
10 class Documentor(object):
11
12 def __init__(self, generator):
13 """
14 :param generator: a localized Generator with providers filled,
15 for which to write the documentation
16 :type generator: faker.Generator()
17 """
18 self.generator = generator
19 self.max_name_len = 0
20 self.already_generated = []
21
22 def get_formatters(self, locale=None, excludes=None, **kwargs):
23
24 self.max_name_len = 0
25 self.already_generated = [] if excludes is None else excludes[:]
26 formatters = []
27 providers = self.generator.get_providers()
28 for provider in providers[::-1]: # reverse
29 if locale and provider.__lang__ != locale:
30 continue
31 formatters.append(
32 (provider, self.get_provider_formatters(provider, **kwargs))
33 )
34 return formatters
35
36 def get_provider_formatters(self, provider, prefix='fake.',
37 with_args=True, with_defaults=True):
38
39 formatters = {}
40
41 for name, method in inspect.getmembers(provider, inspect.ismethod):
42
43 # skip 'private' method and inherited methods
44 if name.startswith('_') or name in self.already_generated:
45 continue
46
47 arguments = []
48
49 if with_args:
50 # retrieve all parameter
51 argspec = inspect.getargspec(method)
52
53 lst = [x for x in argspec.args if x not in ['self', 'cls']]
54 for i, arg in enumerate(lst):
55
56 if argspec.defaults and with_defaults:
57
58 try:
59 default = argspec.defaults[i]
60 if utils.is_string(default):
61 default = utils.quote(default)
62 else:
63 # TODO check default type
64 default = "{0}".format(default)
65
66 arg = "{0}={1}".format(arg, default)
67
68 except IndexError:
69 pass
70
71 arguments.append(arg)
72 if with_args == 'first':
73 break
74
75 if with_args != 'first':
76 if argspec.varargs:
77 arguments.append('*' + argspec.varargs)
78 if argspec.keywords:
79 arguments.append('**' + argspec.keywords)
80
81 # build fake method signature
82 signature = "{0}{1}({2})".format(prefix,
83 name,
84 ", ".join(arguments))
85
86 # make a fake example
87 example = self.generator.format(name)
88
89 formatters[signature] = example
90
91 self.max_name_len = max(self.max_name_len, len(signature))
92 self.already_generated.append(name)
93
94 return formatters
95
96 @staticmethod
97 def get_provider_name(provider_class):
98 return provider_class.__provider__
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/documentor.py b/faker/documentor.py
--- a/faker/documentor.py
+++ b/faker/documentor.py
@@ -39,12 +39,16 @@
formatters = {}
for name, method in inspect.getmembers(provider, inspect.ismethod):
-
# skip 'private' method and inherited methods
if name.startswith('_') or name in self.already_generated:
continue
arguments = []
+ faker_args = []
+ faker_kwargs = {}
+
+ if name == 'binary':
+ faker_kwargs['length'] = 1024
if with_args:
# retrieve all parameter
@@ -84,7 +88,7 @@
", ".join(arguments))
# make a fake example
- example = self.generator.format(name)
+ example = self.generator.format(name, *faker_args, **faker_kwargs)
formatters[signature] = example
| {"golden_diff": "diff --git a/faker/documentor.py b/faker/documentor.py\n--- a/faker/documentor.py\n+++ b/faker/documentor.py\n@@ -39,12 +39,16 @@\n formatters = {}\n \n for name, method in inspect.getmembers(provider, inspect.ismethod):\n-\n # skip 'private' method and inherited methods\n if name.startswith('_') or name in self.already_generated:\n continue\n \n arguments = []\n+ faker_args = []\n+ faker_kwargs = {}\n+\n+ if name == 'binary':\n+ faker_kwargs['length'] = 1024\n \n if with_args:\n # retrieve all parameter\n@@ -84,7 +88,7 @@\n \", \".join(arguments))\n \n # make a fake example\n- example = self.generator.format(name)\n+ example = self.generator.format(name, *faker_args, **faker_kwargs)\n \n formatters[signature] = example\n", "issue": "Decrease binary dummy length\n`fake.binary(length=1048576)` generate too huge binary data but it is useless at docs build time.\n\nI tested two case, 1048576 and 1024.\n\n``` console\n(faker) [item4@item4-mbp docs]$ echo \"1048576\"\n1048576\n(faker) [item4@item4-mbp docs]$ time make html\nsphinx-build -b html -d _build/doctrees . _build/html\nRunning Sphinx v1.4.8\nloading pickled environment... done\nbuilding [mo]: targets for 0 po files that are out of date\nbuilding [html]: targets for 61 source files that are out of date\nupdating environment: 0 added, 61 changed, 0 removed\nreading sources... [100%] providers/faker.providers.user_agent\nlooking for now-outdated files... none found\npickling environment... done\nchecking consistency... done\npreparing documents... done\nwriting output... [100%] providers/faker.providers.user_agent\ngenerating indices... genindex\nwriting additional pages... search\ncopying static files... WARNING: html_static_path entry '/Users/item4/Projects/faker/docs/_static' does not exist\ndone\ncopying extra files... done\ndumping search index in English (code: en) ... done\ndumping object inventory... done\nbuild succeeded, 1 warning.\n\nBuild finished. The HTML pages are in _build/html.\n\nreal 3m24.875s\nuser 3m3.222s\nsys 0m10.986s\n(faker) [item4@item4-mbp docs]$ echo \"1024\"\n1024\n(faker) [item4@item4-mbp docs]$ time make html\nsphinx-build -b html -d _build/doctrees . _build/html\nRunning Sphinx v1.4.8\nloading pickled environment... done\nbuilding [mo]: targets for 0 po files that are out of date\nbuilding [html]: targets for 61 source files that are out of date\nupdating environment: 0 added, 61 changed, 0 removed\nreading sources... [100%] providers/faker.providers.user_agent\nlooking for now-outdated files... none found\npickling environment... done\nchecking consistency... done\npreparing documents... done\nwriting output... [100%] providers/faker.providers.user_agent\ngenerating indices... genindex\nwriting additional pages... search\ncopying static files... WARNING: html_static_path entry '/Users/item4/Projects/faker/docs/_static' does not exist\ndone\ncopying extra files... done\ndumping search index in English (code: en) ... done\ndumping object inventory... done\nbuild succeeded, 1 warning.\n\nBuild finished. 
The HTML pages are in _build/html.\n\nreal 0m46.086s\nuser 0m43.772s\nsys 0m1.479s\n```\n\nDoes docs really need 1048576 length binary?\n\n", "before_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\n\nimport inspect\n\nfrom faker import utils\n\n\nclass Documentor(object):\n\n def __init__(self, generator):\n \"\"\"\n :param generator: a localized Generator with providers filled,\n for which to write the documentation\n :type generator: faker.Generator()\n \"\"\"\n self.generator = generator\n self.max_name_len = 0\n self.already_generated = []\n\n def get_formatters(self, locale=None, excludes=None, **kwargs):\n\n self.max_name_len = 0\n self.already_generated = [] if excludes is None else excludes[:]\n formatters = []\n providers = self.generator.get_providers()\n for provider in providers[::-1]: # reverse\n if locale and provider.__lang__ != locale:\n continue\n formatters.append(\n (provider, self.get_provider_formatters(provider, **kwargs))\n )\n return formatters\n\n def get_provider_formatters(self, provider, prefix='fake.',\n with_args=True, with_defaults=True):\n\n formatters = {}\n\n for name, method in inspect.getmembers(provider, inspect.ismethod):\n\n # skip 'private' method and inherited methods\n if name.startswith('_') or name in self.already_generated:\n continue\n\n arguments = []\n\n if with_args:\n # retrieve all parameter\n argspec = inspect.getargspec(method)\n\n lst = [x for x in argspec.args if x not in ['self', 'cls']]\n for i, arg in enumerate(lst):\n\n if argspec.defaults and with_defaults:\n\n try:\n default = argspec.defaults[i]\n if utils.is_string(default):\n default = utils.quote(default)\n else:\n # TODO check default type\n default = \"{0}\".format(default)\n\n arg = \"{0}={1}\".format(arg, default)\n\n except IndexError:\n pass\n\n arguments.append(arg)\n if with_args == 'first':\n break\n\n if with_args != 'first':\n if argspec.varargs:\n arguments.append('*' + argspec.varargs)\n if argspec.keywords:\n arguments.append('**' + argspec.keywords)\n\n # build fake method signature\n signature = \"{0}{1}({2})\".format(prefix,\n name,\n \", \".join(arguments))\n\n # make a fake example\n example = self.generator.format(name)\n\n formatters[signature] = example\n\n self.max_name_len = max(self.max_name_len, len(signature))\n self.already_generated.append(name)\n\n return formatters\n\n @staticmethod\n def get_provider_name(provider_class):\n return provider_class.__provider__\n", "path": "faker/documentor.py"}], "after_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\n\nimport inspect\n\nfrom faker import utils\n\n\nclass Documentor(object):\n\n def __init__(self, generator):\n \"\"\"\n :param generator: a localized Generator with providers filled,\n for which to write the documentation\n :type generator: faker.Generator()\n \"\"\"\n self.generator = generator\n self.max_name_len = 0\n self.already_generated = []\n\n def get_formatters(self, locale=None, excludes=None, **kwargs):\n\n self.max_name_len = 0\n self.already_generated = [] if excludes is None else excludes[:]\n formatters = []\n providers = self.generator.get_providers()\n for provider in providers[::-1]: # reverse\n if locale and provider.__lang__ != locale:\n continue\n formatters.append(\n (provider, self.get_provider_formatters(provider, **kwargs))\n )\n return formatters\n\n def get_provider_formatters(self, provider, prefix='fake.',\n with_args=True, with_defaults=True):\n\n formatters = {}\n\n for name, method in 
inspect.getmembers(provider, inspect.ismethod):\n # skip 'private' method and inherited methods\n if name.startswith('_') or name in self.already_generated:\n continue\n\n arguments = []\n faker_args = []\n faker_kwargs = {}\n\n if name == 'binary':\n faker_kwargs['length'] = 1024\n\n if with_args:\n # retrieve all parameter\n argspec = inspect.getargspec(method)\n\n lst = [x for x in argspec.args if x not in ['self', 'cls']]\n for i, arg in enumerate(lst):\n\n if argspec.defaults and with_defaults:\n\n try:\n default = argspec.defaults[i]\n if utils.is_string(default):\n default = utils.quote(default)\n else:\n # TODO check default type\n default = \"{0}\".format(default)\n\n arg = \"{0}={1}\".format(arg, default)\n\n except IndexError:\n pass\n\n arguments.append(arg)\n if with_args == 'first':\n break\n\n if with_args != 'first':\n if argspec.varargs:\n arguments.append('*' + argspec.varargs)\n if argspec.keywords:\n arguments.append('**' + argspec.keywords)\n\n # build fake method signature\n signature = \"{0}{1}({2})\".format(prefix,\n name,\n \", \".join(arguments))\n\n # make a fake example\n example = self.generator.format(name, *faker_args, **faker_kwargs)\n\n formatters[signature] = example\n\n self.max_name_len = max(self.max_name_len, len(signature))\n self.already_generated.append(name)\n\n return formatters\n\n @staticmethod\n def get_provider_name(provider_class):\n return provider_class.__provider__\n", "path": "faker/documentor.py"}]} | 1,732 | 208 |
gh_patches_debug_18357 | rasdani/github-patches | git_diff | pyca__cryptography-414 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Binding.is_available can print to stderr
Specifically, it'll print an error message; you can see this if you run `tests/hazmat/bindings/test_bindings.py`
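A minimal sketch for observing the reported noise (assuming a POSIX-style environment): because the output can come from a spawned C compiler process rather than from Python itself, the capture has to happen at the file-descriptor level, not via `sys.stderr`:

```python
import os
import tempfile

from cryptography.hazmat.bindings.openssl.binding import Binding

with tempfile.TemporaryFile() as tmp:
    saved_fd = os.dup(2)          # keep the real stderr
    os.dup2(tmp.fileno(), 2)      # point fd 2 at the temp file
    try:
        available = Binding.is_available()
    finally:
        os.dup2(saved_fd, 2)      # restore stderr
        os.close(saved_fd)
    tmp.seek(0)
    print(available, tmp.read())
```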
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cryptography/hazmat/bindings/openssl/binding.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 from cryptography.hazmat.bindings.utils import (
17 build_ffi, binding_available
18 )
19
20 _OSX_PRE_INCLUDE = """
21 #ifdef __APPLE__
22 #include <AvailabilityMacros.h>
23 #define __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \
24 DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
25 #undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
26 #define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
27 #endif
28 """
29
30 _OSX_POST_INCLUDE = """
31 #ifdef __APPLE__
32 #undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
33 #define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \
34 __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
35 #endif
36 """
37
38
39 class Binding(object):
40 """
41 OpenSSL API wrapper.
42 """
43 _module_prefix = "cryptography.hazmat.bindings.openssl."
44 _modules = [
45 "asn1",
46 "bignum",
47 "bio",
48 "conf",
49 "crypto",
50 "dh",
51 "dsa",
52 "engine",
53 "err",
54 "evp",
55 "hmac",
56 "nid",
57 "objects",
58 "opensslv",
59 "pem",
60 "pkcs7",
61 "pkcs12",
62 "rand",
63 "rsa",
64 "ssl",
65 "x509",
66 "x509name",
67 "x509v3",
68 ]
69
70 ffi = None
71 lib = None
72
73 def __init__(self):
74 self._ensure_ffi_initialized()
75
76 @classmethod
77 def _ensure_ffi_initialized(cls):
78 if cls.ffi is not None and cls.lib is not None:
79 return
80
81 cls.ffi, cls.lib = build_ffi(cls._module_prefix, cls._modules,
82 _OSX_PRE_INCLUDE, _OSX_POST_INCLUDE,
83 ["crypto", "ssl"])
84
85 @classmethod
86 def is_available(cls):
87 return binding_available(cls._ensure_ffi_initialized)
88
```
Path: `cryptography/hazmat/bindings/utils.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 import sys
17
18 import cffi
19
20
21 def build_ffi(module_prefix, modules, pre_include, post_include, libraries):
22 """
23 Modules listed in ``modules`` should have the following attributes:
24
25 * ``INCLUDES``: A string containing C includes.
26 * ``TYPES``: A string containing C declarations for types.
27 * ``FUNCTIONS``: A string containing C declarations for functions.
28 * ``MACROS``: A string containing C declarations for any macros.
29 * ``CUSTOMIZATIONS``: A string containing arbitrary top-level C code, this
30 can be used to do things like test for a define and provide an
31 alternate implementation based on that.
32 * ``CONDITIONAL_NAMES``: A dict mapping strings of condition names from the
33 library to a list of names which will not be present without the
34 condition.
35 """
36 ffi = cffi.FFI()
37 includes = []
38 functions = []
39 macros = []
40 customizations = []
41 for name in modules:
42 module_name = module_prefix + name
43 __import__(module_name)
44 module = sys.modules[module_name]
45
46 ffi.cdef(module.TYPES)
47
48 macros.append(module.MACROS)
49 functions.append(module.FUNCTIONS)
50 includes.append(module.INCLUDES)
51 customizations.append(module.CUSTOMIZATIONS)
52
53 # loop over the functions & macros after declaring all the types
54 # so we can set interdependent types in different files and still
55 # have them all defined before we parse the funcs & macros
56 for func in functions:
57 ffi.cdef(func)
58 for macro in macros:
59 ffi.cdef(macro)
60
61 # We include functions here so that if we got any of their definitions
62 # wrong, the underlying C compiler will explode. In C you are allowed
63 # to re-declare a function if it has the same signature. That is:
64 # int foo(int);
65 # int foo(int);
66 # is legal, but the following will fail to compile:
67 # int foo(int);
68 # int foo(short);
69 lib = ffi.verify(
70 source="\n".join(
71 [pre_include] +
72 includes +
73 [post_include] +
74 functions +
75 customizations
76 ),
77 libraries=libraries,
78 ext_package="cryptography",
79 )
80
81 for name in modules:
82 module_name = module_prefix + name
83 module = sys.modules[module_name]
84 for condition, names in module.CONDITIONAL_NAMES.items():
85 if not getattr(lib, condition):
86 for name in names:
87 delattr(lib, name)
88
89 return ffi, lib
90
91
92 def binding_available(initializer):
93 try:
94 initializer()
95 return True
96 except cffi.VerificationError:
97 return False
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cryptography/hazmat/bindings/openssl/binding.py b/cryptography/hazmat/bindings/openssl/binding.py
--- a/cryptography/hazmat/bindings/openssl/binding.py
+++ b/cryptography/hazmat/bindings/openssl/binding.py
@@ -13,9 +13,8 @@
from __future__ import absolute_import, division, print_function
-from cryptography.hazmat.bindings.utils import (
- build_ffi, binding_available
-)
+from cryptography.hazmat.bindings.utils import build_ffi
+
_OSX_PRE_INCLUDE = """
#ifdef __APPLE__
@@ -84,4 +83,5 @@
@classmethod
def is_available(cls):
- return binding_available(cls._ensure_ffi_initialized)
+ # OpenSSL is the only binding so for now it must always be available
+ return True
diff --git a/cryptography/hazmat/bindings/utils.py b/cryptography/hazmat/bindings/utils.py
--- a/cryptography/hazmat/bindings/utils.py
+++ b/cryptography/hazmat/bindings/utils.py
@@ -87,11 +87,3 @@
delattr(lib, name)
return ffi, lib
-
-
-def binding_available(initializer):
- try:
- initializer()
- return True
- except cffi.VerificationError:
- return False
| {"golden_diff": "diff --git a/cryptography/hazmat/bindings/openssl/binding.py b/cryptography/hazmat/bindings/openssl/binding.py\n--- a/cryptography/hazmat/bindings/openssl/binding.py\n+++ b/cryptography/hazmat/bindings/openssl/binding.py\n@@ -13,9 +13,8 @@\n \n from __future__ import absolute_import, division, print_function\n \n-from cryptography.hazmat.bindings.utils import (\n- build_ffi, binding_available\n-)\n+from cryptography.hazmat.bindings.utils import build_ffi\n+\n \n _OSX_PRE_INCLUDE = \"\"\"\n #ifdef __APPLE__\n@@ -84,4 +83,5 @@\n \n @classmethod\n def is_available(cls):\n- return binding_available(cls._ensure_ffi_initialized)\n+ # OpenSSL is the only binding so for now it must always be available\n+ return True\ndiff --git a/cryptography/hazmat/bindings/utils.py b/cryptography/hazmat/bindings/utils.py\n--- a/cryptography/hazmat/bindings/utils.py\n+++ b/cryptography/hazmat/bindings/utils.py\n@@ -87,11 +87,3 @@\n delattr(lib, name)\n \n return ffi, lib\n-\n-\n-def binding_available(initializer):\n- try:\n- initializer()\n- return True\n- except cffi.VerificationError:\n- return False\n", "issue": "Binding.is_avaialble can print to stderr\nSpecifically it'll print an error message, you can see this if you run `tests/hazmat/bindings/test_bindings.py`\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography.hazmat.bindings.utils import (\n build_ffi, binding_available\n)\n\n_OSX_PRE_INCLUDE = \"\"\"\n#ifdef __APPLE__\n#include <AvailabilityMacros.h>\n#define __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \\\n DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#endif\n\"\"\"\n\n_OSX_POST_INCLUDE = \"\"\"\n#ifdef __APPLE__\n#undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \\\n __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#endif\n\"\"\"\n\n\nclass Binding(object):\n \"\"\"\n OpenSSL API wrapper.\n \"\"\"\n _module_prefix = \"cryptography.hazmat.bindings.openssl.\"\n _modules = [\n \"asn1\",\n \"bignum\",\n \"bio\",\n \"conf\",\n \"crypto\",\n \"dh\",\n \"dsa\",\n \"engine\",\n \"err\",\n \"evp\",\n \"hmac\",\n \"nid\",\n \"objects\",\n \"opensslv\",\n \"pem\",\n \"pkcs7\",\n \"pkcs12\",\n \"rand\",\n \"rsa\",\n \"ssl\",\n \"x509\",\n \"x509name\",\n \"x509v3\",\n ]\n\n ffi = None\n lib = None\n\n def __init__(self):\n self._ensure_ffi_initialized()\n\n @classmethod\n def _ensure_ffi_initialized(cls):\n if cls.ffi is not None and cls.lib is not None:\n return\n\n cls.ffi, cls.lib = build_ffi(cls._module_prefix, cls._modules,\n _OSX_PRE_INCLUDE, _OSX_POST_INCLUDE,\n [\"crypto\", \"ssl\"])\n\n @classmethod\n def is_available(cls):\n return binding_available(cls._ensure_ffi_initialized)\n", "path": "cryptography/hazmat/bindings/openssl/binding.py"}, {"content": "# Licensed under the Apache 
License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport sys\n\nimport cffi\n\n\ndef build_ffi(module_prefix, modules, pre_include, post_include, libraries):\n \"\"\"\n Modules listed in ``modules`` should have the following attributes:\n\n * ``INCLUDES``: A string containing C includes.\n * ``TYPES``: A string containing C declarations for types.\n * ``FUNCTIONS``: A string containing C declarations for functions.\n * ``MACROS``: A string containing C declarations for any macros.\n * ``CUSTOMIZATIONS``: A string containing arbitrary top-level C code, this\n can be used to do things like test for a define and provide an\n alternate implementation based on that.\n * ``CONDITIONAL_NAMES``: A dict mapping strings of condition names from the\n library to a list of names which will not be present without the\n condition.\n \"\"\"\n ffi = cffi.FFI()\n includes = []\n functions = []\n macros = []\n customizations = []\n for name in modules:\n module_name = module_prefix + name\n __import__(module_name)\n module = sys.modules[module_name]\n\n ffi.cdef(module.TYPES)\n\n macros.append(module.MACROS)\n functions.append(module.FUNCTIONS)\n includes.append(module.INCLUDES)\n customizations.append(module.CUSTOMIZATIONS)\n\n # loop over the functions & macros after declaring all the types\n # so we can set interdependent types in different files and still\n # have them all defined before we parse the funcs & macros\n for func in functions:\n ffi.cdef(func)\n for macro in macros:\n ffi.cdef(macro)\n\n # We include functions here so that if we got any of their definitions\n # wrong, the underlying C compiler will explode. In C you are allowed\n # to re-declare a function if it has the same signature. 
That is:\n # int foo(int);\n # int foo(int);\n # is legal, but the following will fail to compile:\n # int foo(int);\n # int foo(short);\n lib = ffi.verify(\n source=\"\\n\".join(\n [pre_include] +\n includes +\n [post_include] +\n functions +\n customizations\n ),\n libraries=libraries,\n ext_package=\"cryptography\",\n )\n\n for name in modules:\n module_name = module_prefix + name\n module = sys.modules[module_name]\n for condition, names in module.CONDITIONAL_NAMES.items():\n if not getattr(lib, condition):\n for name in names:\n delattr(lib, name)\n\n return ffi, lib\n\n\ndef binding_available(initializer):\n try:\n initializer()\n return True\n except cffi.VerificationError:\n return False\n", "path": "cryptography/hazmat/bindings/utils.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography.hazmat.bindings.utils import build_ffi\n\n\n_OSX_PRE_INCLUDE = \"\"\"\n#ifdef __APPLE__\n#include <AvailabilityMacros.h>\n#define __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \\\n DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#endif\n\"\"\"\n\n_OSX_POST_INCLUDE = \"\"\"\n#ifdef __APPLE__\n#undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \\\n __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#endif\n\"\"\"\n\n\nclass Binding(object):\n \"\"\"\n OpenSSL API wrapper.\n \"\"\"\n _module_prefix = \"cryptography.hazmat.bindings.openssl.\"\n _modules = [\n \"asn1\",\n \"bignum\",\n \"bio\",\n \"conf\",\n \"crypto\",\n \"dh\",\n \"dsa\",\n \"engine\",\n \"err\",\n \"evp\",\n \"hmac\",\n \"nid\",\n \"objects\",\n \"opensslv\",\n \"pem\",\n \"pkcs7\",\n \"pkcs12\",\n \"rand\",\n \"rsa\",\n \"ssl\",\n \"x509\",\n \"x509name\",\n \"x509v3\",\n ]\n\n ffi = None\n lib = None\n\n def __init__(self):\n self._ensure_ffi_initialized()\n\n @classmethod\n def _ensure_ffi_initialized(cls):\n if cls.ffi is not None and cls.lib is not None:\n return\n\n cls.ffi, cls.lib = build_ffi(cls._module_prefix, cls._modules,\n _OSX_PRE_INCLUDE, _OSX_POST_INCLUDE,\n [\"crypto\", \"ssl\"])\n\n @classmethod\n def is_available(cls):\n # OpenSSL is the only binding so for now it must always be available\n return True\n", "path": "cryptography/hazmat/bindings/openssl/binding.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the 
License.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport sys\n\nimport cffi\n\n\ndef build_ffi(module_prefix, modules, pre_include, post_include, libraries):\n \"\"\"\n Modules listed in ``modules`` should have the following attributes:\n\n * ``INCLUDES``: A string containing C includes.\n * ``TYPES``: A string containing C declarations for types.\n * ``FUNCTIONS``: A string containing C declarations for functions.\n * ``MACROS``: A string containing C declarations for any macros.\n * ``CUSTOMIZATIONS``: A string containing arbitrary top-level C code, this\n can be used to do things like test for a define and provide an\n alternate implementation based on that.\n * ``CONDITIONAL_NAMES``: A dict mapping strings of condition names from the\n library to a list of names which will not be present without the\n condition.\n \"\"\"\n ffi = cffi.FFI()\n includes = []\n functions = []\n macros = []\n customizations = []\n for name in modules:\n module_name = module_prefix + name\n __import__(module_name)\n module = sys.modules[module_name]\n\n ffi.cdef(module.TYPES)\n\n macros.append(module.MACROS)\n functions.append(module.FUNCTIONS)\n includes.append(module.INCLUDES)\n customizations.append(module.CUSTOMIZATIONS)\n\n # loop over the functions & macros after declaring all the types\n # so we can set interdependent types in different files and still\n # have them all defined before we parse the funcs & macros\n for func in functions:\n ffi.cdef(func)\n for macro in macros:\n ffi.cdef(macro)\n\n # We include functions here so that if we got any of their definitions\n # wrong, the underlying C compiler will explode. In C you are allowed\n # to re-declare a function if it has the same signature. That is:\n # int foo(int);\n # int foo(int);\n # is legal, but the following will fail to compile:\n # int foo(int);\n # int foo(short);\n lib = ffi.verify(\n source=\"\\n\".join(\n [pre_include] +\n includes +\n [post_include] +\n functions +\n customizations\n ),\n libraries=libraries,\n ext_package=\"cryptography\",\n )\n\n for name in modules:\n module_name = module_prefix + name\n module = sys.modules[module_name]\n for condition, names in module.CONDITIONAL_NAMES.items():\n if not getattr(lib, condition):\n for name in names:\n delattr(lib, name)\n\n return ffi, lib\n", "path": "cryptography/hazmat/bindings/utils.py"}]} | 2,005 | 302 |
gh_patches_debug_20421 | rasdani/github-patches | git_diff | psychopy__psychopy-1123 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Builder 'Aperture' component cannot have indefinite duration
If a Builder 'Aperture' component is set to have an indefinite duration, the actual result is that it is enabled and then immediately disabled. Hence, it has no effect on other stimuli. Below is example code that gets generated when an indefinite duration is specified:
```
# *aperture* updates
if t >= 0.0 and aperture.status == NOT_STARTED:
# keep track of start time/frame for later
aperture.tStart = t # underestimates by a little under one frame
aperture.frameNStart = frameN # exact frame index
aperture.enabled = True
if aperture.status == STARTED and t >= (0.0 + (-win.monitorFramePeriod*0.75)): #most of one frame period left
aperture.enabled = False
if aperture.status == STARTED: # only update if being drawn
aperture.setPos(mouse.getPos(), log=False)
```
The error is in the second if/then above, which should not occur. It is included as if there were a maximum duration in effect, but the value is omitted from the `(-win.monitorFramePeriod*0.75)` expression, and hence the duration is effectively zero.
The fix here should be simple, but I'm not familiar with the Builder component generation code, so I'm recording the bug here until I can figure out how to fix it (and test the fix).
A workaround for users in the meantime would be simply to put an unfeasibly large number in the duration field (e.g. 9999).
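
For illustration, here is a sketch of what the generated updates block should arguably look like when the duration is indefinite, with the stop test simply omitted (names are taken from the snippet above; this is not actual Builder output):

```python
# *aperture* updates
if t >= 0.0 and aperture.status == NOT_STARTED:
    # keep track of start time/frame for later
    aperture.tStart = t  # underestimates by a little under one frame
    aperture.frameNStart = frameN  # exact frame index
    aperture.enabled = True
# no stop test is emitted, so the aperture stays enabled indefinitely
if aperture.status == STARTED:  # only update if being drawn
    aperture.setPos(mouse.getPos(), log=False)
```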
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `psychopy/app/builder/components/aperture.py`
Content:
```
1 # Part of the PsychoPy library
2 # Copyright (C) 2015 Jonathan Peirce
3 # Distributed under the terms of the GNU General Public License (GPL).
4
5 from os import path
6 from ._base import BaseVisualComponent, getInitVals
7
8 __author__ = 'Jeremy Gray, Jon Peirce'
9 # March 2011; builder-component for Yuri Spitsyn's visual.Aperture class
10 # July 2011: jwp added the code for it to be enabled only when needed
11
12 # the absolute path to the folder containing this path
13 thisFolder = path.abspath(path.dirname(__file__))
14 iconFile = path.join(thisFolder, 'aperture.png')
15 tooltip = _translate('Aperture: restrict the drawing of stimuli to a given '
16 'region')
17
18
19 class ApertureComponent(BaseVisualComponent):
20 """An event class for using GL stencil to restrict the viewing area to a
21 circle or square of a given size and position"""
22
23 def __init__(self, exp, parentName, name='aperture', units='norm',
24 size=1, pos=(0, 0),
25 startType='time (s)', startVal=0.0,
26 stopType='duration (s)', stopVal=1.0,
27 startEstim='', durationEstim=''):
28 # initialise main parameters
29 super(ApertureComponent, self).__init__(
30 exp, parentName, name=name, units=units,
31 pos=pos, size=size,
32 startType=startType, startVal=startVal,
33 stopType=stopType, stopVal=stopVal,
34 startEstim=startEstim, durationEstim=durationEstim)
35
36 self.type = 'Aperture'
37 self.url = "http://www.psychopy.org/builder/components/aperture.html"
38 # params:
39 # NB make some adjustments on the params defined by _visual component
40 self.order = ['name', 'size', 'pos'] # make sure this is at top
41
42 msg = _translate("How big is the aperture? (a single number for diameter)")
43 self.params['size'].hint = msg
44 # only localize hints and labels
45 self.params['size'].label = _translate("Size")
46 self.params['pos'].hint = _translate("Where is the aperture centred?")
47
48 # inherited from _visual component but not needed
49 del self.params['ori']
50 del self.params['color']
51 del self.params['colorSpace']
52 del self.params['opacity']
53
54 def writeInitCode(self, buff):
55 # do we need units code?
56 if self.params['units'].val == 'from exp settings':
57 unitsStr = ""
58 else:
59 unitsStr = "units=%(units)s, " % self.params
60
61 # do writing of init
62 inits = getInitVals(self.params)
63
64 code = ("%(name)s = visual.Aperture(\n"
65 " win=win, name='%(name)s',\n"
66 " " + unitsStr + "size=%(size)s, pos=%(pos)s)\n"
67 "%(name)s.disable() # disable until its actually used\n")
68 buff.writeIndentedLines(code % inits)
69
70 def writeFrameCode(self, buff):
71 """Only activate the aperture for the required frames
72 """
73
74 buff.writeIndented("\n")
75 buff.writeIndented("# *%s* updates\n" % (self.params['name']))
76 # writes an if statement to determine whether to draw etc
77 self.writeStartTestCode(buff)
78 buff.writeIndented("%(name)s.enabled = True\n" % self.params)
79 # to get out of the if statement
80 buff.setIndentLevel(-1, relative=True)
81 # writes an if statement to determine whether to draw etc
82 self.writeStopTestCode(buff)
83 buff.writeIndented("%(name)s.enabled = False\n" % self.params)
84 # to get out of the if statement
85 buff.setIndentLevel(-1, relative=True)
86 # set parameters that need updating every frame
87 # do any params need updating? (this method inherited from _base)
88 if self.checkNeedToUpdate('set every frame'):
89 code = ("if %(name)s.status == STARTED: # only update if being drawn\n")
90 buff.writeIndented(code % self.params)
91
92 buff.setIndentLevel(+1, relative=True) # to enter the if block
93 self.writeParamUpdates(buff, 'set every frame')
94 buff.setIndentLevel(-1, relative=True) # to exit the if block
95
96 def writeRoutineEndCode(self, buff):
97 msg = "%(name)s.enabled = False # just in case it was left enabled\n"
98 buff.writeIndented(msg % self.params)
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/psychopy/app/builder/components/aperture.py b/psychopy/app/builder/components/aperture.py
--- a/psychopy/app/builder/components/aperture.py
+++ b/psychopy/app/builder/components/aperture.py
@@ -78,11 +78,12 @@
buff.writeIndented("%(name)s.enabled = True\n" % self.params)
# to get out of the if statement
buff.setIndentLevel(-1, relative=True)
- # writes an if statement to determine whether to draw etc
- self.writeStopTestCode(buff)
- buff.writeIndented("%(name)s.enabled = False\n" % self.params)
- # to get out of the if statement
- buff.setIndentLevel(-1, relative=True)
+ if self.params['stopVal'].val not in ['', None, -1, 'None']:
+ # writes an if statement to determine whether to draw etc
+ self.writeStopTestCode(buff)
+ buff.writeIndented("%(name)s.enabled = False\n" % self.params)
+ # to get out of the if statement
+ buff.setIndentLevel(-1, relative=True)
# set parameters that need updating every frame
# do any params need updating? (this method inherited from _base)
if self.checkNeedToUpdate('set every frame'):
| {"golden_diff": "diff --git a/psychopy/app/builder/components/aperture.py b/psychopy/app/builder/components/aperture.py\n--- a/psychopy/app/builder/components/aperture.py\n+++ b/psychopy/app/builder/components/aperture.py\n@@ -78,11 +78,12 @@\n buff.writeIndented(\"%(name)s.enabled = True\\n\" % self.params)\n # to get out of the if statement\n buff.setIndentLevel(-1, relative=True)\n- # writes an if statement to determine whether to draw etc\n- self.writeStopTestCode(buff)\n- buff.writeIndented(\"%(name)s.enabled = False\\n\" % self.params)\n- # to get out of the if statement\n- buff.setIndentLevel(-1, relative=True)\n+ if self.params['stopVal'].val not in ['', None, -1, 'None']:\n+ # writes an if statement to determine whether to draw etc\n+ self.writeStopTestCode(buff)\n+ buff.writeIndented(\"%(name)s.enabled = False\\n\" % self.params)\n+ # to get out of the if statement\n+ buff.setIndentLevel(-1, relative=True)\n # set parameters that need updating every frame\n # do any params need updating? (this method inherited from _base)\n if self.checkNeedToUpdate('set every frame'):\n", "issue": "Builder 'Aperture' component cannot have indefinite duration\nIf a Builder 'Aperture' component is set to have an indefinite duration, the actual result is that it is enabled and then immediately disabled. Hence, it has no effect on other stimuli. Below is example code that gets generated when an indefinite duration is specified:\n\n```\n# *aperture* updates\nif t >= 0.0 and aperture.status == NOT_STARTED:\n # keep track of start time/frame for later\n aperture.tStart = t # underestimates by a little under one frame\n aperture.frameNStart = frameN # exact frame index\n aperture.enabled = True\nif aperture.status == STARTED and t >= (0.0 + (-win.monitorFramePeriod*0.75)): #most of one frame period left\n aperture.enabled = False\nif aperture.status == STARTED: # only update if being drawn\n aperture.setPos(mouse.getPos(), log=False)\n```\n\nThe error is in the second if/then above, which should not occur. It is included as if there was a maximum duration in effect, but the value is omitted in the `(-win.monitorFramePeriod*0.75)` expression and hence the duration is effectively zero.\n\nThe fix here should be simple but I'm not familiar with playing with the Builder component generation code. So recording the bug here until I can figure out how to fix it (and test).\n\nA work-around for users in the mean time would be to simply put an unfeasibly large number in the duration field (e.g. 
9999).\n\n", "before_files": [{"content": "# Part of the PsychoPy library\n# Copyright (C) 2015 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\nfrom os import path\nfrom ._base import BaseVisualComponent, getInitVals\n\n__author__ = 'Jeremy Gray, Jon Peirce'\n# March 2011; builder-component for Yuri Spitsyn's visual.Aperture class\n# July 2011: jwp added the code for it to be enabled only when needed\n\n# the absolute path to the folder containing this path\nthisFolder = path.abspath(path.dirname(__file__))\niconFile = path.join(thisFolder, 'aperture.png')\ntooltip = _translate('Aperture: restrict the drawing of stimuli to a given '\n 'region')\n\n\nclass ApertureComponent(BaseVisualComponent):\n \"\"\"An event class for using GL stencil to restrict the viewing area to a\n circle or square of a given size and position\"\"\"\n\n def __init__(self, exp, parentName, name='aperture', units='norm',\n size=1, pos=(0, 0),\n startType='time (s)', startVal=0.0,\n stopType='duration (s)', stopVal=1.0,\n startEstim='', durationEstim=''):\n # initialise main parameters\n super(ApertureComponent, self).__init__(\n exp, parentName, name=name, units=units,\n pos=pos, size=size,\n startType=startType, startVal=startVal,\n stopType=stopType, stopVal=stopVal,\n startEstim=startEstim, durationEstim=durationEstim)\n\n self.type = 'Aperture'\n self.url = \"http://www.psychopy.org/builder/components/aperture.html\"\n # params:\n # NB make some adjustments on the params defined by _visual component\n self.order = ['name', 'size', 'pos'] # make sure this is at top\n\n msg = _translate(\"How big is the aperture? (a single number for diameter)\")\n self.params['size'].hint = msg\n # only localize hints and labels\n self.params['size'].label = _translate(\"Size\")\n self.params['pos'].hint = _translate(\"Where is the aperture centred?\")\n\n # inherited from _visual component but not needed\n del self.params['ori']\n del self.params['color']\n del self.params['colorSpace']\n del self.params['opacity']\n\n def writeInitCode(self, buff):\n # do we need units code?\n if self.params['units'].val == 'from exp settings':\n unitsStr = \"\"\n else:\n unitsStr = \"units=%(units)s, \" % self.params\n\n # do writing of init\n inits = getInitVals(self.params)\n\n code = (\"%(name)s = visual.Aperture(\\n\"\n \" win=win, name='%(name)s',\\n\"\n \" \" + unitsStr + \"size=%(size)s, pos=%(pos)s)\\n\"\n \"%(name)s.disable() # disable until its actually used\\n\")\n buff.writeIndentedLines(code % inits)\n\n def writeFrameCode(self, buff):\n \"\"\"Only activate the aperture for the required frames\n \"\"\"\n\n buff.writeIndented(\"\\n\")\n buff.writeIndented(\"# *%s* updates\\n\" % (self.params['name']))\n # writes an if statement to determine whether to draw etc\n self.writeStartTestCode(buff)\n buff.writeIndented(\"%(name)s.enabled = True\\n\" % self.params)\n # to get out of the if statement\n buff.setIndentLevel(-1, relative=True)\n # writes an if statement to determine whether to draw etc\n self.writeStopTestCode(buff)\n buff.writeIndented(\"%(name)s.enabled = False\\n\" % self.params)\n # to get out of the if statement\n buff.setIndentLevel(-1, relative=True)\n # set parameters that need updating every frame\n # do any params need updating? 
(this method inherited from _base)\n if self.checkNeedToUpdate('set every frame'):\n code = (\"if %(name)s.status == STARTED: # only update if being drawn\\n\")\n buff.writeIndented(code % self.params)\n\n buff.setIndentLevel(+1, relative=True) # to enter the if block\n self.writeParamUpdates(buff, 'set every frame')\n buff.setIndentLevel(-1, relative=True) # to exit the if block\n\n def writeRoutineEndCode(self, buff):\n msg = \"%(name)s.enabled = False # just in case it was left enabled\\n\"\n buff.writeIndented(msg % self.params)\n", "path": "psychopy/app/builder/components/aperture.py"}], "after_files": [{"content": "# Part of the PsychoPy library\n# Copyright (C) 2015 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\nfrom os import path\nfrom ._base import BaseVisualComponent, getInitVals\n\n__author__ = 'Jeremy Gray, Jon Peirce'\n# March 2011; builder-component for Yuri Spitsyn's visual.Aperture class\n# July 2011: jwp added the code for it to be enabled only when needed\n\n# the absolute path to the folder containing this path\nthisFolder = path.abspath(path.dirname(__file__))\niconFile = path.join(thisFolder, 'aperture.png')\ntooltip = _translate('Aperture: restrict the drawing of stimuli to a given '\n 'region')\n\n\nclass ApertureComponent(BaseVisualComponent):\n \"\"\"An event class for using GL stencil to restrict the viewing area to a\n circle or square of a given size and position\"\"\"\n\n def __init__(self, exp, parentName, name='aperture', units='norm',\n size=1, pos=(0, 0),\n startType='time (s)', startVal=0.0,\n stopType='duration (s)', stopVal=1.0,\n startEstim='', durationEstim=''):\n # initialise main parameters\n super(ApertureComponent, self).__init__(\n exp, parentName, name=name, units=units,\n pos=pos, size=size,\n startType=startType, startVal=startVal,\n stopType=stopType, stopVal=stopVal,\n startEstim=startEstim, durationEstim=durationEstim)\n\n self.type = 'Aperture'\n self.url = \"http://www.psychopy.org/builder/components/aperture.html\"\n # params:\n # NB make some adjustments on the params defined by _visual component\n self.order = ['name', 'size', 'pos'] # make sure this is at top\n\n msg = _translate(\"How big is the aperture? 
(a single number for diameter)\")\n self.params['size'].hint = msg\n # only localize hints and labels\n self.params['size'].label = _translate(\"Size\")\n self.params['pos'].hint = _translate(\"Where is the aperture centred?\")\n\n # inherited from _visual component but not needed\n del self.params['ori']\n del self.params['color']\n del self.params['colorSpace']\n del self.params['opacity']\n\n def writeInitCode(self, buff):\n # do we need units code?\n if self.params['units'].val == 'from exp settings':\n unitsStr = \"\"\n else:\n unitsStr = \"units=%(units)s, \" % self.params\n\n # do writing of init\n inits = getInitVals(self.params)\n\n code = (\"%(name)s = visual.Aperture(\\n\"\n \" win=win, name='%(name)s',\\n\"\n \" \" + unitsStr + \"size=%(size)s, pos=%(pos)s)\\n\"\n \"%(name)s.disable() # disable until its actually used\\n\")\n buff.writeIndentedLines(code % inits)\n\n def writeFrameCode(self, buff):\n \"\"\"Only activate the aperture for the required frames\n \"\"\"\n\n buff.writeIndented(\"\\n\")\n buff.writeIndented(\"# *%s* updates\\n\" % (self.params['name']))\n # writes an if statement to determine whether to draw etc\n self.writeStartTestCode(buff)\n buff.writeIndented(\"%(name)s.enabled = True\\n\" % self.params)\n # to get out of the if statement\n buff.setIndentLevel(-1, relative=True)\n if self.params['stopVal'].val not in ['', None, -1, 'None']:\n # writes an if statement to determine whether to draw etc\n self.writeStopTestCode(buff)\n buff.writeIndented(\"%(name)s.enabled = False\\n\" % self.params)\n # to get out of the if statement\n buff.setIndentLevel(-1, relative=True)\n # set parameters that need updating every frame\n # do any params need updating? (this method inherited from _base)\n if self.checkNeedToUpdate('set every frame'):\n code = (\"if %(name)s.status == STARTED: # only update if being drawn\\n\")\n buff.writeIndented(code % self.params)\n\n buff.setIndentLevel(+1, relative=True) # to enter the if block\n self.writeParamUpdates(buff, 'set every frame')\n buff.setIndentLevel(-1, relative=True) # to exit the if block\n\n def writeRoutineEndCode(self, buff):\n msg = \"%(name)s.enabled = False # just in case it was left enabled\\n\"\n buff.writeIndented(msg % self.params)\n", "path": "psychopy/app/builder/components/aperture.py"}]} | 1,802 | 290 |
gh_patches_debug_64631 | rasdani/github-patches | git_diff | quantumlib__Cirq-2370 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Escape unicode characters when producing latex
Currently, the qcircuit-to-LaTeX code passes Unicode characters like ↺ ↻ straight through. LaTeX may not be happy about this. We should replace the Unicode characters with ones that LaTeX accepts, or perhaps throw an error?
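
One possible fix (a minimal sketch, assuming the pylatex-based PDF path is where the preamble is built) is to declare UTF-8 input handling so LaTeX can digest such characters:

```python
from pylatex import Document, Package

doc = Document(documentclass='article', document_options='dvips')
doc.packages.append(Package('qcircuit'))
# Assumption: adding inputenc with the utf8 option lets LaTeX accept unicode input
doc.preamble.append(Package('inputenc', options=['utf8']))
```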
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/contrib/qcircuit/qcircuit_pdf.py`
Content:
```
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # coverage: ignore
16
17 import errno
18 import os
19
20 from pylatex import Document, NoEscape, Package
21
22 from cirq import circuits
23 from cirq.contrib.qcircuit.qcircuit_diagram import (
24 circuit_to_latex_using_qcircuit)
25
26
27 def circuit_to_pdf_using_qcircuit_via_tex(circuit: circuits.Circuit,
28 filepath: str,
29 pdf_kwargs=None,
30 qcircuit_kwargs=None,
31 clean_ext=('dvi', 'ps'),
32 documentclass='article'):
33 """Compiles the QCircuit-based latex diagram of the given circuit.
34
35 Args:
36 circuit: The circuit to produce a pdf of.
37 filepath: Where to output the pdf.
38 pdf_kwargs: The arguments to pass to generate_pdf.
39 qcircuit_kwargs: The arguments to pass to
40 circuit_to_latex_using_qcircuit.
41 clean_ext: The file extensions to clean up after compilation. By
42 default, latexmk is used with the '-pdfps' flag, which produces
43 intermediary dvi and ps files.
44 documentclass: The documentclass of the latex file.
45 """
46 pdf_kwargs = {'compiler': 'latexmk', 'compiler_args': ['-pdfps'],
47 **({} if pdf_kwargs is None else pdf_kwargs)}
48 qcircuit_kwargs = {} if qcircuit_kwargs is None else qcircuit_kwargs
49 tex = circuit_to_latex_using_qcircuit(circuit, **qcircuit_kwargs)
50 doc = Document(documentclass=documentclass, document_options='dvips')
51 doc.packages.append(Package('amsmath'))
52 doc.packages.append(Package('qcircuit'))
53 doc.append(NoEscape(tex))
54 doc.generate_pdf(filepath, **pdf_kwargs)
55 for ext in clean_ext:
56 try:
57 os.remove(filepath + '.' + ext)
58 except (OSError, IOError) as e:
59 if e.errno != errno.ENOENT:
60 raise
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cirq/contrib/qcircuit/qcircuit_pdf.py b/cirq/contrib/qcircuit/qcircuit_pdf.py
--- a/cirq/contrib/qcircuit/qcircuit_pdf.py
+++ b/cirq/contrib/qcircuit/qcircuit_pdf.py
@@ -50,6 +50,7 @@
doc = Document(documentclass=documentclass, document_options='dvips')
doc.packages.append(Package('amsmath'))
doc.packages.append(Package('qcircuit'))
+ doc.preamble.append(Package('inputenc', options=['utf8']))
doc.append(NoEscape(tex))
doc.generate_pdf(filepath, **pdf_kwargs)
for ext in clean_ext:
| {"golden_diff": "diff --git a/cirq/contrib/qcircuit/qcircuit_pdf.py b/cirq/contrib/qcircuit/qcircuit_pdf.py\n--- a/cirq/contrib/qcircuit/qcircuit_pdf.py\n+++ b/cirq/contrib/qcircuit/qcircuit_pdf.py\n@@ -50,6 +50,7 @@\n doc = Document(documentclass=documentclass, document_options='dvips')\n doc.packages.append(Package('amsmath'))\n doc.packages.append(Package('qcircuit'))\n+ doc.preamble.append(Package('inputenc', options=['utf8']))\n doc.append(NoEscape(tex))\n doc.generate_pdf(filepath, **pdf_kwargs)\n for ext in clean_ext:\n", "issue": "Escape unicode characters when producing latex\nCurrently, the qcircuit to latex code passes through unicode characters like \u21ba \u21bb. LaTeX may not be happy about this. We should replace the unicode characters with some that LaTeX is happy with. Or perhaps throw an error?\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# coverage: ignore\n\nimport errno\nimport os\n\nfrom pylatex import Document, NoEscape, Package\n\nfrom cirq import circuits\nfrom cirq.contrib.qcircuit.qcircuit_diagram import (\n circuit_to_latex_using_qcircuit)\n\n\ndef circuit_to_pdf_using_qcircuit_via_tex(circuit: circuits.Circuit,\n filepath: str,\n pdf_kwargs=None,\n qcircuit_kwargs=None,\n clean_ext=('dvi', 'ps'),\n documentclass='article'):\n \"\"\"Compiles the QCircuit-based latex diagram of the given circuit.\n\n Args:\n circuit: The circuit to produce a pdf of.\n filepath: Where to output the pdf.\n pdf_kwargs: The arguments to pass to generate_pdf.\n qcircuit_kwargs: The arguments to pass to\n circuit_to_latex_using_qcircuit.\n clean_ext: The file extensions to clean up after compilation. By\n default, latexmk is used with the '-pdfps' flag, which produces\n intermediary dvi and ps files.\n documentclass: The documentclass of the latex file.\n \"\"\"\n pdf_kwargs = {'compiler': 'latexmk', 'compiler_args': ['-pdfps'],\n **({} if pdf_kwargs is None else pdf_kwargs)}\n qcircuit_kwargs = {} if qcircuit_kwargs is None else qcircuit_kwargs\n tex = circuit_to_latex_using_qcircuit(circuit, **qcircuit_kwargs)\n doc = Document(documentclass=documentclass, document_options='dvips')\n doc.packages.append(Package('amsmath'))\n doc.packages.append(Package('qcircuit'))\n doc.append(NoEscape(tex))\n doc.generate_pdf(filepath, **pdf_kwargs)\n for ext in clean_ext:\n try:\n os.remove(filepath + '.' 
+ ext)\n except (OSError, IOError) as e:\n if e.errno != errno.ENOENT:\n raise\n", "path": "cirq/contrib/qcircuit/qcircuit_pdf.py"}], "after_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# coverage: ignore\n\nimport errno\nimport os\n\nfrom pylatex import Document, NoEscape, Package\n\nfrom cirq import circuits\nfrom cirq.contrib.qcircuit.qcircuit_diagram import (\n circuit_to_latex_using_qcircuit)\n\n\ndef circuit_to_pdf_using_qcircuit_via_tex(circuit: circuits.Circuit,\n filepath: str,\n pdf_kwargs=None,\n qcircuit_kwargs=None,\n clean_ext=('dvi', 'ps'),\n documentclass='article'):\n \"\"\"Compiles the QCircuit-based latex diagram of the given circuit.\n\n Args:\n circuit: The circuit to produce a pdf of.\n filepath: Where to output the pdf.\n pdf_kwargs: The arguments to pass to generate_pdf.\n qcircuit_kwargs: The arguments to pass to\n circuit_to_latex_using_qcircuit.\n clean_ext: The file extensions to clean up after compilation. By\n default, latexmk is used with the '-pdfps' flag, which produces\n intermediary dvi and ps files.\n documentclass: The documentclass of the latex file.\n \"\"\"\n pdf_kwargs = {'compiler': 'latexmk', 'compiler_args': ['-pdfps'],\n **({} if pdf_kwargs is None else pdf_kwargs)}\n qcircuit_kwargs = {} if qcircuit_kwargs is None else qcircuit_kwargs\n tex = circuit_to_latex_using_qcircuit(circuit, **qcircuit_kwargs)\n doc = Document(documentclass=documentclass, document_options='dvips')\n doc.packages.append(Package('amsmath'))\n doc.packages.append(Package('qcircuit'))\n doc.preamble.append(Package('inputenc', options=['utf8']))\n doc.append(NoEscape(tex))\n doc.generate_pdf(filepath, **pdf_kwargs)\n for ext in clean_ext:\n try:\n os.remove(filepath + '.' + ext)\n except (OSError, IOError) as e:\n if e.errno != errno.ENOENT:\n raise\n", "path": "cirq/contrib/qcircuit/qcircuit_pdf.py"}]} | 967 | 152 |
gh_patches_debug_38801 | rasdani/github-patches | git_diff | TheAlgorithms__Python-9083 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Returning index instead of boolean when knuth_morris_pratt matches.
In the [knuth morris pratt algorithm](https://github.com/TheAlgorithms/Python/blob/master/strings/knuth_morris_pratt.py), I think it's better to return the "index" of the matched substring in case of success, and `-1` in case of failure, so that it is consistent with `str.find()` in Python. `True` and `False` aren't that helpful.
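
A minimal sketch of the desired contract, assuming the function is given the signature `knuth_morris_pratt(text, pattern) -> int` (name and argument order are my suggestion, not the current code):

```python
text = "abcxabcdabxabcdabcdabcy"
pattern = "abcdabcy"
assert knuth_morris_pratt(text, pattern) == text.find(pattern)  # index of the first match
assert knuth_morris_pratt(text, "zzz") == text.find("zzz")      # -1 when there is no match
```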
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strings/knuth_morris_pratt.py`
Content:
```
1 from __future__ import annotations
2
3
4 def kmp(pattern: str, text: str) -> bool:
5 """
6 The Knuth-Morris-Pratt Algorithm for finding a pattern within a piece of text
7 with complexity O(n + m)
8
9 1) Preprocess pattern to identify any suffixes that are identical to prefixes
10
11 This tells us where to continue from if we get a mismatch between a character
12 in our pattern and the text.
13
14 2) Step through the text one character at a time and compare it to a character in
15 the pattern updating our location within the pattern if necessary
16
17 """
18
19 # 1) Construct the failure array
20 failure = get_failure_array(pattern)
21
22 # 2) Step through text searching for pattern
23 i, j = 0, 0 # index into text, pattern
24 while i < len(text):
25 if pattern[j] == text[i]:
26 if j == (len(pattern) - 1):
27 return True
28 j += 1
29
30 # if this is a prefix in our pattern
31 # just go back far enough to continue
32 elif j > 0:
33 j = failure[j - 1]
34 continue
35 i += 1
36 return False
37
38
39 def get_failure_array(pattern: str) -> list[int]:
40 """
41 Calculates the new index we should go to if we fail a comparison
42 :param pattern:
43 :return:
44 """
45 failure = [0]
46 i = 0
47 j = 1
48 while j < len(pattern):
49 if pattern[i] == pattern[j]:
50 i += 1
51 elif i > 0:
52 i = failure[i - 1]
53 continue
54 j += 1
55 failure.append(i)
56 return failure
57
58
59 if __name__ == "__main__":
60 # Test 1)
61 pattern = "abc1abc12"
62 text1 = "alskfjaldsabc1abc1abc12k23adsfabcabc"
63 text2 = "alskfjaldsk23adsfabcabc"
64 assert kmp(pattern, text1) and not kmp(pattern, text2)
65
66 # Test 2)
67 pattern = "ABABX"
68 text = "ABABZABABYABABX"
69 assert kmp(pattern, text)
70
71 # Test 3)
72 pattern = "AAAB"
73 text = "ABAAAAAB"
74 assert kmp(pattern, text)
75
76 # Test 4)
77 pattern = "abcdabcy"
78 text = "abcxabcdabxabcdabcdabcy"
79 assert kmp(pattern, text)
80
81 # Test 5)
82 pattern = "aabaabaaa"
83 assert get_failure_array(pattern) == [0, 1, 0, 1, 2, 3, 4, 5, 2]
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strings/knuth_morris_pratt.py b/strings/knuth_morris_pratt.py
--- a/strings/knuth_morris_pratt.py
+++ b/strings/knuth_morris_pratt.py
@@ -1,7 +1,7 @@
from __future__ import annotations
-def kmp(pattern: str, text: str) -> bool:
+def knuth_morris_pratt(text: str, pattern: str) -> int:
"""
The Knuth-Morris-Pratt Algorithm for finding a pattern within a piece of text
with complexity O(n + m)
@@ -14,6 +14,12 @@
2) Step through the text one character at a time and compare it to a character in
the pattern updating our location within the pattern if necessary
+ >>> kmp = "knuth_morris_pratt"
+ >>> all(
+ ... knuth_morris_pratt(kmp, s) == kmp.find(s)
+ ... for s in ("kn", "h_m", "rr", "tt", "not there")
+ ... )
+ True
"""
# 1) Construct the failure array
@@ -24,7 +30,7 @@
while i < len(text):
if pattern[j] == text[i]:
if j == (len(pattern) - 1):
- return True
+ return i - j
j += 1
# if this is a prefix in our pattern
@@ -33,7 +39,7 @@
j = failure[j - 1]
continue
i += 1
- return False
+ return -1
def get_failure_array(pattern: str) -> list[int]:
@@ -57,27 +63,38 @@
if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
# Test 1)
pattern = "abc1abc12"
text1 = "alskfjaldsabc1abc1abc12k23adsfabcabc"
text2 = "alskfjaldsk23adsfabcabc"
- assert kmp(pattern, text1) and not kmp(pattern, text2)
+ assert knuth_morris_pratt(text1, pattern) and knuth_morris_pratt(text2, pattern)
# Test 2)
pattern = "ABABX"
text = "ABABZABABYABABX"
- assert kmp(pattern, text)
+ assert knuth_morris_pratt(text, pattern)
# Test 3)
pattern = "AAAB"
text = "ABAAAAAB"
- assert kmp(pattern, text)
+ assert knuth_morris_pratt(text, pattern)
# Test 4)
pattern = "abcdabcy"
text = "abcxabcdabxabcdabcdabcy"
- assert kmp(pattern, text)
+ assert knuth_morris_pratt(text, pattern)
+
+ # Test 5) -> Doctests
+ kmp = "knuth_morris_pratt"
+ assert all(
+ knuth_morris_pratt(kmp, s) == kmp.find(s)
+ for s in ("kn", "h_m", "rr", "tt", "not there")
+ )
- # Test 5)
+ # Test 6)
pattern = "aabaabaaa"
assert get_failure_array(pattern) == [0, 1, 0, 1, 2, 3, 4, 5, 2]
| {"golden_diff": "diff --git a/strings/knuth_morris_pratt.py b/strings/knuth_morris_pratt.py\n--- a/strings/knuth_morris_pratt.py\n+++ b/strings/knuth_morris_pratt.py\n@@ -1,7 +1,7 @@\n from __future__ import annotations\n \n \n-def kmp(pattern: str, text: str) -> bool:\n+def knuth_morris_pratt(text: str, pattern: str) -> int:\n \"\"\"\n The Knuth-Morris-Pratt Algorithm for finding a pattern within a piece of text\n with complexity O(n + m)\n@@ -14,6 +14,12 @@\n 2) Step through the text one character at a time and compare it to a character in\n the pattern updating our location within the pattern if necessary\n \n+ >>> kmp = \"knuth_morris_pratt\"\n+ >>> all(\n+ ... knuth_morris_pratt(kmp, s) == kmp.find(s)\n+ ... for s in (\"kn\", \"h_m\", \"rr\", \"tt\", \"not there\")\n+ ... )\n+ True\n \"\"\"\n \n # 1) Construct the failure array\n@@ -24,7 +30,7 @@\n while i < len(text):\n if pattern[j] == text[i]:\n if j == (len(pattern) - 1):\n- return True\n+ return i - j\n j += 1\n \n # if this is a prefix in our pattern\n@@ -33,7 +39,7 @@\n j = failure[j - 1]\n continue\n i += 1\n- return False\n+ return -1\n \n \n def get_failure_array(pattern: str) -> list[int]:\n@@ -57,27 +63,38 @@\n \n \n if __name__ == \"__main__\":\n+ import doctest\n+\n+ doctest.testmod()\n+\n # Test 1)\n pattern = \"abc1abc12\"\n text1 = \"alskfjaldsabc1abc1abc12k23adsfabcabc\"\n text2 = \"alskfjaldsk23adsfabcabc\"\n- assert kmp(pattern, text1) and not kmp(pattern, text2)\n+ assert knuth_morris_pratt(text1, pattern) and knuth_morris_pratt(text2, pattern)\n \n # Test 2)\n pattern = \"ABABX\"\n text = \"ABABZABABYABABX\"\n- assert kmp(pattern, text)\n+ assert knuth_morris_pratt(text, pattern)\n \n # Test 3)\n pattern = \"AAAB\"\n text = \"ABAAAAAB\"\n- assert kmp(pattern, text)\n+ assert knuth_morris_pratt(text, pattern)\n \n # Test 4)\n pattern = \"abcdabcy\"\n text = \"abcxabcdabxabcdabcdabcy\"\n- assert kmp(pattern, text)\n+ assert knuth_morris_pratt(text, pattern)\n+\n+ # Test 5) -> Doctests\n+ kmp = \"knuth_morris_pratt\"\n+ assert all(\n+ knuth_morris_pratt(kmp, s) == kmp.find(s)\n+ for s in (\"kn\", \"h_m\", \"rr\", \"tt\", \"not there\")\n+ )\n \n- # Test 5)\n+ # Test 6)\n pattern = \"aabaabaaa\"\n assert get_failure_array(pattern) == [0, 1, 0, 1, 2, 3, 4, 5, 2]\n", "issue": "Returning index instead of boolean when knuth_morris_pratt matches.\nIn [knuth morris pratt algorithm](https://github.com/TheAlgorithms/Python/blob/master/strings/knuth_morris_pratt.py), I think it's better to return the \"index\" of the matched substring in case of success, and `-1` in case of failure, so that it would be consistent with the `str.find()` in Python. 
`True` and `False` aren't that helpful.\n", "before_files": [{"content": "from __future__ import annotations\n\n\ndef kmp(pattern: str, text: str) -> bool:\n \"\"\"\n The Knuth-Morris-Pratt Algorithm for finding a pattern within a piece of text\n with complexity O(n + m)\n\n 1) Preprocess pattern to identify any suffixes that are identical to prefixes\n\n This tells us where to continue from if we get a mismatch between a character\n in our pattern and the text.\n\n 2) Step through the text one character at a time and compare it to a character in\n the pattern updating our location within the pattern if necessary\n\n \"\"\"\n\n # 1) Construct the failure array\n failure = get_failure_array(pattern)\n\n # 2) Step through text searching for pattern\n i, j = 0, 0 # index into text, pattern\n while i < len(text):\n if pattern[j] == text[i]:\n if j == (len(pattern) - 1):\n return True\n j += 1\n\n # if this is a prefix in our pattern\n # just go back far enough to continue\n elif j > 0:\n j = failure[j - 1]\n continue\n i += 1\n return False\n\n\ndef get_failure_array(pattern: str) -> list[int]:\n \"\"\"\n Calculates the new index we should go to if we fail a comparison\n :param pattern:\n :return:\n \"\"\"\n failure = [0]\n i = 0\n j = 1\n while j < len(pattern):\n if pattern[i] == pattern[j]:\n i += 1\n elif i > 0:\n i = failure[i - 1]\n continue\n j += 1\n failure.append(i)\n return failure\n\n\nif __name__ == \"__main__\":\n # Test 1)\n pattern = \"abc1abc12\"\n text1 = \"alskfjaldsabc1abc1abc12k23adsfabcabc\"\n text2 = \"alskfjaldsk23adsfabcabc\"\n assert kmp(pattern, text1) and not kmp(pattern, text2)\n\n # Test 2)\n pattern = \"ABABX\"\n text = \"ABABZABABYABABX\"\n assert kmp(pattern, text)\n\n # Test 3)\n pattern = \"AAAB\"\n text = \"ABAAAAAB\"\n assert kmp(pattern, text)\n\n # Test 4)\n pattern = \"abcdabcy\"\n text = \"abcxabcdabxabcdabcdabcy\"\n assert kmp(pattern, text)\n\n # Test 5)\n pattern = \"aabaabaaa\"\n assert get_failure_array(pattern) == [0, 1, 0, 1, 2, 3, 4, 5, 2]\n", "path": "strings/knuth_morris_pratt.py"}], "after_files": [{"content": "from __future__ import annotations\n\n\ndef knuth_morris_pratt(text: str, pattern: str) -> int:\n \"\"\"\n The Knuth-Morris-Pratt Algorithm for finding a pattern within a piece of text\n with complexity O(n + m)\n\n 1) Preprocess pattern to identify any suffixes that are identical to prefixes\n\n This tells us where to continue from if we get a mismatch between a character\n in our pattern and the text.\n\n 2) Step through the text one character at a time and compare it to a character in\n the pattern updating our location within the pattern if necessary\n\n >>> kmp = \"knuth_morris_pratt\"\n >>> all(\n ... knuth_morris_pratt(kmp, s) == kmp.find(s)\n ... for s in (\"kn\", \"h_m\", \"rr\", \"tt\", \"not there\")\n ... 
)\n True\n \"\"\"\n\n # 1) Construct the failure array\n failure = get_failure_array(pattern)\n\n # 2) Step through text searching for pattern\n i, j = 0, 0 # index into text, pattern\n while i < len(text):\n if pattern[j] == text[i]:\n if j == (len(pattern) - 1):\n return i - j\n j += 1\n\n # if this is a prefix in our pattern\n # just go back far enough to continue\n elif j > 0:\n j = failure[j - 1]\n continue\n i += 1\n return -1\n\n\ndef get_failure_array(pattern: str) -> list[int]:\n \"\"\"\n Calculates the new index we should go to if we fail a comparison\n :param pattern:\n :return:\n \"\"\"\n failure = [0]\n i = 0\n j = 1\n while j < len(pattern):\n if pattern[i] == pattern[j]:\n i += 1\n elif i > 0:\n i = failure[i - 1]\n continue\n j += 1\n failure.append(i)\n return failure\n\n\nif __name__ == \"__main__\":\n import doctest\n\n doctest.testmod()\n\n # Test 1)\n pattern = \"abc1abc12\"\n text1 = \"alskfjaldsabc1abc1abc12k23adsfabcabc\"\n text2 = \"alskfjaldsk23adsfabcabc\"\n assert knuth_morris_pratt(text1, pattern) and knuth_morris_pratt(text2, pattern)\n\n # Test 2)\n pattern = \"ABABX\"\n text = \"ABABZABABYABABX\"\n assert knuth_morris_pratt(text, pattern)\n\n # Test 3)\n pattern = \"AAAB\"\n text = \"ABAAAAAB\"\n assert knuth_morris_pratt(text, pattern)\n\n # Test 4)\n pattern = \"abcdabcy\"\n text = \"abcxabcdabxabcdabcdabcy\"\n assert knuth_morris_pratt(text, pattern)\n\n # Test 5) -> Doctests\n kmp = \"knuth_morris_pratt\"\n assert all(\n knuth_morris_pratt(kmp, s) == kmp.find(s)\n for s in (\"kn\", \"h_m\", \"rr\", \"tt\", \"not there\")\n )\n\n # Test 6)\n pattern = \"aabaabaaa\"\n assert get_failure_array(pattern) == [0, 1, 0, 1, 2, 3, 4, 5, 2]\n", "path": "strings/knuth_morris_pratt.py"}]} | 1,155 | 808 |
gh_patches_debug_19094 | rasdani/github-patches | git_diff | bridgecrewio__checkov-4875 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AWS_300 S3 MPU lifecycle alert
**Issue**
Alert ID - CKV_AWS_300
Checkov sends a false alert for the MPU rule in `aws_s3_bucket_lifecycle_configuration` despite having the correct rule configuration. In my scenario I have an empty filter in my Terraform code. But per the logic in the Checkov code base (checkov/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py), the 'if' condition under the scan_resource_conf method doesn't take the 'filter' condition into account. This in turn throws a false alert.
**Examples**
```hcl
resource "aws_s3_bucket_lifecycle_configuration" "lifecycle_rule" {
bucket = aws_s3_bucket.s3.id
rule {
abort_incomplete_multipart_upload {
days_after_initiation = 3
}
id = "MPU 3days"
filter {
}
status = "Enabled"
}
}
```
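
A self-contained sketch of filter-aware pass logic (the dict shapes mimic how Checkov parses HCL into lists; an empty `filter {}` becoming a falsy `[{}]` entry is my assumption, not the actual source):

```python
def rule_passes(rules):
    """True if an enabled rule aborts incomplete MPUs and applies to all objects."""
    for rule in rules:
        if rule.get("abort_incomplete_multipart_upload") and rule.get("status") == ["Enabled"]:
            rule_filter = rule.get("filter")
            if rule_filter and isinstance(rule_filter, list) and rule_filter[0]:
                continue  # non-empty filter: rule is scoped to a subset of objects
            return True
    return False

# The configuration above (empty filter, status Enabled) should pass:
assert rule_passes([{
    "abort_incomplete_multipart_upload": [{"days_after_initiation": [3]}],
    "status": ["Enabled"],
    "filter": [{}],
}])
```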
**Version (please complete the following information):**
- Checkov 2.3.155 (Latest)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import Any
4
5 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
6 from checkov.common.models.enums import CheckCategories, CheckResult
7
8
9 class S3AbortIncompleteUploads(BaseResourceCheck):
10 def __init__(self) -> None:
11 """
12 If you don't set this value in a lifecycle configuration you'll end up paying for s3
13 resources you never could use
14 """
15 name = "Ensure S3 lifecycle configuration sets period for aborting failed uploads"
16 id = "CKV_AWS_300"
17 supported_resources = ("aws_s3_bucket_lifecycle_configuration",)
18 categories = (CheckCategories.GENERAL_SECURITY,)
19 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
20
21 def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:
22 self.evaluated_keys = ["rule"]
23 rules = conf.get("rule")
24 if rules and isinstance(rules, list):
25 for idx_rule, rule in enumerate(rules):
26 if (
27 rule.get("abort_incomplete_multipart_upload")
28 and rule.get("status") == ["Enabled"]
29 and not rule.get("filter")
30 ):
31 self.evaluated_keys = [f"rule/[{idx_rule}]/abort_incomplete_multipart_upload"]
32 return CheckResult.PASSED
33
34 return CheckResult.FAILED
35
36
37 check = S3AbortIncompleteUploads()
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py b/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py
--- a/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py
+++ b/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py
@@ -23,12 +23,13 @@
rules = conf.get("rule")
if rules and isinstance(rules, list):
for idx_rule, rule in enumerate(rules):
- if (
- rule.get("abort_incomplete_multipart_upload")
- and rule.get("status") == ["Enabled"]
- and not rule.get("filter")
- ):
+ if rule.get("abort_incomplete_multipart_upload") and rule.get("status") == ["Enabled"]:
self.evaluated_keys = [f"rule/[{idx_rule}]/abort_incomplete_multipart_upload"]
+ filter = rule.get("filter")
+ if filter and isinstance(filter, list) and filter[0]:
+ # it is possible to set an empty filter, which applies then to all objects
+ continue
+
return CheckResult.PASSED
return CheckResult.FAILED
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py b/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py\n--- a/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py\n+++ b/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py\n@@ -23,12 +23,13 @@\n rules = conf.get(\"rule\")\n if rules and isinstance(rules, list):\n for idx_rule, rule in enumerate(rules):\n- if (\n- rule.get(\"abort_incomplete_multipart_upload\")\n- and rule.get(\"status\") == [\"Enabled\"]\n- and not rule.get(\"filter\")\n- ):\n+ if rule.get(\"abort_incomplete_multipart_upload\") and rule.get(\"status\") == [\"Enabled\"]:\n self.evaluated_keys = [f\"rule/[{idx_rule}]/abort_incomplete_multipart_upload\"]\n+ filter = rule.get(\"filter\")\n+ if filter and isinstance(filter, list) and filter[0]:\n+ # it is possible to set an empty filter, which applies then to all objects\n+ continue\n+\n return CheckResult.PASSED\n \n return CheckResult.FAILED\n", "issue": "CKV_AWS_300 S3 MPU lifecycle alert\n**Issue**\r\nAlert ID - CKV_AWS_300 \r\nCheckov sends false alert for MPU \"aws_s3_bucket_lifecycle_configuration\" despite having the correct rule configuration. In my scenario I'm having an empty filter in my terraform code. But as per logic in checkov code base, (checkov/checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py) under scan_resource_conf method, the 'if' condition doesn't take the 'filter' condition into account. This in-turn throws a false alert.\r\n\r\n**Examples**\r\n```hcl\r\nresource \"aws_s3_bucket_lifecycle_configuration\" \"lifecycle_rule\" {\r\n bucket = aws_s3_bucket.s3.id\r\n rule {\r\n abort_incomplete_multipart_upload {\r\n days_after_initiation = 3\r\n }\r\n id = \"MPU 3days\"\r\n filter {\r\n }\r\n status = \"Enabled\"\r\n }\r\n}\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov 2.3.155 (Latest)\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.models.enums import CheckCategories, CheckResult\n\n\nclass S3AbortIncompleteUploads(BaseResourceCheck):\n def __init__(self) -> None:\n \"\"\"\n If you don't set this value in a lifecycle configuration you'll end up paying for s3\n resources you never could use\n \"\"\"\n name = \"Ensure S3 lifecycle configuration sets period for aborting failed uploads\"\n id = \"CKV_AWS_300\"\n supported_resources = (\"aws_s3_bucket_lifecycle_configuration\",)\n categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n self.evaluated_keys = [\"rule\"]\n rules = conf.get(\"rule\")\n if rules and isinstance(rules, list):\n for idx_rule, rule in enumerate(rules):\n if (\n rule.get(\"abort_incomplete_multipart_upload\")\n and rule.get(\"status\") == [\"Enabled\"]\n and not rule.get(\"filter\")\n ):\n self.evaluated_keys = [f\"rule/[{idx_rule}]/abort_incomplete_multipart_upload\"]\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n\ncheck = S3AbortIncompleteUploads()\n", "path": "checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.terraform.checks.resource.base_resource_check import 
BaseResourceCheck\nfrom checkov.common.models.enums import CheckCategories, CheckResult\n\n\nclass S3AbortIncompleteUploads(BaseResourceCheck):\n def __init__(self) -> None:\n \"\"\"\n If you don't set this value in a lifecycle configuration you'll end up paying for s3\n resources you never could use\n \"\"\"\n name = \"Ensure S3 lifecycle configuration sets period for aborting failed uploads\"\n id = \"CKV_AWS_300\"\n supported_resources = (\"aws_s3_bucket_lifecycle_configuration\",)\n categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n self.evaluated_keys = [\"rule\"]\n rules = conf.get(\"rule\")\n if rules and isinstance(rules, list):\n for idx_rule, rule in enumerate(rules):\n if rule.get(\"abort_incomplete_multipart_upload\") and rule.get(\"status\") == [\"Enabled\"]:\n self.evaluated_keys = [f\"rule/[{idx_rule}]/abort_incomplete_multipart_upload\"]\n filter = rule.get(\"filter\")\n if filter and isinstance(filter, list) and filter[0]:\n # it is possible to set an empty filter, which applies then to all objects\n continue\n\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n\ncheck = S3AbortIncompleteUploads()\n", "path": "checkov/terraform/checks/resource/aws/S3AbortIncompleteUploads.py"}]} | 886 | 263 |
gh_patches_debug_53987 | rasdani/github-patches | git_diff | getredash__redash-784 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'datetime.timedelta' object has no attribute 'isoformat'
On the latest 0.9.2-rc:
```
[2016-01-21 14:30:36,838: ERROR/MainProcess] Task redash.tasks.execute_query[766d3f9f-68a6-4a64-8cd9-b7e4e18bf2af] raised unexpected: AttributeError("'datetime.timedelta' object has no attribute 'isoformat'",)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/opt/redash/redash/tasks.py", line 31, in __call__
return super(BaseTask, self).__call__(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 437, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/redash/redash/tasks.py", line 286, in execute_query
data, error = query_runner.run_query(annotated_query)
File "/opt/redash/redash/query_runner/pg.py", line 132, in run_query
json_data = json.dumps(data, cls=JSONEncoder)
File "/usr/lib/python2.7/json/__init__.py", line 250, in dumps
sort_keys=sort_keys, **kw).encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/opt/redash/redash/utils.py", line 57, in default
return o.isoformat()
AttributeError: 'datetime.timedelta' object has no attribute 'isoformat'
```
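
A minimal, self-contained illustration of the failure and one possible workaround: serialize `timedelta` via `str()` instead of `isoformat()` (the class name here is mine, not redash's):

```python
import datetime
import json

class TimedeltaAwareEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, (datetime.date, datetime.time)):
            return o.isoformat()
        if isinstance(o, datetime.timedelta):
            return str(o)  # e.g. "0:01:30"; timedelta has no isoformat()
        return super(TimedeltaAwareEncoder, self).default(o)

print(json.dumps({"elapsed": datetime.timedelta(seconds=90)}, cls=TimedeltaAwareEncoder))
# -> {"elapsed": "0:01:30"}
```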
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/utils.py`
Content:
```
1 import cStringIO
2 import csv
3 import codecs
4 import decimal
5 import datetime
6 import json
7 import random
8 import re
9 import hashlib
10 import pytz
11
12 COMMENTS_REGEX = re.compile("/\*.*?\*/")
13
14
15 def utcnow():
16 """Return datetime.now value with timezone specified.
17
18 Without the timezone data, when the timestamp stored to the database it gets the current timezone of the server,
19 which leads to errors in calculations.
20 """
21 return datetime.datetime.now(pytz.utc)
22
23
24 def slugify(s):
25 return re.sub('[^a-z0-9_\-]+', '-', s.lower())
26
27
28 def gen_query_hash(sql):
29 """Returns hash of the given query after stripping all comments, line breaks and multiple
30 spaces, and lower casing all text.
31
32 TODO: possible issue - the following queries will get the same id:
33 1. SELECT 1 FROM table WHERE column='Value';
34 2. SELECT 1 FROM table where column='value';
35 """
36 sql = COMMENTS_REGEX.sub("", sql)
37 sql = "".join(sql.split()).lower()
38 return hashlib.md5(sql.encode('utf-8')).hexdigest()
39
40
41 def generate_token(length):
42 chars = ('abcdefghijklmnopqrstuvwxyz'
43 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
44 '0123456789')
45
46 rand = random.SystemRandom()
47 return ''.join(rand.choice(chars) for x in range(length))
48
49 class JSONEncoder(json.JSONEncoder):
50 """Custom JSON encoding class, to handle Decimal and datetime.date instances.
51 """
52 def default(self, o):
53 if isinstance(o, decimal.Decimal):
54 return float(o)
55
56 if isinstance(o, (datetime.date, datetime.time, datetime.timedelta)):
57 return o.isoformat()
58
59 super(JSONEncoder, self).default(o)
60
61
62 def json_dumps(data):
63 return json.dumps(data, cls=JSONEncoder)
64
65
66 def build_url(request, host, path):
67 parts = request.host.split(':')
68 if len(parts) > 1:
69 port = parts[1]
70 if (port, request.scheme) not in (('80', 'http'), ('443', 'https')):
71 host = '{}:{}'.format(host, port)
72
73 return "{}://{}{}".format(request.scheme, host, path)
74
75
76 class UnicodeWriter:
77 """
78 A CSV writer which will write rows to CSV file "f",
79 which is encoded in the given encoding.
80 """
81 def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
82 # Redirect output to a queue
83 self.queue = cStringIO.StringIO()
84 self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
85 self.stream = f
86 self.encoder = codecs.getincrementalencoder(encoding)()
87
88 def _encode_utf8(self, val):
89 if isinstance(val, (unicode, str)):
90 return val.encode('utf-8')
91
92 return val
93
94 def writerow(self, row):
95 self.writer.writerow([self._encode_utf8(s) for s in row])
96 # Fetch UTF-8 output from the queue ...
97 data = self.queue.getvalue()
98 data = data.decode("utf-8")
99 # ... and reencode it into the target encoding
100 data = self.encoder.encode(data)
101 # write to the target stream
102 self.stream.write(data)
103 # empty queue
104 self.queue.truncate(0)
105
106 def writerows(self, rows):
107 for row in rows:
108 self.writerow(row)
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redash/utils.py b/redash/utils.py
--- a/redash/utils.py
+++ b/redash/utils.py
@@ -53,9 +53,12 @@
if isinstance(o, decimal.Decimal):
return float(o)
- if isinstance(o, (datetime.date, datetime.time, datetime.timedelta)):
+ if isinstance(o, (datetime.date, datetime.time)):
return o.isoformat()
-
+
+ if isinstance(o, datetime.timedelta):
+ return str(o)
+
super(JSONEncoder, self).default(o)
| {"golden_diff": "diff --git a/redash/utils.py b/redash/utils.py\n--- a/redash/utils.py\n+++ b/redash/utils.py\n@@ -53,9 +53,12 @@\n if isinstance(o, decimal.Decimal):\n return float(o)\n \n- if isinstance(o, (datetime.date, datetime.time, datetime.timedelta)):\n+ if isinstance(o, (datetime.date, datetime.time)):\n return o.isoformat()\n- \n+\n+ if isinstance(o, datetime.timedelta):\n+ return str(o)\n+\n super(JSONEncoder, self).default(o)\n", "issue": "AttributeError: 'datetime.timedelta' object has no attribute 'isoformat'\nOn the latest 0.9.2-rc:\n\n```\n[2016-01-21 14:30:36,838: ERROR/MainProcess] Task redash.tasks.execute_query[766d3f9f-68a6-4a64-8cd9-b7e4e18bf2af] raised unexpected: AttributeError(\"'datetime.timedelta' object has no attribute 'isoformat'\",)\nTraceback (most recent call last):\n File \"/usr/local/lib/python2.7/dist-packages/celery/app/trace.py\", line 240, in trace_task\n R = retval = fun(*args, **kwargs)\n File \"/opt/redash/redash/tasks.py\", line 31, in __call__\n return super(BaseTask, self).__call__(*args, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/celery/app/trace.py\", line 437, in __protected_call__\n return self.run(*args, **kwargs)\n File \"/opt/redash/redash/tasks.py\", line 286, in execute_query\n data, error = query_runner.run_query(annotated_query)\n File \"/opt/redash/redash/query_runner/pg.py\", line 132, in run_query\n json_data = json.dumps(data, cls=JSONEncoder)\n File \"/usr/lib/python2.7/json/__init__.py\", line 250, in dumps\n sort_keys=sort_keys, **kw).encode(obj)\n File \"/usr/lib/python2.7/json/encoder.py\", line 207, in encode\n chunks = self.iterencode(o, _one_shot=True)\n File \"/usr/lib/python2.7/json/encoder.py\", line 270, in iterencode\n return _iterencode(o, 0)\n File \"/opt/redash/redash/utils.py\", line 57, in default\n return o.isoformat()\nAttributeError: 'datetime.timedelta' object has no attribute 'isoformat'\n```\n\n", "before_files": [{"content": "import cStringIO\nimport csv\nimport codecs\nimport decimal\nimport datetime\nimport json\nimport random\nimport re\nimport hashlib\nimport pytz\n\nCOMMENTS_REGEX = re.compile(\"/\\*.*?\\*/\")\n\n\ndef utcnow():\n \"\"\"Return datetime.now value with timezone specified.\n\n Without the timezone data, when the timestamp stored to the database it gets the current timezone of the server,\n which leads to errors in calculations.\n \"\"\"\n return datetime.datetime.now(pytz.utc)\n\n\ndef slugify(s):\n return re.sub('[^a-z0-9_\\-]+', '-', s.lower())\n\n\ndef gen_query_hash(sql):\n \"\"\"Returns hash of the given query after stripping all comments, line breaks and multiple\n spaces, and lower casing all text.\n\n TODO: possible issue - the following queries will get the same id:\n 1. SELECT 1 FROM table WHERE column='Value';\n 2. 
SELECT 1 FROM table where column='value';\n \"\"\"\n sql = COMMENTS_REGEX.sub(\"\", sql)\n sql = \"\".join(sql.split()).lower()\n return hashlib.md5(sql.encode('utf-8')).hexdigest()\n\n\ndef generate_token(length):\n chars = ('abcdefghijklmnopqrstuvwxyz'\n 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'\n '0123456789')\n\n rand = random.SystemRandom()\n return ''.join(rand.choice(chars) for x in range(length))\n\nclass JSONEncoder(json.JSONEncoder):\n \"\"\"Custom JSON encoding class, to handle Decimal and datetime.date instances.\n \"\"\"\n def default(self, o):\n if isinstance(o, decimal.Decimal):\n return float(o)\n\n if isinstance(o, (datetime.date, datetime.time, datetime.timedelta)):\n return o.isoformat()\n \n super(JSONEncoder, self).default(o)\n\n\ndef json_dumps(data):\n return json.dumps(data, cls=JSONEncoder)\n\n\ndef build_url(request, host, path):\n parts = request.host.split(':')\n if len(parts) > 1:\n port = parts[1]\n if (port, request.scheme) not in (('80', 'http'), ('443', 'https')):\n host = '{}:{}'.format(host, port)\n\n return \"{}://{}{}\".format(request.scheme, host, path)\n\n\nclass UnicodeWriter:\n \"\"\"\n A CSV writer which will write rows to CSV file \"f\",\n which is encoded in the given encoding.\n \"\"\"\n def __init__(self, f, dialect=csv.excel, encoding=\"utf-8\", **kwds):\n # Redirect output to a queue\n self.queue = cStringIO.StringIO()\n self.writer = csv.writer(self.queue, dialect=dialect, **kwds)\n self.stream = f\n self.encoder = codecs.getincrementalencoder(encoding)()\n\n def _encode_utf8(self, val):\n if isinstance(val, (unicode, str)):\n return val.encode('utf-8')\n\n return val\n\n def writerow(self, row):\n self.writer.writerow([self._encode_utf8(s) for s in row])\n # Fetch UTF-8 output from the queue ...\n data = self.queue.getvalue()\n data = data.decode(\"utf-8\")\n # ... and reencode it into the target encoding\n data = self.encoder.encode(data)\n # write to the target stream\n self.stream.write(data)\n # empty queue\n self.queue.truncate(0)\n\n def writerows(self, rows):\n for row in rows:\n self.writerow(row)\n", "path": "redash/utils.py"}], "after_files": [{"content": "import cStringIO\nimport csv\nimport codecs\nimport decimal\nimport datetime\nimport json\nimport random\nimport re\nimport hashlib\nimport pytz\n\nCOMMENTS_REGEX = re.compile(\"/\\*.*?\\*/\")\n\n\ndef utcnow():\n \"\"\"Return datetime.now value with timezone specified.\n\n Without the timezone data, when the timestamp stored to the database it gets the current timezone of the server,\n which leads to errors in calculations.\n \"\"\"\n return datetime.datetime.now(pytz.utc)\n\n\ndef slugify(s):\n return re.sub('[^a-z0-9_\\-]+', '-', s.lower())\n\n\ndef gen_query_hash(sql):\n \"\"\"Returns hash of the given query after stripping all comments, line breaks and multiple\n spaces, and lower casing all text.\n\n TODO: possible issue - the following queries will get the same id:\n 1. SELECT 1 FROM table WHERE column='Value';\n 2. 
SELECT 1 FROM table where column='value';\n \"\"\"\n sql = COMMENTS_REGEX.sub(\"\", sql)\n sql = \"\".join(sql.split()).lower()\n return hashlib.md5(sql.encode('utf-8')).hexdigest()\n\n\ndef generate_token(length):\n chars = ('abcdefghijklmnopqrstuvwxyz'\n 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'\n '0123456789')\n\n rand = random.SystemRandom()\n return ''.join(rand.choice(chars) for x in range(length))\n\nclass JSONEncoder(json.JSONEncoder):\n \"\"\"Custom JSON encoding class, to handle Decimal and datetime.date instances.\n \"\"\"\n def default(self, o):\n if isinstance(o, decimal.Decimal):\n return float(o)\n\n if isinstance(o, (datetime.date, datetime.time)):\n return o.isoformat()\n\n if isinstance(o, datetime.timedelta):\n return str(o)\n\n super(JSONEncoder, self).default(o)\n\n\ndef json_dumps(data):\n return json.dumps(data, cls=JSONEncoder)\n\n\ndef build_url(request, host, path):\n parts = request.host.split(':')\n if len(parts) > 1:\n port = parts[1]\n if (port, request.scheme) not in (('80', 'http'), ('443', 'https')):\n host = '{}:{}'.format(host, port)\n\n return \"{}://{}{}\".format(request.scheme, host, path)\n\n\nclass UnicodeWriter:\n \"\"\"\n A CSV writer which will write rows to CSV file \"f\",\n which is encoded in the given encoding.\n \"\"\"\n def __init__(self, f, dialect=csv.excel, encoding=\"utf-8\", **kwds):\n # Redirect output to a queue\n self.queue = cStringIO.StringIO()\n self.writer = csv.writer(self.queue, dialect=dialect, **kwds)\n self.stream = f\n self.encoder = codecs.getincrementalencoder(encoding)()\n\n def _encode_utf8(self, val):\n if isinstance(val, (unicode, str)):\n return val.encode('utf-8')\n\n return val\n\n def writerow(self, row):\n self.writer.writerow([self._encode_utf8(s) for s in row])\n # Fetch UTF-8 output from the queue ...\n data = self.queue.getvalue()\n data = data.decode(\"utf-8\")\n # ... and reencode it into the target encoding\n data = self.encoder.encode(data)\n # write to the target stream\n self.stream.write(data)\n # empty queue\n self.queue.truncate(0)\n\n def writerows(self, rows):\n for row in rows:\n self.writerow(row)\n", "path": "redash/utils.py"}]} | 1,685 | 119 |
gh_patches_debug_23933 | rasdani/github-patches | git_diff | ietf-tools__datatracker-5109 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
secr.middleware.dbquery.QueryCountDebugMiddleware is unused.
Is this still needed in the source tree? Can it be removed?
cc @rpcross
--- END ISSUE ---
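For context, `QueryCountDebugMiddleware` is written in the old pre-Django-1.10 style (a plain object exposing only `process_response`), so if it were registered at all it would appear under a setting such as `MIDDLEWARE_CLASSES`. A hypothetical registration is sketched below; the settings path is illustrative, and finding no such entry anywhere in the tree supports deleting the module.
```python
# Hypothetical settings.py fragment. If this middleware were in use,
# a registration like the following would exist somewhere in the project:
MIDDLEWARE_CLASSES = [
    # ...
    "ietf.secr.middleware.dbquery.QueryCountDebugMiddleware",
]
# A search such as `git grep QueryCountDebugMiddleware` returning only the
# module itself confirms nothing registers it.
```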
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ietf/secr/middleware/dbquery.py`
Content:
```
1 # Copyright The IETF Trust 2014-2019, All Rights Reserved
2 #import logging
3
4 from django.db import connection
5 from django.utils.log import getLogger # type: ignore
6
7
8 logger = getLogger(__name__)
9 #logger.setLevel(logging.DEBUG)
10 #logger.addHandler(logging.FileHandler(settings.SECR_LOG_FILE))
11
12 class QueryCountDebugMiddleware(object):
13 """
14 This middleware will log the number of queries run
15 and the total time taken for each request (with a
16 status code of 200). It does not currently support
17 multi-db setups.
18 """
19 def process_response(self, request, response):
20 #assert False, request.path
21 logger.debug('called middleware. %s:%s' % (request.path,len(connection.queries)))
22 if response.status_code == 200:
23 total_time = 0
24 #for query in connection.queries:
25 # query_time = query.get('time')
26 # if query_time is None:
27 # django-debug-toolbar monkeypatches the connection
28 # cursor wrapper and adds extra information in each
29 # item in connection.queries. The query time is stored
30 # under the key "duration" rather than "time" and is
31 # in milliseconds, not seconds.
32 # query_time = query.get('duration', 0) / 1000
33 # total_time += float(query_time)
34 logger.debug('%s: %s queries run, total %s seconds' % (request.path,len(connection.queries), total_time))
35 return response
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ietf/secr/middleware/dbquery.py b/ietf/secr/middleware/dbquery.py
deleted file mode 100644
--- a/ietf/secr/middleware/dbquery.py
+++ /dev/null
@@ -1,35 +0,0 @@
-# Copyright The IETF Trust 2014-2019, All Rights Reserved
-#import logging
-
-from django.db import connection
-from django.utils.log import getLogger # type: ignore
-
-
-logger = getLogger(__name__)
-#logger.setLevel(logging.DEBUG)
-#logger.addHandler(logging.FileHandler(settings.SECR_LOG_FILE))
-
-class QueryCountDebugMiddleware(object):
- """
- This middleware will log the number of queries run
- and the total time taken for each request (with a
- status code of 200). It does not currently support
- multi-db setups.
- """
- def process_response(self, request, response):
- #assert False, request.path
- logger.debug('called middleware. %s:%s' % (request.path,len(connection.queries)))
- if response.status_code == 200:
- total_time = 0
- #for query in connection.queries:
- # query_time = query.get('time')
- # if query_time is None:
- # django-debug-toolbar monkeypatches the connection
- # cursor wrapper and adds extra information in each
- # item in connection.queries. The query time is stored
- # under the key "duration" rather than "time" and is
- # in milliseconds, not seconds.
- # query_time = query.get('duration', 0) / 1000
- # total_time += float(query_time)
- logger.debug('%s: %s queries run, total %s seconds' % (request.path,len(connection.queries), total_time))
- return response
| {"golden_diff": "diff --git a/ietf/secr/middleware/dbquery.py b/ietf/secr/middleware/dbquery.py\ndeleted file mode 100644\n--- a/ietf/secr/middleware/dbquery.py\n+++ /dev/null\n@@ -1,35 +0,0 @@\n-# Copyright The IETF Trust 2014-2019, All Rights Reserved\n-#import logging\n-\n-from django.db import connection\n-from django.utils.log import getLogger # type: ignore\n-\n-\n-logger = getLogger(__name__)\n-#logger.setLevel(logging.DEBUG)\n-#logger.addHandler(logging.FileHandler(settings.SECR_LOG_FILE))\n-\n-class QueryCountDebugMiddleware(object):\n- \"\"\"\n- This middleware will log the number of queries run\n- and the total time taken for each request (with a\n- status code of 200). It does not currently support\n- multi-db setups.\n- \"\"\"\n- def process_response(self, request, response):\n- #assert False, request.path\n- logger.debug('called middleware. %s:%s' % (request.path,len(connection.queries)))\n- if response.status_code == 200:\n- total_time = 0\n- #for query in connection.queries:\n- # query_time = query.get('time')\n- # if query_time is None:\n- # django-debug-toolbar monkeypatches the connection\n- # cursor wrapper and adds extra information in each\n- # item in connection.queries. The query time is stored\n- # under the key \"duration\" rather than \"time\" and is\n- # in milliseconds, not seconds.\n- # query_time = query.get('duration', 0) / 1000\n- # total_time += float(query_time)\n- logger.debug('%s: %s queries run, total %s seconds' % (request.path,len(connection.queries), total_time))\n- return response\n", "issue": "secr.middleware.dbquery.QueryCountDebugMiddleware is unused.\nIs this still needed in the source tree? Can it be removed?\r\ncc @rpcross\n", "before_files": [{"content": "# Copyright The IETF Trust 2014-2019, All Rights Reserved\n#import logging\n\nfrom django.db import connection\nfrom django.utils.log import getLogger # type: ignore\n\n\nlogger = getLogger(__name__)\n#logger.setLevel(logging.DEBUG)\n#logger.addHandler(logging.FileHandler(settings.SECR_LOG_FILE))\n\nclass QueryCountDebugMiddleware(object):\n \"\"\"\n This middleware will log the number of queries run\n and the total time taken for each request (with a\n status code of 200). It does not currently support\n multi-db setups.\n \"\"\"\n def process_response(self, request, response):\n #assert False, request.path\n logger.debug('called middleware. %s:%s' % (request.path,len(connection.queries)))\n if response.status_code == 200:\n total_time = 0\n #for query in connection.queries:\n # query_time = query.get('time')\n # if query_time is None:\n # django-debug-toolbar monkeypatches the connection\n # cursor wrapper and adds extra information in each\n # item in connection.queries. The query time is stored\n # under the key \"duration\" rather than \"time\" and is\n # in milliseconds, not seconds.\n # query_time = query.get('duration', 0) / 1000\n # total_time += float(query_time)\n logger.debug('%s: %s queries run, total %s seconds' % (request.path,len(connection.queries), total_time))\n return response\n", "path": "ietf/secr/middleware/dbquery.py"}], "after_files": [{"content": null, "path": "ietf/secr/middleware/dbquery.py"}]} | 691 | 426 |
gh_patches_debug_50452 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-364 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
404 on single user notebook server anytime I restart jupyterhub
I just deployed jupyterhub from scratch using my Ansible scripts this morning. The scripts are unchanged from 2-3 weeks ago, when I did it previously and it was all working. I am running from latest master of jupyterhub.
Anytime I restart jupyterhub (hub+proxy) I see the following behavior:
- I can log in (using GitHub OAuth)
- Trying to start my server gives me a 404 on `/hub/users/ellisonbg`. I can tell from the server log that the single user server isn't being started.
- I can go to the Admin page and start my server from there.
- But then, I get a redirect loop when I try to go to my single user server.
- If I clear all my cookies while my single user server (started through the Admin page) is still running, it starts to work as expected. I can start and stop the single user server on the control panel page just fine.
- If a restart jupyterhub, the problems start all over again.
I am using a fixed proxy_auth_token and cookie secret.
--- END ISSUE ---
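One low-effort way to confirm where the broken redirect points is to hit the spawn endpoint with redirects disabled and read the `Location` header. The sketch below is a diagnostic aid only; the host, port, and cookie name/value are placeholders, not details from this report.
```python
# Diagnostic sketch with placeholder host/port/cookie values.
import requests

s = requests.Session()
s.cookies.set("jupyter-hub-token", "<copied from an authenticated browser session>")
r = s.get("http://localhost:8000/hub/spawn", allow_redirects=False)
# A 302 whose Location is /hub/users/<name> (plural) rather than /user/<name>
# would match the 404 described above.
print(r.status_code, r.headers.get("Location"))
```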
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/handlers/pages.py`
Content:
```
1 """Basic html-rendering handlers."""
2
3 # Copyright (c) Jupyter Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from tornado import web, gen
7
8 from .. import orm
9 from ..utils import admin_only, url_path_join
10 from .base import BaseHandler
11 from .login import LoginHandler
12
13
14 class RootHandler(BaseHandler):
15 """Render the Hub root page.
16
17 If logged in, redirects to:
18
19 - single-user server if running
20 - hub home, otherwise
21
22 Otherwise, renders login page.
23 """
24 def get(self):
25 user = self.get_current_user()
26 if user:
27 if user.running:
28 url = user.server.base_url
29 self.log.debug("User is running: %s", url)
30 else:
31 url = url_path_join(self.hub.server.base_url, 'home')
32 self.log.debug("User is not running: %s", url)
33 self.redirect(url)
34 return
35 url = url_path_join(self.hub.server.base_url, 'login')
36 self.redirect(url)
37
38
39 class HomeHandler(BaseHandler):
40 """Render the user's home page."""
41
42 @web.authenticated
43 def get(self):
44 html = self.render_template('home.html',
45 user=self.get_current_user(),
46 )
47 self.finish(html)
48
49
50 class SpawnHandler(BaseHandler):
51 """Handle spawning of single-user servers via form.
52
53 GET renders the form, POST handles form submission.
54
55 Only enabled when Spawner.options_form is defined.
56 """
57 @web.authenticated
58 def get(self):
59 """GET renders form for spawning with user-specified options"""
60 user = self.get_current_user()
61 if user.running:
62 url = user.server.base_url
63 self.log.debug("User is running: %s", url)
64 self.redirect(url)
65 return
66 if user.spawner.options_form:
67 html = self.render_template('spawn.html',
68 user=self.get_current_user(),
69 spawner_options_form=user.spawner.options_form,
70 )
71 self.finish(html)
72 else:
73 # not running, no form. Trigger spawn.
74 url = url_path_join(self.base_url, 'users', user.name)
75 self.redirect(url)
76
77 @web.authenticated
78 @gen.coroutine
79 def post(self):
80 """POST spawns with user-specified options"""
81 user = self.get_current_user()
82 if user.running:
83 url = user.server.base_url
84 self.log.warning("User is already running: %s", url)
85 self.redirect(url)
86 return
87 form_options = {}
88 for key, byte_list in self.request.body_arguments.items():
89 form_options[key] = [ bs.decode('utf8') for bs in byte_list ]
90 options = user.spawner.options_from_form(form_options)
91 yield self.spawn_single_user(user, options=options)
92 self.set_login_cookie(user)
93 url = user.server.base_url
94 self.redirect(url)
95
96 class AdminHandler(BaseHandler):
97 """Render the admin page."""
98
99 @admin_only
100 def get(self):
101 available = {'name', 'admin', 'running', 'last_activity'}
102 default_sort = ['admin', 'name']
103 mapping = {
104 'running': '_server_id'
105 }
106 default_order = {
107 'name': 'asc',
108 'last_activity': 'desc',
109 'admin': 'desc',
110 'running': 'desc',
111 }
112 sorts = self.get_arguments('sort') or default_sort
113 orders = self.get_arguments('order')
114
115 for bad in set(sorts).difference(available):
116 self.log.warn("ignoring invalid sort: %r", bad)
117 sorts.remove(bad)
118 for bad in set(orders).difference({'asc', 'desc'}):
119 self.log.warn("ignoring invalid order: %r", bad)
120 orders.remove(bad)
121
122 # add default sort as secondary
123 for s in default_sort:
124 if s not in sorts:
125 sorts.append(s)
126 if len(orders) < len(sorts):
127 for col in sorts[len(orders):]:
128 orders.append(default_order[col])
129 else:
130 orders = orders[:len(sorts)]
131
132 # this could be one incomprehensible nested list comprehension
133 # get User columns
134 cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]
135 # get User.col.desc() order objects
136 ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]
137
138 users = self.db.query(orm.User).order_by(*ordered)
139 users = [ self._user_from_orm(u) for u in users ]
140 running = [ u for u in users if u.running ]
141
142 html = self.render_template('admin.html',
143 user=self.get_current_user(),
144 admin_access=self.settings.get('admin_access', False),
145 users=users,
146 running=running,
147 sort={s:o for s,o in zip(sorts, orders)},
148 )
149 self.finish(html)
150
151
152 default_handlers = [
153 (r'/', RootHandler),
154 (r'/home', HomeHandler),
155 (r'/admin', AdminHandler),
156 (r'/spawn', SpawnHandler),
157 ]
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py
--- a/jupyterhub/handlers/pages.py
+++ b/jupyterhub/handlers/pages.py
@@ -71,7 +71,7 @@
self.finish(html)
else:
# not running, no form. Trigger spawn.
- url = url_path_join(self.base_url, 'users', user.name)
+ url = url_path_join(self.base_url, 'user', user.name)
self.redirect(url)
@web.authenticated
| {"golden_diff": "diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py\n--- a/jupyterhub/handlers/pages.py\n+++ b/jupyterhub/handlers/pages.py\n@@ -71,7 +71,7 @@\n self.finish(html)\n else:\n # not running, no form. Trigger spawn.\n- url = url_path_join(self.base_url, 'users', user.name)\n+ url = url_path_join(self.base_url, 'user', user.name)\n self.redirect(url)\n \n @web.authenticated\n", "issue": "404 on single user notebook server anytime I restart jupyterhub\nI just deployed jupyterhub from scratch using my ansible scripts this morning. The script are unchanged from 2-3 weeks ago when I did it previously and it was all working. I am running from latest master of jupyterhub.\n\nAnytime I restart jupyterhub (hub+proxy) I see the following behavior:\n- I can log in (using GitHub OAuth)\n- Trying to start my server gives me a 404 on `/hub/users/ellisonbg`. I can tell from the server log that the single user server isn't being started.\n- I can go to the Admin page and start my server from there.\n- But then, I get a redirect loop when I try to go to my single user server.\n- If I clear all my cookies while my single user server (started through the Admin page) is still running, it starts to work as expected. I can start and stop the single user server on the control panel page just fine.\n- If a restart jupyterhub, the problems start all over again.\n\nI am using a fixed proxy_auth_token and cookie secret.\n\n", "before_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom tornado import web, gen\n\nfrom .. import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\nfrom .login import LoginHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n \n If logged in, redirects to:\n \n - single-user server if running\n - hub home, otherwise\n \n Otherwise, renders login page.\n \"\"\"\n def get(self):\n user = self.get_current_user()\n if user:\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n self.log.debug(\"User is not running: %s\", url)\n self.redirect(url)\n return\n url = url_path_join(self.hub.server.base_url, 'login')\n self.redirect(url)\n\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('home.html',\n user=self.get_current_user(),\n )\n self.finish(html)\n\n\nclass SpawnHandler(BaseHandler):\n \"\"\"Handle spawning of single-user servers via form.\n \n GET renders the form, POST handles form submission.\n \n Only enabled when Spawner.options_form is defined.\n \"\"\"\n @web.authenticated\n def get(self):\n \"\"\"GET renders form for spawning with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n self.redirect(url)\n return\n if user.spawner.options_form:\n html = self.render_template('spawn.html',\n user=self.get_current_user(),\n spawner_options_form=user.spawner.options_form,\n )\n self.finish(html)\n else:\n # not running, no form. 
Trigger spawn.\n url = url_path_join(self.base_url, 'users', user.name)\n self.redirect(url)\n \n @web.authenticated\n @gen.coroutine\n def post(self):\n \"\"\"POST spawns with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.warning(\"User is already running: %s\", url)\n self.redirect(url)\n return\n form_options = {}\n for key, byte_list in self.request.body_arguments.items():\n form_options[key] = [ bs.decode('utf8') for bs in byte_list ]\n options = user.spawner.options_from_form(form_options)\n yield self.spawn_single_user(user, options=options)\n self.set_login_cookie(user)\n url = user.server.base_url\n self.redirect(url)\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n available = {'name', 'admin', 'running', 'last_activity'}\n default_sort = ['admin', 'name']\n mapping = {\n 'running': '_server_id'\n }\n default_order = {\n 'name': 'asc',\n 'last_activity': 'desc',\n 'admin': 'desc',\n 'running': 'desc',\n }\n sorts = self.get_arguments('sort') or default_sort\n orders = self.get_arguments('order')\n \n for bad in set(sorts).difference(available):\n self.log.warn(\"ignoring invalid sort: %r\", bad)\n sorts.remove(bad)\n for bad in set(orders).difference({'asc', 'desc'}):\n self.log.warn(\"ignoring invalid order: %r\", bad)\n orders.remove(bad)\n \n # add default sort as secondary\n for s in default_sort:\n if s not in sorts:\n sorts.append(s)\n if len(orders) < len(sorts):\n for col in sorts[len(orders):]:\n orders.append(default_order[col])\n else:\n orders = orders[:len(sorts)]\n \n # this could be one incomprehensible nested list comprehension\n # get User columns\n cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n # get User.col.desc() order objects\n ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n \n users = self.db.query(orm.User).order_by(*ordered)\n users = [ self._user_from_orm(u) for u in users ]\n running = [ u for u in users if u.running ]\n \n html = self.render_template('admin.html',\n user=self.get_current_user(),\n admin_access=self.settings.get('admin_access', False),\n users=users,\n running=running,\n sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n (r'/spawn', SpawnHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}], "after_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom tornado import web, gen\n\nfrom .. 
import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\nfrom .login import LoginHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n \n If logged in, redirects to:\n \n - single-user server if running\n - hub home, otherwise\n \n Otherwise, renders login page.\n \"\"\"\n def get(self):\n user = self.get_current_user()\n if user:\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n else:\n url = url_path_join(self.hub.server.base_url, 'home')\n self.log.debug(\"User is not running: %s\", url)\n self.redirect(url)\n return\n url = url_path_join(self.hub.server.base_url, 'login')\n self.redirect(url)\n\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('home.html',\n user=self.get_current_user(),\n )\n self.finish(html)\n\n\nclass SpawnHandler(BaseHandler):\n \"\"\"Handle spawning of single-user servers via form.\n \n GET renders the form, POST handles form submission.\n \n Only enabled when Spawner.options_form is defined.\n \"\"\"\n @web.authenticated\n def get(self):\n \"\"\"GET renders form for spawning with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.debug(\"User is running: %s\", url)\n self.redirect(url)\n return\n if user.spawner.options_form:\n html = self.render_template('spawn.html',\n user=self.get_current_user(),\n spawner_options_form=user.spawner.options_form,\n )\n self.finish(html)\n else:\n # not running, no form. Trigger spawn.\n url = url_path_join(self.base_url, 'user', user.name)\n self.redirect(url)\n \n @web.authenticated\n @gen.coroutine\n def post(self):\n \"\"\"POST spawns with user-specified options\"\"\"\n user = self.get_current_user()\n if user.running:\n url = user.server.base_url\n self.log.warning(\"User is already running: %s\", url)\n self.redirect(url)\n return\n form_options = {}\n for key, byte_list in self.request.body_arguments.items():\n form_options[key] = [ bs.decode('utf8') for bs in byte_list ]\n options = user.spawner.options_from_form(form_options)\n yield self.spawn_single_user(user, options=options)\n self.set_login_cookie(user)\n url = user.server.base_url\n self.redirect(url)\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n available = {'name', 'admin', 'running', 'last_activity'}\n default_sort = ['admin', 'name']\n mapping = {\n 'running': '_server_id'\n }\n default_order = {\n 'name': 'asc',\n 'last_activity': 'desc',\n 'admin': 'desc',\n 'running': 'desc',\n }\n sorts = self.get_arguments('sort') or default_sort\n orders = self.get_arguments('order')\n \n for bad in set(sorts).difference(available):\n self.log.warn(\"ignoring invalid sort: %r\", bad)\n sorts.remove(bad)\n for bad in set(orders).difference({'asc', 'desc'}):\n self.log.warn(\"ignoring invalid order: %r\", bad)\n orders.remove(bad)\n \n # add default sort as secondary\n for s in default_sort:\n if s not in sorts:\n sorts.append(s)\n if len(orders) < len(sorts):\n for col in sorts[len(orders):]:\n orders.append(default_order[col])\n else:\n orders = orders[:len(sorts)]\n \n # this could be one incomprehensible nested list comprehension\n # get User columns\n cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n # get User.col.desc() order objects\n ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n \n users = 
self.db.query(orm.User).order_by(*ordered)\n users = [ self._user_from_orm(u) for u in users ]\n running = [ u for u in users if u.running ]\n \n html = self.render_template('admin.html',\n user=self.get_current_user(),\n admin_access=self.settings.get('admin_access', False),\n users=users,\n running=running,\n sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n (r'/spawn', SpawnHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}]} | 1,971 | 121 |
gh_patches_debug_395 | rasdani/github-patches | git_diff | wright-group__WrightTools-640 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CITATION text file not distributed
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python3
2
3 import os
4 from setuptools import setup, find_packages
5
6
7 here = os.path.abspath(os.path.dirname(__file__))
8
9
10 def read(fname):
11 return open(os.path.join(here, fname)).read()
12
13
14 extra_files = {
15 "WrightTools": [
16 "datasets",
17 "datasets/*",
18 "datasets/*/*",
19 "datasets/*/*/*",
20 "datasets/*/*/*/*",
21 "VERSION",
22 "WT5_VERSION",
23 ]
24 }
25
26 with open(os.path.join(here, "WrightTools", "VERSION")) as version_file:
27 version = version_file.read().strip()
28
29 setup(
30 name="WrightTools",
31 packages=find_packages(exclude=("tests", "tests.*")),
32 package_data=extra_files,
33 python_requires=">=3.5",
34 setup_requires=["pytest-runner"],
35 tests_require=[
36 "pytest",
37 "pytest-cov",
38 "sphinx==1.6.5",
39 "sphinx-gallery==0.1.12",
40 "sphinx-rtd-theme",
41 ],
42 install_requires=[
43 "h5py",
44 "imageio",
45 "matplotlib>=2.0",
46 "numexpr",
47 "numpy",
48 "python-dateutil",
49 "pytz",
50 "scipy",
51 "tidy_headers",
52 ],
53 extras_require={
54 "docs": ["sphinx-gallery==0.1.12"],
55 "dev": ["black", "pre-commit", "pydocstyle"],
56 },
57 version=version,
58 description="Tools for loading, processing, and plotting multidimensional spectroscopy data.",
59 long_description=read("README.rst"),
60 author="WrightTools Developers",
61 license="MIT",
62 url="http://wright.tools",
63 keywords="spectroscopy science multidimensional visualization",
64 classifiers=[
65 "Development Status :: 5 - Production/Stable",
66 "Intended Audience :: Science/Research",
67 "License :: OSI Approved :: MIT License",
68 "Natural Language :: English",
69 "Programming Language :: Python :: 3",
70 "Programming Language :: Python :: 3.5",
71 "Programming Language :: Python :: 3.6",
72 "Topic :: Scientific/Engineering",
73 ],
74 )
75
```
--- END FILES ---
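Worth noting before diving into the diff: `setup.py` ships non-Python files only through the `package_data` mapping above, so anything missing from that list (and not added via `MANIFEST.in`) is silently dropped from sdists and wheels. A minimal sketch of the kind of change needed, assuming the file lives at `WrightTools/CITATION` next to `VERSION`:
```python
# Sketch only: list the citation file alongside the other shipped metadata.
extra_files = {
    "WrightTools": [
        "datasets",
        "datasets/*",
        "datasets/*/*",
        "datasets/*/*/*",
        "datasets/*/*/*/*",
        "CITATION",  # assumed location: WrightTools/CITATION
        "VERSION",
        "WT5_VERSION",
    ]
}
```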
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,6 +18,7 @@
"datasets/*/*",
"datasets/*/*/*",
"datasets/*/*/*/*",
+ "CITATION",
"VERSION",
"WT5_VERSION",
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,6 +18,7 @@\n \"datasets/*/*\",\n \"datasets/*/*/*\",\n \"datasets/*/*/*/*\",\n+ \"CITATION\",\n \"VERSION\",\n \"WT5_VERSION\",\n ]\n", "issue": "CITATION text file not distributed\n\n", "before_files": [{"content": "#! /usr/bin/env python3\n\nimport os\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(fname):\n return open(os.path.join(here, fname)).read()\n\n\nextra_files = {\n \"WrightTools\": [\n \"datasets\",\n \"datasets/*\",\n \"datasets/*/*\",\n \"datasets/*/*/*\",\n \"datasets/*/*/*/*\",\n \"VERSION\",\n \"WT5_VERSION\",\n ]\n}\n\nwith open(os.path.join(here, \"WrightTools\", \"VERSION\")) as version_file:\n version = version_file.read().strip()\n\nsetup(\n name=\"WrightTools\",\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n package_data=extra_files,\n python_requires=\">=3.5\",\n setup_requires=[\"pytest-runner\"],\n tests_require=[\n \"pytest\",\n \"pytest-cov\",\n \"sphinx==1.6.5\",\n \"sphinx-gallery==0.1.12\",\n \"sphinx-rtd-theme\",\n ],\n install_requires=[\n \"h5py\",\n \"imageio\",\n \"matplotlib>=2.0\",\n \"numexpr\",\n \"numpy\",\n \"python-dateutil\",\n \"pytz\",\n \"scipy\",\n \"tidy_headers\",\n ],\n extras_require={\n \"docs\": [\"sphinx-gallery==0.1.12\"],\n \"dev\": [\"black\", \"pre-commit\", \"pydocstyle\"],\n },\n version=version,\n description=\"Tools for loading, processing, and plotting multidimensional spectroscopy data.\",\n long_description=read(\"README.rst\"),\n author=\"WrightTools Developers\",\n license=\"MIT\",\n url=\"http://wright.tools\",\n keywords=\"spectroscopy science multidimensional visualization\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Scientific/Engineering\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#! 
/usr/bin/env python3\n\nimport os\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(fname):\n return open(os.path.join(here, fname)).read()\n\n\nextra_files = {\n \"WrightTools\": [\n \"datasets\",\n \"datasets/*\",\n \"datasets/*/*\",\n \"datasets/*/*/*\",\n \"datasets/*/*/*/*\",\n \"CITATION\",\n \"VERSION\",\n \"WT5_VERSION\",\n ]\n}\n\nwith open(os.path.join(here, \"WrightTools\", \"VERSION\")) as version_file:\n version = version_file.read().strip()\n\nsetup(\n name=\"WrightTools\",\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n package_data=extra_files,\n python_requires=\">=3.5\",\n setup_requires=[\"pytest-runner\"],\n tests_require=[\n \"pytest\",\n \"pytest-cov\",\n \"sphinx==1.6.5\",\n \"sphinx-gallery==0.1.12\",\n \"sphinx-rtd-theme\",\n ],\n install_requires=[\n \"h5py\",\n \"imageio\",\n \"matplotlib>=2.0\",\n \"numexpr\",\n \"numpy\",\n \"python-dateutil\",\n \"pytz\",\n \"scipy\",\n \"tidy_headers\",\n ],\n extras_require={\n \"docs\": [\"sphinx-gallery==0.1.12\"],\n \"dev\": [\"black\", \"pre-commit\", \"pydocstyle\"],\n },\n version=version,\n description=\"Tools for loading, processing, and plotting multidimensional spectroscopy data.\",\n long_description=read(\"README.rst\"),\n author=\"WrightTools Developers\",\n license=\"MIT\",\n url=\"http://wright.tools\",\n keywords=\"spectroscopy science multidimensional visualization\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Scientific/Engineering\",\n ],\n)\n", "path": "setup.py"}]} | 882 | 72 |
gh_patches_debug_24933 | rasdani/github-patches | git_diff | urllib3__urllib3-1665 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Multipart mixed responses generate warnings
If you read a multipart/mixed response, the connectionpool issues a couple of warnings about defects in the message. I'm not sure what the expected, correct behavior is, but these warnings seem spurious.
Stick this perl script somewhere:
```
#!/usr/bin/perl
print "Server: Some Server Name\r\n";
print "Content-Type: multipart/mixed; boundary=36eeb8c4e26d842a\r\n";
print "Content-Length: 178\r\n";
print "\r\n\r\n";
print "--36eeb8c4e26d842a\r\n";
print "Content-Type: text/plain\r\n";
print "\r\n";
print "7\r\n";
print "--36eeb8c4e26d842a\r\n";
print "Content-Type: text/plain\r\n";
print "\r\n";
print "9\r\n";
print "--36eeb8c4e26d842a\r\n";
print "Content-Type: text/plain\r\n";
print "\r\n";
print "11\r\n";
print "--36eeb8c4e26d842a--\r\n";
```
Read it with requests (naturally, you'll have to change the URI to wherever you put the script):
```
import requests, logging
logging.basicConfig(level=logging.WARNING)
logging.getLogger("requests").setLevel(logging.DEBUG)
headers = {'accept': "multipart/mixed"}
r = requests.get("http://localhost:8124/cgi-bin/mpm.pl", headers=headers)
print(r)
```
The following errors are displayed:
```
DEBUG:requests.packages.urllib3.connectionpool:"GET http://localhost:8124/cgi-bin/mpm.pl HTTP/1.1" 200 178
WARNING:requests.packages.urllib3.connectionpool:Failed to parse headers (url=http://localhost:8888/http://localhost:8124/cgi-bin/mpm.pl): [StartBoundaryNotFoundDefect(), MultipartInvariantViolationDefect()], unparsed data: ''
Traceback (most recent call last):
File "/home/ndw/.virtualenvs/pyapi/lib/python3.4/site-packages/requests-2.8.0-py3.4.egg/requests/packages/urllib3/connectionpool.py", line 390, in _make_request
assert_header_parsing(httplib_response.msg)
File "/home/ndw/.virtualenvs/pyapi/lib/python3.4/site-packages/requests-2.8.0-py3.4.egg/requests/packages/urllib3/util/response.py", line 58, in assert_header_parsing
raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)
requests.packages.urllib3.exceptions.HeaderParsingError: [StartBoundaryNotFoundDefect(), MultipartInvariantViolationDefect()], unparsed data: ''
```
It took me quite a while to work out that they were spurious (because in real life, the server side that is generating the multipart/mixed is more complicated!)
See also kennethreitz/requests#3001
--- END ISSUE ---
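The defects can be reproduced without any server: `http.client.parse_headers` hands only the header block to the `email` parser, and a header-only message whose Content-Type claims multipart is exactly what trips these two defects. A minimal sketch:
```python
# Minimal reproduction of the spurious defects; no network needed.
from email.parser import Parser

headers_only = (
    "Content-Type: multipart/mixed; boundary=36eeb8c4e26d842a\r\n"
    "Content-Length: 178\r\n"
    "\r\n"
)
msg = Parser().parsestr(headers_only)
# Expected: [StartBoundaryNotFoundDefect(), MultipartInvariantViolationDefect()]
print(msg.defects)
```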
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/urllib3/util/response.py`
Content:
```
1 from __future__ import absolute_import
2 from ..packages.six.moves import http_client as httplib
3
4 from ..exceptions import HeaderParsingError
5
6
7 def is_fp_closed(obj):
8 """
9 Checks whether a given file-like object is closed.
10
11 :param obj:
12 The file-like object to check.
13 """
14
15 try:
16 # Check `isclosed()` first, in case Python3 doesn't set `closed`.
17 # GH Issue #928
18 return obj.isclosed()
19 except AttributeError:
20 pass
21
22 try:
23 # Check via the official file-like-object way.
24 return obj.closed
25 except AttributeError:
26 pass
27
28 try:
29 # Check if the object is a container for another file-like object that
30 # gets released on exhaustion (e.g. HTTPResponse).
31 return obj.fp is None
32 except AttributeError:
33 pass
34
35 raise ValueError("Unable to determine whether fp is closed.")
36
37
38 def assert_header_parsing(headers):
39 """
40 Asserts whether all headers have been successfully parsed.
41 Extracts encountered errors from the result of parsing headers.
42
43 Only works on Python 3.
44
45 :param headers: Headers to verify.
46 :type headers: `httplib.HTTPMessage`.
47
48 :raises urllib3.exceptions.HeaderParsingError:
49 If parsing errors are found.
50 """
51
52 # This will fail silently if we pass in the wrong kind of parameter.
53 # To make debugging easier add an explicit check.
54 if not isinstance(headers, httplib.HTTPMessage):
55 raise TypeError("expected httplib.Message, got {0}.".format(type(headers)))
56
57 defects = getattr(headers, "defects", None)
58 get_payload = getattr(headers, "get_payload", None)
59
60 unparsed_data = None
61 if get_payload:
62 # get_payload is actually email.message.Message.get_payload;
63 # we're only interested in the result if it's not a multipart message
64 if not headers.is_multipart():
65 payload = get_payload()
66
67 if isinstance(payload, (bytes, str)):
68 unparsed_data = payload
69
70 if defects or unparsed_data:
71 raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)
72
73
74 def is_response_to_head(response):
75 """
76 Checks whether the request of a response has been a HEAD-request.
77 Handles the quirks of AppEngine.
78
79 :param conn:
80 :type conn: :class:`httplib.HTTPResponse`
81 """
82 # FIXME: Can we do this somehow without accessing private httplib _method?
83 method = response._method
84 if isinstance(method, int): # Platform-specific: Appengine
85 return method == 3
86 return method.upper() == "HEAD"
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/urllib3/util/response.py b/src/urllib3/util/response.py
--- a/src/urllib3/util/response.py
+++ b/src/urllib3/util/response.py
@@ -1,4 +1,5 @@
from __future__ import absolute_import
+from email.errors import StartBoundaryNotFoundDefect, MultipartInvariantViolationDefect
from ..packages.six.moves import http_client as httplib
from ..exceptions import HeaderParsingError
@@ -66,6 +67,25 @@
if isinstance(payload, (bytes, str)):
unparsed_data = payload
+ if defects:
+ # httplib is assuming a response body is available
+ # when parsing headers even when httplib only sends
+ # header data to parse_headers() This results in
+ # defects on multipart responses in particular.
+ # See: https://github.com/urllib3/urllib3/issues/800
+
+ # So we ignore the following defects:
+ # - StartBoundaryNotFoundDefect:
+ # The claimed start boundary was never found.
+ # - MultipartInvariantViolationDefect:
+ # A message claimed to be a multipart but no subparts were found.
+ defects = [
+ defect
+ for defect in defects
+ if not isinstance(
+ defect, (StartBoundaryNotFoundDefect, MultipartInvariantViolationDefect)
+ )
+ ]
if defects or unparsed_data:
raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)
| {"golden_diff": "diff --git a/src/urllib3/util/response.py b/src/urllib3/util/response.py\n--- a/src/urllib3/util/response.py\n+++ b/src/urllib3/util/response.py\n@@ -1,4 +1,5 @@\n from __future__ import absolute_import\n+from email.errors import StartBoundaryNotFoundDefect, MultipartInvariantViolationDefect\n from ..packages.six.moves import http_client as httplib\n \n from ..exceptions import HeaderParsingError\n@@ -66,6 +67,25 @@\n \n if isinstance(payload, (bytes, str)):\n unparsed_data = payload\n+ if defects:\n+ # httplib is assuming a response body is available\n+ # when parsing headers even when httplib only sends\n+ # header data to parse_headers() This results in\n+ # defects on multipart responses in particular.\n+ # See: https://github.com/urllib3/urllib3/issues/800\n+\n+ # So we ignore the following defects:\n+ # - StartBoundaryNotFoundDefect:\n+ # The claimed start boundary was never found.\n+ # - MultipartInvariantViolationDefect:\n+ # A message claimed to be a multipart but no subparts were found.\n+ defects = [\n+ defect\n+ for defect in defects\n+ if not isinstance(\n+ defect, (StartBoundaryNotFoundDefect, MultipartInvariantViolationDefect)\n+ )\n+ ]\n \n if defects or unparsed_data:\n raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)\n", "issue": "Multipart mixed responses generate warnings\nIf you read a multipart/mixed response, the connectionpool issues a couple of warnings about defects in the message. I'm not sure what the expected, correct behavior is, but these warnings seem spurious.\n\nStick this perl script somewhere:\n\n```\n#!/usr/bin/perl\n\nprint \"Server: Some Server Name\\r\\n\";\nprint \"Content-Type: multipart/mixed; boundary=36eeb8c4e26d842a\\r\\n\";\nprint \"Content-Length: 178\\r\\n\";\nprint \"\\r\\n\\r\\n\";\nprint \"--36eeb8c4e26d842a\\r\\n\";\nprint \"Content-Type: text/plain\\r\\n\";\nprint \"\\r\\n\";\nprint \"7\\r\\n\";\nprint \"--36eeb8c4e26d842a\\r\\n\";\nprint \"Content-Type: text/plain\\r\\n\";\nprint \"\\r\\n\";\nprint \"9\\r\\n\";\nprint \"--36eeb8c4e26d842a\\r\\n\";\nprint \"Content-Type: text/plain\\r\\n\";\nprint \"\\r\\n\";\nprint \"11\\r\\n\";\nprint \"--36eeb8c4e26d842a--\\r\\n\";\n```\n\nRead it with requests (naturally, you'll have to change the URI to wherever you put the script):\n\n```\nimport requests, logging\n\nlogging.basicConfig(level=logging.WARNING)\nlogging.getLogger(\"requests\").setLevel(logging.DEBUG)\n\nheaders = {'accept': \"multipart/mixed\"}\nr = requests.get(\"http://localhost:8124/cgi-bin/mpm.pl\", headers=headers)\n\nprint(r)\n```\n\nThe following errors are displayed:\n\n```\nDEBUG:requests.packages.urllib3.connectionpool:\"GET http://localhost:8124/cgi-bin/mpm.pl HTTP/1.1\" 200 178\nWARNING:requests.packages.urllib3.connectionpool:Failed to parse headers (url=http://localhost:8888/http://localhost:8124/cgi-bin/mpm.pl): [StartBoundaryNotFoundDefect(), MultipartInvariantViolationDefect()], unparsed data: ''\nTraceback (most recent call last):\n File \"/home/ndw/.virtualenvs/pyapi/lib/python3.4/site-packages/requests-2.8.0-py3.4.egg/requests/packages/urllib3/connectionpool.py\", line 390, in _make_request\n assert_header_parsing(httplib_response.msg)\n File \"/home/ndw/.virtualenvs/pyapi/lib/python3.4/site-packages/requests-2.8.0-py3.4.egg/requests/packages/urllib3/util/response.py\", line 58, in assert_header_parsing\n raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)\nrequests.packages.urllib3.exceptions.HeaderParsingError: [StartBoundaryNotFoundDefect(), 
MultipartInvariantViolationDefect()], unparsed data: ''\n```\n\nIt took me quite a while to work out that they were spurious (because in real life, the server side that is generating the multipart/mixed is more complicated!)\n\nSee also kennethreitz/requests#3001\n## \n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom ..packages.six.moves import http_client as httplib\n\nfrom ..exceptions import HeaderParsingError\n\n\ndef is_fp_closed(obj):\n \"\"\"\n Checks whether a given file-like object is closed.\n\n :param obj:\n The file-like object to check.\n \"\"\"\n\n try:\n # Check `isclosed()` first, in case Python3 doesn't set `closed`.\n # GH Issue #928\n return obj.isclosed()\n except AttributeError:\n pass\n\n try:\n # Check via the official file-like-object way.\n return obj.closed\n except AttributeError:\n pass\n\n try:\n # Check if the object is a container for another file-like object that\n # gets released on exhaustion (e.g. HTTPResponse).\n return obj.fp is None\n except AttributeError:\n pass\n\n raise ValueError(\"Unable to determine whether fp is closed.\")\n\n\ndef assert_header_parsing(headers):\n \"\"\"\n Asserts whether all headers have been successfully parsed.\n Extracts encountered errors from the result of parsing headers.\n\n Only works on Python 3.\n\n :param headers: Headers to verify.\n :type headers: `httplib.HTTPMessage`.\n\n :raises urllib3.exceptions.HeaderParsingError:\n If parsing errors are found.\n \"\"\"\n\n # This will fail silently if we pass in the wrong kind of parameter.\n # To make debugging easier add an explicit check.\n if not isinstance(headers, httplib.HTTPMessage):\n raise TypeError(\"expected httplib.Message, got {0}.\".format(type(headers)))\n\n defects = getattr(headers, \"defects\", None)\n get_payload = getattr(headers, \"get_payload\", None)\n\n unparsed_data = None\n if get_payload:\n # get_payload is actually email.message.Message.get_payload;\n # we're only interested in the result if it's not a multipart message\n if not headers.is_multipart():\n payload = get_payload()\n\n if isinstance(payload, (bytes, str)):\n unparsed_data = payload\n\n if defects or unparsed_data:\n raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)\n\n\ndef is_response_to_head(response):\n \"\"\"\n Checks whether the request of a response has been a HEAD-request.\n Handles the quirks of AppEngine.\n\n :param conn:\n :type conn: :class:`httplib.HTTPResponse`\n \"\"\"\n # FIXME: Can we do this somehow without accessing private httplib _method?\n method = response._method\n if isinstance(method, int): # Platform-specific: Appengine\n return method == 3\n return method.upper() == \"HEAD\"\n", "path": "src/urllib3/util/response.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom email.errors import StartBoundaryNotFoundDefect, MultipartInvariantViolationDefect\nfrom ..packages.six.moves import http_client as httplib\n\nfrom ..exceptions import HeaderParsingError\n\n\ndef is_fp_closed(obj):\n \"\"\"\n Checks whether a given file-like object is closed.\n\n :param obj:\n The file-like object to check.\n \"\"\"\n\n try:\n # Check `isclosed()` first, in case Python3 doesn't set `closed`.\n # GH Issue #928\n return obj.isclosed()\n except AttributeError:\n pass\n\n try:\n # Check via the official file-like-object way.\n return obj.closed\n except AttributeError:\n pass\n\n try:\n # Check if the object is a container for another file-like object that\n # gets released on exhaustion (e.g. 
HTTPResponse).\n return obj.fp is None\n except AttributeError:\n pass\n\n raise ValueError(\"Unable to determine whether fp is closed.\")\n\n\ndef assert_header_parsing(headers):\n \"\"\"\n Asserts whether all headers have been successfully parsed.\n Extracts encountered errors from the result of parsing headers.\n\n Only works on Python 3.\n\n :param headers: Headers to verify.\n :type headers: `httplib.HTTPMessage`.\n\n :raises urllib3.exceptions.HeaderParsingError:\n If parsing errors are found.\n \"\"\"\n\n # This will fail silently if we pass in the wrong kind of parameter.\n # To make debugging easier add an explicit check.\n if not isinstance(headers, httplib.HTTPMessage):\n raise TypeError(\"expected httplib.Message, got {0}.\".format(type(headers)))\n\n defects = getattr(headers, \"defects\", None)\n get_payload = getattr(headers, \"get_payload\", None)\n\n unparsed_data = None\n if get_payload:\n # get_payload is actually email.message.Message.get_payload;\n # we're only interested in the result if it's not a multipart message\n if not headers.is_multipart():\n payload = get_payload()\n\n if isinstance(payload, (bytes, str)):\n unparsed_data = payload\n if defects:\n # httplib is assuming a response body is available\n # when parsing headers even when httplib only sends\n # header data to parse_headers() This results in\n # defects on multipart responses in particular.\n # See: https://github.com/urllib3/urllib3/issues/800\n\n # So we ignore the following defects:\n # - StartBoundaryNotFoundDefect:\n # The claimed start boundary was never found.\n # - MultipartInvariantViolationDefect:\n # A message claimed to be a multipart but no subparts were found.\n defects = [\n defect\n for defect in defects\n if not isinstance(\n defect, (StartBoundaryNotFoundDefect, MultipartInvariantViolationDefect)\n )\n ]\n\n if defects or unparsed_data:\n raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)\n\n\ndef is_response_to_head(response):\n \"\"\"\n Checks whether the request of a response has been a HEAD-request.\n Handles the quirks of AppEngine.\n\n :param conn:\n :type conn: :class:`httplib.HTTPResponse`\n \"\"\"\n # FIXME: Can we do this somehow without accessing private httplib _method?\n method = response._method\n if isinstance(method, int): # Platform-specific: Appengine\n return method == 3\n return method.upper() == \"HEAD\"\n", "path": "src/urllib3/util/response.py"}]} | 1,698 | 347 |
gh_patches_debug_22952 | rasdani/github-patches | git_diff | pwndbg__pwndbg-648 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
xuntil doesn't work with PIE binaries before being run
`xuntil` was added in #604 and works, except that it fails with PIE binaries before they have been run.
This is because it expects an int argument. I'll look into using sloppy parsing.
--- END ISSUE ---
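For reference, the "sloppy parsing" mentioned above plausibly means accepting either a numeric address or a symbol name and letting gdb resolve the latter at breakpoint time. A hypothetical sketch, not the committed fix:
```python
# Hypothetical helper: build a breakpoint spec from an address or a symbol.
import gdb

def breakpoint_spec(target):
    try:
        return "*%#x" % int(target, 0)  # base 0 accepts 0x.., 0o.., and decimal
    except ValueError:
        # Treat it as a symbol; gdb raises gdb.error if it cannot resolve it.
        gdb.execute("info address %s" % target, to_string=True, from_tty=False)
        return target
```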
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/commands/peda.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import absolute_import
4 from __future__ import division
5 from __future__ import print_function
6 from __future__ import unicode_literals
7
8 import argparse
9 import sys
10
11 import gdb
12
13 import pwndbg.auxv
14 import pwndbg.color.message as message
15 import pwndbg.commands
16 import pwndbg.commands.context
17 import pwndbg.commands.telescope
18 import pwndbg.proc
19
20
21 @pwndbg.commands.ArgparsedCommand("Gets the current file.")
22 @pwndbg.commands.OnlyWhenRunning
23 def getfile():
24 print(repr(pwndbg.auxv.get().AT_EXECFN))
25
26 @pwndbg.commands.ArgparsedCommand("Get the pid.")
27 @pwndbg.commands.OnlyWhenRunning
28 def getpid():
29 print(pwndbg.proc.pid)
30
31
32 parser = argparse.ArgumentParser(description='Continue execution until an address or function.')
33 parser.add_argument('target', help='Address or function to stop execution at')
34
35
36 @pwndbg.commands.ArgparsedCommand(parser)
37 def xuntil(target):
38 addr = int(target)
39
40 if not pwndbg.memory.peek(addr):
41 print(message.error('Invalid address %#x' % addr))
42 return
43
44 spec = "*%#x" % (addr)
45 b = gdb.Breakpoint(spec, temporary=True)
46 if pwndbg.proc.alive:
47 gdb.execute("continue", from_tty=False)
48 else:
49 gdb.execute("run", from_tty=False)
50
51 xinfo = pwndbg.commands.context.context
52 xprint = pwndbg.commands.telescope.telescope
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/commands/peda.py b/pwndbg/commands/peda.py
--- a/pwndbg/commands/peda.py
+++ b/pwndbg/commands/peda.py
@@ -30,18 +30,27 @@
parser = argparse.ArgumentParser(description='Continue execution until an address or function.')
-parser.add_argument('target', help='Address or function to stop execution at')
-
+parser.add_argument('target', type=str, help='Address or function to stop execution at')
@pwndbg.commands.ArgparsedCommand(parser)
def xuntil(target):
- addr = int(target)
-
- if not pwndbg.memory.peek(addr):
- print(message.error('Invalid address %#x' % addr))
- return
+ try:
+ addr = int(target,0)
+
+ if not pwndbg.memory.peek(addr):
+ print(message.error('Invalid address %#x' % addr))
+ return
+
+ spec = "*%#x" % (addr)
+ except (TypeError, ValueError):
+ #The following gdb command will throw an error if the symbol is not defined.
+ try:
+ result = gdb.execute('info address %s' % target, to_string=True, from_tty=False)
+ except gdb.error:
+ print(message.error("Unable to resolve %s" % target))
+ return
+ spec = target
- spec = "*%#x" % (addr)
b = gdb.Breakpoint(spec, temporary=True)
if pwndbg.proc.alive:
gdb.execute("continue", from_tty=False)
| {"golden_diff": "diff --git a/pwndbg/commands/peda.py b/pwndbg/commands/peda.py\n--- a/pwndbg/commands/peda.py\n+++ b/pwndbg/commands/peda.py\n@@ -30,18 +30,27 @@\n \n \n parser = argparse.ArgumentParser(description='Continue execution until an address or function.')\n-parser.add_argument('target', help='Address or function to stop execution at')\n-\n+parser.add_argument('target', type=str, help='Address or function to stop execution at')\n \n @pwndbg.commands.ArgparsedCommand(parser)\n def xuntil(target):\n- addr = int(target)\n-\n- if not pwndbg.memory.peek(addr):\n- print(message.error('Invalid address %#x' % addr))\n- return\n+ try:\n+ addr = int(target,0)\n+ \n+ if not pwndbg.memory.peek(addr):\n+ print(message.error('Invalid address %#x' % addr))\n+ return\n+\n+ spec = \"*%#x\" % (addr)\n+ except (TypeError, ValueError):\n+ #The following gdb command will throw an error if the symbol is not defined.\n+ try:\n+ result = gdb.execute('info address %s' % target, to_string=True, from_tty=False)\n+ except gdb.error:\n+ print(message.error(\"Unable to resolve %s\" % target))\n+ return \n+ spec = target\n \n- spec = \"*%#x\" % (addr)\n b = gdb.Breakpoint(spec, temporary=True)\n if pwndbg.proc.alive:\n gdb.execute(\"continue\", from_tty=False)\n", "issue": "xuntil doesn't work with PIE binaries before being run\n`xuntil` was added in #604 and works except that it doesn't work with PIE binaries before being run.\r\nThis is because it expects an int argument. I'll look into using sloppy parsing.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport argparse\nimport sys\n\nimport gdb\n\nimport pwndbg.auxv\nimport pwndbg.color.message as message\nimport pwndbg.commands\nimport pwndbg.commands.context\nimport pwndbg.commands.telescope\nimport pwndbg.proc\n\n\[email protected](\"Gets the current file.\")\[email protected]\ndef getfile():\n print(repr(pwndbg.auxv.get().AT_EXECFN))\n\[email protected](\"Get the pid.\")\[email protected]\ndef getpid():\n print(pwndbg.proc.pid)\n\n\nparser = argparse.ArgumentParser(description='Continue execution until an address or function.')\nparser.add_argument('target', help='Address or function to stop execution at')\n\n\[email protected](parser)\ndef xuntil(target):\n addr = int(target)\n\n if not pwndbg.memory.peek(addr):\n print(message.error('Invalid address %#x' % addr))\n return\n\n spec = \"*%#x\" % (addr)\n b = gdb.Breakpoint(spec, temporary=True)\n if pwndbg.proc.alive:\n gdb.execute(\"continue\", from_tty=False)\n else:\n gdb.execute(\"run\", from_tty=False)\n\nxinfo = pwndbg.commands.context.context\nxprint = pwndbg.commands.telescope.telescope\n", "path": "pwndbg/commands/peda.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport argparse\nimport sys\n\nimport gdb\n\nimport pwndbg.auxv\nimport pwndbg.color.message as message\nimport pwndbg.commands\nimport pwndbg.commands.context\nimport pwndbg.commands.telescope\nimport pwndbg.proc\n\n\[email protected](\"Gets the current file.\")\[email protected]\ndef getfile():\n print(repr(pwndbg.auxv.get().AT_EXECFN))\n\[email protected](\"Get the pid.\")\[email protected]\ndef getpid():\n print(pwndbg.proc.pid)\n\n\nparser = 
argparse.ArgumentParser(description='Continue execution until an address or function.')\nparser.add_argument('target', type=str, help='Address or function to stop execution at')\n\[email protected](parser)\ndef xuntil(target):\n try:\n addr = int(target,0)\n \n if not pwndbg.memory.peek(addr):\n print(message.error('Invalid address %#x' % addr))\n return\n\n spec = \"*%#x\" % (addr)\n except (TypeError, ValueError):\n #The following gdb command will throw an error if the symbol is not defined.\n try:\n result = gdb.execute('info address %s' % target, to_string=True, from_tty=False)\n except gdb.error:\n print(message.error(\"Unable to resolve %s\" % target))\n return \n spec = target\n\n b = gdb.Breakpoint(spec, temporary=True)\n if pwndbg.proc.alive:\n gdb.execute(\"continue\", from_tty=False)\n else:\n gdb.execute(\"run\", from_tty=False)\n\nxinfo = pwndbg.commands.context.context\nxprint = pwndbg.commands.telescope.telescope\n", "path": "pwndbg/commands/peda.py"}]} | 761 | 351 |
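Editorial note on the record above: the fix replaces `int(target)` with `int(target, 0)`, so Python infers the base from the string prefix (hex addresses pasted from a disassembler parse directly), and a non-numeric target falls through to GDB symbol resolution. A standalone illustration of the base-0 idiom (plain Python, independent of GDB):

```python
# int(s, 0) infers the base from the literal's prefix.
for s in ("0x401000", "0o755", "0b1010", "4198400"):
    print(s, "->", int(s, 0))

# A non-numeric string raises ValueError, which is exactly what
# lets the patched xuntil fall back to 'info address <symbol>':
try:
    int("main", 0)
except ValueError:
    print("not a number; resolve as a symbol instead")
```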
gh_patches_debug_17968 | rasdani/github-patches | git_diff | Pycord-Development__pycord-621 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cog.get_commands() does not return ext.commands commands
### Summary
`Cog.get_commands()` no longer returns anything but slash/interaction-based commands, breaking all help commands.
### Reproduction Steps
Run the example below with a cog that has `ext.commands`-based commands.
### Minimal Reproducible Code
```python
cog = bot.get_cog("MyCog")
commands = cog.get_commands()
print(commands)
# This will return [] unless MyCog contains slash/interaction-based commands
```
### Expected Results
Commands inside cog returned as a list.
### Actual Results
Only interaction-based commands are returned
### Intents
```
discord.Intents(guilds = True, members = True, bans = True, emojis = True, messages = True, invites = True, reactions = True)
```
### System Information
```
- Python v3.10.1-final
- py-cord v2.0.0-alpha
- py-cord pkg_resources: v2.0.0a4627+gca1a76cb
- aiohttp v3.7.4.post0
- system info: Linux 5.15.10-zen1-1-zen #1 ZEN SMP PREEMPT Fri, 17 Dec 2021 11:17:39 +0000
```
### Checklist
- [X] I have searched the open issues for duplicates.
- [X] I have shown the entire traceback, if possible.
- [X] I have removed my token from display, if visible.
### Additional Context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `discord/ext/commands/cog.py`
Content:
```
1 """
2 The MIT License (MIT)
3
4 Copyright (c) 2015-2021 Rapptz
5 Copyright (c) 2021-present Pycord Development
6
7 Permission is hereby granted, free of charge, to any person obtaining a
8 copy of this software and associated documentation files (the "Software"),
9 to deal in the Software without restriction, including without limitation
10 the rights to use, copy, modify, merge, publish, distribute, sublicense,
11 and/or sell copies of the Software, and to permit persons to whom the
12 Software is furnished to do so, subject to the following conditions:
13
14 The above copyright notice and this permission notice shall be included in
15 all copies or substantial portions of the Software.
16
17 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
18 OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
19 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
20 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
21 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
22 FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
23 DEALINGS IN THE SOFTWARE.
24 """
25 from __future__ import annotations
26
27 import discord
28 from ...cog import Cog
29
30 from typing import Any, Callable, Generator, TYPE_CHECKING, TypeVar, Type
31
32 from ...commands import ApplicationCommand
33
34 if TYPE_CHECKING:
35 from .core import Command
36
37 __all__ = ('Cog',)
38
39 CogT = TypeVar('CogT', bound='Cog')
40 FuncT = TypeVar('FuncT', bound=Callable[..., Any])
41
42 MISSING: Any = discord.utils.MISSING
43
44 class Cog(Cog):
45 def __new__(cls: Type[CogT], *args: Any, **kwargs: Any) -> CogT:
46 # For issue 426, we need to store a copy of the command objects
47 # since we modify them to inject `self` to them.
48 # To do this, we need to interfere with the Cog creation process.
49 self = super().__new__(cls)
50
51 return self
52
53 def walk_commands(self) -> Generator[Command, None, None]:
54 """An iterator that recursively walks through this cog's commands and subcommands.
55
56 Yields
57 ------
58 Union[:class:`.Command`, :class:`.Group`]
59 A command or group from the cog.
60 """
61 from .core import GroupMixin
62 for command in self.__cog_commands__:
63 if isinstance(command, ApplicationCommand):
64 yield command
65 else:
66 if command.parent is None:
67 yield command
68 if isinstance(command, GroupMixin):
69 yield from command.walk_commands()
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/discord/ext/commands/cog.py b/discord/ext/commands/cog.py
--- a/discord/ext/commands/cog.py
+++ b/discord/ext/commands/cog.py
@@ -27,7 +27,7 @@
import discord
from ...cog import Cog
-from typing import Any, Callable, Generator, TYPE_CHECKING, TypeVar, Type
+from typing import Any, Callable, Generator, TYPE_CHECKING, TypeVar, Type, Union
from ...commands import ApplicationCommand
@@ -67,3 +67,16 @@
yield command
if isinstance(command, GroupMixin):
yield from command.walk_commands()
+
+ def get_commands(self) -> List[Union[ApplicationCommand, Command]]:
+ r"""
+ Returns
+ --------
+ List[Union[:class:`~discord.ApplicationCommand`, :class:`.Command`]]
+ A :class:`list` of commands that are defined inside this cog.
+
+ .. note::
+
+ This does not include subcommands.
+ """
+ return [c for c in self.__cog_commands__ if c.parent is None]
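Two editorial notes on the patch above. First, the new annotation uses `List`, which the diff never adds to the `typing` import; because the module starts with `from __future__ import annotations`, the annotation is never evaluated at runtime, so this does not raise, but a stricter fix would import `List` as well. Second, a minimal sketch of the restored behavior (the bot, cog, and command names are illustrative):

```python
import discord
from discord.ext import commands

bot = commands.Bot(command_prefix="!", intents=discord.Intents.default())

class MyCog(commands.Cog):
    @commands.command()
    async def ping(self, ctx):
        await ctx.send("pong")

bot.add_cog(MyCog(bot))

cog = bot.get_cog("MyCog")
# With the patch, prefix commands are returned again alongside
# application commands, so help commands can enumerate them:
print(cog.get_commands())  # e.g. [<Command name=ping>]
```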
| {"golden_diff": "diff --git a/discord/ext/commands/cog.py b/discord/ext/commands/cog.py\n--- a/discord/ext/commands/cog.py\n+++ b/discord/ext/commands/cog.py\n@@ -27,7 +27,7 @@\n import discord\n from ...cog import Cog\n \n-from typing import Any, Callable, Generator, TYPE_CHECKING, TypeVar, Type\n+from typing import Any, Callable, Generator, TYPE_CHECKING, TypeVar, Type, Union\n \n from ...commands import ApplicationCommand\n \n@@ -67,3 +67,16 @@\n yield command\n if isinstance(command, GroupMixin):\n yield from command.walk_commands()\n+\n+ def get_commands(self) -> List[Union[ApplicationCommand, Command]]:\n+ r\"\"\"\n+ Returns\n+ --------\n+ List[Union[:class:`~discord.ApplicationCommand`, :class:`.Command`]]\n+ A :class:`list` of commands that are defined inside this cog.\n+\n+ .. note::\n+\n+ This does not include subcommands.\n+ \"\"\"\n+ return [c for c in self.__cog_commands__ if c.parent is None]\n", "issue": "Cog.get_commands() does not return ext.commands commands\n### Summary\r\n\r\n`Cog.get_commands()` no longer returns anything but slash/interaction based commands, breaking all help commands\r\n\r\n### Reproduction Steps\r\n\r\nRun the example below with a cog that has `ext.commands` based commands.\r\n\r\n### Minimal Reproducible Code\r\n\r\n```python\r\ncog = bot.get_cog(\"MyCog\")\r\ncommands = cog.get_commands()\r\nprint(commands)\r\n#This will return [] unless MyCog contains slash/interaction based commands\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nCommands inside cog returned as a list.\r\n\r\n### Actual Results\r\n\r\nOnly interaction-based commands are returned\r\n\r\n### Intents\r\n\r\n```\r\ndiscord.Intents(guilds = True, members = True, bans = True, emojis = True, messages = True, invites = True, reactions = True)\r\n```\r\n\r\n### System Information\r\n```\r\n- Python v3.10.1-final\r\n- py-cord v2.0.0-alpha\r\n - py-cord pkg_resources: v2.0.0a4627+gca1a76cb\r\n- aiohttp v3.7.4.post0\r\n- system info: Linux 5.15.10-zen1-1-zen #1 ZEN SMP PREEMPT Fri, 17 Dec 2021 11:17:39 +0000\r\n```\r\n### Checklist\r\n\r\n- [X] I have searched the open issues for duplicates.\r\n- [X] I have shown the entire traceback, if possible.\r\n- [X] I have removed my token from display, if visible.\r\n\r\n### Additional Context\r\n\r\n_No response_\n", "before_files": [{"content": "\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2021 Rapptz\nCopyright (c) 2021-present Pycord Development\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom __future__ import annotations\n\nimport discord\nfrom ...cog import Cog\n\nfrom typing import Any, Callable, Generator, TYPE_CHECKING, TypeVar, Type\n\nfrom ...commands import ApplicationCommand\n\nif TYPE_CHECKING:\n from .core import Command\n\n__all__ = ('Cog',)\n\nCogT = TypeVar('CogT', bound='Cog')\nFuncT = TypeVar('FuncT', bound=Callable[..., Any])\n\nMISSING: Any = discord.utils.MISSING\n\nclass Cog(Cog):\n def __new__(cls: Type[CogT], *args: Any, **kwargs: Any) -> CogT:\n # For issue 426, we need to store a copy of the command objects\n # since we modify them to inject `self` to them.\n # To do this, we need to interfere with the Cog creation process.\n self = super().__new__(cls)\n\n return self\n\n def walk_commands(self) -> Generator[Command, None, None]:\n \"\"\"An iterator that recursively walks through this cog's commands and subcommands.\n\n Yields\n ------\n Union[:class:`.Command`, :class:`.Group`]\n A command or group from the cog.\n \"\"\"\n from .core import GroupMixin\n for command in self.__cog_commands__:\n if isinstance(command, ApplicationCommand):\n yield command\n else:\n if command.parent is None:\n yield command\n if isinstance(command, GroupMixin):\n yield from command.walk_commands()\n", "path": "discord/ext/commands/cog.py"}], "after_files": [{"content": "\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2021 Rapptz\nCopyright (c) 2021-present Pycord Development\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom __future__ import annotations\n\nimport discord\nfrom ...cog import Cog\n\nfrom typing import Any, Callable, Generator, TYPE_CHECKING, TypeVar, Type, Union\n\nfrom ...commands import ApplicationCommand\n\nif TYPE_CHECKING:\n from .core import Command\n\n__all__ = ('Cog',)\n\nCogT = TypeVar('CogT', bound='Cog')\nFuncT = TypeVar('FuncT', bound=Callable[..., Any])\n\nMISSING: Any = discord.utils.MISSING\n\nclass Cog(Cog):\n def __new__(cls: Type[CogT], *args: Any, **kwargs: Any) -> CogT:\n # For issue 426, we need to store a copy of the command objects\n # since we modify them to inject `self` to them.\n # To do this, we need to interfere with the Cog creation process.\n self = super().__new__(cls)\n\n return self\n\n def walk_commands(self) -> Generator[Command, None, None]:\n \"\"\"An iterator that recursively walks through this cog's commands and subcommands.\n\n Yields\n ------\n Union[:class:`.Command`, :class:`.Group`]\n A command or group from the cog.\n \"\"\"\n from .core import GroupMixin\n for command in self.__cog_commands__:\n if isinstance(command, ApplicationCommand):\n yield command\n else:\n if command.parent is None:\n yield command\n if isinstance(command, GroupMixin):\n yield from command.walk_commands()\n\n def get_commands(self) -> List[Union[ApplicationCommand, Command]]:\n r\"\"\"\n Returns\n --------\n List[Union[:class:`~discord.ApplicationCommand`, :class:`.Command`]]\n A :class:`list` of commands that are defined inside this cog.\n\n .. note::\n\n This does not include subcommands.\n \"\"\"\n return [c for c in self.__cog_commands__ if c.parent is None]\n", "path": "discord/ext/commands/cog.py"}]} | 1,315 | 249 |
gh_patches_debug_34006 | rasdani/github-patches | git_diff | conan-io__conan-5846 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[question] How to not include some libs in conanbuildinfo.props
Conan version 1.19.0
I have a dependency on boost (boost/1.71.0@conan/stable).
I'm using the visual_studio generator with VS2017
This adds *all* of the Boost libs to conanbuildinfo.props, which causes my build to fail. I don't completely understand why this used to be OK with 1.68 but now fails with that version too after moving my cache to a different drive (perhaps it was due to installing VS2019 alongside VS2017).
But regardless it would be good if I could either cherrypick which libs to include or remove specific ones from the list.
For example, I don't want to include any of the stacktrace libs, which result in duplicate symbols.
I can't figure out how to do it. If I specify any of the 'without_xxx' options then it rebuilds the whole of the Boost dependency, which I would rather avoid; I just want to control which libs it puts in the VS props file!
I was trying to see if there was some place I could remove them from `deps_cpp_info`, but there doesn't seem to be a hook for that (i.e. an event fired just before the generators are run).
Perhaps overriding the generator could do the trick, but that seems like a bit of a sledgehammer and not all that clean (a sketch of that approach follows the issue text).
--- END ISSUE ---
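As context for the reporter's last idea, a rough sketch of what overriding the generator could look like as a custom generator package in Conan 1.x (the class and variable names are illustrative, and `_deps_build_info` is the same private attribute the stock generator reads, so this is a workaround rather than a supported API):

```python
from conans import ConanFile
from conans.client.generators.visualstudio import VisualStudioGenerator

EXCLUDED_PREFIXES = ("libboost_stacktrace",)  # libs to keep out of the props file

class filtered_visual_studio(VisualStudioGenerator):
    @property
    def content(self):
        # Drop unwanted libs before the parent class renders the XML.
        for _, cpp_info in self._deps_build_info.dependencies:
            cpp_info.libs = [lib for lib in cpp_info.libs
                             if not lib.startswith(EXCLUDED_PREFIXES)]
        return super(filtered_visual_studio, self).content

class FilteredVsGeneratorPackage(ConanFile):
    name = "filtered_vs_generator"
    version = "0.1"
```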
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/generators/visualstudio.py`
Content:
```
1 import os
2 import re
3
4 from conans.model import Generator
5 from conans.paths import BUILD_INFO_VISUAL_STUDIO
6 from conans.client.tools.files import VALID_LIB_EXTENSIONS
7
8
9 class VisualStudioGenerator(Generator):
10
11 template = '''<?xml version="1.0" encoding="utf-8"?>
12 <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
13 <ImportGroup Label="PropertySheets" />
14 <PropertyGroup Label="UserMacros" />
15 <PropertyGroup Label="Conan-RootDirs">{item_properties}
16 </PropertyGroup>
17 {properties}
18 <ItemGroup />
19 </Project>'''
20
21 properties_template = ''' <PropertyGroup Label="ConanVariables"{condition}>
22 <ConanCompilerFlags>{compiler_flags}</ConanCompilerFlags>
23 <ConanPreprocessorDefinitions>{definitions}</ConanPreprocessorDefinitions>
24 <ConanIncludeDirectories>{include_dirs}</ConanIncludeDirectories>
25 <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>
26 <ConanLibraryDirectories>{lib_dirs}</ConanLibraryDirectories>
27 <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>
28 </PropertyGroup>
29 <PropertyGroup{condition}>
30 <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>
31 <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>
32 </PropertyGroup>
33 <ItemDefinitionGroup{condition}>
34 <ClCompile>
35 <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
36 <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>
37 <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>
38 </ClCompile>
39 <Link>
40 <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
41 <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>
42 <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>
43 </Link>
44 <Lib>
45 <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
46 <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>
47 </Lib>
48 <Midl>
49 <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
50 </Midl>
51 <ResourceCompile>
52 <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
53 <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>
54 <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>
55 </ResourceCompile>
56 </ItemDefinitionGroup>'''
57
58 item_template = '''
59 <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''
60
61 def _format_items(self):
62 sections = []
63 for dep_name, cpp_info in self._deps_build_info.dependencies:
64 fields = {
65 'root_dir': cpp_info.rootpath,
66 'name': dep_name.replace(".", "-")
67 }
68 section = self.item_template.format(**fields)
69 sections.append(section)
70 return "".join(sections)
71
72 @property
73 def filename(self):
74 return BUILD_INFO_VISUAL_STUDIO
75
76 def _format_properties(self, build_info, condition):
77 def has_valid_ext(lib):
78 ext = os.path.splitext(lib)[1]
79 return ext in VALID_LIB_EXTENSIONS
80
81 fields = {
82 'condition': condition,
83 'bin_dirs': "".join("%s;" % p for p in build_info.bin_paths),
84 'res_dirs': "".join("%s;" % p for p in build_info.res_paths),
85 'include_dirs': "".join("%s;" % p for p in build_info.include_paths),
86 'lib_dirs': "".join("%s;" % p for p in build_info.lib_paths),
87 'libs': "".join(['%s.lib;' % lib if not has_valid_ext(lib)
88 else '%s;' % lib for lib in build_info.libs]),
89 'definitions': "".join("%s;" % d for d in build_info.defines),
90 'compiler_flags': " ".join(build_info.cxxflags + build_info.cflags),
91 'linker_flags': " ".join(build_info.sharedlinkflags),
92 'exe_flags': " ".join(build_info.exelinkflags)
93 }
94 formatted_template = self.properties_template.format(**fields)
95 return formatted_template
96
97 @property
98 def content(self):
99 per_item_props = self._format_items()
100
101 properties = [self._format_properties(self._deps_build_info, condition='')]
102 for config, cpp_info in self._deps_build_info.configs.items():
103 condition = " Condition=\"'$(Configuration)' == '%s'\"" % config
104 properties.append(self._format_properties(cpp_info, condition=condition))
105
106 fields = {
107 'item_properties': per_item_props,
108 'properties': '\n'.join(properties)
109 }
110 formatted_template = self.template.format(**fields)
111
112 userprofile = os.getenv("USERPROFILE")
113 if userprofile:
114 userprofile = userprofile.replace("\\", "\\\\")
115 formatted_template = re.sub(userprofile, "$(USERPROFILE)", formatted_template,
116 flags=re.I)
117 return formatted_template
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conans/client/generators/visualstudio.py b/conans/client/generators/visualstudio.py
--- a/conans/client/generators/visualstudio.py
+++ b/conans/client/generators/visualstudio.py
@@ -18,13 +18,15 @@
<ItemGroup />
</Project>'''
- properties_template = ''' <PropertyGroup Label="ConanVariables"{condition}>
+ properties_template = '''<PropertyGroup Label="ConanVariables"{condition}>
<ConanCompilerFlags>{compiler_flags}</ConanCompilerFlags>
+ <ConanLinkerFlags>{linker_flags}</ConanLinkerFlags>
<ConanPreprocessorDefinitions>{definitions}</ConanPreprocessorDefinitions>
<ConanIncludeDirectories>{include_dirs}</ConanIncludeDirectories>
<ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>
<ConanLibraryDirectories>{lib_dirs}</ConanLibraryDirectories>
<ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>
+ <ConanLibraries>{libs}</ConanLibraries>
</PropertyGroup>
<PropertyGroup{condition}>
<LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>
@@ -38,13 +40,9 @@
</ClCompile>
<Link>
<AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
- <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>
- <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>
+ <AdditionalDependencies>$(ConanLibraries)%(AdditionalDependencies)</AdditionalDependencies>
+ <AdditionalOptions>$(ConanLinkerFlags) %(AdditionalOptions)</AdditionalOptions>
</Link>
- <Lib>
- <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
- <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>
- </Lib>
<Midl>
<AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
</Midl>
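Design note on the patch above: by routing the library list and linker flags through the `ConanLibraries` and `ConanLinkerFlags` MSBuild properties instead of splicing `{libs}` straight into `<AdditionalDependencies>`, the generator exposes a single override point. A consuming `.vcxproj` can redefine or filter `$(ConanLibraries)` in a `<PropertyGroup>` after importing `conanbuildinfo.props`, which answers the reporter's question without rebuilding Boost.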
| {"golden_diff": "diff --git a/conans/client/generators/visualstudio.py b/conans/client/generators/visualstudio.py\n--- a/conans/client/generators/visualstudio.py\n+++ b/conans/client/generators/visualstudio.py\n@@ -18,13 +18,15 @@\n <ItemGroup />\n </Project>'''\n \n- properties_template = ''' <PropertyGroup Label=\"ConanVariables\"{condition}>\n+ properties_template = '''<PropertyGroup Label=\"ConanVariables\"{condition}>\n <ConanCompilerFlags>{compiler_flags}</ConanCompilerFlags>\n+ <ConanLinkerFlags>{linker_flags}</ConanLinkerFlags>\n <ConanPreprocessorDefinitions>{definitions}</ConanPreprocessorDefinitions>\n <ConanIncludeDirectories>{include_dirs}</ConanIncludeDirectories>\n <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>\n <ConanLibraryDirectories>{lib_dirs}</ConanLibraryDirectories>\n <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>\n+ <ConanLibraries>{libs}</ConanLibraries>\n </PropertyGroup>\n <PropertyGroup{condition}>\n <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>\n@@ -38,13 +40,9 @@\n </ClCompile>\n <Link>\n <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n- <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n- <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>\n+ <AdditionalDependencies>$(ConanLibraries)%(AdditionalDependencies)</AdditionalDependencies>\n+ <AdditionalOptions>$(ConanLinkerFlags) %(AdditionalOptions)</AdditionalOptions>\n </Link>\n- <Lib>\n- <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n- <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n- </Lib>\n <Midl>\n <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </Midl>\n", "issue": "[question] How to not include some libs in conanbuildinfo.props\nConan version 1.19.0\r\n\r\nI have a dependency on boost (boost/1.71.0@conan/stable).\r\n\r\nI'm using the visual_studio generator with VS2017\r\n\r\nThis adds *all* of the boost libs to conanbuildinfo.props which causes my build to fail. I don't completely understand why this used to be OK with 1.68 and now fails with that too after moving my cache to a different drive (perhaps it was due to installing vs2019 alongside vs2017). \r\n\r\nBut regardless it would be good if I could either cherrypick which libs to include or remove specific ones from the list.\r\n\r\nFor example I don't want to include any of the stacktrace libs which result in duplicate symbols.\r\n\r\nI can't figure out how to do it. If I specify any of the 'without_xxx' options then it rebuilds the whole of the boost dependency which I would rather avoid - I just want to control which libs it puts in the vs props file!\r\n\r\nI was trying to see if there was some place I could remove them from deps_cpp_info but it doesn't look like there is a place to put that (i.e. 
an event just before the generators are run).\r\n\r\nPerhaps overriding the generator could do the trick but it seems like that would be a bit of a sledgehammer and not all that clean.\r\n\r\n\r\n\n", "before_files": [{"content": "import os\nimport re\n\nfrom conans.model import Generator\nfrom conans.paths import BUILD_INFO_VISUAL_STUDIO\nfrom conans.client.tools.files import VALID_LIB_EXTENSIONS\n\n\nclass VisualStudioGenerator(Generator):\n\n template = '''<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n <ImportGroup Label=\"PropertySheets\" />\n <PropertyGroup Label=\"UserMacros\" />\n <PropertyGroup Label=\"Conan-RootDirs\">{item_properties}\n </PropertyGroup>\n {properties}\n <ItemGroup />\n</Project>'''\n\n properties_template = ''' <PropertyGroup Label=\"ConanVariables\"{condition}>\n <ConanCompilerFlags>{compiler_flags}</ConanCompilerFlags>\n <ConanPreprocessorDefinitions>{definitions}</ConanPreprocessorDefinitions>\n <ConanIncludeDirectories>{include_dirs}</ConanIncludeDirectories>\n <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>\n <ConanLibraryDirectories>{lib_dirs}</ConanLibraryDirectories>\n <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>\n </PropertyGroup>\n <PropertyGroup{condition}>\n <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>\n <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>\n </PropertyGroup>\n <ItemDefinitionGroup{condition}>\n <ClCompile>\n <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>\n </ClCompile>\n <Link>\n <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>\n </Link>\n <Lib>\n <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n </Lib>\n <Midl>\n <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </Midl>\n <ResourceCompile>\n <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>\n </ResourceCompile>\n </ItemDefinitionGroup>'''\n\n item_template = '''\n <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''\n\n def _format_items(self):\n sections = []\n for dep_name, cpp_info in self._deps_build_info.dependencies:\n fields = {\n 'root_dir': cpp_info.rootpath,\n 'name': dep_name.replace(\".\", \"-\")\n }\n section = self.item_template.format(**fields)\n sections.append(section)\n return \"\".join(sections)\n\n @property\n def filename(self):\n return BUILD_INFO_VISUAL_STUDIO\n\n def _format_properties(self, build_info, condition):\n def has_valid_ext(lib):\n ext = os.path.splitext(lib)[1]\n return ext in VALID_LIB_EXTENSIONS\n\n fields = {\n 'condition': 
condition,\n 'bin_dirs': \"\".join(\"%s;\" % p for p in build_info.bin_paths),\n 'res_dirs': \"\".join(\"%s;\" % p for p in build_info.res_paths),\n 'include_dirs': \"\".join(\"%s;\" % p for p in build_info.include_paths),\n 'lib_dirs': \"\".join(\"%s;\" % p for p in build_info.lib_paths),\n 'libs': \"\".join(['%s.lib;' % lib if not has_valid_ext(lib)\n else '%s;' % lib for lib in build_info.libs]),\n 'definitions': \"\".join(\"%s;\" % d for d in build_info.defines),\n 'compiler_flags': \" \".join(build_info.cxxflags + build_info.cflags),\n 'linker_flags': \" \".join(build_info.sharedlinkflags),\n 'exe_flags': \" \".join(build_info.exelinkflags)\n }\n formatted_template = self.properties_template.format(**fields)\n return formatted_template\n\n @property\n def content(self):\n per_item_props = self._format_items()\n\n properties = [self._format_properties(self._deps_build_info, condition='')]\n for config, cpp_info in self._deps_build_info.configs.items():\n condition = \" Condition=\\\"'$(Configuration)' == '%s'\\\"\" % config\n properties.append(self._format_properties(cpp_info, condition=condition))\n\n fields = {\n 'item_properties': per_item_props,\n 'properties': '\\n'.join(properties)\n }\n formatted_template = self.template.format(**fields)\n\n userprofile = os.getenv(\"USERPROFILE\")\n if userprofile:\n userprofile = userprofile.replace(\"\\\\\", \"\\\\\\\\\")\n formatted_template = re.sub(userprofile, \"$(USERPROFILE)\", formatted_template,\n flags=re.I)\n return formatted_template\n", "path": "conans/client/generators/visualstudio.py"}], "after_files": [{"content": "import os\nimport re\n\nfrom conans.model import Generator\nfrom conans.paths import BUILD_INFO_VISUAL_STUDIO\nfrom conans.client.tools.files import VALID_LIB_EXTENSIONS\n\n\nclass VisualStudioGenerator(Generator):\n\n template = '''<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n <ImportGroup Label=\"PropertySheets\" />\n <PropertyGroup Label=\"UserMacros\" />\n <PropertyGroup Label=\"Conan-RootDirs\">{item_properties}\n </PropertyGroup>\n {properties}\n <ItemGroup />\n</Project>'''\n\n properties_template = '''<PropertyGroup Label=\"ConanVariables\"{condition}>\n <ConanCompilerFlags>{compiler_flags}</ConanCompilerFlags>\n <ConanLinkerFlags>{linker_flags}</ConanLinkerFlags>\n <ConanPreprocessorDefinitions>{definitions}</ConanPreprocessorDefinitions>\n <ConanIncludeDirectories>{include_dirs}</ConanIncludeDirectories>\n <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>\n <ConanLibraryDirectories>{lib_dirs}</ConanLibraryDirectories>\n <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>\n <ConanLibraries>{libs}</ConanLibraries>\n </PropertyGroup>\n <PropertyGroup{condition}>\n <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>\n <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>\n </PropertyGroup>\n <ItemDefinitionGroup{condition}>\n <ClCompile>\n <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>\n </ClCompile>\n <Link>\n <AdditionalLibraryDirectories>$(ConanLibraryDirectories)%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n 
<AdditionalDependencies>$(ConanLibraries)%(AdditionalDependencies)</AdditionalDependencies>\n <AdditionalOptions>$(ConanLinkerFlags) %(AdditionalOptions)</AdditionalOptions>\n </Link>\n <Midl>\n <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </Midl>\n <ResourceCompile>\n <AdditionalIncludeDirectories>$(ConanIncludeDirectories)%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>$(ConanPreprocessorDefinitions)%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>$(ConanCompilerFlags) %(AdditionalOptions)</AdditionalOptions>\n </ResourceCompile>\n </ItemDefinitionGroup>'''\n\n item_template = '''\n <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''\n\n def _format_items(self):\n sections = []\n for dep_name, cpp_info in self._deps_build_info.dependencies:\n fields = {\n 'root_dir': cpp_info.rootpath,\n 'name': dep_name.replace(\".\", \"-\")\n }\n section = self.item_template.format(**fields)\n sections.append(section)\n return \"\".join(sections)\n\n @property\n def filename(self):\n return BUILD_INFO_VISUAL_STUDIO\n\n def _format_properties(self, build_info, condition):\n def has_valid_ext(lib):\n ext = os.path.splitext(lib)[1]\n return ext in VALID_LIB_EXTENSIONS\n\n fields = {\n 'condition': condition,\n 'bin_dirs': \"\".join(\"%s;\" % p for p in build_info.bin_paths),\n 'res_dirs': \"\".join(\"%s;\" % p for p in build_info.res_paths),\n 'include_dirs': \"\".join(\"%s;\" % p for p in build_info.include_paths),\n 'lib_dirs': \"\".join(\"%s;\" % p for p in build_info.lib_paths),\n 'libs': \"\".join(['%s.lib;' % lib if not has_valid_ext(lib)\n else '%s;' % lib for lib in build_info.libs]),\n 'definitions': \"\".join(\"%s;\" % d for d in build_info.defines),\n 'compiler_flags': \" \".join(build_info.cxxflags + build_info.cflags),\n 'linker_flags': \" \".join(build_info.sharedlinkflags),\n 'exe_flags': \" \".join(build_info.exelinkflags)\n }\n formatted_template = self.properties_template.format(**fields)\n return formatted_template\n\n @property\n def content(self):\n per_item_props = self._format_items()\n\n properties = [self._format_properties(self._deps_build_info, condition='')]\n for config, cpp_info in self._deps_build_info.configs.items():\n condition = \" Condition=\\\"'$(Configuration)' == '%s'\\\"\" % config\n properties.append(self._format_properties(cpp_info, condition=condition))\n\n fields = {\n 'item_properties': per_item_props,\n 'properties': '\\n'.join(properties)\n }\n formatted_template = self.template.format(**fields)\n\n userprofile = os.getenv(\"USERPROFILE\")\n if userprofile:\n userprofile = userprofile.replace(\"\\\\\", \"\\\\\\\\\")\n formatted_template = re.sub(userprofile, \"$(USERPROFILE)\", formatted_template,\n flags=re.I)\n return formatted_template\n", "path": "conans/client/generators/visualstudio.py"}]} | 1,964 | 462 |
gh_patches_debug_32036 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-851 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve errors logging
Right now errors are being caught by GraphQL core and sent into the `errors` field of the GraphQL response. Unfortunately that makes things harder to debug; we should print better exceptions locally, using one of the following options:
```python
def process_result(
self, request: HttpRequest, result: ExecutionResult
) -> GraphQLHTTPResponse:
data: GraphQLHTTPResponse = {"data": result.data}
for error in result.errors:
actual_error = error.original_error or error
traceback.print_exception(
type(actual_error), actual_error, actual_error.__traceback__
)
return data
```
```python
def process_result(self, request, result):
data = super().process_result(request, result)
# Log errors
if result.errors:
for error in result.errors:
try:
if isinstance(error, GraphQLError) and error.original_error:
raise error.original_error
else:
raise error
except Exception as error:
logger.exception(error)
return data
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/schema/schema.py`
Content:
```
1 from typing import Any, Dict, List, Optional, Sequence, Type, Union
2
3 from graphql import (
4 ExecutionContext as GraphQLExecutionContext,
5 GraphQLSchema,
6 get_introspection_query,
7 parse,
8 validate_schema,
9 )
10 from graphql.subscription import subscribe
11 from graphql.type.directives import specified_directives
12
13 from strawberry.custom_scalar import ScalarDefinition
14 from strawberry.enum import EnumDefinition
15 from strawberry.extensions import Extension
16 from strawberry.schema.schema_converter import GraphQLCoreConverter
17 from strawberry.types import ExecutionResult
18 from strawberry.types.types import TypeDefinition
19 from strawberry.union import StrawberryUnion
20
21 from ..middleware import DirectivesMiddleware, Middleware
22 from ..printer import print_schema
23 from .execute import execute, execute_sync
24
25
26 class Schema:
27 def __init__(
28 self,
29 # TODO: can we make sure we only allow to pass something that has been decorated?
30 query: Type,
31 mutation: Optional[Type] = None,
32 subscription: Optional[Type] = None,
33 directives=(),
34 types=(),
35 extensions: Sequence[Type[Extension]] = (),
36 execution_context_class: Optional[Type[GraphQLExecutionContext]] = None,
37 ):
38 self.extensions = extensions
39 self.execution_context_class = execution_context_class
40 self.schema_converter = GraphQLCoreConverter()
41
42 query_type = self.schema_converter.from_object_type(query)
43 mutation_type = (
44 self.schema_converter.from_object_type(mutation) if mutation else None
45 )
46 subscription_type = (
47 self.schema_converter.from_object_type(subscription)
48 if subscription
49 else None
50 )
51
52 self.middleware: List[Middleware] = [DirectivesMiddleware(directives)]
53
54 directives = [
55 self.schema_converter.from_directive(directive.directive_definition)
56 for directive in directives
57 ]
58
59 self._schema = GraphQLSchema(
60 query=query_type,
61 mutation=mutation_type,
62 subscription=subscription_type if subscription else None,
63 directives=specified_directives + directives,
64 types=list(map(self.schema_converter.from_object_type, types)),
65 )
66
67 # Validate schema early because we want developers to know about
68 # possible issues as soon as possible
69 errors = validate_schema(self._schema)
70 if errors:
71 formatted_errors = "\n\n".join(f"❌ {error.message}" for error in errors)
72 raise ValueError(f"Invalid Schema. Errors:\n\n{formatted_errors}")
73
74 self.query = self.schema_converter.type_map[query_type.name]
75
76 def get_type_by_name(
77 self, name: str
78 ) -> Optional[
79 Union[TypeDefinition, ScalarDefinition, EnumDefinition, StrawberryUnion]
80 ]:
81 if name in self.schema_converter.type_map:
82 return self.schema_converter.type_map[name].definition
83
84 return None
85
86 async def execute(
87 self,
88 query: str,
89 variable_values: Optional[Dict[str, Any]] = None,
90 context_value: Optional[Any] = None,
91 root_value: Optional[Any] = None,
92 operation_name: Optional[str] = None,
93 validate_queries: bool = True,
94 ) -> ExecutionResult:
95 result = await execute(
96 self._schema,
97 query,
98 variable_values=variable_values,
99 root_value=root_value,
100 context_value=context_value,
101 operation_name=operation_name,
102 additional_middlewares=self.middleware,
103 extensions=self.extensions,
104 execution_context_class=self.execution_context_class,
105 validate_queries=validate_queries,
106 )
107
108 return ExecutionResult(
109 data=result.data,
110 errors=result.errors,
111 extensions=result.extensions,
112 )
113
114 def execute_sync(
115 self,
116 query: str,
117 variable_values: Optional[Dict[str, Any]] = None,
118 context_value: Optional[Any] = None,
119 root_value: Optional[Any] = None,
120 operation_name: Optional[str] = None,
121 validate_queries: bool = True,
122 ) -> ExecutionResult:
123 result = execute_sync(
124 self._schema,
125 query,
126 variable_values=variable_values,
127 root_value=root_value,
128 context_value=context_value,
129 operation_name=operation_name,
130 additional_middlewares=self.middleware,
131 extensions=self.extensions,
132 execution_context_class=self.execution_context_class,
133 validate_queries=validate_queries,
134 )
135
136 return ExecutionResult(
137 data=result.data,
138 errors=result.errors,
139 extensions=result.extensions,
140 )
141
142 async def subscribe(
143 self,
144 query: str,
145 variable_values: Optional[Dict[str, Any]] = None,
146 context_value: Optional[Any] = None,
147 root_value: Optional[Any] = None,
148 operation_name: Optional[str] = None,
149 ):
150 return await subscribe(
151 self._schema,
152 parse(query),
153 root_value=root_value,
154 context_value=context_value,
155 variable_values=variable_values,
156 operation_name=operation_name,
157 )
158
159 def as_str(self) -> str:
160 return print_schema(self)
161
162 __str__ = as_str
163
164 def introspect(self) -> Dict[str, Any]:
165 """Return the introspection query result for the current schema
166
167 Raises:
168 ValueError: If the introspection query fails due to an invalid schema
169 """
170 introspection = self.execute_sync(get_introspection_query())
171 if introspection.errors or not introspection.data:
172 raise ValueError(f"Invalid Schema. Errors {introspection.errors!r}")
173
174 return introspection.data
175
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strawberry/schema/schema.py b/strawberry/schema/schema.py
--- a/strawberry/schema/schema.py
+++ b/strawberry/schema/schema.py
@@ -1,3 +1,4 @@
+import logging
from typing import Any, Dict, List, Optional, Sequence, Type, Union
from graphql import (
@@ -7,6 +8,7 @@
parse,
validate_schema,
)
+from graphql.error import GraphQLError
from graphql.subscription import subscribe
from graphql.type.directives import specified_directives
@@ -23,6 +25,9 @@
from .execute import execute, execute_sync
+logger = logging.getLogger("strawberry.execution")
+
+
class Schema:
def __init__(
self,
@@ -83,6 +88,11 @@
return None
+ def process_errors(self, errors: List[GraphQLError]) -> None:
+ for error in errors:
+ actual_error = error.original_error or error
+ logger.error(actual_error, exc_info=actual_error)
+
async def execute(
self,
query: str,
@@ -105,6 +115,9 @@
validate_queries=validate_queries,
)
+ if result.errors:
+ self.process_errors(result.errors)
+
return ExecutionResult(
data=result.data,
errors=result.errors,
@@ -133,6 +146,9 @@
validate_queries=validate_queries,
)
+ if result.errors:
+ self.process_errors(result.errors)
+
return ExecutionResult(
data=result.data,
errors=result.errors,
| {"golden_diff": "diff --git a/strawberry/schema/schema.py b/strawberry/schema/schema.py\n--- a/strawberry/schema/schema.py\n+++ b/strawberry/schema/schema.py\n@@ -1,3 +1,4 @@\n+import logging\n from typing import Any, Dict, List, Optional, Sequence, Type, Union\n \n from graphql import (\n@@ -7,6 +8,7 @@\n parse,\n validate_schema,\n )\n+from graphql.error import GraphQLError\n from graphql.subscription import subscribe\n from graphql.type.directives import specified_directives\n \n@@ -23,6 +25,9 @@\n from .execute import execute, execute_sync\n \n \n+logger = logging.getLogger(\"strawberry.execution\")\n+\n+\n class Schema:\n def __init__(\n self,\n@@ -83,6 +88,11 @@\n \n return None\n \n+ def process_errors(self, errors: List[GraphQLError]) -> None:\n+ for error in errors:\n+ actual_error = error.original_error or error\n+ logger.error(actual_error, exc_info=actual_error)\n+\n async def execute(\n self,\n query: str,\n@@ -105,6 +115,9 @@\n validate_queries=validate_queries,\n )\n \n+ if result.errors:\n+ self.process_errors(result.errors)\n+\n return ExecutionResult(\n data=result.data,\n errors=result.errors,\n@@ -133,6 +146,9 @@\n validate_queries=validate_queries,\n )\n \n+ if result.errors:\n+ self.process_errors(result.errors)\n+\n return ExecutionResult(\n data=result.data,\n errors=result.errors,\n", "issue": "Improve errors logging\nRight now errors are being caught by GraphQL core and sent into the `errors` fields of the GraphQL response. Unfortunately that makes things harder to debug locally; we should print better exceptions locally, by using one of the following options:\r\n\r\n```python\r\n def process_result(\r\n self, request: HttpRequest, result: ExecutionResult\r\n ) -> GraphQLHTTPResponse:\r\n data: GraphQLHTTPResponse = {\"data\": result.data}\r\n\r\n for error in result.errors:\r\n actual_error = error.original_error or error\r\n\r\n traceback.print_exception(\r\n type(actual_error), actual_error, actual_error.__traceback__\r\n )\r\n\r\n return data\r\n```\r\n\r\n```python\r\n def process_result(self, request, result):\r\n data = super().process_result(request, result)\r\n\r\n # Log errors\r\n if result.errors:\r\n for error in result.errors:\r\n try:\r\n if isinstance(error, GraphQLError) and error.original_error:\r\n raise error.original_error\r\n else:\r\n raise error\r\n except Exception as error:\r\n logger.exception(error)\r\n\r\n return data\r\n```\n", "before_files": [{"content": "from typing import Any, Dict, List, Optional, Sequence, Type, Union\n\nfrom graphql import (\n ExecutionContext as GraphQLExecutionContext,\n GraphQLSchema,\n get_introspection_query,\n parse,\n validate_schema,\n)\nfrom graphql.subscription import subscribe\nfrom graphql.type.directives import specified_directives\n\nfrom strawberry.custom_scalar import ScalarDefinition\nfrom strawberry.enum import EnumDefinition\nfrom strawberry.extensions import Extension\nfrom strawberry.schema.schema_converter import GraphQLCoreConverter\nfrom strawberry.types import ExecutionResult\nfrom strawberry.types.types import TypeDefinition\nfrom strawberry.union import StrawberryUnion\n\nfrom ..middleware import DirectivesMiddleware, Middleware\nfrom ..printer import print_schema\nfrom .execute import execute, execute_sync\n\n\nclass Schema:\n def __init__(\n self,\n # TODO: can we make sure we only allow to pass something that has been decorated?\n query: Type,\n mutation: Optional[Type] = None,\n subscription: Optional[Type] = None,\n directives=(),\n types=(),\n extensions: 
Sequence[Type[Extension]] = (),\n execution_context_class: Optional[Type[GraphQLExecutionContext]] = None,\n ):\n self.extensions = extensions\n self.execution_context_class = execution_context_class\n self.schema_converter = GraphQLCoreConverter()\n\n query_type = self.schema_converter.from_object_type(query)\n mutation_type = (\n self.schema_converter.from_object_type(mutation) if mutation else None\n )\n subscription_type = (\n self.schema_converter.from_object_type(subscription)\n if subscription\n else None\n )\n\n self.middleware: List[Middleware] = [DirectivesMiddleware(directives)]\n\n directives = [\n self.schema_converter.from_directive(directive.directive_definition)\n for directive in directives\n ]\n\n self._schema = GraphQLSchema(\n query=query_type,\n mutation=mutation_type,\n subscription=subscription_type if subscription else None,\n directives=specified_directives + directives,\n types=list(map(self.schema_converter.from_object_type, types)),\n )\n\n # Validate schema early because we want developers to know about\n # possible issues as soon as possible\n errors = validate_schema(self._schema)\n if errors:\n formatted_errors = \"\\n\\n\".join(f\"\u274c {error.message}\" for error in errors)\n raise ValueError(f\"Invalid Schema. Errors:\\n\\n{formatted_errors}\")\n\n self.query = self.schema_converter.type_map[query_type.name]\n\n def get_type_by_name(\n self, name: str\n ) -> Optional[\n Union[TypeDefinition, ScalarDefinition, EnumDefinition, StrawberryUnion]\n ]:\n if name in self.schema_converter.type_map:\n return self.schema_converter.type_map[name].definition\n\n return None\n\n async def execute(\n self,\n query: str,\n variable_values: Optional[Dict[str, Any]] = None,\n context_value: Optional[Any] = None,\n root_value: Optional[Any] = None,\n operation_name: Optional[str] = None,\n validate_queries: bool = True,\n ) -> ExecutionResult:\n result = await execute(\n self._schema,\n query,\n variable_values=variable_values,\n root_value=root_value,\n context_value=context_value,\n operation_name=operation_name,\n additional_middlewares=self.middleware,\n extensions=self.extensions,\n execution_context_class=self.execution_context_class,\n validate_queries=validate_queries,\n )\n\n return ExecutionResult(\n data=result.data,\n errors=result.errors,\n extensions=result.extensions,\n )\n\n def execute_sync(\n self,\n query: str,\n variable_values: Optional[Dict[str, Any]] = None,\n context_value: Optional[Any] = None,\n root_value: Optional[Any] = None,\n operation_name: Optional[str] = None,\n validate_queries: bool = True,\n ) -> ExecutionResult:\n result = execute_sync(\n self._schema,\n query,\n variable_values=variable_values,\n root_value=root_value,\n context_value=context_value,\n operation_name=operation_name,\n additional_middlewares=self.middleware,\n extensions=self.extensions,\n execution_context_class=self.execution_context_class,\n validate_queries=validate_queries,\n )\n\n return ExecutionResult(\n data=result.data,\n errors=result.errors,\n extensions=result.extensions,\n )\n\n async def subscribe(\n self,\n query: str,\n variable_values: Optional[Dict[str, Any]] = None,\n context_value: Optional[Any] = None,\n root_value: Optional[Any] = None,\n operation_name: Optional[str] = None,\n ):\n return await subscribe(\n self._schema,\n parse(query),\n root_value=root_value,\n context_value=context_value,\n variable_values=variable_values,\n operation_name=operation_name,\n )\n\n def as_str(self) -> str:\n return print_schema(self)\n\n __str__ = as_str\n\n 
def introspect(self) -> Dict[str, Any]:\n \"\"\"Return the introspection query result for the current schema\n\n Raises:\n ValueError: If the introspection query fails due to an invalid schema\n \"\"\"\n introspection = self.execute_sync(get_introspection_query())\n if introspection.errors or not introspection.data:\n raise ValueError(f\"Invalid Schema. Errors {introspection.errors!r}\")\n\n return introspection.data\n", "path": "strawberry/schema/schema.py"}], "after_files": [{"content": "import logging\nfrom typing import Any, Dict, List, Optional, Sequence, Type, Union\n\nfrom graphql import (\n ExecutionContext as GraphQLExecutionContext,\n GraphQLSchema,\n get_introspection_query,\n parse,\n validate_schema,\n)\nfrom graphql.error import GraphQLError\nfrom graphql.subscription import subscribe\nfrom graphql.type.directives import specified_directives\n\nfrom strawberry.custom_scalar import ScalarDefinition\nfrom strawberry.enum import EnumDefinition\nfrom strawberry.extensions import Extension\nfrom strawberry.schema.schema_converter import GraphQLCoreConverter\nfrom strawberry.types import ExecutionResult\nfrom strawberry.types.types import TypeDefinition\nfrom strawberry.union import StrawberryUnion\n\nfrom ..middleware import DirectivesMiddleware, Middleware\nfrom ..printer import print_schema\nfrom .execute import execute, execute_sync\n\n\nlogger = logging.getLogger(\"strawberry.execution\")\n\n\nclass Schema:\n def __init__(\n self,\n # TODO: can we make sure we only allow to pass something that has been decorated?\n query: Type,\n mutation: Optional[Type] = None,\n subscription: Optional[Type] = None,\n directives=(),\n types=(),\n extensions: Sequence[Type[Extension]] = (),\n execution_context_class: Optional[Type[GraphQLExecutionContext]] = None,\n ):\n self.extensions = extensions\n self.execution_context_class = execution_context_class\n self.schema_converter = GraphQLCoreConverter()\n\n query_type = self.schema_converter.from_object_type(query)\n mutation_type = (\n self.schema_converter.from_object_type(mutation) if mutation else None\n )\n subscription_type = (\n self.schema_converter.from_object_type(subscription)\n if subscription\n else None\n )\n\n self.middleware: List[Middleware] = [DirectivesMiddleware(directives)]\n\n directives = [\n self.schema_converter.from_directive(directive.directive_definition)\n for directive in directives\n ]\n\n self._schema = GraphQLSchema(\n query=query_type,\n mutation=mutation_type,\n subscription=subscription_type if subscription else None,\n directives=specified_directives + directives,\n types=list(map(self.schema_converter.from_object_type, types)),\n )\n\n # Validate schema early because we want developers to know about\n # possible issues as soon as possible\n errors = validate_schema(self._schema)\n if errors:\n formatted_errors = \"\\n\\n\".join(f\"\u274c {error.message}\" for error in errors)\n raise ValueError(f\"Invalid Schema. 
Errors:\\n\\n{formatted_errors}\")\n\n self.query = self.schema_converter.type_map[query_type.name]\n\n def get_type_by_name(\n self, name: str\n ) -> Optional[\n Union[TypeDefinition, ScalarDefinition, EnumDefinition, StrawberryUnion]\n ]:\n if name in self.schema_converter.type_map:\n return self.schema_converter.type_map[name].definition\n\n return None\n\n def process_errors(self, errors: List[GraphQLError]) -> None:\n for error in errors:\n actual_error = error.original_error or error\n logger.error(actual_error, exc_info=actual_error)\n\n async def execute(\n self,\n query: str,\n variable_values: Optional[Dict[str, Any]] = None,\n context_value: Optional[Any] = None,\n root_value: Optional[Any] = None,\n operation_name: Optional[str] = None,\n validate_queries: bool = True,\n ) -> ExecutionResult:\n result = await execute(\n self._schema,\n query,\n variable_values=variable_values,\n root_value=root_value,\n context_value=context_value,\n operation_name=operation_name,\n additional_middlewares=self.middleware,\n extensions=self.extensions,\n execution_context_class=self.execution_context_class,\n validate_queries=validate_queries,\n )\n\n if result.errors:\n self.process_errors(result.errors)\n\n return ExecutionResult(\n data=result.data,\n errors=result.errors,\n extensions=result.extensions,\n )\n\n def execute_sync(\n self,\n query: str,\n variable_values: Optional[Dict[str, Any]] = None,\n context_value: Optional[Any] = None,\n root_value: Optional[Any] = None,\n operation_name: Optional[str] = None,\n validate_queries: bool = True,\n ) -> ExecutionResult:\n result = execute_sync(\n self._schema,\n query,\n variable_values=variable_values,\n root_value=root_value,\n context_value=context_value,\n operation_name=operation_name,\n additional_middlewares=self.middleware,\n extensions=self.extensions,\n execution_context_class=self.execution_context_class,\n validate_queries=validate_queries,\n )\n\n if result.errors:\n self.process_errors(result.errors)\n\n return ExecutionResult(\n data=result.data,\n errors=result.errors,\n extensions=result.extensions,\n )\n\n async def subscribe(\n self,\n query: str,\n variable_values: Optional[Dict[str, Any]] = None,\n context_value: Optional[Any] = None,\n root_value: Optional[Any] = None,\n operation_name: Optional[str] = None,\n ):\n return await subscribe(\n self._schema,\n parse(query),\n root_value=root_value,\n context_value=context_value,\n variable_values=variable_values,\n operation_name=operation_name,\n )\n\n def as_str(self) -> str:\n return print_schema(self)\n\n __str__ = as_str\n\n def introspect(self) -> Dict[str, Any]:\n \"\"\"Return the introspection query result for the current schema\n\n Raises:\n ValueError: If the introspection query fails due to an invalid schema\n \"\"\"\n introspection = self.execute_sync(get_introspection_query())\n if introspection.errors or not introspection.data:\n raise ValueError(f\"Invalid Schema. Errors {introspection.errors!r}\")\n\n return introspection.data\n", "path": "strawberry/schema/schema.py"}]} | 2,023 | 353 |
gh_patches_debug_17155 | rasdani/github-patches | git_diff | cloudtools__troposphere-1157 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RDS VPC example instance classes are out of date
https://github.com/cloudtools/troposphere/blob/565247a50bd27a8f45c3e408b466b193473cb9a6/examples/RDS_VPC.py#L67-L76
The current list of instance classes is documented here: https://aws.amazon.com/rds/instance-types/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/RDS_VPC.py`
Content:
```
1 # Converted from RDS_VPC.template located at:
2 # http://aws.amazon.com/cloudformation/aws-cloudformation-templates/
3
4 from troposphere import GetAtt, Join, Output, Parameter, Ref, Template
5 from troposphere.ec2 import SecurityGroup
6 from troposphere.rds import DBInstance, DBSubnetGroup
7
8
9 t = Template()
10
11 t.add_description(
12 "AWS CloudFormation Sample Template VPC_RDS_DB_Instance: Sample template "
13 "showing how to create an RDS DBInstance in an existing Virtual Private "
14 "Cloud (VPC). **WARNING** This template creates an Amazon Relational "
15 "Database Service database instance. You will be billed for the AWS "
16 "resources used if you create a stack from this template.")
17
18 vpcid = t.add_parameter(Parameter(
19 "VpcId",
20 Type="String",
21 Description="VpcId of your existing Virtual Private Cloud (VPC)"
22 ))
23
24 subnet = t.add_parameter(Parameter(
25 "Subnets",
26 Type="CommaDelimitedList",
27 Description=(
28 "The list of SubnetIds, for at least two Availability Zones in the "
29 "region in your Virtual Private Cloud (VPC)")
30 ))
31
32 dbname = t.add_parameter(Parameter(
33 "DBName",
34 Default="MyDatabase",
35 Description="The database name",
36 Type="String",
37 MinLength="1",
38 MaxLength="64",
39 AllowedPattern="[a-zA-Z][a-zA-Z0-9]*",
40 ConstraintDescription=("must begin with a letter and contain only"
41 " alphanumeric characters.")
42 ))
43
44 dbuser = t.add_parameter(Parameter(
45 "DBUser",
46 NoEcho=True,
47 Description="The database admin account username",
48 Type="String",
49 MinLength="1",
50 MaxLength="16",
51 AllowedPattern="[a-zA-Z][a-zA-Z0-9]*",
52 ConstraintDescription=("must begin with a letter and contain only"
53 " alphanumeric characters.")
54 ))
55
56 dbpassword = t.add_parameter(Parameter(
57 "DBPassword",
58 NoEcho=True,
59 Description="The database admin account password",
60 Type="String",
61 MinLength="1",
62 MaxLength="41",
63 AllowedPattern="[a-zA-Z0-9]*",
64 ConstraintDescription="must contain only alphanumeric characters."
65 ))
66
67 dbclass = t.add_parameter(Parameter(
68 "DBClass",
69 Default="db.m1.small",
70 Description="Database instance class",
71 Type="String",
72 AllowedValues=[
73 "db.m1.small", "db.m1.large", "db.m1.xlarge", "db.m2.xlarge",
74 "db.m2.2xlarge", "db.m2.4xlarge"],
75 ConstraintDescription="must select a valid database instance type.",
76 ))
77
78 dballocatedstorage = t.add_parameter(Parameter(
79 "DBAllocatedStorage",
80 Default="5",
81 Description="The size of the database (Gb)",
82 Type="Number",
83 MinValue="5",
84 MaxValue="1024",
85 ConstraintDescription="must be between 5 and 1024Gb.",
86 ))
87
88
89 mydbsubnetgroup = t.add_resource(DBSubnetGroup(
90 "MyDBSubnetGroup",
91 DBSubnetGroupDescription="Subnets available for the RDS DB Instance",
92 SubnetIds=Ref(subnet),
93 ))
94
95 myvpcsecuritygroup = t.add_resource(SecurityGroup(
96 "myVPCSecurityGroup",
97 GroupDescription="Security group for RDS DB Instance.",
98 VpcId=Ref(vpcid)
99 ))
100
101 mydb = t.add_resource(DBInstance(
102 "MyDB",
103 DBName=Ref(dbname),
104 AllocatedStorage=Ref(dballocatedstorage),
105 DBInstanceClass=Ref(dbclass),
106 Engine="MySQL",
107 EngineVersion="5.5",
108 MasterUsername=Ref(dbuser),
109 MasterUserPassword=Ref(dbpassword),
110 DBSubnetGroupName=Ref(mydbsubnetgroup),
111 VPCSecurityGroups=[Ref(myvpcsecuritygroup)],
112 ))
113
114 t.add_output(Output(
115 "JDBCConnectionString",
116 Description="JDBC connection string for database",
117 Value=Join("", [
118 "jdbc:mysql://",
119 GetAtt("MyDB", "Endpoint.Address"),
120 GetAtt("MyDB", "Endpoint.Port"),
121 "/",
122 Ref(dbname)
123 ])
124 ))
125
126 print(t.to_json())
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/RDS_VPC.py b/examples/RDS_VPC.py
--- a/examples/RDS_VPC.py
+++ b/examples/RDS_VPC.py
@@ -70,8 +70,17 @@
Description="Database instance class",
Type="String",
AllowedValues=[
- "db.m1.small", "db.m1.large", "db.m1.xlarge", "db.m2.xlarge",
- "db.m2.2xlarge", "db.m2.4xlarge"],
+ "db.m5.large", "db.m5.xlarge", "db.m5.2xlarge", "db.m5.4xlarge",
+ "db.m5.12xlarge", "db.m5.24xlarge", "db.m4.large", "db.m4.xlarge",
+ "db.m4.2xlarge", "db.m4.4xlarge", "db.m4.10xlarge", "db.m4.16xlarge",
+ "db.r4.large", "db.r4.xlarge", "db.r4.2xlarge", "db.r4.4xlarge",
+ "db.r4.8xlarge", "db.r4.16xlarge", "db.x1e.xlarge", "db.x1e.2xlarge",
+ "db.x1e.4xlarge", "db.x1e.8xlarge", "db.x1e.16xlarge", "db.x1e.32xlarge",
+ "db.x1.16xlarge", "db.x1.32xlarge", "db.r3.large", "db.r3.xlarge",
+ "db.r3.2xlarge", "db.r3.4xlarge", "db.r3.8xlarge", "db.t2.micro",
+ "db.t2.small", "db.t2.medium", "db.t2.large", "db.t2.xlarge",
+ "db.t2.2xlarge"
+ ],
ConstraintDescription="must select a valid database instance type.",
))
| {"golden_diff": "diff --git a/examples/RDS_VPC.py b/examples/RDS_VPC.py\n--- a/examples/RDS_VPC.py\n+++ b/examples/RDS_VPC.py\n@@ -70,8 +70,17 @@\n Description=\"Database instance class\",\n Type=\"String\",\n AllowedValues=[\n- \"db.m1.small\", \"db.m1.large\", \"db.m1.xlarge\", \"db.m2.xlarge\",\n- \"db.m2.2xlarge\", \"db.m2.4xlarge\"],\n+ \"db.m5.large\", \"db.m5.xlarge\", \"db.m5.2xlarge\", \"db.m5.4xlarge\",\n+ \"db.m5.12xlarge\", \"db.m5.24xlarge\", \"db.m4.large\", \"db.m4.xlarge\",\n+ \"db.m4.2xlarge\", \"db.m4.4xlarge\", \"db.m4.10xlarge\", \"db.m4.16xlarge\",\n+ \"db.r4.large\", \"db.r4.xlarge\", \"db.r4.2xlarge\", \"db.r4.4xlarge\",\n+ \"db.r4.8xlarge\", \"db.r4.16xlarge\", \"db.x1e.xlarge\", \"db.x1e.2xlarge\",\n+ \"db.x1e.4xlarge\", \"db.x1e.8xlarge\", \"db.x1e.16xlarge\", \"db.x1e.32xlarge\",\n+ \"db.x1.16xlarge\", \"db.x1.32xlarge\", \"db.r3.large\", \"db.r3.xlarge\",\n+ \"db.r3.2xlarge\", \"db.r3.4xlarge\", \"db.r3.8xlarge\", \"db.t2.micro\",\n+ \"db.t2.small\", \"db.t2.medium\", \"db.t2.large\", \"db.t2.xlarge\",\n+ \"db.t2.2xlarge\"\n+ ],\n ConstraintDescription=\"must select a valid database instance type.\",\n ))\n", "issue": "RDS VPC example instance classes are out of date\nhttps://github.com/cloudtools/troposphere/blob/565247a50bd27a8f45c3e408b466b193473cb9a6/examples/RDS_VPC.py#L67-L76\r\n\r\nThe current list of instance classes is documented here: https://aws.amazon.com/rds/instance-types/\n", "before_files": [{"content": "# Converted from RDS_VPC.template located at:\n# http://aws.amazon.com/cloudformation/aws-cloudformation-templates/\n\nfrom troposphere import GetAtt, Join, Output, Parameter, Ref, Template\nfrom troposphere.ec2 import SecurityGroup\nfrom troposphere.rds import DBInstance, DBSubnetGroup\n\n\nt = Template()\n\nt.add_description(\n \"AWS CloudFormation Sample Template VPC_RDS_DB_Instance: Sample template \"\n \"showing how to create an RDS DBInstance in an existing Virtual Private \"\n \"Cloud (VPC). **WARNING** This template creates an Amazon Relational \"\n \"Database Service database instance. 
You will be billed for the AWS \"\n \"resources used if you create a stack from this template.\")\n\nvpcid = t.add_parameter(Parameter(\n \"VpcId\",\n Type=\"String\",\n Description=\"VpcId of your existing Virtual Private Cloud (VPC)\"\n))\n\nsubnet = t.add_parameter(Parameter(\n \"Subnets\",\n Type=\"CommaDelimitedList\",\n Description=(\n \"The list of SubnetIds, for at least two Availability Zones in the \"\n \"region in your Virtual Private Cloud (VPC)\")\n))\n\ndbname = t.add_parameter(Parameter(\n \"DBName\",\n Default=\"MyDatabase\",\n Description=\"The database name\",\n Type=\"String\",\n MinLength=\"1\",\n MaxLength=\"64\",\n AllowedPattern=\"[a-zA-Z][a-zA-Z0-9]*\",\n ConstraintDescription=(\"must begin with a letter and contain only\"\n \" alphanumeric characters.\")\n))\n\ndbuser = t.add_parameter(Parameter(\n \"DBUser\",\n NoEcho=True,\n Description=\"The database admin account username\",\n Type=\"String\",\n MinLength=\"1\",\n MaxLength=\"16\",\n AllowedPattern=\"[a-zA-Z][a-zA-Z0-9]*\",\n ConstraintDescription=(\"must begin with a letter and contain only\"\n \" alphanumeric characters.\")\n))\n\ndbpassword = t.add_parameter(Parameter(\n \"DBPassword\",\n NoEcho=True,\n Description=\"The database admin account password\",\n Type=\"String\",\n MinLength=\"1\",\n MaxLength=\"41\",\n AllowedPattern=\"[a-zA-Z0-9]*\",\n ConstraintDescription=\"must contain only alphanumeric characters.\"\n))\n\ndbclass = t.add_parameter(Parameter(\n \"DBClass\",\n Default=\"db.m1.small\",\n Description=\"Database instance class\",\n Type=\"String\",\n AllowedValues=[\n \"db.m1.small\", \"db.m1.large\", \"db.m1.xlarge\", \"db.m2.xlarge\",\n \"db.m2.2xlarge\", \"db.m2.4xlarge\"],\n ConstraintDescription=\"must select a valid database instance type.\",\n))\n\ndballocatedstorage = t.add_parameter(Parameter(\n \"DBAllocatedStorage\",\n Default=\"5\",\n Description=\"The size of the database (Gb)\",\n Type=\"Number\",\n MinValue=\"5\",\n MaxValue=\"1024\",\n ConstraintDescription=\"must be between 5 and 1024Gb.\",\n))\n\n\nmydbsubnetgroup = t.add_resource(DBSubnetGroup(\n \"MyDBSubnetGroup\",\n DBSubnetGroupDescription=\"Subnets available for the RDS DB Instance\",\n SubnetIds=Ref(subnet),\n))\n\nmyvpcsecuritygroup = t.add_resource(SecurityGroup(\n \"myVPCSecurityGroup\",\n GroupDescription=\"Security group for RDS DB Instance.\",\n VpcId=Ref(vpcid)\n))\n\nmydb = t.add_resource(DBInstance(\n \"MyDB\",\n DBName=Ref(dbname),\n AllocatedStorage=Ref(dballocatedstorage),\n DBInstanceClass=Ref(dbclass),\n Engine=\"MySQL\",\n EngineVersion=\"5.5\",\n MasterUsername=Ref(dbuser),\n MasterUserPassword=Ref(dbpassword),\n DBSubnetGroupName=Ref(mydbsubnetgroup),\n VPCSecurityGroups=[Ref(myvpcsecuritygroup)],\n))\n\nt.add_output(Output(\n \"JDBCConnectionString\",\n Description=\"JDBC connection string for database\",\n Value=Join(\"\", [\n \"jdbc:mysql://\",\n GetAtt(\"MyDB\", \"Endpoint.Address\"),\n GetAtt(\"MyDB\", \"Endpoint.Port\"),\n \"/\",\n Ref(dbname)\n ])\n))\n\nprint(t.to_json())\n", "path": "examples/RDS_VPC.py"}], "after_files": [{"content": "# Converted from RDS_VPC.template located at:\n# http://aws.amazon.com/cloudformation/aws-cloudformation-templates/\n\nfrom troposphere import GetAtt, Join, Output, Parameter, Ref, Template\nfrom troposphere.ec2 import SecurityGroup\nfrom troposphere.rds import DBInstance, DBSubnetGroup\n\n\nt = Template()\n\nt.add_description(\n \"AWS CloudFormation Sample Template VPC_RDS_DB_Instance: Sample template \"\n \"showing how to create an RDS DBInstance in an existing 
Virtual Private \"\n \"Cloud (VPC). **WARNING** This template creates an Amazon Relational \"\n \"Database Service database instance. You will be billed for the AWS \"\n \"resources used if you create a stack from this template.\")\n\nvpcid = t.add_parameter(Parameter(\n \"VpcId\",\n Type=\"String\",\n Description=\"VpcId of your existing Virtual Private Cloud (VPC)\"\n))\n\nsubnet = t.add_parameter(Parameter(\n \"Subnets\",\n Type=\"CommaDelimitedList\",\n Description=(\n \"The list of SubnetIds, for at least two Availability Zones in the \"\n \"region in your Virtual Private Cloud (VPC)\")\n))\n\ndbname = t.add_parameter(Parameter(\n \"DBName\",\n Default=\"MyDatabase\",\n Description=\"The database name\",\n Type=\"String\",\n MinLength=\"1\",\n MaxLength=\"64\",\n AllowedPattern=\"[a-zA-Z][a-zA-Z0-9]*\",\n ConstraintDescription=(\"must begin with a letter and contain only\"\n \" alphanumeric characters.\")\n))\n\ndbuser = t.add_parameter(Parameter(\n \"DBUser\",\n NoEcho=True,\n Description=\"The database admin account username\",\n Type=\"String\",\n MinLength=\"1\",\n MaxLength=\"16\",\n AllowedPattern=\"[a-zA-Z][a-zA-Z0-9]*\",\n ConstraintDescription=(\"must begin with a letter and contain only\"\n \" alphanumeric characters.\")\n))\n\ndbpassword = t.add_parameter(Parameter(\n \"DBPassword\",\n NoEcho=True,\n Description=\"The database admin account password\",\n Type=\"String\",\n MinLength=\"1\",\n MaxLength=\"41\",\n AllowedPattern=\"[a-zA-Z0-9]*\",\n ConstraintDescription=\"must contain only alphanumeric characters.\"\n))\n\ndbclass = t.add_parameter(Parameter(\n \"DBClass\",\n Default=\"db.m1.small\",\n Description=\"Database instance class\",\n Type=\"String\",\n AllowedValues=[\n \"db.m5.large\", \"db.m5.xlarge\", \"db.m5.2xlarge\", \"db.m5.4xlarge\",\n \"db.m5.12xlarge\", \"db.m5.24xlarge\", \"db.m4.large\", \"db.m4.xlarge\",\n \"db.m4.2xlarge\", \"db.m4.4xlarge\", \"db.m4.10xlarge\", \"db.m4.16xlarge\",\n \"db.r4.large\", \"db.r4.xlarge\", \"db.r4.2xlarge\", \"db.r4.4xlarge\",\n \"db.r4.8xlarge\", \"db.r4.16xlarge\", \"db.x1e.xlarge\", \"db.x1e.2xlarge\",\n \"db.x1e.4xlarge\", \"db.x1e.8xlarge\", \"db.x1e.16xlarge\", \"db.x1e.32xlarge\",\n \"db.x1.16xlarge\", \"db.x1.32xlarge\", \"db.r3.large\", \"db.r3.xlarge\",\n \"db.r3.2xlarge\", \"db.r3.4xlarge\", \"db.r3.8xlarge\", \"db.t2.micro\",\n \"db.t2.small\", \"db.t2.medium\", \"db.t2.large\", \"db.t2.xlarge\",\n \"db.t2.2xlarge\"\n ],\n ConstraintDescription=\"must select a valid database instance type.\",\n))\n\ndballocatedstorage = t.add_parameter(Parameter(\n \"DBAllocatedStorage\",\n Default=\"5\",\n Description=\"The size of the database (Gb)\",\n Type=\"Number\",\n MinValue=\"5\",\n MaxValue=\"1024\",\n ConstraintDescription=\"must be between 5 and 1024Gb.\",\n))\n\n\nmydbsubnetgroup = t.add_resource(DBSubnetGroup(\n \"MyDBSubnetGroup\",\n DBSubnetGroupDescription=\"Subnets available for the RDS DB Instance\",\n SubnetIds=Ref(subnet),\n))\n\nmyvpcsecuritygroup = t.add_resource(SecurityGroup(\n \"myVPCSecurityGroup\",\n GroupDescription=\"Security group for RDS DB Instance.\",\n VpcId=Ref(vpcid)\n))\n\nmydb = t.add_resource(DBInstance(\n \"MyDB\",\n DBName=Ref(dbname),\n AllocatedStorage=Ref(dballocatedstorage),\n DBInstanceClass=Ref(dbclass),\n Engine=\"MySQL\",\n EngineVersion=\"5.5\",\n MasterUsername=Ref(dbuser),\n MasterUserPassword=Ref(dbpassword),\n DBSubnetGroupName=Ref(mydbsubnetgroup),\n VPCSecurityGroups=[Ref(myvpcsecuritygroup)],\n))\n\nt.add_output(Output(\n \"JDBCConnectionString\",\n Description=\"JDBC 
connection string for database\",\n Value=Join(\"\", [\n \"jdbc:mysql://\",\n GetAtt(\"MyDB\", \"Endpoint.Address\"),\n GetAtt(\"MyDB\", \"Endpoint.Port\"),\n \"/\",\n Ref(dbname)\n ])\n))\n\nprint(t.to_json())\n", "path": "examples/RDS_VPC.py"}]} | 1,546 | 461 |
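One caveat worth noting about the golden diff above: it updates `AllowedValues` but leaves `Default="db.m1.small"`, which is no longer an allowed value, so a stack created with the default parameter would fail CloudFormation validation. A minimal runnable sketch of the parameter with a default drawn from the patched list (assumes troposphere is installed; the value subset here is illustrative):

```python
from troposphere import Parameter, Template

t = Template()
t.add_parameter(Parameter(
    "DBClass",
    Type="String",
    Default="db.t2.micro",  # picked from the patched AllowedValues list
    Description="Database instance class",
    AllowedValues=["db.t2.micro", "db.t2.small", "db.m5.large"],  # subset
    ConstraintDescription="must select a valid database instance type.",
))
print(t.to_json())  # CloudFormation enforces AllowedValues at stack creation
```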
gh_patches_debug_30972 | rasdani/github-patches | git_diff | saleor__saleor-3639 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Expose in API list of supported payment gateways
In the API we expose the `GatewaysEnum` enum, which contains all the available payment gateways, but there is no way for Storefront 2.0 to determine which of them are actually enabled (currently Braintree is hardcoded there). We should expose a field that would provide this information, e.g.:
```
type Shop {
enabledGateways: [GatewaysEnum]
}
```
A resolver function should take into account the gateways enabled in the `CHECKOUT_PAYMENT_GATEWAYS` setting.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/checkout/types.py`
Content:
```
1 import graphene
2 import graphene_django_optimizer as gql_optimizer
3
4 from ...checkout import models
5 from ...core.utils.taxes import get_taxes_for_address
6 from ..core.connection import CountableDjangoObjectType
7 from ..core.types.money import TaxedMoney
8 from ..order.utils import applicable_shipping_methods
9 from ..shipping.types import ShippingMethod
10
11
12 class CheckoutLine(CountableDjangoObjectType):
13 total_price = graphene.Field(
14 TaxedMoney,
15 description=(
16 'The sum of the checkout line price, taxes and discounts.'))
17 requires_shipping = graphene.Boolean(
18 description='Indicates whether the item need to be delivered.')
19
20 class Meta:
21 exclude_fields = ['cart', 'data']
22 description = 'Represents an item in the checkout.'
23 interfaces = [graphene.relay.Node]
24 model = models.CartLine
25 filter_fields = ['id']
26
27 def resolve_total_price(self, info):
28 taxes = get_taxes_for_address(self.cart.shipping_address)
29 return self.get_total(taxes=taxes)
30
31 def resolve_requires_shipping(self, info):
32 return self.is_shipping_required()
33
34
35 class Checkout(CountableDjangoObjectType):
36 available_shipping_methods = graphene.List(
37 ShippingMethod, required=False,
38 description='Shipping methods that can be used with this order.')
39 is_shipping_required = graphene.Boolean(
40 description='Returns True, if checkout requires shipping.',
41 required=True)
42 lines = gql_optimizer.field(
43 graphene.List(
44 CheckoutLine, description=(
45 'A list of checkout lines, each containing information about '
46 'an item in the checkout.')),
47 model_field='lines')
48 shipping_price = graphene.Field(
49 TaxedMoney,
50 description='The price of the shipping, with all the taxes included.')
51 subtotal_price = graphene.Field(
52 TaxedMoney,
53 description=(
54 'The price of the checkout before shipping, with taxes included.'))
55 total_price = graphene.Field(
56 TaxedMoney,
57 description=(
58 'The sum of the the checkout line prices, with all the taxes,'
59 'shipping costs, and discounts included.'))
60
61 class Meta:
62 exclude_fields = ['payments']
63 description = 'Checkout object'
64 model = models.Cart
65 interfaces = [graphene.relay.Node]
66 filter_fields = ['token']
67
68 def resolve_total_price(self, info):
69 taxes = get_taxes_for_address(self.shipping_address)
70 return self.get_total(taxes=taxes)
71
72 def resolve_subtotal_price(self, info):
73 taxes = get_taxes_for_address(self.shipping_address)
74 return self.get_subtotal(taxes=taxes)
75
76 def resolve_shipping_price(self, info):
77 taxes = get_taxes_for_address(self.shipping_address)
78 return self.get_shipping_price(taxes=taxes)
79
80 def resolve_lines(self, info):
81 return self.lines.prefetch_related('variant')
82
83 def resolve_available_shipping_methods(self, info):
84 taxes = get_taxes_for_address(self.shipping_address)
85 price = self.get_subtotal(
86 taxes=taxes, discounts=info.context.discounts)
87 return applicable_shipping_methods(self, info, price.gross.amount)
88
89 def resolve_is_shipping_required(self, info):
90 return self.is_shipping_required()
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/graphql/checkout/types.py b/saleor/graphql/checkout/types.py
--- a/saleor/graphql/checkout/types.py
+++ b/saleor/graphql/checkout/types.py
@@ -1,11 +1,13 @@
import graphene
import graphene_django_optimizer as gql_optimizer
+from django.conf import settings
from ...checkout import models
from ...core.utils.taxes import get_taxes_for_address
from ..core.connection import CountableDjangoObjectType
from ..core.types.money import TaxedMoney
from ..order.utils import applicable_shipping_methods
+from ..payment.enums import PaymentGatewayEnum
from ..shipping.types import ShippingMethod
@@ -36,6 +38,9 @@
available_shipping_methods = graphene.List(
ShippingMethod, required=False,
description='Shipping methods that can be used with this order.')
+ available_payment_gateways = graphene.List(
+ PaymentGatewayEnum, description='List of available payment gateways.',
+ required=True)
is_shipping_required = graphene.Boolean(
description='Returns True, if checkout requires shipping.',
required=True)
@@ -86,5 +91,8 @@
taxes=taxes, discounts=info.context.discounts)
return applicable_shipping_methods(self, info, price.gross.amount)
+ def resolve_available_payment_gateways(self, info):
+ return settings.CHECKOUT_PAYMENT_GATEWAYS.keys()
+
def resolve_is_shipping_required(self, info):
return self.is_shipping_required()
| {"golden_diff": "diff --git a/saleor/graphql/checkout/types.py b/saleor/graphql/checkout/types.py\n--- a/saleor/graphql/checkout/types.py\n+++ b/saleor/graphql/checkout/types.py\n@@ -1,11 +1,13 @@\n import graphene\n import graphene_django_optimizer as gql_optimizer\n+from django.conf import settings\n \n from ...checkout import models\n from ...core.utils.taxes import get_taxes_for_address\n from ..core.connection import CountableDjangoObjectType\n from ..core.types.money import TaxedMoney\n from ..order.utils import applicable_shipping_methods\n+from ..payment.enums import PaymentGatewayEnum\n from ..shipping.types import ShippingMethod\n \n \n@@ -36,6 +38,9 @@\n available_shipping_methods = graphene.List(\n ShippingMethod, required=False,\n description='Shipping methods that can be used with this order.')\n+ available_payment_gateways = graphene.List(\n+ PaymentGatewayEnum, description='List of available payment gateways.',\n+ required=True)\n is_shipping_required = graphene.Boolean(\n description='Returns True, if checkout requires shipping.',\n required=True)\n@@ -86,5 +91,8 @@\n taxes=taxes, discounts=info.context.discounts)\n return applicable_shipping_methods(self, info, price.gross.amount)\n \n+ def resolve_available_payment_gateways(self, info):\n+ return settings.CHECKOUT_PAYMENT_GATEWAYS.keys()\n+\n def resolve_is_shipping_required(self, info):\n return self.is_shipping_required()\n", "issue": "Expose in API list of supported payment gateways\nIn API we expose the `GatewaysEnum` enum which contains all the available payment gateways, but there is no way for the Storefront 2.0 to determine which of them are actually enabled (currently Braintree is hardcoded over there). We should expose a field that would provide this information, e.g:\r\n```\r\ntype Shop {\r\n enabledGateways: [GatewaysEnum]\r\n}\r\n```\r\n\r\nA resolver function should take into account gateways enabled in `CHECKOUT_PAYMENT_GATEWAYS` setting.\n", "before_files": [{"content": "import graphene\nimport graphene_django_optimizer as gql_optimizer\n\nfrom ...checkout import models\nfrom ...core.utils.taxes import get_taxes_for_address\nfrom ..core.connection import CountableDjangoObjectType\nfrom ..core.types.money import TaxedMoney\nfrom ..order.utils import applicable_shipping_methods\nfrom ..shipping.types import ShippingMethod\n\n\nclass CheckoutLine(CountableDjangoObjectType):\n total_price = graphene.Field(\n TaxedMoney,\n description=(\n 'The sum of the checkout line price, taxes and discounts.'))\n requires_shipping = graphene.Boolean(\n description='Indicates whether the item need to be delivered.')\n\n class Meta:\n exclude_fields = ['cart', 'data']\n description = 'Represents an item in the checkout.'\n interfaces = [graphene.relay.Node]\n model = models.CartLine\n filter_fields = ['id']\n\n def resolve_total_price(self, info):\n taxes = get_taxes_for_address(self.cart.shipping_address)\n return self.get_total(taxes=taxes)\n\n def resolve_requires_shipping(self, info):\n return self.is_shipping_required()\n\n\nclass Checkout(CountableDjangoObjectType):\n available_shipping_methods = graphene.List(\n ShippingMethod, required=False,\n description='Shipping methods that can be used with this order.')\n is_shipping_required = graphene.Boolean(\n description='Returns True, if checkout requires shipping.',\n required=True)\n lines = gql_optimizer.field(\n graphene.List(\n CheckoutLine, description=(\n 'A list of checkout lines, each containing information about '\n 'an item in the checkout.')),\n 
model_field='lines')\n shipping_price = graphene.Field(\n TaxedMoney,\n description='The price of the shipping, with all the taxes included.')\n subtotal_price = graphene.Field(\n TaxedMoney,\n description=(\n 'The price of the checkout before shipping, with taxes included.'))\n total_price = graphene.Field(\n TaxedMoney,\n description=(\n 'The sum of the the checkout line prices, with all the taxes,'\n 'shipping costs, and discounts included.'))\n\n class Meta:\n exclude_fields = ['payments']\n description = 'Checkout object'\n model = models.Cart\n interfaces = [graphene.relay.Node]\n filter_fields = ['token']\n\n def resolve_total_price(self, info):\n taxes = get_taxes_for_address(self.shipping_address)\n return self.get_total(taxes=taxes)\n\n def resolve_subtotal_price(self, info):\n taxes = get_taxes_for_address(self.shipping_address)\n return self.get_subtotal(taxes=taxes)\n\n def resolve_shipping_price(self, info):\n taxes = get_taxes_for_address(self.shipping_address)\n return self.get_shipping_price(taxes=taxes)\n\n def resolve_lines(self, info):\n return self.lines.prefetch_related('variant')\n\n def resolve_available_shipping_methods(self, info):\n taxes = get_taxes_for_address(self.shipping_address)\n price = self.get_subtotal(\n taxes=taxes, discounts=info.context.discounts)\n return applicable_shipping_methods(self, info, price.gross.amount)\n\n def resolve_is_shipping_required(self, info):\n return self.is_shipping_required()\n", "path": "saleor/graphql/checkout/types.py"}], "after_files": [{"content": "import graphene\nimport graphene_django_optimizer as gql_optimizer\nfrom django.conf import settings\n\nfrom ...checkout import models\nfrom ...core.utils.taxes import get_taxes_for_address\nfrom ..core.connection import CountableDjangoObjectType\nfrom ..core.types.money import TaxedMoney\nfrom ..order.utils import applicable_shipping_methods\nfrom ..payment.enums import PaymentGatewayEnum\nfrom ..shipping.types import ShippingMethod\n\n\nclass CheckoutLine(CountableDjangoObjectType):\n total_price = graphene.Field(\n TaxedMoney,\n description=(\n 'The sum of the checkout line price, taxes and discounts.'))\n requires_shipping = graphene.Boolean(\n description='Indicates whether the item need to be delivered.')\n\n class Meta:\n exclude_fields = ['cart', 'data']\n description = 'Represents an item in the checkout.'\n interfaces = [graphene.relay.Node]\n model = models.CartLine\n filter_fields = ['id']\n\n def resolve_total_price(self, info):\n taxes = get_taxes_for_address(self.cart.shipping_address)\n return self.get_total(taxes=taxes)\n\n def resolve_requires_shipping(self, info):\n return self.is_shipping_required()\n\n\nclass Checkout(CountableDjangoObjectType):\n available_shipping_methods = graphene.List(\n ShippingMethod, required=False,\n description='Shipping methods that can be used with this order.')\n available_payment_gateways = graphene.List(\n PaymentGatewayEnum, description='List of available payment gateways.',\n required=True)\n is_shipping_required = graphene.Boolean(\n description='Returns True, if checkout requires shipping.',\n required=True)\n lines = gql_optimizer.field(\n graphene.List(\n CheckoutLine, description=(\n 'A list of checkout lines, each containing information about '\n 'an item in the checkout.')),\n model_field='lines')\n shipping_price = graphene.Field(\n TaxedMoney,\n description='The price of the shipping, with all the taxes included.')\n subtotal_price = graphene.Field(\n TaxedMoney,\n description=(\n 'The price of the checkout 
before shipping, with taxes included.'))\n total_price = graphene.Field(\n TaxedMoney,\n description=(\n 'The sum of the the checkout line prices, with all the taxes,'\n 'shipping costs, and discounts included.'))\n\n class Meta:\n exclude_fields = ['payments']\n description = 'Checkout object'\n model = models.Cart\n interfaces = [graphene.relay.Node]\n filter_fields = ['token']\n\n def resolve_total_price(self, info):\n taxes = get_taxes_for_address(self.shipping_address)\n return self.get_total(taxes=taxes)\n\n def resolve_subtotal_price(self, info):\n taxes = get_taxes_for_address(self.shipping_address)\n return self.get_subtotal(taxes=taxes)\n\n def resolve_shipping_price(self, info):\n taxes = get_taxes_for_address(self.shipping_address)\n return self.get_shipping_price(taxes=taxes)\n\n def resolve_lines(self, info):\n return self.lines.prefetch_related('variant')\n\n def resolve_available_shipping_methods(self, info):\n taxes = get_taxes_for_address(self.shipping_address)\n price = self.get_subtotal(\n taxes=taxes, discounts=info.context.discounts)\n return applicable_shipping_methods(self, info, price.gross.amount)\n\n def resolve_available_payment_gateways(self, info):\n return settings.CHECKOUT_PAYMENT_GATEWAYS.keys()\n\n def resolve_is_shipping_required(self, info):\n return self.is_shipping_required()\n", "path": "saleor/graphql/checkout/types.py"}]} | 1,230 | 318 |
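The resolver added in the diff above simply surfaces the keys of the `CHECKOUT_PAYMENT_GATEWAYS` Django setting. A dependency-free sketch of that pattern — the dict contents below are illustrative assumptions, not Saleor's real defaults:

```python
# Stand-in for django.conf.settings.CHECKOUT_PAYMENT_GATEWAYS.
CHECKOUT_PAYMENT_GATEWAYS = {"braintree": "Braintree", "razorpay": "Razorpay"}

def resolve_available_payment_gateways():
    # Enabled gateways are exactly the setting's keys; graphene can
    # serialize any iterable into the list field exposed on Checkout.
    return list(CHECKOUT_PAYMENT_GATEWAYS.keys())

print(resolve_available_payment_gateways())  # ['braintree', 'razorpay']
```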
gh_patches_debug_17205 | rasdani/github-patches | git_diff | fidals__shopelectro-532 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tracking.es6:30-33: Send `purchase` event to YA and GA...
The puzzle `504-30170dc3` from #504 has to be resolved:
https://github.com/fidals/shopelectro/blob/4e607fb856842b43192511b1b4eaff19d2052b9a/front/js/shared/tracking.es6#L30-L33
The puzzle was created by Artemiy on 17-Aug-18.
Estimate: 60 minutes.
If you have any technical questions, don't ask me, submit new tickets instead. The task will be "done" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/views/ecommerce.py`
Content:
```
1 from django.conf import settings
2 from django.http import HttpResponse, JsonResponse
3 from django.shortcuts import get_object_or_404, render
4 from django.views.decorators.http import require_POST
5
6 from ecommerce import mailer, views as ec_views
7 from pages.models import CustomPage
8
9 from shopelectro.cart import SECart
10 from shopelectro.forms import OrderForm
11 from shopelectro.models import Product, Order
12
13
14 # ECOMMERCE VIEWS
15 class OrderPage(ec_views.OrderPage):
16 order_form = OrderForm
17 cart = SECart
18
19 def get_context_data(self, request, **kwargs):
20 return {
21 **super(OrderPage, self).get_context_data(request, **kwargs),
22 'page': CustomPage.objects.get(slug='order'),
23 }
24
25
26 class AddToCart(ec_views.AddToCart):
27 cart = SECart
28 product_model = Product
29 order_form = OrderForm
30
31
32 class RemoveFromCart(ec_views.RemoveFromCart):
33 cart = SECart
34 product_model = Product
35 order_form = OrderForm
36
37
38 class ChangeCount(ec_views.ChangeCount):
39 cart = SECart
40 product_model = Product
41 order_form = OrderForm
42
43
44 class FlushCart(ec_views.FlushCart):
45 product_model = Product
46 order_form = OrderForm
47
48
49 class OrderSuccess(ec_views.OrderSuccess):
50 order = Order
51
52
53 @require_POST
54 def one_click_buy(request):
55 """
56 Handle one-click-buy.
57
58 Accept XHR, save Order to DB, send mail about it
59 and return 200 OK.
60 """
61 SECart(request.session).clear()
62
63 cart = SECart(request.session)
64 product = get_object_or_404(Product, id=request.POST['product'])
65 cart.add(product, int(request.POST['quantity']))
66 order = Order(phone=request.POST['phone'])
67 order.set_positions(cart)
68 ec_views.save_order_to_session(request.session, order)
69 mailer.send_order(
70 subject=settings.EMAIL_SUBJECTS['one_click'],
71 order=order,
72 to_customer=False,
73 )
74 return HttpResponse('ok')
75
76
77 @require_POST
78 def order_call(request):
79 """Send email about ordered call."""
80 phone, time, url = ec_views.get_keys_from_post(
81 request, 'phone', 'time', 'url')
82
83 mailer.send_backcall(
84 subject=settings.EMAIL_SUBJECTS['call'],
85 phone=phone,
86 time=time,
87 url=url,
88 )
89
90 return HttpResponse('ok')
91
92
93 class YandexOrder(OrderPage):
94
95 def post(self, request):
96 cart = self.cart(request.session)
97 form = self.order_form(request.POST.dict())
98
99 if not form.is_valid():
100 return render(request, self.template, {'cart': cart, 'form': form})
101
102 order = form.save()
103 order.set_positions(cart)
104 ec_views.save_order_to_session(request.session, order)
105
106 # Took form fields from Yandex docs https://goo.gl/afKfsz
107 response_data = {
108 'yandex_kassa_link': settings.YANDEX_KASSA_LINK, # Required
109 'shopId': settings.SHOP['id'], # Required
110 'scid': settings.SHOP['scid'], # Required
111 'shopSuccessURL': settings.SHOP['success_url'],
112 'shopFailURL': settings.SHOP['fail_url'],
113 'customerNumber': order.id, # Required
114 'sum': order.total_price, # Required
115 'orderNumber': order.fake_order_number,
116 'cps_phone': order.phone,
117 'cps_email': order.email,
118 'paymentType': request.POST.get('payment_type'),
119 }
120
121 return JsonResponse(response_data)
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shopelectro/views/ecommerce.py b/shopelectro/views/ecommerce.py
--- a/shopelectro/views/ecommerce.py
+++ b/shopelectro/views/ecommerce.py
@@ -1,4 +1,5 @@
from django.conf import settings
+from django.core import serializers
from django.http import HttpResponse, JsonResponse
from django.shortcuts import get_object_or_404, render
from django.views.decorators.http import require_POST
@@ -47,7 +48,20 @@
class OrderSuccess(ec_views.OrderSuccess):
- order = Order
+ order = Order.objects.all().prefetch_related('positions')
+
+ def get_context_data(self, **kwargs):
+ context = super().get_context_data(**kwargs)
+ positions_json = serializers.serialize(
+ 'json',
+ context['order'].positions.all(),
+ fields=['name', 'quantity', 'price'],
+ )
+
+ return {
+ **context,
+ 'positions_json': positions_json,
+ }
@require_POST
| {"golden_diff": "diff --git a/shopelectro/views/ecommerce.py b/shopelectro/views/ecommerce.py\n--- a/shopelectro/views/ecommerce.py\n+++ b/shopelectro/views/ecommerce.py\n@@ -1,4 +1,5 @@\n from django.conf import settings\n+from django.core import serializers\n from django.http import HttpResponse, JsonResponse\n from django.shortcuts import get_object_or_404, render\n from django.views.decorators.http import require_POST\n@@ -47,7 +48,20 @@\n \n \n class OrderSuccess(ec_views.OrderSuccess):\n- order = Order\n+ order = Order.objects.all().prefetch_related('positions')\n+\n+ def get_context_data(self, **kwargs):\n+ context = super().get_context_data(**kwargs)\n+ positions_json = serializers.serialize(\n+ 'json',\n+ context['order'].positions.all(),\n+ fields=['name', 'quantity', 'price'],\n+ )\n+\n+ return {\n+ **context,\n+ 'positions_json': positions_json,\n+ }\n \n \n @require_POST\n", "issue": "tracking.es6:30-33: Send `purchase` event to YA and GA...\nThe puzzle `504-30170dc3` from #504 has to be resolved:\n\nhttps://github.com/fidals/shopelectro/blob/4e607fb856842b43192511b1b4eaff19d2052b9a/front/js/shared/tracking.es6#L30-L33\n\nThe puzzle was created by Artemiy on 17-Aug-18. \n\nEstimate: 60 minutes, \n\nIf you have any technical questions, don't ask me, submit new tickets instead. The task will be \"done\" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).\n", "before_files": [{"content": "from django.conf import settings\nfrom django.http import HttpResponse, JsonResponse\nfrom django.shortcuts import get_object_or_404, render\nfrom django.views.decorators.http import require_POST\n\nfrom ecommerce import mailer, views as ec_views\nfrom pages.models import CustomPage\n\nfrom shopelectro.cart import SECart\nfrom shopelectro.forms import OrderForm\nfrom shopelectro.models import Product, Order\n\n\n# ECOMMERCE VIEWS\nclass OrderPage(ec_views.OrderPage):\n order_form = OrderForm\n cart = SECart\n\n def get_context_data(self, request, **kwargs):\n return {\n **super(OrderPage, self).get_context_data(request, **kwargs),\n 'page': CustomPage.objects.get(slug='order'),\n }\n\n\nclass AddToCart(ec_views.AddToCart):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass RemoveFromCart(ec_views.RemoveFromCart):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass ChangeCount(ec_views.ChangeCount):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass FlushCart(ec_views.FlushCart):\n product_model = Product\n order_form = OrderForm\n\n\nclass OrderSuccess(ec_views.OrderSuccess):\n order = Order\n\n\n@require_POST\ndef one_click_buy(request):\n \"\"\"\n Handle one-click-buy.\n\n Accept XHR, save Order to DB, send mail about it\n and return 200 OK.\n \"\"\"\n SECart(request.session).clear()\n\n cart = SECart(request.session)\n product = get_object_or_404(Product, id=request.POST['product'])\n cart.add(product, int(request.POST['quantity']))\n order = Order(phone=request.POST['phone'])\n order.set_positions(cart)\n ec_views.save_order_to_session(request.session, order)\n mailer.send_order(\n subject=settings.EMAIL_SUBJECTS['one_click'],\n order=order,\n to_customer=False,\n )\n return HttpResponse('ok')\n\n\n@require_POST\ndef order_call(request):\n \"\"\"Send email about ordered call.\"\"\"\n phone, time, url = ec_views.get_keys_from_post(\n 
request, 'phone', 'time', 'url')\n\n mailer.send_backcall(\n subject=settings.EMAIL_SUBJECTS['call'],\n phone=phone,\n time=time,\n url=url,\n )\n\n return HttpResponse('ok')\n\n\nclass YandexOrder(OrderPage):\n\n def post(self, request):\n cart = self.cart(request.session)\n form = self.order_form(request.POST.dict())\n\n if not form.is_valid():\n return render(request, self.template, {'cart': cart, 'form': form})\n\n order = form.save()\n order.set_positions(cart)\n ec_views.save_order_to_session(request.session, order)\n\n # Took form fields from Yandex docs https://goo.gl/afKfsz\n response_data = {\n 'yandex_kassa_link': settings.YANDEX_KASSA_LINK, # Required\n 'shopId': settings.SHOP['id'], # Required\n 'scid': settings.SHOP['scid'], # Required\n 'shopSuccessURL': settings.SHOP['success_url'],\n 'shopFailURL': settings.SHOP['fail_url'],\n 'customerNumber': order.id, # Required\n 'sum': order.total_price, # Required\n 'orderNumber': order.fake_order_number,\n 'cps_phone': order.phone,\n 'cps_email': order.email,\n 'paymentType': request.POST.get('payment_type'),\n }\n\n return JsonResponse(response_data)\n", "path": "shopelectro/views/ecommerce.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.core import serializers\nfrom django.http import HttpResponse, JsonResponse\nfrom django.shortcuts import get_object_or_404, render\nfrom django.views.decorators.http import require_POST\n\nfrom ecommerce import mailer, views as ec_views\nfrom pages.models import CustomPage\n\nfrom shopelectro.cart import SECart\nfrom shopelectro.forms import OrderForm\nfrom shopelectro.models import Product, Order\n\n\n# ECOMMERCE VIEWS\nclass OrderPage(ec_views.OrderPage):\n order_form = OrderForm\n cart = SECart\n\n def get_context_data(self, request, **kwargs):\n return {\n **super(OrderPage, self).get_context_data(request, **kwargs),\n 'page': CustomPage.objects.get(slug='order'),\n }\n\n\nclass AddToCart(ec_views.AddToCart):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass RemoveFromCart(ec_views.RemoveFromCart):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass ChangeCount(ec_views.ChangeCount):\n cart = SECart\n product_model = Product\n order_form = OrderForm\n\n\nclass FlushCart(ec_views.FlushCart):\n product_model = Product\n order_form = OrderForm\n\n\nclass OrderSuccess(ec_views.OrderSuccess):\n order = Order.objects.all().prefetch_related('positions')\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n positions_json = serializers.serialize(\n 'json',\n context['order'].positions.all(),\n fields=['name', 'quantity', 'price'],\n )\n\n return {\n **context,\n 'positions_json': positions_json,\n }\n\n\n@require_POST\ndef one_click_buy(request):\n \"\"\"\n Handle one-click-buy.\n\n Accept XHR, save Order to DB, send mail about it\n and return 200 OK.\n \"\"\"\n SECart(request.session).clear()\n\n cart = SECart(request.session)\n product = get_object_or_404(Product, id=request.POST['product'])\n cart.add(product, int(request.POST['quantity']))\n order = Order(phone=request.POST['phone'])\n order.set_positions(cart)\n ec_views.save_order_to_session(request.session, order)\n mailer.send_order(\n subject=settings.EMAIL_SUBJECTS['one_click'],\n order=order,\n to_customer=False,\n )\n return HttpResponse('ok')\n\n\n@require_POST\ndef order_call(request):\n \"\"\"Send email about ordered call.\"\"\"\n phone, time, url = ec_views.get_keys_from_post(\n request, 'phone', 'time', 
'url')\n\n mailer.send_backcall(\n subject=settings.EMAIL_SUBJECTS['call'],\n phone=phone,\n time=time,\n url=url,\n )\n\n return HttpResponse('ok')\n\n\nclass YandexOrder(OrderPage):\n\n def post(self, request):\n cart = self.cart(request.session)\n form = self.order_form(request.POST.dict())\n\n if not form.is_valid():\n return render(request, self.template, {'cart': cart, 'form': form})\n\n order = form.save()\n order.set_positions(cart)\n ec_views.save_order_to_session(request.session, order)\n\n # Took form fields from Yandex docs https://goo.gl/afKfsz\n response_data = {\n 'yandex_kassa_link': settings.YANDEX_KASSA_LINK, # Required\n 'shopId': settings.SHOP['id'], # Required\n 'scid': settings.SHOP['scid'], # Required\n 'shopSuccessURL': settings.SHOP['success_url'],\n 'shopFailURL': settings.SHOP['fail_url'],\n 'customerNumber': order.id, # Required\n 'sum': order.total_price, # Required\n 'orderNumber': order.fake_order_number,\n 'cps_phone': order.phone,\n 'cps_email': order.email,\n 'paymentType': request.POST.get('payment_type'),\n }\n\n return JsonResponse(response_data)\n", "path": "shopelectro/views/ecommerce.py"}]} | 1,559 | 232 |
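The fix above addresses the tracking puzzle indirectly: it hands the order-success template a JSON serialization of the order positions, so the frontend `purchase` event can be built from real data. A minimal sketch of that serialization step (only meaningful inside a configured Django project with the `Order`/position models assumed by the record):

```python
from django.core import serializers

def positions_json_for(order):
    # Same call as the patch: serialize only the fields the analytics
    # event needs, ready to embed in the order-success template context.
    return serializers.serialize(
        'json',
        order.positions.all(),
        fields=['name', 'quantity', 'price'],
    )
```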
gh_patches_debug_32456 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-217 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create physics exceptions and warnings
Several of the exceptions and warnings that come up in PlasmaPy result from attempts to violate the laws of physics. Most of the time these violations cause a `UserWarning` or raise a `ValueError`, but these are pretty generic and don't give much insight into the cause of the problem. To help with exception handling, I propose that we create new exceptions and warnings, such as `PhysicsError` and `PhysicsWarning`. We could additionally be more specific and also have `RelativityError` and/or `RelativityWarning`. One possibility would be to put these in a new file called `plasmapy/utils/exceptions.py`.
This would be a good first contribution for someone who is somewhat familiar with Python. Here's documentation on [user-defined exceptions](https://docs.python.org/3/tutorial/errors.html#user-defined-exceptions). Thank you!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plasmapy/utils/exceptions.py`
Content:
```
1 """
2 Custom Error and Warning names to improve readability
3 """
4
5
6 # ----------
7 # Exceptions:
8 # ----------
9
10 class PlasmaPyError(Exception):
11 """
12 Base class of PlasmaPy custom errors.
13
14 All custom exceptions raised by PlasmaPy should inherit from this class
15 and be defined in this module.
16
17 Custom exceptions can inherit from other exception types too. Thus, if code
18 already knows how to handle a ValueError, it won't need any specific
19 modification.
20 """
21
22
23 class PhysicsError(PlasmaPyError, ValueError):
24 """Error for use of a physics value outside PlasmaPy theoretical bounds"""
25
26
27 class RelativityError(PhysicsError):
28 """Error for use of a speed greater than or equal to the speed of light"""
29
30
31 # ----------
32 # Warnings:
33 # ----------
34
35 class PlasmaPyWarning(Warning):
36 """Base class of PlasmaPy custom warnings.
37
38 All PlasmaPy custom warnings should inherit from this class and be defined
39 in this module.
40
41 Warnings should be issued using warnings.warn, which will not break
42 execution if unhandled.
43 """
44
45
46 class PhysicsWarning(PlasmaPyWarning):
47 """Warning for using a mildly worrisome physics value"""
48
49
50 class RelativityWarning(PhysicsWarning):
51 """Warning for use of a speed quantity approaching the speed of light"""
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plasmapy/utils/exceptions.py b/plasmapy/utils/exceptions.py
--- a/plasmapy/utils/exceptions.py
+++ b/plasmapy/utils/exceptions.py
@@ -18,14 +18,51 @@
already knows how to handle a ValueError, it won't need any specific
modification.
"""
+ pass
class PhysicsError(PlasmaPyError, ValueError):
"""Error for use of a physics value outside PlasmaPy theoretical bounds"""
+ pass
class RelativityError(PhysicsError):
"""Error for use of a speed greater than or equal to the speed of light"""
+ pass
+
+
+class AtomicError(PlasmaPyError):
+ """Error for use by an atomic subpackage"""
+ pass
+
+
+class MissingAtomicDataError(AtomicError):
+ """Error for use when atomic data is missing."""
+ pass
+
+
+class NoChargeInfoError(AtomicError):
+ """Error for use when charge information is needed but missing."""
+
+
+class IonError(NoChargeInfoError):
+ """Error for use when an ion is invalid."""
+ pass
+
+
+class IsotopeError(AtomicError):
+ """Error for use when an isotope is invalid."""
+ pass
+
+
+class ElementError(IsotopeError, IonError):
+ """Error for use when an element is invalid."""
+ pass
+
+
+class ParticleError(ElementError):
+ """Error for use when a particle is invalid."""
+ pass
# ----------
@@ -41,11 +78,24 @@
Warnings should be issued using warnings.warn, which will not break
execution if unhandled.
"""
+ pass
class PhysicsWarning(PlasmaPyWarning):
"""Warning for using a mildly worrisome physics value"""
+ pass
class RelativityWarning(PhysicsWarning):
"""Warning for use of a speed quantity approaching the speed of light"""
+ pass
+
+
+class AtomicWarning(PlasmaPyWarning):
+ """Warnings for use in the atomic subpackage."""
+ pass
+
+
+class MissingAtomicDataWarning(AtomicWarning):
+ """Warning for use when atomic data is missing."""
+ pass
| {"golden_diff": "diff --git a/plasmapy/utils/exceptions.py b/plasmapy/utils/exceptions.py\n--- a/plasmapy/utils/exceptions.py\n+++ b/plasmapy/utils/exceptions.py\n@@ -18,14 +18,51 @@\n already knows how to handle a ValueError, it won't need any specific\n modification.\n \"\"\"\n+ pass\n \n \n class PhysicsError(PlasmaPyError, ValueError):\n \"\"\"Error for use of a physics value outside PlasmaPy theoretical bounds\"\"\"\n+ pass\n \n \n class RelativityError(PhysicsError):\n \"\"\"Error for use of a speed greater than or equal to the speed of light\"\"\"\n+ pass\n+\n+\n+class AtomicError(PlasmaPyError):\n+ \"\"\"Error for use by an atomic subpackage\"\"\"\n+ pass\n+\n+\n+class MissingAtomicDataError(AtomicError):\n+ \"\"\"Error for use when atomic data is missing.\"\"\"\n+ pass\n+\n+\n+class NoChargeInfoError(AtomicError):\n+ \"\"\"Error for use when charge information is needed but missing.\"\"\"\n+\n+\n+class IonError(NoChargeInfoError):\n+ \"\"\"Error for use when an ion is invalid.\"\"\"\n+ pass\n+\n+\n+class IsotopeError(AtomicError):\n+ \"\"\"Error for use when an isotope is invalid.\"\"\"\n+ pass\n+\n+\n+class ElementError(IsotopeError, IonError):\n+ \"\"\"Error for use when an element is invalid.\"\"\"\n+ pass\n+\n+\n+class ParticleError(ElementError):\n+ \"\"\"Error for use when a particle is invalid.\"\"\"\n+ pass\n \n \n # ----------\n@@ -41,11 +78,24 @@\n Warnings should be issued using warnings.warn, which will not break\n execution if unhandled.\n \"\"\"\n+ pass\n \n \n class PhysicsWarning(PlasmaPyWarning):\n \"\"\"Warning for using a mildly worrisome physics value\"\"\"\n+ pass\n \n \n class RelativityWarning(PhysicsWarning):\n \"\"\"Warning for use of a speed quantity approaching the speed of light\"\"\"\n+ pass\n+\n+\n+class AtomicWarning(PlasmaPyWarning):\n+ \"\"\"Warnings for use in the atomic subpackage.\"\"\"\n+ pass\n+\n+\n+class MissingAtomicDataWarning(AtomicWarning):\n+ \"\"\"Warning for use when atomic data is missing.\"\"\"\n+ pass\n", "issue": "Create physics exceptions and warnings\nSeveral of the exceptions and warnings that come up in PlasmaPy result from when we try to violate the laws of physics. Most of the time these violations cause a `UserWarning` or raise a `ValueError`, but these are pretty generic and don't give much insight into the cause of the problem. To help with exception handling, I propose that we create new exceptions and warnings, such as `PhysicsError` and `PhysicsWarning`. We could additionally be more specific and also have `RelativityError` and/or `RelativityWarning`. One possibility would be to put these in a new file called `plasmapy/utils/exceptions.py`.\r\n\r\nThis would be a good first contribution for someone who is somewhat familiar with Python. Here's documentation on [user-defined exceptions](https://docs.python.org/3/tutorial/errors.html#user-defined-exceptions. Thank you!\r\n\n", "before_files": [{"content": "\"\"\"\nCustom Error and Warning names to improve readability\n\"\"\"\n\n\n# ----------\n# Exceptions:\n# ----------\n\nclass PlasmaPyError(Exception):\n \"\"\"\n Base class of PlasmaPy custom errors.\n\n All custom exceptions raised by PlasmaPy should inherit from this class\n and be defined in this module.\n\n Custom exceptions can inherit from other exception types too. 
Thus, if code\n already knows how to handle a ValueError, it won't need any specific\n modification.\n \"\"\"\n\n\nclass PhysicsError(PlasmaPyError, ValueError):\n \"\"\"Error for use of a physics value outside PlasmaPy theoretical bounds\"\"\"\n\n\nclass RelativityError(PhysicsError):\n \"\"\"Error for use of a speed greater than or equal to the speed of light\"\"\"\n\n\n# ----------\n# Warnings:\n# ----------\n\nclass PlasmaPyWarning(Warning):\n \"\"\"Base class of PlasmaPy custom warnings.\n\n All PlasmaPy custom warnings should inherit from this class and be defined\n in this module.\n\n Warnings should be issued using warnings.warn, which will not break\n execution if unhandled.\n \"\"\"\n\n\nclass PhysicsWarning(PlasmaPyWarning):\n \"\"\"Warning for using a mildly worrisome physics value\"\"\"\n\n\nclass RelativityWarning(PhysicsWarning):\n \"\"\"Warning for use of a speed quantity approaching the speed of light\"\"\"\n", "path": "plasmapy/utils/exceptions.py"}], "after_files": [{"content": "\"\"\"\nCustom Error and Warning names to improve readability\n\"\"\"\n\n\n# ----------\n# Exceptions:\n# ----------\n\nclass PlasmaPyError(Exception):\n \"\"\"\n Base class of PlasmaPy custom errors.\n\n All custom exceptions raised by PlasmaPy should inherit from this class\n and be defined in this module.\n\n Custom exceptions can inherit from other exception types too. Thus, if code\n already knows how to handle a ValueError, it won't need any specific\n modification.\n \"\"\"\n pass\n\n\nclass PhysicsError(PlasmaPyError, ValueError):\n \"\"\"Error for use of a physics value outside PlasmaPy theoretical bounds\"\"\"\n pass\n\n\nclass RelativityError(PhysicsError):\n \"\"\"Error for use of a speed greater than or equal to the speed of light\"\"\"\n pass\n\n\nclass AtomicError(PlasmaPyError):\n \"\"\"Error for use by an atomic subpackage\"\"\"\n pass\n\n\nclass MissingAtomicDataError(AtomicError):\n \"\"\"Error for use when atomic data is missing.\"\"\"\n pass\n\n\nclass NoChargeInfoError(AtomicError):\n \"\"\"Error for use when charge information is needed but missing.\"\"\"\n\n\nclass IonError(NoChargeInfoError):\n \"\"\"Error for use when an ion is invalid.\"\"\"\n pass\n\n\nclass IsotopeError(AtomicError):\n \"\"\"Error for use when an isotope is invalid.\"\"\"\n pass\n\n\nclass ElementError(IsotopeError, IonError):\n \"\"\"Error for use when an element is invalid.\"\"\"\n pass\n\n\nclass ParticleError(ElementError):\n \"\"\"Error for use when a particle is invalid.\"\"\"\n pass\n\n\n# ----------\n# Warnings:\n# ----------\n\nclass PlasmaPyWarning(Warning):\n \"\"\"Base class of PlasmaPy custom warnings.\n\n All PlasmaPy custom warnings should inherit from this class and be defined\n in this module.\n\n Warnings should be issued using warnings.warn, which will not break\n execution if unhandled.\n \"\"\"\n pass\n\n\nclass PhysicsWarning(PlasmaPyWarning):\n \"\"\"Warning for using a mildly worrisome physics value\"\"\"\n pass\n\n\nclass RelativityWarning(PhysicsWarning):\n \"\"\"Warning for use of a speed quantity approaching the speed of light\"\"\"\n pass\n\n\nclass AtomicWarning(PlasmaPyWarning):\n \"\"\"Warnings for use in the atomic subpackage.\"\"\"\n pass\n\n\nclass MissingAtomicDataWarning(AtomicWarning):\n \"\"\"Warning for use when atomic data is missing.\"\"\"\n pass\n", "path": "plasmapy/utils/exceptions.py"}]} | 824 | 492 |
gh_patches_debug_916 | rasdani/github-patches | git_diff | facebookresearch__ParlAI-581 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can we keep a mturk task from outside parlai/mturk/tasks?
Hi @JackUrb, I have a few questions regarding the MTurk evaluation:
1. This link (http://parl.ai/static/docs/mturk.html#running-a-task) says that
> to run an MTurk task, first ensure that the task directory is in `parlai/mturk/tasks/`.
Is it by design? I tried to keep my task in some other directory (outside the root parlai directory) and tried to import parlai as a package, but that doesn't seem to work. Basically, I am trying to use ParlAI as a dependency for one of my projects, for human-in-the-loop evaluation.
2. How easy or hard would it be to provide support for keeping the MTurk task anywhere?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright (c) 2017-present, Facebook, Inc.
2 # All rights reserved.
3 # This source code is licensed under the BSD-style license found in the
4 # LICENSE file in the root directory of this source tree. An additional grant
5 # of patent rights can be found in the PATENTS file in the same directory.
6
7
8 from setuptools import setup, find_packages
9 import sys
10
11 if sys.version_info < (3,):
12 sys.exit('Sorry, Python3 is required for ParlAI.')
13
14 with open('README.md', encoding="utf8") as f:
15 readme = f.read()
16
17 with open('LICENSE') as f:
18 license = f.read()
19
20 with open('requirements.txt') as f:
21 reqs = f.read()
22
23 setup(
24 name='parlai',
25 version='0.1.0',
26 description='Unified API for accessing dialog datasets.',
27 long_description=readme,
28 url='http://parl.ai/',
29 license=license,
30 packages=find_packages(exclude=(
31 'data', 'docs', 'downloads', 'examples', 'logs', 'tests')),
32 install_requires=reqs.strip().split('\n'),
33 )
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,4 +30,5 @@
packages=find_packages(exclude=(
'data', 'docs', 'downloads', 'examples', 'logs', 'tests')),
install_requires=reqs.strip().split('\n'),
+ include_package_data=True,
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,4 +30,5 @@\n packages=find_packages(exclude=(\n 'data', 'docs', 'downloads', 'examples', 'logs', 'tests')),\n install_requires=reqs.strip().split('\\n'),\n+ include_package_data=True,\n )\n", "issue": "Can we keep a mturk task from outside parlai/mturk/tasks?\nHi @JackUrb, I have a few questions regarding the mturk evaluation:\r\n\r\n1. This link (http://parl.ai/static/docs/mturk.html#running-a-task) says that \r\n\r\n> to run an MTurk task, first ensure that the task directory is in `parlai/mturk/tasks/`. \r\n\r\n Is it by design? I tried to keep my task in some other directory (outside root parlai directory) and tried to import parlai as a package but that doesn't seem to work. Basically I am trying to use ParlAI as one of the dependency for one of my project for Human In the loop evaluation.\r\n\r\n2. How easy/hard it is to provide support for keeping the mturk task anywhere?\r\n\n", "before_files": [{"content": "# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n# This source code is licensed under the BSD-style license found in the\n# LICENSE file in the root directory of this source tree. An additional grant\n# of patent rights can be found in the PATENTS file in the same directory.\n\n\nfrom setuptools import setup, find_packages\nimport sys\n\nif sys.version_info < (3,):\n sys.exit('Sorry, Python3 is required for ParlAI.')\n\nwith open('README.md', encoding=\"utf8\") as f:\n readme = f.read()\n\nwith open('LICENSE') as f:\n license = f.read()\n\nwith open('requirements.txt') as f:\n reqs = f.read()\n\nsetup(\n name='parlai',\n version='0.1.0',\n description='Unified API for accessing dialog datasets.',\n long_description=readme,\n url='http://parl.ai/',\n license=license,\n packages=find_packages(exclude=(\n 'data', 'docs', 'downloads', 'examples', 'logs', 'tests')),\n install_requires=reqs.strip().split('\\n'),\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n# This source code is licensed under the BSD-style license found in the\n# LICENSE file in the root directory of this source tree. An additional grant\n# of patent rights can be found in the PATENTS file in the same directory.\n\n\nfrom setuptools import setup, find_packages\nimport sys\n\nif sys.version_info < (3,):\n sys.exit('Sorry, Python3 is required for ParlAI.')\n\nwith open('README.md', encoding=\"utf8\") as f:\n readme = f.read()\n\nwith open('LICENSE') as f:\n license = f.read()\n\nwith open('requirements.txt') as f:\n reqs = f.read()\n\nsetup(\n name='parlai',\n version='0.1.0',\n description='Unified API for accessing dialog datasets.',\n long_description=readme,\n url='http://parl.ai/',\n license=license,\n packages=find_packages(exclude=(\n 'data', 'docs', 'downloads', 'examples', 'logs', 'tests')),\n install_requires=reqs.strip().split('\\n'),\n include_package_data=True,\n)\n", "path": "setup.py"}]} | 737 | 77 |