problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64 271-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_32737 | rasdani/github-patches | git_diff | dask__dask-586 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Additional improvement for da.image.imread
I think following 2 improvements make the function better.
#### 1. Allow custom `imread` function.
Some prefer `opencv` which reads color in BGR order, otherwise `skimage` reads in RGB order. Adding `dialect` option (like `dialect='skimage'` or `dialect='cv'`) or accept different file read function may better.
#### 2. Allow custom preprocessing function.
In image processing, input images may have different sizes. In these case, preprocessing is needed between image read and array creation. This preprocessing function must return the same size of array on user's responsibility.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dask/array/image.py`
Content:
```
1 from glob import glob
2 import os
3
4 try:
5 from skimage.io import imread as sk_imread
6 except ImportError:
7 pass
8
9 from .core import Array
10 from ..base import tokenize
11
12 def add_leading_dimension(x):
13 return x[None, ...]
14
15
16 def imread(filename):
17 """ Read a stack of images into a dask array
18
19 Parameters
20 ----------
21
22 filename: string
23 A globstring like 'myfile.*.png'
24
25 Example
26 -------
27
28 >>> from dask.array.image import imread
29 >>> im = imread('2015-*-*.png') # doctest: +SKIP
30 >>> im.shape # doctest: +SKIP
31 (365, 1000, 1000, 3)
32
33 Returns
34 -------
35
36 Dask array of all images stacked along the first dimension. All images
37 will be treated as individual chunks
38 """
39 filenames = sorted(glob(filename))
40 if not filenames:
41 raise ValueError("No files found under name %s" % filename)
42
43 name = 'imread-%s' % tokenize(filenames, map(os.path.getmtime, filenames))
44
45 sample = sk_imread(filenames[0])
46
47 dsk = dict(((name, i) + (0,) * len(sample.shape),
48 (add_leading_dimension, (sk_imread, filename)))
49 for i, filename in enumerate(filenames))
50
51 chunks = ((1,) * len(filenames),) + tuple((d,) for d in sample.shape)
52
53 return Array(dsk, name, chunks, sample.dtype)
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dask/array/image.py b/dask/array/image.py
--- a/dask/array/image.py
+++ b/dask/array/image.py
@@ -13,7 +13,7 @@
return x[None, ...]
-def imread(filename):
+def imread(filename, imread=None, preprocess=None):
""" Read a stack of images into a dask array
Parameters
@@ -21,6 +21,13 @@
filename: string
A globstring like 'myfile.*.png'
+ imread: function (optional)
+ Optionally provide custom imread function.
+ Function should expect a filename and produce a numpy array.
+ Defaults to ``skimage.io.imread``.
+ preprocess: function (optional)
+ Optionally provide custom function to preprocess the image.
+ Function should expect a numpy array for a single image.
Example
-------
@@ -36,17 +43,25 @@
Dask array of all images stacked along the first dimension. All images
will be treated as individual chunks
"""
+ imread = imread or sk_imread
filenames = sorted(glob(filename))
if not filenames:
raise ValueError("No files found under name %s" % filename)
name = 'imread-%s' % tokenize(filenames, map(os.path.getmtime, filenames))
- sample = sk_imread(filenames[0])
-
- dsk = dict(((name, i) + (0,) * len(sample.shape),
- (add_leading_dimension, (sk_imread, filename)))
- for i, filename in enumerate(filenames))
+ sample = imread(filenames[0])
+ if preprocess:
+ sample = preprocess(sample)
+
+ keys = [(name, i) + (0,) * len(sample.shape) for i in range(len(filenames))]
+ if preprocess:
+ values = [(add_leading_dimension, (preprocess, (imread, filename)))
+ for filename in filenames]
+ else:
+ values = [(add_leading_dimension, (imread, filename))
+ for filename in filenames]
+ dsk = dict(zip(keys, values))
chunks = ((1,) * len(filenames),) + tuple((d,) for d in sample.shape)
| {"golden_diff": "diff --git a/dask/array/image.py b/dask/array/image.py\n--- a/dask/array/image.py\n+++ b/dask/array/image.py\n@@ -13,7 +13,7 @@\n return x[None, ...]\n \n \n-def imread(filename):\n+def imread(filename, imread=None, preprocess=None):\n \"\"\" Read a stack of images into a dask array\n \n Parameters\n@@ -21,6 +21,13 @@\n \n filename: string\n A globstring like 'myfile.*.png'\n+ imread: function (optional)\n+ Optionally provide custom imread function.\n+ Function should expect a filename and produce a numpy array.\n+ Defaults to ``skimage.io.imread``.\n+ preprocess: function (optional)\n+ Optionally provide custom function to preprocess the image.\n+ Function should expect a numpy array for a single image.\n \n Example\n -------\n@@ -36,17 +43,25 @@\n Dask array of all images stacked along the first dimension. All images\n will be treated as individual chunks\n \"\"\"\n+ imread = imread or sk_imread\n filenames = sorted(glob(filename))\n if not filenames:\n raise ValueError(\"No files found under name %s\" % filename)\n \n name = 'imread-%s' % tokenize(filenames, map(os.path.getmtime, filenames))\n \n- sample = sk_imread(filenames[0])\n-\n- dsk = dict(((name, i) + (0,) * len(sample.shape),\n- (add_leading_dimension, (sk_imread, filename)))\n- for i, filename in enumerate(filenames))\n+ sample = imread(filenames[0])\n+ if preprocess:\n+ sample = preprocess(sample)\n+\n+ keys = [(name, i) + (0,) * len(sample.shape) for i in range(len(filenames))]\n+ if preprocess:\n+ values = [(add_leading_dimension, (preprocess, (imread, filename)))\n+ for filename in filenames]\n+ else:\n+ values = [(add_leading_dimension, (imread, filename))\n+ for filename in filenames]\n+ dsk = dict(zip(keys, values))\n \n chunks = ((1,) * len(filenames),) + tuple((d,) for d in sample.shape)\n", "issue": "Additional improvement for da.image.imread\nI think following 2 improvements make the function better.\n#### 1. Allow custom `imread` function.\n\nSome prefer `opencv` which reads color in BGR order, otherwise `skimage` reads in RGB order. Adding `dialect` option (like `dialect='skimage'` or `dialect='cv'`) or accept different file read function may better.\n#### 2. Allow custom preprocessing function.\n\nIn image processing, input images may have different sizes. In these case, preprocessing is needed between image read and array creation. This preprocessing function must return the same size of array on user's responsibility.\n\n", "before_files": [{"content": "from glob import glob\nimport os\n\ntry:\n from skimage.io import imread as sk_imread\nexcept ImportError:\n pass\n\nfrom .core import Array\nfrom ..base import tokenize\n\ndef add_leading_dimension(x):\n return x[None, ...]\n\n\ndef imread(filename):\n \"\"\" Read a stack of images into a dask array\n\n Parameters\n ----------\n\n filename: string\n A globstring like 'myfile.*.png'\n\n Example\n -------\n\n >>> from dask.array.image import imread\n >>> im = imread('2015-*-*.png') # doctest: +SKIP\n >>> im.shape # doctest: +SKIP\n (365, 1000, 1000, 3)\n\n Returns\n -------\n\n Dask array of all images stacked along the first dimension. 
All images\n will be treated as individual chunks\n \"\"\"\n filenames = sorted(glob(filename))\n if not filenames:\n raise ValueError(\"No files found under name %s\" % filename)\n\n name = 'imread-%s' % tokenize(filenames, map(os.path.getmtime, filenames))\n\n sample = sk_imread(filenames[0])\n\n dsk = dict(((name, i) + (0,) * len(sample.shape),\n (add_leading_dimension, (sk_imread, filename)))\n for i, filename in enumerate(filenames))\n\n chunks = ((1,) * len(filenames),) + tuple((d,) for d in sample.shape)\n\n return Array(dsk, name, chunks, sample.dtype)\n", "path": "dask/array/image.py"}], "after_files": [{"content": "from glob import glob\nimport os\n\ntry:\n from skimage.io import imread as sk_imread\nexcept ImportError:\n pass\n\nfrom .core import Array\nfrom ..base import tokenize\n\ndef add_leading_dimension(x):\n return x[None, ...]\n\n\ndef imread(filename, imread=None, preprocess=None):\n \"\"\" Read a stack of images into a dask array\n\n Parameters\n ----------\n\n filename: string\n A globstring like 'myfile.*.png'\n imread: function (optional)\n Optionally provide custom imread function.\n Function should expect a filename and produce a numpy array.\n Defaults to ``skimage.io.imread``.\n preprocess: function (optional)\n Optionally provide custom function to preprocess the image.\n Function should expect a numpy array for a single image.\n\n Example\n -------\n\n >>> from dask.array.image import imread\n >>> im = imread('2015-*-*.png') # doctest: +SKIP\n >>> im.shape # doctest: +SKIP\n (365, 1000, 1000, 3)\n\n Returns\n -------\n\n Dask array of all images stacked along the first dimension. All images\n will be treated as individual chunks\n \"\"\"\n imread = imread or sk_imread\n filenames = sorted(glob(filename))\n if not filenames:\n raise ValueError(\"No files found under name %s\" % filename)\n\n name = 'imread-%s' % tokenize(filenames, map(os.path.getmtime, filenames))\n\n sample = imread(filenames[0])\n if preprocess:\n sample = preprocess(sample)\n\n keys = [(name, i) + (0,) * len(sample.shape) for i in range(len(filenames))]\n if preprocess:\n values = [(add_leading_dimension, (preprocess, (imread, filename)))\n for filename in filenames]\n else:\n values = [(add_leading_dimension, (imread, filename))\n for filename in filenames]\n dsk = dict(zip(keys, values))\n\n chunks = ((1,) * len(filenames),) + tuple((d,) for d in sample.shape)\n\n return Array(dsk, name, chunks, sample.dtype)\n", "path": "dask/array/image.py"}]} | 843 | 497 |
gh_patches_debug_3876 | rasdani/github-patches | git_diff | xorbitsai__inference-299 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FEAT: Disable Gradio Telemetry
Pull requests are disabled but see here:
https://github.com/arch-btw/inference/pull/1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/gradio_chatinterface.py`
Content:
```
1 from typing import Dict, List
2
3 import gradio as gr
4
5 from xinference.client import Client
6
7 if __name__ == "__main__":
8 import argparse
9 import textwrap
10
11 parser = argparse.ArgumentParser(
12 formatter_class=argparse.RawDescriptionHelpFormatter,
13 epilog=textwrap.dedent(
14 """\
15 instructions to run:
16 1. Install Xinference and Llama-cpp-python
17 2. Run 'xinference --host "localhost" --port 9997' in terminal
18 3. Run this python file in new terminal window
19
20 e.g. (feel free to copy)
21 python gradio_chatinterface.py \\
22 --endpoint http://localhost:9997 \\
23 --model_name vicuna-v1.3 \\
24 --model_size_in_billions 7 \\
25 --model_format ggmlv3 \\
26 --quantization q2_K
27
28 If you decide to change the port number in step 2,
29 please also change the endpoint in the arguments
30 """
31 ),
32 )
33
34 parser.add_argument(
35 "--endpoint", type=str, required=True, help="Xinference endpoint, required"
36 )
37 parser.add_argument(
38 "--model_name", type=str, required=True, help="Name of the model, required"
39 )
40 parser.add_argument(
41 "--model_size_in_billions",
42 type=int,
43 required=False,
44 help="Size of the model in billions",
45 )
46 parser.add_argument(
47 "--model_format",
48 type=str,
49 required=False,
50 help="Format of the model",
51 )
52 parser.add_argument(
53 "--quantization", type=str, required=False, help="Quantization of the model"
54 )
55
56 args = parser.parse_args()
57
58 endpoint = args.endpoint
59 model_name = args.model_name
60 model_size_in_billions = args.model_size_in_billions
61 model_format = args.model_format
62 quantization = args.quantization
63
64 print(f"Xinference endpoint: {endpoint}")
65 print(f"Model Name: {model_name}")
66 print(f"Model Size (in billions): {model_size_in_billions}")
67 print(f"Model Format: {model_format}")
68 print(f"Quantization: {quantization}")
69
70 client = Client(endpoint)
71 model_uid = client.launch_model(
72 model_name,
73 model_size_in_billions=model_size_in_billions,
74 model_format=model_format,
75 quantization=quantization,
76 n_ctx=2048,
77 )
78 model = client.get_model(model_uid)
79
80 def flatten(matrix: List[List[str]]) -> List[str]:
81 flat_list = []
82 for row in matrix:
83 flat_list += row
84 return flat_list
85
86 def to_chat(lst: List[str]) -> List[Dict[str, str]]:
87 res = []
88 for i in range(len(lst)):
89 role = "assistant" if i % 2 == 1 else "user"
90 res.append(
91 {
92 "role": role,
93 "content": lst[i],
94 }
95 )
96 return res
97
98 def generate_wrapper(message: str, history: List[List[str]]) -> str:
99 output = model.chat(
100 prompt=message,
101 chat_history=to_chat(flatten(history)),
102 generate_config={"max_tokens": 512, "stream": False},
103 )
104 return output["choices"][0]["message"]["content"]
105
106 demo = gr.ChatInterface(
107 fn=generate_wrapper,
108 examples=[
109 "Show me a two sentence horror story with a plot twist",
110 "Generate a Haiku poem using trignometry as the central theme",
111 "Write three sentences of scholarly description regarding a supernatural beast",
112 "Prove there does not exist a largest integer",
113 ],
114 title="Xinference Chat Bot",
115 )
116 demo.launch()
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/gradio_chatinterface.py b/examples/gradio_chatinterface.py
--- a/examples/gradio_chatinterface.py
+++ b/examples/gradio_chatinterface.py
@@ -105,6 +105,7 @@
demo = gr.ChatInterface(
fn=generate_wrapper,
+ analytics_enabled=False,
examples=[
"Show me a two sentence horror story with a plot twist",
"Generate a Haiku poem using trignometry as the central theme",
| {"golden_diff": "diff --git a/examples/gradio_chatinterface.py b/examples/gradio_chatinterface.py\n--- a/examples/gradio_chatinterface.py\n+++ b/examples/gradio_chatinterface.py\n@@ -105,6 +105,7 @@\n \n demo = gr.ChatInterface(\n fn=generate_wrapper,\n+ analytics_enabled=False,\n examples=[\n \"Show me a two sentence horror story with a plot twist\",\n \"Generate a Haiku poem using trignometry as the central theme\",\n", "issue": "FEAT: Disable Gradio Telemetry\nPull requests are disabled but see here:\r\n\r\nhttps://github.com/arch-btw/inference/pull/1\n", "before_files": [{"content": "from typing import Dict, List\n\nimport gradio as gr\n\nfrom xinference.client import Client\n\nif __name__ == \"__main__\":\n import argparse\n import textwrap\n\n parser = argparse.ArgumentParser(\n formatter_class=argparse.RawDescriptionHelpFormatter,\n epilog=textwrap.dedent(\n \"\"\"\\\n instructions to run:\n 1. Install Xinference and Llama-cpp-python\n 2. Run 'xinference --host \"localhost\" --port 9997' in terminal\n 3. Run this python file in new terminal window\n\n e.g. (feel free to copy)\n python gradio_chatinterface.py \\\\\n --endpoint http://localhost:9997 \\\\\n --model_name vicuna-v1.3 \\\\\n --model_size_in_billions 7 \\\\\n --model_format ggmlv3 \\\\\n --quantization q2_K\n\n If you decide to change the port number in step 2,\n please also change the endpoint in the arguments\n \"\"\"\n ),\n )\n\n parser.add_argument(\n \"--endpoint\", type=str, required=True, help=\"Xinference endpoint, required\"\n )\n parser.add_argument(\n \"--model_name\", type=str, required=True, help=\"Name of the model, required\"\n )\n parser.add_argument(\n \"--model_size_in_billions\",\n type=int,\n required=False,\n help=\"Size of the model in billions\",\n )\n parser.add_argument(\n \"--model_format\",\n type=str,\n required=False,\n help=\"Format of the model\",\n )\n parser.add_argument(\n \"--quantization\", type=str, required=False, help=\"Quantization of the model\"\n )\n\n args = parser.parse_args()\n\n endpoint = args.endpoint\n model_name = args.model_name\n model_size_in_billions = args.model_size_in_billions\n model_format = args.model_format\n quantization = args.quantization\n\n print(f\"Xinference endpoint: {endpoint}\")\n print(f\"Model Name: {model_name}\")\n print(f\"Model Size (in billions): {model_size_in_billions}\")\n print(f\"Model Format: {model_format}\")\n print(f\"Quantization: {quantization}\")\n\n client = Client(endpoint)\n model_uid = client.launch_model(\n model_name,\n model_size_in_billions=model_size_in_billions,\n model_format=model_format,\n quantization=quantization,\n n_ctx=2048,\n )\n model = client.get_model(model_uid)\n\n def flatten(matrix: List[List[str]]) -> List[str]:\n flat_list = []\n for row in matrix:\n flat_list += row\n return flat_list\n\n def to_chat(lst: List[str]) -> List[Dict[str, str]]:\n res = []\n for i in range(len(lst)):\n role = \"assistant\" if i % 2 == 1 else \"user\"\n res.append(\n {\n \"role\": role,\n \"content\": lst[i],\n }\n )\n return res\n\n def generate_wrapper(message: str, history: List[List[str]]) -> str:\n output = model.chat(\n prompt=message,\n chat_history=to_chat(flatten(history)),\n generate_config={\"max_tokens\": 512, \"stream\": False},\n )\n return output[\"choices\"][0][\"message\"][\"content\"]\n\n demo = gr.ChatInterface(\n fn=generate_wrapper,\n examples=[\n \"Show me a two sentence horror story with a plot twist\",\n \"Generate a Haiku poem using trignometry as the central theme\",\n \"Write three sentences of 
scholarly description regarding a supernatural beast\",\n \"Prove there does not exist a largest integer\",\n ],\n title=\"Xinference Chat Bot\",\n )\n demo.launch()\n", "path": "examples/gradio_chatinterface.py"}], "after_files": [{"content": "from typing import Dict, List\n\nimport gradio as gr\n\nfrom xinference.client import Client\n\nif __name__ == \"__main__\":\n import argparse\n import textwrap\n\n parser = argparse.ArgumentParser(\n formatter_class=argparse.RawDescriptionHelpFormatter,\n epilog=textwrap.dedent(\n \"\"\"\\\n instructions to run:\n 1. Install Xinference and Llama-cpp-python\n 2. Run 'xinference --host \"localhost\" --port 9997' in terminal\n 3. Run this python file in new terminal window\n\n e.g. (feel free to copy)\n python gradio_chatinterface.py \\\\\n --endpoint http://localhost:9997 \\\\\n --model_name vicuna-v1.3 \\\\\n --model_size_in_billions 7 \\\\\n --model_format ggmlv3 \\\\\n --quantization q2_K\n\n If you decide to change the port number in step 2,\n please also change the endpoint in the arguments\n \"\"\"\n ),\n )\n\n parser.add_argument(\n \"--endpoint\", type=str, required=True, help=\"Xinference endpoint, required\"\n )\n parser.add_argument(\n \"--model_name\", type=str, required=True, help=\"Name of the model, required\"\n )\n parser.add_argument(\n \"--model_size_in_billions\",\n type=int,\n required=False,\n help=\"Size of the model in billions\",\n )\n parser.add_argument(\n \"--model_format\",\n type=str,\n required=False,\n help=\"Format of the model\",\n )\n parser.add_argument(\n \"--quantization\", type=str, required=False, help=\"Quantization of the model\"\n )\n\n args = parser.parse_args()\n\n endpoint = args.endpoint\n model_name = args.model_name\n model_size_in_billions = args.model_size_in_billions\n model_format = args.model_format\n quantization = args.quantization\n\n print(f\"Xinference endpoint: {endpoint}\")\n print(f\"Model Name: {model_name}\")\n print(f\"Model Size (in billions): {model_size_in_billions}\")\n print(f\"Model Format: {model_format}\")\n print(f\"Quantization: {quantization}\")\n\n client = Client(endpoint)\n model_uid = client.launch_model(\n model_name,\n model_size_in_billions=model_size_in_billions,\n model_format=model_format,\n quantization=quantization,\n n_ctx=2048,\n )\n model = client.get_model(model_uid)\n\n def flatten(matrix: List[List[str]]) -> List[str]:\n flat_list = []\n for row in matrix:\n flat_list += row\n return flat_list\n\n def to_chat(lst: List[str]) -> List[Dict[str, str]]:\n res = []\n for i in range(len(lst)):\n role = \"assistant\" if i % 2 == 1 else \"user\"\n res.append(\n {\n \"role\": role,\n \"content\": lst[i],\n }\n )\n return res\n\n def generate_wrapper(message: str, history: List[List[str]]) -> str:\n output = model.chat(\n prompt=message,\n chat_history=to_chat(flatten(history)),\n generate_config={\"max_tokens\": 512, \"stream\": False},\n )\n return output[\"choices\"][0][\"message\"][\"content\"]\n\n demo = gr.ChatInterface(\n fn=generate_wrapper,\n analytics_enabled=False,\n examples=[\n \"Show me a two sentence horror story with a plot twist\",\n \"Generate a Haiku poem using trignometry as the central theme\",\n \"Write three sentences of scholarly description regarding a supernatural beast\",\n \"Prove there does not exist a largest integer\",\n ],\n title=\"Xinference Chat Bot\",\n )\n demo.launch()\n", "path": "examples/gradio_chatinterface.py"}]} | 1,351 | 103 |
gh_patches_debug_16504 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1693 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: portenf_sa_gov_au reporting incorrect dates
### I Have A Problem With:
A specific source
### What's Your Problem
The portenf_sa_gov_au sensor has been reporting incorrectly since it updated itself on 24 December 2023 (I can see this from HA logs). It appears that when there is 1 week or less left in the month "main-month" switches to the coming month and "other-month" becomes the current month.
Because of this, the integration reports the current collection next month and the next collections as in the past (and hides them).
The fix in #1110 by @5ila5 partly addresses the problem but it was not foreseeable to him that EOM would be treated this way. @5ila5 also noted that this might be an issue in that closed issue.
### Source (if relevant)
portenf_sa_gov_au
### Logs
```Shell
Output of test_sources.py:
Testing source portenf_sa_gov_au ...
found 8 entries for Broadview, Regency Road, 565
2024-01-26 : general-waste bin [mdi:trash-can]
2024-01-26 : recycling bin [mdi:recycle]
2023-12-02 : general-waste bin [mdi:trash-can]
2023-12-02 : organics bin [mdi:leaf]
2023-12-09 : general-waste bin [mdi:trash-can]
2023-12-09 : recycling bin [mdi:recycle]
2023-12-16 : general-waste bin [mdi:trash-can]
2023-12-16 : organics bin [mdi:leaf]
found 8 entries for 48 Floriedale Rd
2024-01-26 : general-waste bin [mdi:trash-can]
2024-01-26 : recycling bin [mdi:recycle]
2023-12-02 : general-waste bin [mdi:trash-can]
2023-12-02 : organics bin [mdi:leaf]
2023-12-09 : general-waste bin [mdi:trash-can]
2023-12-09 : recycling bin [mdi:recycle]
2023-12-16 : general-waste bin [mdi:trash-can]
2023-12-16 : organics bin [mdi:leaf]
found 8 entries for 24 Margaret Terrace
2024-01-28 : general-waste bin [mdi:trash-can]
2024-01-28 : organics bin [mdi:leaf]
2023-12-04 : general-waste bin [mdi:trash-can]
2023-12-04 : recycling bin [mdi:recycle]
2023-12-11 : general-waste bin [mdi:trash-can]
2023-12-11 : organics bin [mdi:leaf]
2023-12-18 : general-waste bin [mdi:trash-can]
2023-12-18 : recycling bin [mdi:recycle]
found 8 entries for Addison Road 91 with unit
2024-01-28 : general-waste bin [mdi:trash-can]
2024-01-28 : organics bin [mdi:leaf]
2023-12-04 : general-waste bin [mdi:trash-can]
2023-12-04 : recycling bin [mdi:recycle]
2023-12-11 : general-waste bin [mdi:trash-can]
2023-12-11 : organics bin [mdi:leaf]
2023-12-18 : general-waste bin [mdi:trash-can]
2023-12-18 : recycling bin [mdi:recycle]
```
### Relevant Configuration
_No response_
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py`
Content:
```
1 import logging
2 import re
3 from datetime import datetime
4
5 import requests
6 import urllib3
7 from bs4 import BeautifulSoup
8 from waste_collection_schedule import Collection # type: ignore[attr-defined]
9
10 # With verify=True the POST fails due to a SSLCertVerificationError.
11 # Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:
12 # https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
13 # https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl
14 # These two lines areused to suppress the InsecureRequestWarning when using verify=False
15 urllib3.disable_warnings()
16
17 TITLE = "Port Adelaide Enfield, South Australia"
18 DESCRIPTION = "Source for City of Port Adelaide Enfield, South Australia."
19 URL = "https://ecouncil.portenf.sa.gov.au/"
20 TEST_CASES = {
21 "Broadview, Regency Road, 565 ": {
22 "suburb": "Broadview",
23 "street": "Regency Road",
24 "house_number": 565,
25 "unit_number": "",
26 },
27 "48 Floriedale Rd ": {
28 "suburb": "Greenacres",
29 "street": "Floriedale Rd",
30 "house_number": "48",
31 },
32 "24 Margaret Terrace": {
33 "suburb": "Rosewater",
34 "street": "Margaret Terrace",
35 "house_number": "24",
36 },
37 "Addison Road 91 with unit": {
38 "suburb": "Rosewater",
39 "street": "Addison Road",
40 "house_number": 91,
41 "unit_number": 2,
42 },
43 }
44
45 ICON_MAP = {
46 "general-waste bin": "mdi:trash-can",
47 "organics bin": "mdi:leaf",
48 "recycling bin": "mdi:recycle",
49 }
50
51 LOGGER = logging.getLogger(__name__)
52
53 API_URL = "https://ecouncil.portenf.sa.gov.au/public/propertywastedates/public.aspx"
54
55
56 class Source:
57 def __init__(
58 self,
59 suburb: str,
60 street: str,
61 house_number: str | int,
62 unit_number: str | int = "",
63 ):
64 self._suburb: str = suburb
65 self._street: str = street
66 self._house_number: str = str(house_number)
67 self._unit_number: str = str(unit_number)
68
69 def __set_args(
70 self, soup: BeautifulSoup, event_taget=None, additional: dict = {}
71 ) -> dict:
72 args = {
73 "ctl00$MainContent$txtSuburb": self._suburb,
74 "ctl00$MainContent$txtStreetName": self._street,
75 "ctl00$MainContent$txtHouseNumber": self._house_number,
76 "ctl00$MainContent$txtUnitNumber": self._unit_number,
77 }
78 if event_taget is not None:
79 args["__EVENTTARGET"] = event_taget
80
81 for hidden_val in soup.find_all("input", {"type": "hidden"}):
82 args[hidden_val["name"]] = hidden_val["value"]
83
84 for key, value in additional.items():
85 args[key] = value
86 return args
87
88 def fetch(self):
89 session = requests.Session()
90
91 # get First page
92 r = session.get(API_URL, verify=False)
93 r.raise_for_status()
94
95 # extractt arguments
96 args = self.__set_args(
97 BeautifulSoup(r.text, "html.parser"),
98 event_taget="ctl00$MainContent$btnSearch",
99 )
100
101 r = session.post(API_URL, data=args)
102 r.raise_for_status()
103
104 # get page to select an address
105 soup = BeautifulSoup(r.text, "html.parser")
106
107 selectable = soup.find_all("a", {"class": "anchor-button small"}, text="Select")
108
109 if len(selectable) == 0:
110 raise ValueError("No address found")
111 selected = selectable[0]
112
113 # If multiple addresses are found, try to find the one that matches the input and warn if there are multiple or none matches
114 if len(selectable) > 1:
115 found = [
116 " ".join(
117 [y.text for y in x.parent.parent.find_all("td")[1].find_all("span")]
118 )
119 for x in selectable
120 ]
121 using_index = 0
122
123 match = False
124
125 for index, entry in enumerate(found):
126 entry = entry.lower().strip().replace(" ", "")
127 if (
128 self._house_number.lower().strip().replace(" ", "") in entry
129 and self._street.lower().strip().replace(" ", "") in entry
130 and self._suburb.lower().strip().replace(" ", "") in entry
131 and self._unit_number.lower().strip().replace(" ", "") in entry
132 ):
133 if match:
134 LOGGER.warning(
135 f"Multiple addresses found, using first one \nfound:{', '.join(found[:10])}{'...' if len(found) >= 10 else ''} \nusing:{found[using_index]}"
136 )
137 break
138 using_index = index
139 match = True
140 if not match:
141 LOGGER.warning(
142 f"no perfect address match found, using:{found[using_index]}"
143 )
144
145 # request first address
146 args = self.__set_args(
147 soup,
148 event_taget="ctl00$MainContent$gvPropertyResults$ctl02$btnSelect",
149 additional={selected["href"].split("'")[1]: ""},
150 )
151 r = session.post(API_URL, data=args)
152 r.raise_for_status()
153
154 soup = BeautifulSoup(r.text, "html.parser")
155 cal_header = soup.find("th", {"class": "header-month"}).find("span").text
156
157 from_month = cal_header.split("-")[0].strip()
158 to_month = cal_header.split("-")[1].strip().split(" ")[0]
159 to_year = from_year = cal_header.split("-")[1].strip().split(" ")[1]
160 # if main month contains a year, set it (maybe happens in december???)
161 if len(from_month.split(" ")) > 1:
162 from_year = from_month.split(" ")[1]
163 from_month = from_month.split(" ")[0]
164
165 today_div = soup.find("table", id="cal").find("td", class_="today")
166 print(today_div)
167
168 # if other-month is to_month
169 if (
170 "other-month" in today_div.attrs
171 and datetime.now().strftime("%B") != to_month
172 ):
173 main_month, other_month = from_month, to_month
174 main_year, other_year = from_year, to_year
175 else: # if other-month is from_month
176 main_month, other_month = to_month, from_month
177 main_year, other_year = to_year, from_year
178
179 entries = []
180
181 calendar = soup.find("table", {"class": "collection-day-calendar"})
182 # Iterate over all days with pickups
183 for pickup in calendar.find_all(
184 "div", {"class": re.compile(r"pickup|next-pickup")}
185 ):
186 parent_td = pickup.parent
187 month = (
188 main_month if "main-month" in parent_td.attrs["class"] else other_month
189 )
190 year = main_year if "main-month" in parent_td.attrs["class"] else other_year
191 day = parent_td.find("div", {"class": "daynumber"}).text
192
193 # Iterate over all pickup container types for this day
194 for container in pickup.find_all("div", {"class": "bin-container"}):
195 container_type = " ".join(container.find("div").attrs["class"])
196 container_icon = ICON_MAP.get(container_type)
197
198 date = datetime.strptime(f"{year}-{month}-{day}", "%Y-%B-%d").date()
199 entries.append(
200 Collection(date=date, t=container_type, icon=container_icon)
201 )
202
203 return entries
204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py
@@ -163,12 +163,14 @@
from_month = from_month.split(" ")[0]
today_div = soup.find("table", id="cal").find("td", class_="today")
- print(today_div)
# if other-month is to_month
if (
- "other-month" in today_div.attrs
- and datetime.now().strftime("%B") != to_month
+ "other-month" in today_div.attrs["class"]
+ and datetime.now().strftime("%B") == to_month
+ ) or (
+ "main-month" in today_div.attrs["class"]
+ and datetime.now().strftime("%B") == from_month
):
main_month, other_month = from_month, to_month
main_year, other_year = from_year, to_year
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py\n@@ -163,12 +163,14 @@\n from_month = from_month.split(\" \")[0]\n \n today_div = soup.find(\"table\", id=\"cal\").find(\"td\", class_=\"today\")\n- print(today_div)\n \n # if other-month is to_month\n if (\n- \"other-month\" in today_div.attrs\n- and datetime.now().strftime(\"%B\") != to_month\n+ \"other-month\" in today_div.attrs[\"class\"]\n+ and datetime.now().strftime(\"%B\") == to_month\n+ ) or (\n+ \"main-month\" in today_div.attrs[\"class\"]\n+ and datetime.now().strftime(\"%B\") == from_month\n ):\n main_month, other_month = from_month, to_month\n main_year, other_year = from_year, to_year\n", "issue": "[Bug]: portenf_sa_gov_au reporting incorrect dates\n### I Have A Problem With:\r\n\r\nA specific source\r\n\r\n### What's Your Problem\r\n\r\nThe portenf_sa_gov_au sensor has been reporting incorrectly since it updated itself on 24 December 2023 (I can see this from HA logs). It appears that when there is 1 week or less left in the month \"main-month\" switches to the coming month and \"other-month\" becomes the current month.\r\n\r\nBecause of this, the integration reports the current collection next month and the next collections as in the past (and hides them).\r\n\r\nThe fix in #1110 by @5ila5 partly addresses the problem but it was not foreseeable to him that EOM would be treated this way. @5ila5 also noted that this might be an issue in that closed issue.\r\n\r\n### Source (if relevant)\r\n\r\nportenf_sa_gov_au\r\n\r\n### Logs\r\n\r\n```Shell\r\nOutput of test_sources.py:\r\n\r\nTesting source portenf_sa_gov_au ...\r\n found 8 entries for Broadview, Regency Road, 565\r\n 2024-01-26 : general-waste bin [mdi:trash-can]\r\n 2024-01-26 : recycling bin [mdi:recycle]\r\n 2023-12-02 : general-waste bin [mdi:trash-can]\r\n 2023-12-02 : organics bin [mdi:leaf]\r\n 2023-12-09 : general-waste bin [mdi:trash-can]\r\n 2023-12-09 : recycling bin [mdi:recycle]\r\n 2023-12-16 : general-waste bin [mdi:trash-can]\r\n 2023-12-16 : organics bin [mdi:leaf]\r\n found 8 entries for 48 Floriedale Rd\r\n 2024-01-26 : general-waste bin [mdi:trash-can]\r\n 2024-01-26 : recycling bin [mdi:recycle]\r\n 2023-12-02 : general-waste bin [mdi:trash-can]\r\n 2023-12-02 : organics bin [mdi:leaf]\r\n 2023-12-09 : general-waste bin [mdi:trash-can]\r\n 2023-12-09 : recycling bin [mdi:recycle]\r\n 2023-12-16 : general-waste bin [mdi:trash-can]\r\n 2023-12-16 : organics bin [mdi:leaf]\r\n found 8 entries for 24 Margaret Terrace\r\n 2024-01-28 : general-waste bin [mdi:trash-can]\r\n 2024-01-28 : organics bin [mdi:leaf]\r\n 2023-12-04 : general-waste bin [mdi:trash-can]\r\n 2023-12-04 : recycling bin [mdi:recycle]\r\n 2023-12-11 : general-waste bin [mdi:trash-can]\r\n 2023-12-11 : organics bin [mdi:leaf]\r\n 2023-12-18 : general-waste bin [mdi:trash-can]\r\n 2023-12-18 : recycling bin [mdi:recycle]\r\n found 8 entries for Addison Road 91 with unit\r\n 2024-01-28 : general-waste bin [mdi:trash-can]\r\n 2024-01-28 : organics bin [mdi:leaf]\r\n 2023-12-04 : general-waste bin [mdi:trash-can]\r\n 2023-12-04 : recycling bin [mdi:recycle]\r\n 2023-12-11 : general-waste bin [mdi:trash-can]\r\n 
2023-12-11 : organics bin [mdi:leaf]\r\n 2023-12-18 : general-waste bin [mdi:trash-can]\r\n 2023-12-18 : recycling bin [mdi:recycle]\r\n```\r\n\r\n\r\n### Relevant Configuration\r\n\r\n_No response_\r\n\r\n### Checklist Source Error\r\n\r\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\r\n- [X] Checked that the website of your service provider is still working\r\n- [X] Tested my attributes on the service provider website (if possible)\r\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\r\n\r\n### Checklist Sensor Error\r\n\r\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\r\n\r\n### Required\r\n\r\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\r\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "import logging\nimport re\nfrom datetime import datetime\n\nimport requests\nimport urllib3\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\n# With verify=True the POST fails due to a SSLCertVerificationError.\n# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:\n# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings\n# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl\n# These two lines areused to suppress the InsecureRequestWarning when using verify=False\nurllib3.disable_warnings()\n\nTITLE = \"Port Adelaide Enfield, South Australia\"\nDESCRIPTION = \"Source for City of Port Adelaide Enfield, South Australia.\"\nURL = \"https://ecouncil.portenf.sa.gov.au/\"\nTEST_CASES = {\n \"Broadview, Regency Road, 565 \": {\n \"suburb\": \"Broadview\",\n \"street\": \"Regency Road\",\n \"house_number\": 565,\n \"unit_number\": \"\",\n },\n \"48 Floriedale Rd \": {\n \"suburb\": \"Greenacres\",\n \"street\": \"Floriedale Rd\",\n \"house_number\": \"48\",\n },\n \"24 Margaret Terrace\": {\n \"suburb\": \"Rosewater\",\n \"street\": \"Margaret Terrace\",\n \"house_number\": \"24\",\n },\n \"Addison Road 91 with unit\": {\n \"suburb\": \"Rosewater\",\n \"street\": \"Addison Road\",\n \"house_number\": 91,\n \"unit_number\": 2,\n },\n}\n\nICON_MAP = {\n \"general-waste bin\": \"mdi:trash-can\",\n \"organics bin\": \"mdi:leaf\",\n \"recycling bin\": \"mdi:recycle\",\n}\n\nLOGGER = logging.getLogger(__name__)\n\nAPI_URL = \"https://ecouncil.portenf.sa.gov.au/public/propertywastedates/public.aspx\"\n\n\nclass Source:\n def __init__(\n self,\n suburb: str,\n street: str,\n house_number: str | int,\n unit_number: str | int = \"\",\n ):\n self._suburb: str = suburb\n self._street: str = street\n self._house_number: str = str(house_number)\n self._unit_number: str = str(unit_number)\n\n def __set_args(\n self, soup: BeautifulSoup, event_taget=None, additional: dict = {}\n ) -> dict:\n args = {\n \"ctl00$MainContent$txtSuburb\": self._suburb,\n \"ctl00$MainContent$txtStreetName\": self._street,\n \"ctl00$MainContent$txtHouseNumber\": self._house_number,\n \"ctl00$MainContent$txtUnitNumber\": self._unit_number,\n }\n if event_taget is not None:\n 
args[\"__EVENTTARGET\"] = event_taget\n\n for hidden_val in soup.find_all(\"input\", {\"type\": \"hidden\"}):\n args[hidden_val[\"name\"]] = hidden_val[\"value\"]\n\n for key, value in additional.items():\n args[key] = value\n return args\n\n def fetch(self):\n session = requests.Session()\n\n # get First page\n r = session.get(API_URL, verify=False)\n r.raise_for_status()\n\n # extractt arguments\n args = self.__set_args(\n BeautifulSoup(r.text, \"html.parser\"),\n event_taget=\"ctl00$MainContent$btnSearch\",\n )\n\n r = session.post(API_URL, data=args)\n r.raise_for_status()\n\n # get page to select an address\n soup = BeautifulSoup(r.text, \"html.parser\")\n\n selectable = soup.find_all(\"a\", {\"class\": \"anchor-button small\"}, text=\"Select\")\n\n if len(selectable) == 0:\n raise ValueError(\"No address found\")\n selected = selectable[0]\n\n # If multiple addresses are found, try to find the one that matches the input and warn if there are multiple or none matches\n if len(selectable) > 1:\n found = [\n \" \".join(\n [y.text for y in x.parent.parent.find_all(\"td\")[1].find_all(\"span\")]\n )\n for x in selectable\n ]\n using_index = 0\n\n match = False\n\n for index, entry in enumerate(found):\n entry = entry.lower().strip().replace(\" \", \"\")\n if (\n self._house_number.lower().strip().replace(\" \", \"\") in entry\n and self._street.lower().strip().replace(\" \", \"\") in entry\n and self._suburb.lower().strip().replace(\" \", \"\") in entry\n and self._unit_number.lower().strip().replace(\" \", \"\") in entry\n ):\n if match:\n LOGGER.warning(\n f\"Multiple addresses found, using first one \\nfound:{', '.join(found[:10])}{'...' if len(found) >= 10 else ''} \\nusing:{found[using_index]}\"\n )\n break\n using_index = index\n match = True\n if not match:\n LOGGER.warning(\n f\"no perfect address match found, using:{found[using_index]}\"\n )\n\n # request first address\n args = self.__set_args(\n soup,\n event_taget=\"ctl00$MainContent$gvPropertyResults$ctl02$btnSelect\",\n additional={selected[\"href\"].split(\"'\")[1]: \"\"},\n )\n r = session.post(API_URL, data=args)\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n cal_header = soup.find(\"th\", {\"class\": \"header-month\"}).find(\"span\").text\n\n from_month = cal_header.split(\"-\")[0].strip()\n to_month = cal_header.split(\"-\")[1].strip().split(\" \")[0]\n to_year = from_year = cal_header.split(\"-\")[1].strip().split(\" \")[1]\n # if main month contains a year, set it (maybe happens in december???)\n if len(from_month.split(\" \")) > 1:\n from_year = from_month.split(\" \")[1]\n from_month = from_month.split(\" \")[0]\n\n today_div = soup.find(\"table\", id=\"cal\").find(\"td\", class_=\"today\")\n print(today_div)\n\n # if other-month is to_month\n if (\n \"other-month\" in today_div.attrs\n and datetime.now().strftime(\"%B\") != to_month\n ):\n main_month, other_month = from_month, to_month\n main_year, other_year = from_year, to_year\n else: # if other-month is from_month\n main_month, other_month = to_month, from_month\n main_year, other_year = to_year, from_year\n\n entries = []\n\n calendar = soup.find(\"table\", {\"class\": \"collection-day-calendar\"})\n # Iterate over all days with pickups\n for pickup in calendar.find_all(\n \"div\", {\"class\": re.compile(r\"pickup|next-pickup\")}\n ):\n parent_td = pickup.parent\n month = (\n main_month if \"main-month\" in parent_td.attrs[\"class\"] else other_month\n )\n year = main_year if \"main-month\" in parent_td.attrs[\"class\"] else 
other_year\n day = parent_td.find(\"div\", {\"class\": \"daynumber\"}).text\n\n # Iterate over all pickup container types for this day\n for container in pickup.find_all(\"div\", {\"class\": \"bin-container\"}):\n container_type = \" \".join(container.find(\"div\").attrs[\"class\"])\n container_icon = ICON_MAP.get(container_type)\n\n date = datetime.strptime(f\"{year}-{month}-{day}\", \"%Y-%B-%d\").date()\n entries.append(\n Collection(date=date, t=container_type, icon=container_icon)\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py"}], "after_files": [{"content": "import logging\nimport re\nfrom datetime import datetime\n\nimport requests\nimport urllib3\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\n# With verify=True the POST fails due to a SSLCertVerificationError.\n# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:\n# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings\n# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl\n# These two lines areused to suppress the InsecureRequestWarning when using verify=False\nurllib3.disable_warnings()\n\nTITLE = \"Port Adelaide Enfield, South Australia\"\nDESCRIPTION = \"Source for City of Port Adelaide Enfield, South Australia.\"\nURL = \"https://ecouncil.portenf.sa.gov.au/\"\nTEST_CASES = {\n \"Broadview, Regency Road, 565 \": {\n \"suburb\": \"Broadview\",\n \"street\": \"Regency Road\",\n \"house_number\": 565,\n \"unit_number\": \"\",\n },\n \"48 Floriedale Rd \": {\n \"suburb\": \"Greenacres\",\n \"street\": \"Floriedale Rd\",\n \"house_number\": \"48\",\n },\n \"24 Margaret Terrace\": {\n \"suburb\": \"Rosewater\",\n \"street\": \"Margaret Terrace\",\n \"house_number\": \"24\",\n },\n \"Addison Road 91 with unit\": {\n \"suburb\": \"Rosewater\",\n \"street\": \"Addison Road\",\n \"house_number\": 91,\n \"unit_number\": 2,\n },\n}\n\nICON_MAP = {\n \"general-waste bin\": \"mdi:trash-can\",\n \"organics bin\": \"mdi:leaf\",\n \"recycling bin\": \"mdi:recycle\",\n}\n\nLOGGER = logging.getLogger(__name__)\n\nAPI_URL = \"https://ecouncil.portenf.sa.gov.au/public/propertywastedates/public.aspx\"\n\n\nclass Source:\n def __init__(\n self,\n suburb: str,\n street: str,\n house_number: str | int,\n unit_number: str | int = \"\",\n ):\n self._suburb: str = suburb\n self._street: str = street\n self._house_number: str = str(house_number)\n self._unit_number: str = str(unit_number)\n\n def __set_args(\n self, soup: BeautifulSoup, event_taget=None, additional: dict = {}\n ) -> dict:\n args = {\n \"ctl00$MainContent$txtSuburb\": self._suburb,\n \"ctl00$MainContent$txtStreetName\": self._street,\n \"ctl00$MainContent$txtHouseNumber\": self._house_number,\n \"ctl00$MainContent$txtUnitNumber\": self._unit_number,\n }\n if event_taget is not None:\n args[\"__EVENTTARGET\"] = event_taget\n\n for hidden_val in soup.find_all(\"input\", {\"type\": \"hidden\"}):\n args[hidden_val[\"name\"]] = hidden_val[\"value\"]\n\n for key, value in additional.items():\n args[key] = value\n return args\n\n def fetch(self):\n session = requests.Session()\n\n # get First page\n r = session.get(API_URL, verify=False)\n r.raise_for_status()\n\n # extractt arguments\n args = self.__set_args(\n BeautifulSoup(r.text, \"html.parser\"),\n event_taget=\"ctl00$MainContent$btnSearch\",\n )\n\n r = session.post(API_URL, data=args)\n 
r.raise_for_status()\n\n # get page to select an address\n soup = BeautifulSoup(r.text, \"html.parser\")\n\n selectable = soup.find_all(\"a\", {\"class\": \"anchor-button small\"}, text=\"Select\")\n\n if len(selectable) == 0:\n raise ValueError(\"No address found\")\n selected = selectable[0]\n\n # If multiple addresses are found, try to find the one that matches the input and warn if there are multiple or none matches\n if len(selectable) > 1:\n found = [\n \" \".join(\n [y.text for y in x.parent.parent.find_all(\"td\")[1].find_all(\"span\")]\n )\n for x in selectable\n ]\n using_index = 0\n\n match = False\n\n for index, entry in enumerate(found):\n entry = entry.lower().strip().replace(\" \", \"\")\n if (\n self._house_number.lower().strip().replace(\" \", \"\") in entry\n and self._street.lower().strip().replace(\" \", \"\") in entry\n and self._suburb.lower().strip().replace(\" \", \"\") in entry\n and self._unit_number.lower().strip().replace(\" \", \"\") in entry\n ):\n if match:\n LOGGER.warning(\n f\"Multiple addresses found, using first one \\nfound:{', '.join(found[:10])}{'...' if len(found) >= 10 else ''} \\nusing:{found[using_index]}\"\n )\n break\n using_index = index\n match = True\n if not match:\n LOGGER.warning(\n f\"no perfect address match found, using:{found[using_index]}\"\n )\n\n # request first address\n args = self.__set_args(\n soup,\n event_taget=\"ctl00$MainContent$gvPropertyResults$ctl02$btnSelect\",\n additional={selected[\"href\"].split(\"'\")[1]: \"\"},\n )\n r = session.post(API_URL, data=args)\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n cal_header = soup.find(\"th\", {\"class\": \"header-month\"}).find(\"span\").text\n\n from_month = cal_header.split(\"-\")[0].strip()\n to_month = cal_header.split(\"-\")[1].strip().split(\" \")[0]\n to_year = from_year = cal_header.split(\"-\")[1].strip().split(\" \")[1]\n # if main month contains a year, set it (maybe happens in december???)\n if len(from_month.split(\" \")) > 1:\n from_year = from_month.split(\" \")[1]\n from_month = from_month.split(\" \")[0]\n\n today_div = soup.find(\"table\", id=\"cal\").find(\"td\", class_=\"today\")\n\n # if other-month is to_month\n if (\n \"other-month\" in today_div.attrs[\"class\"]\n and datetime.now().strftime(\"%B\") == to_month\n ) or (\n \"main-month\" in today_div.attrs[\"class\"]\n and datetime.now().strftime(\"%B\") == from_month\n ):\n main_month, other_month = from_month, to_month\n main_year, other_year = from_year, to_year\n else: # if other-month is from_month\n main_month, other_month = to_month, from_month\n main_year, other_year = to_year, from_year\n\n entries = []\n\n calendar = soup.find(\"table\", {\"class\": \"collection-day-calendar\"})\n # Iterate over all days with pickups\n for pickup in calendar.find_all(\n \"div\", {\"class\": re.compile(r\"pickup|next-pickup\")}\n ):\n parent_td = pickup.parent\n month = (\n main_month if \"main-month\" in parent_td.attrs[\"class\"] else other_month\n )\n year = main_year if \"main-month\" in parent_td.attrs[\"class\"] else other_year\n day = parent_td.find(\"div\", {\"class\": \"daynumber\"}).text\n\n # Iterate over all pickup container types for this day\n for container in pickup.find_all(\"div\", {\"class\": \"bin-container\"}):\n container_type = \" \".join(container.find(\"div\").attrs[\"class\"])\n container_icon = ICON_MAP.get(container_type)\n\n date = datetime.strptime(f\"{year}-{month}-{day}\", \"%Y-%B-%d\").date()\n entries.append(\n Collection(date=date, t=container_type, 
icon=container_icon)\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py"}]} | 3,719 | 273 |
gh_patches_debug_4863 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-1210 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PDF Export URL pattern
### Describe the Bug
The web app calls `/REGION/LANG/wp-json/ig-mpdf/v1/pdf` to export a PDF which returns a 404. Our API currently uses `REGION/LANG/pdf`.
The normal mapping does not work, as we
### Steps to Reproduce
```shell
curl 'https://malte-test.tuerantuer.org/joerdenstorf/de/wp-json/ig-mpdf/v1/pdf'
```
### Expected Behavior
Map old URL pattern to new endpoint.
### Actual Behavior
404
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `integreat_cms/api/urls.py`
Content:
```
1 """
2 Expansion of API-Endpoints for the CMS
3 """
4 from django.urls import include, path, re_path
5
6 from .v3.events import events
7 from .v3.feedback import (
8 page_feedback,
9 search_result_feedback,
10 region_feedback,
11 offer_feedback,
12 offer_list_feedback,
13 event_list_feedback,
14 event_feedback,
15 poi_feedback,
16 map_feedback,
17 imprint_page_feedback,
18 legacy_feedback_endpoint,
19 )
20 from .v3.imprint import imprint
21 from .v3.languages import languages
22 from .v3.locations import locations
23 from .v3.pages import pages, children, parents, single_page
24 from .v3.pdf_export import pdf_export
25 from .v3.push_notifications import sent_push_notifications
26 from .v3.regions import regions, liveregions, hiddenregions
27 from .v3.offers import offers
28
29
30 #: The namespace for this URL config (see :attr:`django.urls.ResolverMatch.app_name`)
31 app_name = "api"
32
33 content_api_urlpatterns = [
34 path("pages/", pages, name="pages"),
35 path("locations/", locations, name="locations"),
36 path("events/", events, name="events"),
37 path("page/", single_page, name="single_page"),
38 path("post/", single_page, name="single_page"),
39 path("children/", children, name="children"),
40 path("parents/", parents, name="parents"),
41 path("pdf/", pdf_export, name="pdf_export"),
42 path(
43 "sent_push_notifications/",
44 sent_push_notifications,
45 name="sent_push_notifications",
46 ),
47 path("imprint/", imprint, name="imprint"),
48 path("disclaimer/", imprint, name="imprint"),
49 path("offers/", offers, name="offers"),
50 path("extras/", offers, name="offers"),
51 re_path(
52 r"^feedback/?$",
53 legacy_feedback_endpoint.legacy_feedback_endpoint,
54 name="legacy_feedback_endpoint",
55 ),
56 path(
57 "feedback/",
58 include(
59 [
60 re_path(
61 r"^categories/?$",
62 region_feedback.region_feedback,
63 name="region_feedback",
64 ),
65 re_path(r"^page/?$", page_feedback.page_feedback, name="page_feedback"),
66 re_path(r"^poi/?$", poi_feedback.poi_feedback, name="poi_feedback"),
67 re_path(
68 r"^event/?$", event_feedback.event_feedback, name="event_feedback"
69 ),
70 re_path(
71 r"^events/?$",
72 event_list_feedback.event_list_feedback,
73 name="event_list_feedback",
74 ),
75 re_path(
76 r"^imprint-page/?$",
77 imprint_page_feedback.imprint_page_feedback,
78 name="imprint_page_feedbacks",
79 ),
80 re_path(r"^map/?$", map_feedback.map_feedback, name="map_feedback"),
81 re_path(
82 r"^search/?$",
83 search_result_feedback.search_result_feedback,
84 name="search_result_feedback",
85 ),
86 re_path(
87 r"^offers/?$",
88 offer_list_feedback.offer_list_feedback,
89 name="offer_list_feedback",
90 ),
91 re_path(
92 r"^extras/?$",
93 offer_list_feedback.offer_list_feedback,
94 name="offer_list_feedback",
95 ),
96 re_path(
97 r"^offer/?$", offer_feedback.offer_feedback, name="offer_feedback"
98 ),
99 re_path(
100 r"^extra/?$", offer_feedback.offer_feedback, name="offer_feedback"
101 ),
102 ]
103 ),
104 ),
105 ]
106
107 region_api_urlpatterns = [
108 path("", regions, name="regions"),
109 path("live/", liveregions, name="regions_live"),
110 path("hidden/", hiddenregions, name="regions_hidden"),
111 ]
112
113 #: The url patterns of this module (see :doc:`topics/http/urls`)
114 urlpatterns = [
115 path("api/regions/", include(region_api_urlpatterns)),
116 path("wp-json/extensions/v3/sites/", include(region_api_urlpatterns)),
117 path(
118 "api/<slug:region_slug>/",
119 include(
120 [
121 path("languages/", languages, name="languages"),
122 path("offers/", offers, name="offers"),
123 path("extras/", offers, name="offers"),
124 path("<slug:language_slug>/", include(content_api_urlpatterns)),
125 ]
126 ),
127 ),
128 path(
129 "<slug:region_slug>/",
130 include(
131 [
132 path(
133 "de/wp-json/extensions/v3/languages/", languages, name="languages"
134 ),
135 path(
136 "<slug:language_slug>/wp-json/extensions/v3/",
137 include(content_api_urlpatterns),
138 ),
139 ]
140 ),
141 ),
142 ]
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/integreat_cms/api/urls.py b/integreat_cms/api/urls.py
--- a/integreat_cms/api/urls.py
+++ b/integreat_cms/api/urls.py
@@ -136,6 +136,11 @@
"<slug:language_slug>/wp-json/extensions/v3/",
include(content_api_urlpatterns),
),
+ path(
+ "<slug:language_slug>/wp-json/ig-mpdf/v1/pdf/",
+ pdf_export,
+ name="pdf_export",
+ ),
]
),
),
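The added pattern maps the legacy WordPress-style endpoint `/<region_slug>/<language_slug>/wp-json/ig-mpdf/v1/pdf/` onto the existing `pdf_export` view, so the URL the web app already calls stops returning 404. A minimal sketch of how the alias can be checked (the slugs are placeholders, and it assumes this URL module is mounted at the project root):

```python
# Hypothetical sanity check for the legacy PDF alias; slugs are placeholders.
from django.urls import resolve

match = resolve("/my-region/de/wp-json/ig-mpdf/v1/pdf/")
assert match.url_name == "pdf_export"
assert match.kwargs == {"region_slug": "my-region", "language_slug": "de"}
```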
| {"golden_diff": "diff --git a/integreat_cms/api/urls.py b/integreat_cms/api/urls.py\n--- a/integreat_cms/api/urls.py\n+++ b/integreat_cms/api/urls.py\n@@ -136,6 +136,11 @@\n \"<slug:language_slug>/wp-json/extensions/v3/\",\n include(content_api_urlpatterns),\n ),\n+ path(\n+ \"<slug:language_slug>/wp-json/ig-mpdf/v1/pdf/\",\n+ pdf_export,\n+ name=\"pdf_export\",\n+ ),\n ]\n ),\n ),\n", "issue": "PDF Export URL pattern\n### Describe the Bug\r\nThe web app calls `/REGION/LANG/wp-json/ig-mpdf/v1/pdf` to export a PDF which returns a 404. Our API currently uses `REGION/LANG/pdf`.\r\n\r\nThe normal mapping does not work, as we\r\n\r\n### Steps to Reproduce\r\n\r\n```shell\r\ncurl 'https://malte-test.tuerantuer.org/joerdenstorf/de/wp-json/ig-mpdf/v1/pdf'\r\n```\r\n\r\n### Expected Behavior\r\nMap old URL pattern to new endpoint.\r\n\r\n\r\n### Actual Behavior\r\n404\n", "before_files": [{"content": "\"\"\"\nExpansion of API-Endpoints for the CMS\n\"\"\"\nfrom django.urls import include, path, re_path\n\nfrom .v3.events import events\nfrom .v3.feedback import (\n page_feedback,\n search_result_feedback,\n region_feedback,\n offer_feedback,\n offer_list_feedback,\n event_list_feedback,\n event_feedback,\n poi_feedback,\n map_feedback,\n imprint_page_feedback,\n legacy_feedback_endpoint,\n)\nfrom .v3.imprint import imprint\nfrom .v3.languages import languages\nfrom .v3.locations import locations\nfrom .v3.pages import pages, children, parents, single_page\nfrom .v3.pdf_export import pdf_export\nfrom .v3.push_notifications import sent_push_notifications\nfrom .v3.regions import regions, liveregions, hiddenregions\nfrom .v3.offers import offers\n\n\n#: The namespace for this URL config (see :attr:`django.urls.ResolverMatch.app_name`)\napp_name = \"api\"\n\ncontent_api_urlpatterns = [\n path(\"pages/\", pages, name=\"pages\"),\n path(\"locations/\", locations, name=\"locations\"),\n path(\"events/\", events, name=\"events\"),\n path(\"page/\", single_page, name=\"single_page\"),\n path(\"post/\", single_page, name=\"single_page\"),\n path(\"children/\", children, name=\"children\"),\n path(\"parents/\", parents, name=\"parents\"),\n path(\"pdf/\", pdf_export, name=\"pdf_export\"),\n path(\n \"sent_push_notifications/\",\n sent_push_notifications,\n name=\"sent_push_notifications\",\n ),\n path(\"imprint/\", imprint, name=\"imprint\"),\n path(\"disclaimer/\", imprint, name=\"imprint\"),\n path(\"offers/\", offers, name=\"offers\"),\n path(\"extras/\", offers, name=\"offers\"),\n re_path(\n r\"^feedback/?$\",\n legacy_feedback_endpoint.legacy_feedback_endpoint,\n name=\"legacy_feedback_endpoint\",\n ),\n path(\n \"feedback/\",\n include(\n [\n re_path(\n r\"^categories/?$\",\n region_feedback.region_feedback,\n name=\"region_feedback\",\n ),\n re_path(r\"^page/?$\", page_feedback.page_feedback, name=\"page_feedback\"),\n re_path(r\"^poi/?$\", poi_feedback.poi_feedback, name=\"poi_feedback\"),\n re_path(\n r\"^event/?$\", event_feedback.event_feedback, name=\"event_feedback\"\n ),\n re_path(\n r\"^events/?$\",\n event_list_feedback.event_list_feedback,\n name=\"event_list_feedback\",\n ),\n re_path(\n r\"^imprint-page/?$\",\n imprint_page_feedback.imprint_page_feedback,\n name=\"imprint_page_feedbacks\",\n ),\n re_path(r\"^map/?$\", map_feedback.map_feedback, name=\"map_feedback\"),\n re_path(\n r\"^search/?$\",\n search_result_feedback.search_result_feedback,\n name=\"search_result_feedback\",\n ),\n re_path(\n r\"^offers/?$\",\n offer_list_feedback.offer_list_feedback,\n 
name=\"offer_list_feedback\",\n ),\n re_path(\n r\"^extras/?$\",\n offer_list_feedback.offer_list_feedback,\n name=\"offer_list_feedback\",\n ),\n re_path(\n r\"^offer/?$\", offer_feedback.offer_feedback, name=\"offer_feedback\"\n ),\n re_path(\n r\"^extra/?$\", offer_feedback.offer_feedback, name=\"offer_feedback\"\n ),\n ]\n ),\n ),\n]\n\nregion_api_urlpatterns = [\n path(\"\", regions, name=\"regions\"),\n path(\"live/\", liveregions, name=\"regions_live\"),\n path(\"hidden/\", hiddenregions, name=\"regions_hidden\"),\n]\n\n#: The url patterns of this module (see :doc:`topics/http/urls`)\nurlpatterns = [\n path(\"api/regions/\", include(region_api_urlpatterns)),\n path(\"wp-json/extensions/v3/sites/\", include(region_api_urlpatterns)),\n path(\n \"api/<slug:region_slug>/\",\n include(\n [\n path(\"languages/\", languages, name=\"languages\"),\n path(\"offers/\", offers, name=\"offers\"),\n path(\"extras/\", offers, name=\"offers\"),\n path(\"<slug:language_slug>/\", include(content_api_urlpatterns)),\n ]\n ),\n ),\n path(\n \"<slug:region_slug>/\",\n include(\n [\n path(\n \"de/wp-json/extensions/v3/languages/\", languages, name=\"languages\"\n ),\n path(\n \"<slug:language_slug>/wp-json/extensions/v3/\",\n include(content_api_urlpatterns),\n ),\n ]\n ),\n ),\n]\n", "path": "integreat_cms/api/urls.py"}], "after_files": [{"content": "\"\"\"\nExpansion of API-Endpoints for the CMS\n\"\"\"\nfrom django.urls import include, path, re_path\n\nfrom .v3.events import events\nfrom .v3.feedback import (\n page_feedback,\n search_result_feedback,\n region_feedback,\n offer_feedback,\n offer_list_feedback,\n event_list_feedback,\n event_feedback,\n poi_feedback,\n map_feedback,\n imprint_page_feedback,\n legacy_feedback_endpoint,\n)\nfrom .v3.imprint import imprint\nfrom .v3.languages import languages\nfrom .v3.locations import locations\nfrom .v3.pages import pages, children, parents, single_page\nfrom .v3.pdf_export import pdf_export\nfrom .v3.push_notifications import sent_push_notifications\nfrom .v3.regions import regions, liveregions, hiddenregions\nfrom .v3.offers import offers\n\n\n#: The namespace for this URL config (see :attr:`django.urls.ResolverMatch.app_name`)\napp_name = \"api\"\n\ncontent_api_urlpatterns = [\n path(\"pages/\", pages, name=\"pages\"),\n path(\"locations/\", locations, name=\"locations\"),\n path(\"events/\", events, name=\"events\"),\n path(\"page/\", single_page, name=\"single_page\"),\n path(\"post/\", single_page, name=\"single_page\"),\n path(\"children/\", children, name=\"children\"),\n path(\"parents/\", parents, name=\"parents\"),\n path(\"pdf/\", pdf_export, name=\"pdf_export\"),\n path(\n \"sent_push_notifications/\",\n sent_push_notifications,\n name=\"sent_push_notifications\",\n ),\n path(\"imprint/\", imprint, name=\"imprint\"),\n path(\"disclaimer/\", imprint, name=\"imprint\"),\n path(\"offers/\", offers, name=\"offers\"),\n path(\"extras/\", offers, name=\"offers\"),\n re_path(\n r\"^feedback/?$\",\n legacy_feedback_endpoint.legacy_feedback_endpoint,\n name=\"legacy_feedback_endpoint\",\n ),\n path(\n \"feedback/\",\n include(\n [\n re_path(\n r\"^categories/?$\",\n region_feedback.region_feedback,\n name=\"region_feedback\",\n ),\n re_path(r\"^page/?$\", page_feedback.page_feedback, name=\"page_feedback\"),\n re_path(r\"^poi/?$\", poi_feedback.poi_feedback, name=\"poi_feedback\"),\n re_path(\n r\"^event/?$\", event_feedback.event_feedback, name=\"event_feedback\"\n ),\n re_path(\n r\"^events/?$\",\n event_list_feedback.event_list_feedback,\n 
name=\"event_list_feedback\",\n ),\n re_path(\n r\"^imprint-page/?$\",\n imprint_page_feedback.imprint_page_feedback,\n name=\"imprint_page_feedbacks\",\n ),\n re_path(r\"^map/?$\", map_feedback.map_feedback, name=\"map_feedback\"),\n re_path(\n r\"^search/?$\",\n search_result_feedback.search_result_feedback,\n name=\"search_result_feedback\",\n ),\n re_path(\n r\"^offers/?$\",\n offer_list_feedback.offer_list_feedback,\n name=\"offer_list_feedback\",\n ),\n re_path(\n r\"^extras/?$\",\n offer_list_feedback.offer_list_feedback,\n name=\"offer_list_feedback\",\n ),\n re_path(\n r\"^offer/?$\", offer_feedback.offer_feedback, name=\"offer_feedback\"\n ),\n re_path(\n r\"^extra/?$\", offer_feedback.offer_feedback, name=\"offer_feedback\"\n ),\n ]\n ),\n ),\n]\n\nregion_api_urlpatterns = [\n path(\"\", regions, name=\"regions\"),\n path(\"live/\", liveregions, name=\"regions_live\"),\n path(\"hidden/\", hiddenregions, name=\"regions_hidden\"),\n]\n\n#: The url patterns of this module (see :doc:`topics/http/urls`)\nurlpatterns = [\n path(\"api/regions/\", include(region_api_urlpatterns)),\n path(\"wp-json/extensions/v3/sites/\", include(region_api_urlpatterns)),\n path(\n \"api/<slug:region_slug>/\",\n include(\n [\n path(\"languages/\", languages, name=\"languages\"),\n path(\"offers/\", offers, name=\"offers\"),\n path(\"extras/\", offers, name=\"offers\"),\n path(\"<slug:language_slug>/\", include(content_api_urlpatterns)),\n ]\n ),\n ),\n path(\n \"<slug:region_slug>/\",\n include(\n [\n path(\n \"de/wp-json/extensions/v3/languages/\", languages, name=\"languages\"\n ),\n path(\n \"<slug:language_slug>/wp-json/extensions/v3/\",\n include(content_api_urlpatterns),\n ),\n path(\n \"<slug:language_slug>/wp-json/ig-mpdf/v1/pdf/\",\n pdf_export,\n name=\"pdf_export\",\n ),\n ]\n ),\n ),\n]\n", "path": "integreat_cms/api/urls.py"}]} | 1,656 | 129 |
gh_patches_debug_29434 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-1515 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Resources from third party add-ons are not being included in compiled plone-legacy bundle
It seems that JS resources registered in Plone 5 using the old approach (`jsregistry.xml`) are not included in the final compilation: I installed an add-on and, even though I can see the JS resources listed in `default.js`, the source code is not present.
If I enable development mode, then I can see the source code included in `plone-legacy-compiled.js` and it's executed normally.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/resources/browser/combine.py`
Content:
```
1 from zExceptions import NotFound
2 from Acquisition import aq_base
3 from datetime import datetime
4 from plone.registry.interfaces import IRegistry
5 from plone.resource.file import FilesystemFile
6 from plone.resource.interfaces import IResourceDirectory
7 from Products.CMFPlone.interfaces import IBundleRegistry
8 from Products.CMFPlone.interfaces.resources import (
9 OVERRIDE_RESOURCE_DIRECTORY_NAME,
10 )
11 from StringIO import StringIO
12 from zope.component import getUtility
13 from zope.component import queryUtility
14
15 PRODUCTION_RESOURCE_DIRECTORY = "production"
16
17
18 def get_production_resource_directory():
19 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
20 if persistent_directory is None:
21 return ''
22 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
23 try:
24 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
25 except NotFound:
26 return "%s/++unique++1" % PRODUCTION_RESOURCE_DIRECTORY
27 timestamp = production_folder.readFile('timestamp.txt')
28 return "%s/++unique++%s" % (
29 PRODUCTION_RESOURCE_DIRECTORY, timestamp)
30
31
32 def get_resource(context, path):
33 resource = context.unrestrictedTraverse(path)
34 if isinstance(resource, FilesystemFile):
35 (directory, sep, filename) = path.rpartition('/')
36 return context.unrestrictedTraverse(directory).readFile(filename)
37 else:
38 if hasattr(aq_base(resource), 'GET'):
39 # for FileResource
40 return resource.GET()
41 else:
42 # any BrowserView
43 return resource()
44
45
46 def write_js(context, folder, meta_bundle):
47 registry = getUtility(IRegistry)
48 resources = []
49
50 # default resources
51 if meta_bundle == 'default' and registry.records.get(
52 'plone.resources/jquery.js'
53 ):
54 resources.append(get_resource(context,
55 registry.records['plone.resources/jquery.js'].value))
56 resources.append(get_resource(context,
57 registry.records['plone.resources.requirejs'].value))
58 resources.append(get_resource(context,
59 registry.records['plone.resources.configjs'].value))
60
61 # bundles
62 bundles = registry.collectionOfInterface(
63 IBundleRegistry, prefix="plone.bundles", check=False)
64 for bundle in bundles.values():
65 if bundle.merge_with == meta_bundle:
66 resources.append(get_resource(context, bundle.jscompilation))
67
68 fi = StringIO()
69 for script in resources:
70 fi.write(script + '\n')
71 folder.writeFile(meta_bundle + ".js", fi)
72
73
74 def write_css(context, folder, meta_bundle):
75 registry = getUtility(IRegistry)
76 resources = []
77
78 bundles = registry.collectionOfInterface(
79 IBundleRegistry, prefix="plone.bundles", check=False)
80 for bundle in bundles.values():
81 if bundle.merge_with == meta_bundle:
82 resources.append(get_resource(context, bundle.csscompilation))
83
84 fi = StringIO()
85 for script in resources:
86 fi.write(script + '\n')
87 folder.writeFile(meta_bundle + ".css", fi)
88
89
90 def combine_bundles(context):
91 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
92 if persistent_directory is None:
93 return
94 if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
95 persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
96 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
97 if PRODUCTION_RESOURCE_DIRECTORY not in container:
98 container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
99 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
100
101 # store timestamp
102 fi = StringIO()
103 fi.write(datetime.now().isoformat())
104 production_folder.writeFile("timestamp.txt", fi)
105
106 # generate new combined bundles
107 write_js(context, production_folder, 'default')
108 write_js(context, production_folder, 'logged-in')
109 write_css(context, production_folder, 'default')
110 write_css(context, production_folder, 'logged-in')
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py
--- a/Products/CMFPlone/resources/browser/combine.py
+++ b/Products/CMFPlone/resources/browser/combine.py
@@ -30,6 +30,14 @@
def get_resource(context, path):
+ if path.startswith('++plone++'):
+ # ++plone++ resources can be customized, we return their override
+ # value if any
+ overrides = get_override_directory(context)
+ filepath = path[9:]
+ if overrides.isFile(filepath):
+ return overrides.readFile(filepath)
+
resource = context.unrestrictedTraverse(path)
if isinstance(resource, FilesystemFile):
(directory, sep, filename) = path.rpartition('/')
@@ -87,13 +95,17 @@
folder.writeFile(meta_bundle + ".css", fi)
-def combine_bundles(context):
+def get_override_directory(context):
persistent_directory = queryUtility(IResourceDirectory, name="persistent")
if persistent_directory is None:
return
if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
- container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
+ return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
+
+
+def combine_bundles(context):
+ container = get_override_directory(context)
if PRODUCTION_RESOURCE_DIRECTORY not in container:
container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
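In short, `get_resource` now returns the customized copy of a `++plone++` resource from the persistent override directory when one exists, which is typically where legacy `jsregistry.xml` registrations end up, so those scripts are written into the compiled bundle as well. A hedged usage sketch for re-generating the bundles after installing such an add-on; `portal` stands in for the Plone site object:

```python
# Illustrative only: rebuild the combined legacy bundles after an add-on
# registers new resources, so plone-legacy picks up their source code.
from Products.CMFPlone.resources.browser.combine import combine_bundles

combine_bundles(portal)  # `portal` is assumed to be the Plone site root
```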
| {"golden_diff": "diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py\n--- a/Products/CMFPlone/resources/browser/combine.py\n+++ b/Products/CMFPlone/resources/browser/combine.py\n@@ -30,6 +30,14 @@\n \n \n def get_resource(context, path):\n+ if path.startswith('++plone++'):\n+ # ++plone++ resources can be customized, we return their override\n+ # value if any\n+ overrides = get_override_directory(context)\n+ filepath = path[9:]\n+ if overrides.isFile(filepath):\n+ return overrides.readFile(filepath)\n+\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n@@ -87,13 +95,17 @@\n folder.writeFile(meta_bundle + \".css\", fi)\n \n \n-def combine_bundles(context):\n+def get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n- container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n+ return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n+\n+\n+def combine_bundles(context):\n+ container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n", "issue": "Resources from third party add-ons are not being included in compiled plone-legacy bundle\nSeems JS resources registered in Plone 5 using old approach (`jsregistry.xml`) are not included in the final compilation: I installed an add-on and, even as I can see the JS resources listed in `default.js`, the source code is not present.\n\nIf I enable development mode, then I can see the source code included in `plone-legacy-compiled.js` and it's executed normally.\n\n", "before_files": [{"content": "from zExceptions import NotFound\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import (\n OVERRIDE_RESOURCE_DIRECTORY_NAME,\n)\nfrom StringIO import StringIO\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nPRODUCTION_RESOURCE_DIRECTORY = \"production\"\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return \"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n else:\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n return resource.GET()\n else:\n # any BrowserView\n return resource()\n\n\ndef write_js(context, folder, meta_bundle):\n 
registry = getUtility(IRegistry)\n resources = []\n\n # default resources\n if meta_bundle == 'default' and registry.records.get(\n 'plone.resources/jquery.js'\n ):\n resources.append(get_resource(context,\n registry.records['plone.resources/jquery.js'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.requirejs'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.configjs'].value))\n\n # bundles\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n resources.append(get_resource(context, bundle.jscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".js\", fi)\n\n\ndef write_css(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n resources.append(get_resource(context, bundle.csscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".css\", fi)\n\n\ndef combine_bundles(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile(\"timestamp.txt\", fi)\n\n # generate new combined bundles\n write_js(context, production_folder, 'default')\n write_js(context, production_folder, 'logged-in')\n write_css(context, production_folder, 'default')\n write_css(context, production_folder, 'logged-in')\n", "path": "Products/CMFPlone/resources/browser/combine.py"}], "after_files": [{"content": "from zExceptions import NotFound\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import (\n OVERRIDE_RESOURCE_DIRECTORY_NAME,\n)\nfrom StringIO import StringIO\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nPRODUCTION_RESOURCE_DIRECTORY = \"production\"\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return \"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n if path.startswith('++plone++'):\n # ++plone++ resources can be customized, we return their override\n # value if any\n 
overrides = get_override_directory(context)\n filepath = path[9:]\n if overrides.isFile(filepath):\n return overrides.readFile(filepath)\n\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n else:\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n return resource.GET()\n else:\n # any BrowserView\n return resource()\n\n\ndef write_js(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n # default resources\n if meta_bundle == 'default' and registry.records.get(\n 'plone.resources/jquery.js'\n ):\n resources.append(get_resource(context,\n registry.records['plone.resources/jquery.js'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.requirejs'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.configjs'].value))\n\n # bundles\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n resources.append(get_resource(context, bundle.jscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".js\", fi)\n\n\ndef write_css(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n resources.append(get_resource(context, bundle.csscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".css\", fi)\n\n\ndef get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n\n\ndef combine_bundles(context):\n container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile(\"timestamp.txt\", fi)\n\n # generate new combined bundles\n write_js(context, production_folder, 'default')\n write_js(context, production_folder, 'logged-in')\n write_css(context, production_folder, 'default')\n write_css(context, production_folder, 'logged-in')\n", "path": "Products/CMFPlone/resources/browser/combine.py"}]} | 1,383 | 338 |
gh_patches_debug_22011 | rasdani/github-patches | git_diff | docker__docker-py-1330 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add docker network IPAM options parameter
The IPAM configuration in this client is missing support for driver options.
The Docker Engine API supports an options field in the IPAM config; it was introduced in API v1.22.
```
POST /networks/create Now supports an options field in the IPAM config that provides options for custom IPAM plugins.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/types/networks.py`
Content:
```
1 from .. import errors
2 from ..utils import normalize_links, version_lt
3
4
5 class EndpointConfig(dict):
6 def __init__(self, version, aliases=None, links=None, ipv4_address=None,
7 ipv6_address=None, link_local_ips=None):
8 if version_lt(version, '1.22'):
9 raise errors.InvalidVersion(
10 'Endpoint config is not supported for API version < 1.22'
11 )
12
13 if aliases:
14 self["Aliases"] = aliases
15
16 if links:
17 self["Links"] = normalize_links(links)
18
19 ipam_config = {}
20 if ipv4_address:
21 ipam_config['IPv4Address'] = ipv4_address
22
23 if ipv6_address:
24 ipam_config['IPv6Address'] = ipv6_address
25
26 if link_local_ips is not None:
27 if version_lt(version, '1.24'):
28 raise errors.InvalidVersion(
29 'link_local_ips is not supported for API version < 1.24'
30 )
31 ipam_config['LinkLocalIPs'] = link_local_ips
32
33 if ipam_config:
34 self['IPAMConfig'] = ipam_config
35
36
37 class NetworkingConfig(dict):
38 def __init__(self, endpoints_config=None):
39 if endpoints_config:
40 self["EndpointsConfig"] = endpoints_config
41
42
43 class IPAMConfig(dict):
44 """
45 Create an IPAM (IP Address Management) config dictionary to be used with
46 :py:meth:`~docker.api.network.NetworkApiMixin.create_network`.
47
48 Args:
49
50 driver (str): The IPAM driver to use. Defaults to ``default``.
51 pool_configs (list): A list of pool configurations
52 (:py:class:`~docker.types.IPAMPool`). Defaults to empty list.
53
54 Example:
55
56 >>> ipam_config = docker.types.IPAMConfig(driver='default')
57 >>> network = client.create_network('network1', ipam=ipam_config)
58
59 """
60 def __init__(self, driver='default', pool_configs=None):
61 self.update({
62 'Driver': driver,
63 'Config': pool_configs or []
64 })
65
66
67 class IPAMPool(dict):
68 """
69 Create an IPAM pool config dictionary to be added to the
70 ``pool_configs`` parameter of
71 :py:class:`~docker.types.IPAMConfig`.
72
73 Args:
74
75 subnet (str): Custom subnet for this IPAM pool using the CIDR
76 notation. Defaults to ``None``.
77 iprange (str): Custom IP range for endpoints in this IPAM pool using
78 the CIDR notation. Defaults to ``None``.
79 gateway (str): Custom IP address for the pool's gateway.
80 aux_addresses (dict): A dictionary of ``key -> ip_address``
81 relationships specifying auxiliary addresses that need to be
82 allocated by the IPAM driver.
83
84 Example:
85
86 >>> ipam_pool = docker.types.IPAMPool(
87 subnet='124.42.0.0/16',
88 iprange='124.42.0.0/24',
89 gateway='124.42.0.254',
90 aux_addresses={
91 'reserved1': '124.42.1.1'
92 }
93 )
94 >>> ipam_config = docker.types.IPAMConfig(
95 pool_configs=[ipam_pool])
96 """
97 def __init__(self, subnet=None, iprange=None, gateway=None,
98 aux_addresses=None):
99 self.update({
100 'Subnet': subnet,
101 'IPRange': iprange,
102 'Gateway': gateway,
103 'AuxiliaryAddresses': aux_addresses
104 })
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/types/networks.py b/docker/types/networks.py
--- a/docker/types/networks.py
+++ b/docker/types/networks.py
@@ -50,6 +50,8 @@
driver (str): The IPAM driver to use. Defaults to ``default``.
pool_configs (list): A list of pool configurations
(:py:class:`~docker.types.IPAMPool`). Defaults to empty list.
+ options (dict): Driver options as a key-value dictionary.
+ Defaults to `None`.
Example:
@@ -57,12 +59,17 @@
>>> network = client.create_network('network1', ipam=ipam_config)
"""
- def __init__(self, driver='default', pool_configs=None):
+ def __init__(self, driver='default', pool_configs=None, options=None):
self.update({
'Driver': driver,
'Config': pool_configs or []
})
+ if options:
+ if not isinstance(options, dict):
+ raise TypeError('IPAMConfig options must be a dictionary')
+ self['Options'] = options
+
class IPAMPool(dict):
"""
| {"golden_diff": "diff --git a/docker/types/networks.py b/docker/types/networks.py\n--- a/docker/types/networks.py\n+++ b/docker/types/networks.py\n@@ -50,6 +50,8 @@\n driver (str): The IPAM driver to use. Defaults to ``default``.\n pool_configs (list): A list of pool configurations\n (:py:class:`~docker.types.IPAMPool`). Defaults to empty list.\n+ options (dict): Driver options as a key-value dictionary.\n+ Defaults to `None`.\n \n Example:\n \n@@ -57,12 +59,17 @@\n >>> network = client.create_network('network1', ipam=ipam_config)\n \n \"\"\"\n- def __init__(self, driver='default', pool_configs=None):\n+ def __init__(self, driver='default', pool_configs=None, options=None):\n self.update({\n 'Driver': driver,\n 'Config': pool_configs or []\n })\n \n+ if options:\n+ if not isinstance(options, dict):\n+ raise TypeError('IPAMConfig options must be a dictionary')\n+ self['Options'] = options\n+\n \n class IPAMPool(dict):\n \"\"\"\n", "issue": "Add docker network IPAM options parameter\nIPAM driver missing options\n\nsupports an options field in the IPAM config \nIt introduced in API v1.22.\n\n```\nPOST /networks/create Now supports an options field in the IPAM config that provides options for custom IPAM plugins.\n```\n\n", "before_files": [{"content": "from .. import errors\nfrom ..utils import normalize_links, version_lt\n\n\nclass EndpointConfig(dict):\n def __init__(self, version, aliases=None, links=None, ipv4_address=None,\n ipv6_address=None, link_local_ips=None):\n if version_lt(version, '1.22'):\n raise errors.InvalidVersion(\n 'Endpoint config is not supported for API version < 1.22'\n )\n\n if aliases:\n self[\"Aliases\"] = aliases\n\n if links:\n self[\"Links\"] = normalize_links(links)\n\n ipam_config = {}\n if ipv4_address:\n ipam_config['IPv4Address'] = ipv4_address\n\n if ipv6_address:\n ipam_config['IPv6Address'] = ipv6_address\n\n if link_local_ips is not None:\n if version_lt(version, '1.24'):\n raise errors.InvalidVersion(\n 'link_local_ips is not supported for API version < 1.24'\n )\n ipam_config['LinkLocalIPs'] = link_local_ips\n\n if ipam_config:\n self['IPAMConfig'] = ipam_config\n\n\nclass NetworkingConfig(dict):\n def __init__(self, endpoints_config=None):\n if endpoints_config:\n self[\"EndpointsConfig\"] = endpoints_config\n\n\nclass IPAMConfig(dict):\n \"\"\"\n Create an IPAM (IP Address Management) config dictionary to be used with\n :py:meth:`~docker.api.network.NetworkApiMixin.create_network`.\n\n Args:\n\n driver (str): The IPAM driver to use. Defaults to ``default``.\n pool_configs (list): A list of pool configurations\n (:py:class:`~docker.types.IPAMPool`). Defaults to empty list.\n\n Example:\n\n >>> ipam_config = docker.types.IPAMConfig(driver='default')\n >>> network = client.create_network('network1', ipam=ipam_config)\n\n \"\"\"\n def __init__(self, driver='default', pool_configs=None):\n self.update({\n 'Driver': driver,\n 'Config': pool_configs or []\n })\n\n\nclass IPAMPool(dict):\n \"\"\"\n Create an IPAM pool config dictionary to be added to the\n ``pool_configs`` parameter of\n :py:class:`~docker.types.IPAMConfig`.\n\n Args:\n\n subnet (str): Custom subnet for this IPAM pool using the CIDR\n notation. Defaults to ``None``.\n iprange (str): Custom IP range for endpoints in this IPAM pool using\n the CIDR notation. 
Defaults to ``None``.\n gateway (str): Custom IP address for the pool's gateway.\n aux_addresses (dict): A dictionary of ``key -> ip_address``\n relationships specifying auxiliary addresses that need to be\n allocated by the IPAM driver.\n\n Example:\n\n >>> ipam_pool = docker.types.IPAMPool(\n subnet='124.42.0.0/16',\n iprange='124.42.0.0/24',\n gateway='124.42.0.254',\n aux_addresses={\n 'reserved1': '124.42.1.1'\n }\n )\n >>> ipam_config = docker.types.IPAMConfig(\n pool_configs=[ipam_pool])\n \"\"\"\n def __init__(self, subnet=None, iprange=None, gateway=None,\n aux_addresses=None):\n self.update({\n 'Subnet': subnet,\n 'IPRange': iprange,\n 'Gateway': gateway,\n 'AuxiliaryAddresses': aux_addresses\n })\n", "path": "docker/types/networks.py"}], "after_files": [{"content": "from .. import errors\nfrom ..utils import normalize_links, version_lt\n\n\nclass EndpointConfig(dict):\n def __init__(self, version, aliases=None, links=None, ipv4_address=None,\n ipv6_address=None, link_local_ips=None):\n if version_lt(version, '1.22'):\n raise errors.InvalidVersion(\n 'Endpoint config is not supported for API version < 1.22'\n )\n\n if aliases:\n self[\"Aliases\"] = aliases\n\n if links:\n self[\"Links\"] = normalize_links(links)\n\n ipam_config = {}\n if ipv4_address:\n ipam_config['IPv4Address'] = ipv4_address\n\n if ipv6_address:\n ipam_config['IPv6Address'] = ipv6_address\n\n if link_local_ips is not None:\n if version_lt(version, '1.24'):\n raise errors.InvalidVersion(\n 'link_local_ips is not supported for API version < 1.24'\n )\n ipam_config['LinkLocalIPs'] = link_local_ips\n\n if ipam_config:\n self['IPAMConfig'] = ipam_config\n\n\nclass NetworkingConfig(dict):\n def __init__(self, endpoints_config=None):\n if endpoints_config:\n self[\"EndpointsConfig\"] = endpoints_config\n\n\nclass IPAMConfig(dict):\n \"\"\"\n Create an IPAM (IP Address Management) config dictionary to be used with\n :py:meth:`~docker.api.network.NetworkApiMixin.create_network`.\n\n Args:\n\n driver (str): The IPAM driver to use. Defaults to ``default``.\n pool_configs (list): A list of pool configurations\n (:py:class:`~docker.types.IPAMPool`). Defaults to empty list.\n options (dict): Driver options as a key-value dictionary.\n Defaults to `None`.\n\n Example:\n\n >>> ipam_config = docker.types.IPAMConfig(driver='default')\n >>> network = client.create_network('network1', ipam=ipam_config)\n\n \"\"\"\n def __init__(self, driver='default', pool_configs=None, options=None):\n self.update({\n 'Driver': driver,\n 'Config': pool_configs or []\n })\n\n if options:\n if not isinstance(options, dict):\n raise TypeError('IPAMConfig options must be a dictionary')\n self['Options'] = options\n\n\nclass IPAMPool(dict):\n \"\"\"\n Create an IPAM pool config dictionary to be added to the\n ``pool_configs`` parameter of\n :py:class:`~docker.types.IPAMConfig`.\n\n Args:\n\n subnet (str): Custom subnet for this IPAM pool using the CIDR\n notation. Defaults to ``None``.\n iprange (str): Custom IP range for endpoints in this IPAM pool using\n the CIDR notation. 
Defaults to ``None``.\n gateway (str): Custom IP address for the pool's gateway.\n aux_addresses (dict): A dictionary of ``key -> ip_address``\n relationships specifying auxiliary addresses that need to be\n allocated by the IPAM driver.\n\n Example:\n\n >>> ipam_pool = docker.types.IPAMPool(\n subnet='124.42.0.0/16',\n iprange='124.42.0.0/24',\n gateway='124.42.0.254',\n aux_addresses={\n 'reserved1': '124.42.1.1'\n }\n )\n >>> ipam_config = docker.types.IPAMConfig(\n pool_configs=[ipam_pool])\n \"\"\"\n def __init__(self, subnet=None, iprange=None, gateway=None,\n aux_addresses=None):\n self.update({\n 'Subnet': subnet,\n 'IPRange': iprange,\n 'Gateway': gateway,\n 'AuxiliaryAddresses': aux_addresses\n })\n", "path": "docker/types/networks.py"}]} | 1,302 | 254 |
gh_patches_debug_2452 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-2225 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
missing hidden import for skimage
When packaging an application that imports skimage.feature (and nothing else), the app would not run due to an ImportError on the "transform" module. This can be fixed by adding one item to the hiddenimports in the hook-skimage.transform.py file (bolded below):
> hiddenimports = ['skimage.draw.draw',
> 'skimage._shared.geometry',
> 'skimage.filters.rank.core_cy',
> **'skimage._shared.transform'**]
>
> datas = collect_data_files('skimage')
PyInstaller 3.2, Windows 7 64 bit, Python 2.7.12, Anaconda 4.1.1 distribution.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-skimage.transform.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2014-2016, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License with exception
5 # for distributing bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #-----------------------------------------------------------------------------
9 from PyInstaller.utils.hooks import collect_data_files
10
11 # Hook tested with scikit-image (skimage) 0.9.3 on Mac OS 10.9 and Windows 7
12 # 64-bit
13 hiddenimports = ['skimage.draw.draw',
14 'skimage._shared.geometry',
15 'skimage.filters.rank.core_cy']
16
17 datas = collect_data_files('skimage')
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PyInstaller/hooks/hook-skimage.transform.py b/PyInstaller/hooks/hook-skimage.transform.py
--- a/PyInstaller/hooks/hook-skimage.transform.py
+++ b/PyInstaller/hooks/hook-skimage.transform.py
@@ -12,6 +12,7 @@
# 64-bit
hiddenimports = ['skimage.draw.draw',
'skimage._shared.geometry',
+ 'skimage._shared.transform',
'skimage.filters.rank.core_cy']
datas = collect_data_files('skimage')
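Until a PyInstaller release ships the updated hook, the same module can be declared per project, either with `--hidden-import skimage._shared.transform` on the command line or in the spec file. A hedged excerpt of the latter; `app.py` stands in for the entry script and all other `Analysis` arguments are elided:

```python
# Excerpt of a PyInstaller .spec file; only the relevant argument is shown.
a = Analysis(
    ["app.py"],
    hiddenimports=["skimage._shared.transform"],
)
```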
| {"golden_diff": "diff --git a/PyInstaller/hooks/hook-skimage.transform.py b/PyInstaller/hooks/hook-skimage.transform.py\n--- a/PyInstaller/hooks/hook-skimage.transform.py\n+++ b/PyInstaller/hooks/hook-skimage.transform.py\n@@ -12,6 +12,7 @@\n # 64-bit\n hiddenimports = ['skimage.draw.draw',\n 'skimage._shared.geometry',\n+ 'skimage._shared.transform',\n 'skimage.filters.rank.core_cy']\n \n datas = collect_data_files('skimage')\n", "issue": "missing hidden import for skimage\nWhen packaging an application that imports skimage.feature (and nothing else), the app would not run due to an ImportError on the \"transform\" module. This can be fixed by adding one item to the hiddenimports in hook-skimage.transform.py file (bolded below):\n\n> hiddenimports = ['skimage.draw.draw',\n> 'skimage._shared.geometry',\n> 'skimage.filters.rank.core_cy',\n> **'skimage._shared.transform'**] \n> \n> datas = collect_data_files('skimage')\n\nPyInstaller 3.2, Windows 7 64 bit, Python 2.7.12, Anaconda 4.1.1 distribution.\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2014-2016, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\nfrom PyInstaller.utils.hooks import collect_data_files\n\n# Hook tested with scikit-image (skimage) 0.9.3 on Mac OS 10.9 and Windows 7\n# 64-bit\nhiddenimports = ['skimage.draw.draw',\n 'skimage._shared.geometry',\n 'skimage.filters.rank.core_cy']\n\ndatas = collect_data_files('skimage')\n", "path": "PyInstaller/hooks/hook-skimage.transform.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2014-2016, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\nfrom PyInstaller.utils.hooks import collect_data_files\n\n# Hook tested with scikit-image (skimage) 0.9.3 on Mac OS 10.9 and Windows 7\n# 64-bit\nhiddenimports = ['skimage.draw.draw',\n 'skimage._shared.geometry',\n 'skimage._shared.transform',\n 'skimage.filters.rank.core_cy']\n\ndatas = collect_data_files('skimage')\n", "path": "PyInstaller/hooks/hook-skimage.transform.py"}]} | 588 | 116 |
gh_patches_debug_35099 | rasdani/github-patches | git_diff | sanic-org__sanic-2774 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Headers from Exceptions
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
Headers set on Exception objects are not carried through by all renderers.
### Code snippet
```py
raise Unauthorized(
"Auth required.",
headers={"foo": "bar"},
)
```
### Expected Behavior
Response should have:
```
Foo: bar
```
### How do you run Sanic?
Sanic CLI
### Operating System
all
### Sanic Version
23.3
### Additional context
_No response_
--- END ISSUE ---
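The expected behavior above translates into a quick check. A minimal sketch using the `sanic-testing` test client; the route and assertions are illustrative rather than part of the original report:

```python
from sanic import Sanic
from sanic.exceptions import Unauthorized

app = Sanic("HeaderCheck")


@app.get("/secret")
async def secret(request):
    # Headers passed here should survive into the rendered error response.
    raise Unauthorized("Auth required.", headers={"foo": "bar"})


_, response = app.test_client.get("/secret")
assert response.status == 401
assert response.headers.get("foo") == "bar"  # fails for some renderers before the fix
```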
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sanic/errorpages.py`
Content:
```
1 """
2 Sanic `provides a pattern
3 <https://sanicframework.org/guide/best-practices/exceptions.html#using-sanic-exceptions>`_
4 for providing a response when an exception occurs. However, if you do no handle
5 an exception, it will provide a fallback. There are three fallback types:
6
7 - HTML - *default*
8 - Text
9 - JSON
10
11 Setting ``app.config.FALLBACK_ERROR_FORMAT = "auto"`` will enable a switch that
12 will attempt to provide an appropriate response format based upon the
13 request type.
14 """
15 from __future__ import annotations
16
17 import sys
18 import typing as t
19
20 from functools import partial
21 from traceback import extract_tb
22
23 from sanic.exceptions import BadRequest, SanicException
24 from sanic.helpers import STATUS_CODES
25 from sanic.log import deprecation, logger
26 from sanic.pages.error import ErrorPage
27 from sanic.response import html, json, text
28
29
30 dumps: t.Callable[..., str]
31 try:
32 from ujson import dumps
33
34 dumps = partial(dumps, escape_forward_slashes=False)
35 except ImportError: # noqa
36 from json import dumps
37
38 if t.TYPE_CHECKING:
39 from sanic import HTTPResponse, Request
40
41 DEFAULT_FORMAT = "auto"
42 FALLBACK_TEXT = """\
43 The application encountered an unexpected error and could not continue.\
44 """
45 FALLBACK_STATUS = 500
46 JSON = "application/json"
47
48
49 class BaseRenderer:
50 """
51 Base class that all renderers must inherit from.
52 """
53
54 dumps = staticmethod(dumps)
55
56 def __init__(self, request, exception, debug):
57 self.request = request
58 self.exception = exception
59 self.debug = debug
60
61 @property
62 def headers(self):
63 if isinstance(self.exception, SanicException):
64 return getattr(self.exception, "headers", {})
65 return {}
66
67 @property
68 def status(self):
69 if isinstance(self.exception, SanicException):
70 return getattr(self.exception, "status_code", FALLBACK_STATUS)
71 return FALLBACK_STATUS
72
73 @property
74 def text(self):
75 if self.debug or isinstance(self.exception, SanicException):
76 return str(self.exception)
77 return FALLBACK_TEXT
78
79 @property
80 def title(self):
81 status_text = STATUS_CODES.get(self.status, b"Error Occurred").decode()
82 return f"{self.status} — {status_text}"
83
84 def render(self) -> HTTPResponse:
85 """
86 Outputs the exception as a :class:`HTTPResponse`.
87
88 :return: The formatted exception
89 :rtype: str
90 """
91 output = (
92 self.full
93 if self.debug and not getattr(self.exception, "quiet", False)
94 else self.minimal
95 )
96 return output()
97
98 def minimal(self) -> HTTPResponse: # noqa
99 """
100 Provide a formatted message that is meant to not show any sensitive
101 data or details.
102 """
103 raise NotImplementedError
104
105 def full(self) -> HTTPResponse: # noqa
106 """
107 Provide a formatted message that has all details and is mean to be used
108 primarily for debugging and non-production environments.
109 """
110 raise NotImplementedError
111
112
113 class HTMLRenderer(BaseRenderer):
114 """
115 Render an exception as HTML.
116
117 The default fallback type.
118 """
119
120 def full(self) -> HTTPResponse:
121 page = ErrorPage(
122 debug=self.debug,
123 title=super().title,
124 text=super().text,
125 request=self.request,
126 exc=self.exception,
127 )
128 return html(page.render(), status=self.status, headers=self.headers)
129
130 def minimal(self) -> HTTPResponse:
131 return self.full()
132
133
134 class TextRenderer(BaseRenderer):
135 """
136 Render an exception as plain text.
137 """
138
139 OUTPUT_TEXT = "{title}\n{bar}\n{text}\n\n{body}"
140 SPACER = " "
141
142 def full(self) -> HTTPResponse:
143 return text(
144 self.OUTPUT_TEXT.format(
145 title=self.title,
146 text=self.text,
147 bar=("=" * len(self.title)),
148 body=self._generate_body(full=True),
149 ),
150 status=self.status,
151 )
152
153 def minimal(self) -> HTTPResponse:
154 return text(
155 self.OUTPUT_TEXT.format(
156 title=self.title,
157 text=self.text,
158 bar=("=" * len(self.title)),
159 body=self._generate_body(full=False),
160 ),
161 status=self.status,
162 headers=self.headers,
163 )
164
165 @property
166 def title(self):
167 return f"⚠️ {super().title}"
168
169 def _generate_body(self, *, full):
170 lines = []
171 if full:
172 _, exc_value, __ = sys.exc_info()
173 exceptions = []
174
175 lines += [
176 f"{self.exception.__class__.__name__}: {self.exception} while "
177 f"handling path {self.request.path}",
178 f"Traceback of {self.request.app.name} "
179 "(most recent call last):\n",
180 ]
181
182 while exc_value:
183 exceptions.append(self._format_exc(exc_value))
184 exc_value = exc_value.__cause__
185
186 lines += exceptions[::-1]
187
188 for attr, display in (("context", True), ("extra", bool(full))):
189 info = getattr(self.exception, attr, None)
190 if info and display:
191 lines += self._generate_object_display_list(info, attr)
192
193 return "\n".join(lines)
194
195 def _format_exc(self, exc):
196 frames = "\n\n".join(
197 [
198 f"{self.SPACER * 2}File {frame.filename}, "
199 f"line {frame.lineno}, in "
200 f"{frame.name}\n{self.SPACER * 2}{frame.line}"
201 for frame in extract_tb(exc.__traceback__)
202 ]
203 )
204 return f"{self.SPACER}{exc.__class__.__name__}: {exc}\n{frames}"
205
206 def _generate_object_display_list(self, obj, descriptor):
207 lines = [f"\n{descriptor.title()}"]
208 for key, value in obj.items():
209 display = self.dumps(value)
210 lines.append(f"{self.SPACER * 2}{key}: {display}")
211 return lines
212
213
214 class JSONRenderer(BaseRenderer):
215 """
216 Render an exception as JSON.
217 """
218
219 def full(self) -> HTTPResponse:
220 output = self._generate_output(full=True)
221 return json(output, status=self.status, dumps=self.dumps)
222
223 def minimal(self) -> HTTPResponse:
224 output = self._generate_output(full=False)
225 return json(output, status=self.status, dumps=self.dumps)
226
227 def _generate_output(self, *, full):
228 output = {
229 "description": self.title,
230 "status": self.status,
231 "message": self.text,
232 }
233
234 for attr, display in (("context", True), ("extra", bool(full))):
235 info = getattr(self.exception, attr, None)
236 if info and display:
237 output[attr] = info
238
239 if full:
240 _, exc_value, __ = sys.exc_info()
241 exceptions = []
242
243 while exc_value:
244 exceptions.append(
245 {
246 "type": exc_value.__class__.__name__,
247 "exception": str(exc_value),
248 "frames": [
249 {
250 "file": frame.filename,
251 "line": frame.lineno,
252 "name": frame.name,
253 "src": frame.line,
254 }
255 for frame in extract_tb(exc_value.__traceback__)
256 ],
257 }
258 )
259 exc_value = exc_value.__cause__
260
261 output["path"] = self.request.path
262 output["args"] = self.request.args
263 output["exceptions"] = exceptions[::-1]
264
265 return output
266
267 @property
268 def title(self):
269 return STATUS_CODES.get(self.status, b"Error Occurred").decode()
270
271
272 def escape(text):
273 """
274 Minimal HTML escaping, not for attribute values (unlike html.escape).
275 """
276 return f"{text}".replace("&", "&").replace("<", "<")
277
278
279 MIME_BY_CONFIG = {
280 "text": "text/plain",
281 "json": "application/json",
282 "html": "text/html",
283 }
284 CONFIG_BY_MIME = {v: k for k, v in MIME_BY_CONFIG.items()}
285 RENDERERS_BY_CONTENT_TYPE = {
286 "text/plain": TextRenderer,
287 "application/json": JSONRenderer,
288 "multipart/form-data": HTMLRenderer,
289 "text/html": HTMLRenderer,
290 }
291
292 # Handler source code is checked for which response types it returns with the
293 # route error_format="auto" (default) to determine which format to use.
294 RESPONSE_MAPPING = {
295 "json": "json",
296 "text": "text",
297 "html": "html",
298 "JSONResponse": "json",
299 "text/plain": "text",
300 "text/html": "html",
301 "application/json": "json",
302 }
303
304
305 def check_error_format(format):
306 if format not in MIME_BY_CONFIG and format != "auto":
307 raise SanicException(f"Unknown format: {format}")
308
309
310 def exception_response(
311 request: Request,
312 exception: Exception,
313 debug: bool,
314 fallback: str,
315 base: t.Type[BaseRenderer],
316 renderer: t.Type[t.Optional[BaseRenderer]] = None,
317 ) -> HTTPResponse:
318 """
319 Render a response for the default FALLBACK exception handler.
320 """
321 if not renderer:
322 mt = guess_mime(request, fallback)
323 renderer = RENDERERS_BY_CONTENT_TYPE.get(mt, base)
324
325 renderer = t.cast(t.Type[BaseRenderer], renderer)
326 return renderer(request, exception, debug).render()
327
328
329 def guess_mime(req: Request, fallback: str) -> str:
330 # Attempt to find a suitable MIME format for the response.
331 # Insertion-ordered map of formats["html"] = "source of that suggestion"
332 formats = {}
333 name = ""
334 # Route error_format (by magic from handler code if auto, the default)
335 if req.route:
336 name = req.route.name
337 f = req.route.extra.error_format
338 if f in MIME_BY_CONFIG:
339 formats[f] = name
340
341 if not formats and fallback in MIME_BY_CONFIG:
342 formats[fallback] = "FALLBACK_ERROR_FORMAT"
343
344 # If still not known, check for the request for clues of JSON
345 if not formats and fallback == "auto" and req.accept.match(JSON):
346 if JSON in req.accept: # Literally, not wildcard
347 formats["json"] = "request.accept"
348 elif JSON in req.headers.getone("content-type", ""):
349 formats["json"] = "content-type"
350 # DEPRECATION: Remove this block in 24.3
351 else:
352 c = None
353 try:
354 c = req.json
355 except BadRequest:
356 pass
357 if c:
358 formats["json"] = "request.json"
359 deprecation(
360 "Response type was determined by the JSON content of "
361 "the request. This behavior is deprecated and will be "
362 "removed in v24.3. Please specify the format either by\n"
363 f' error_format="json" on route {name}, by\n'
364 ' FALLBACK_ERROR_FORMAT = "json", or by adding header\n'
365 " accept: application/json to your requests.",
366 24.3,
367 )
368
369 # Any other supported formats
370 if fallback == "auto":
371 for k in MIME_BY_CONFIG:
372 if k not in formats:
373 formats[k] = "any"
374
375 mimes = [MIME_BY_CONFIG[k] for k in formats]
376 m = req.accept.match(*mimes)
377 if m:
378 format = CONFIG_BY_MIME[m.mime]
379 source = formats[format]
380 logger.debug(
381 f"The client accepts {m.header}, using '{format}' from {source}"
382 )
383 else:
384 logger.debug(f"No format found, the client accepts {req.accept!r}")
385 return m.mime
386
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sanic/errorpages.py b/sanic/errorpages.py
--- a/sanic/errorpages.py
+++ b/sanic/errorpages.py
@@ -92,8 +92,10 @@
self.full
if self.debug and not getattr(self.exception, "quiet", False)
else self.minimal
- )
- return output()
+ )()
+ output.status = self.status
+ output.headers.update(self.headers)
+ return output
def minimal(self) -> HTTPResponse: # noqa
"""
@@ -125,7 +127,7 @@
request=self.request,
exc=self.exception,
)
- return html(page.render(), status=self.status, headers=self.headers)
+ return html(page.render())
def minimal(self) -> HTTPResponse:
return self.full()
@@ -146,8 +148,7 @@
text=self.text,
bar=("=" * len(self.title)),
body=self._generate_body(full=True),
- ),
- status=self.status,
+ )
)
def minimal(self) -> HTTPResponse:
@@ -157,9 +158,7 @@
text=self.text,
bar=("=" * len(self.title)),
body=self._generate_body(full=False),
- ),
- status=self.status,
- headers=self.headers,
+ )
)
@property
@@ -218,11 +217,11 @@
def full(self) -> HTTPResponse:
output = self._generate_output(full=True)
- return json(output, status=self.status, dumps=self.dumps)
+ return json(output, dumps=self.dumps)
def minimal(self) -> HTTPResponse:
output = self._generate_output(full=False)
- return json(output, status=self.status, dumps=self.dumps)
+ return json(output, dumps=self.dumps)
def _generate_output(self, *, full):
output = {
| {"golden_diff": "diff --git a/sanic/errorpages.py b/sanic/errorpages.py\n--- a/sanic/errorpages.py\n+++ b/sanic/errorpages.py\n@@ -92,8 +92,10 @@\n self.full\n if self.debug and not getattr(self.exception, \"quiet\", False)\n else self.minimal\n- )\n- return output()\n+ )()\n+ output.status = self.status\n+ output.headers.update(self.headers)\n+ return output\n \n def minimal(self) -> HTTPResponse: # noqa\n \"\"\"\n@@ -125,7 +127,7 @@\n request=self.request,\n exc=self.exception,\n )\n- return html(page.render(), status=self.status, headers=self.headers)\n+ return html(page.render())\n \n def minimal(self) -> HTTPResponse:\n return self.full()\n@@ -146,8 +148,7 @@\n text=self.text,\n bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=True),\n- ),\n- status=self.status,\n+ )\n )\n \n def minimal(self) -> HTTPResponse:\n@@ -157,9 +158,7 @@\n text=self.text,\n bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=False),\n- ),\n- status=self.status,\n- headers=self.headers,\n+ )\n )\n \n @property\n@@ -218,11 +217,11 @@\n \n def full(self) -> HTTPResponse:\n output = self._generate_output(full=True)\n- return json(output, status=self.status, dumps=self.dumps)\n+ return json(output, dumps=self.dumps)\n \n def minimal(self) -> HTTPResponse:\n output = self._generate_output(full=False)\n- return json(output, status=self.status, dumps=self.dumps)\n+ return json(output, dumps=self.dumps)\n \n def _generate_output(self, *, full):\n output = {\n", "issue": "Headers from Exceptions\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Describe the bug\n\nHeaders set on Exception objects not carried through on all renderers\n\n### Code snippet\n\n```py\r\nraise Unauthorized(\r\n \"Auth required.\",\r\n headers={\"foo\": \"bar\"},\r\n)\r\n```\n\n### Expected Behavior\n\nResponse should have:\r\n\r\n```\r\nFoo: bar\r\n```\n\n### How do you run Sanic?\n\nSanic CLI\n\n### Operating System\n\nall\n\n### Sanic Version\n\n23.3\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "\"\"\"\nSanic `provides a pattern\n<https://sanicframework.org/guide/best-practices/exceptions.html#using-sanic-exceptions>`_\nfor providing a response when an exception occurs. However, if you do no handle\nan exception, it will provide a fallback. 
There are three fallback types:\n\n- HTML - *default*\n- Text\n- JSON\n\nSetting ``app.config.FALLBACK_ERROR_FORMAT = \"auto\"`` will enable a switch that\nwill attempt to provide an appropriate response format based upon the\nrequest type.\n\"\"\"\nfrom __future__ import annotations\n\nimport sys\nimport typing as t\n\nfrom functools import partial\nfrom traceback import extract_tb\n\nfrom sanic.exceptions import BadRequest, SanicException\nfrom sanic.helpers import STATUS_CODES\nfrom sanic.log import deprecation, logger\nfrom sanic.pages.error import ErrorPage\nfrom sanic.response import html, json, text\n\n\ndumps: t.Callable[..., str]\ntry:\n from ujson import dumps\n\n dumps = partial(dumps, escape_forward_slashes=False)\nexcept ImportError: # noqa\n from json import dumps\n\nif t.TYPE_CHECKING:\n from sanic import HTTPResponse, Request\n\nDEFAULT_FORMAT = \"auto\"\nFALLBACK_TEXT = \"\"\"\\\nThe application encountered an unexpected error and could not continue.\\\n\"\"\"\nFALLBACK_STATUS = 500\nJSON = \"application/json\"\n\n\nclass BaseRenderer:\n \"\"\"\n Base class that all renderers must inherit from.\n \"\"\"\n\n dumps = staticmethod(dumps)\n\n def __init__(self, request, exception, debug):\n self.request = request\n self.exception = exception\n self.debug = debug\n\n @property\n def headers(self):\n if isinstance(self.exception, SanicException):\n return getattr(self.exception, \"headers\", {})\n return {}\n\n @property\n def status(self):\n if isinstance(self.exception, SanicException):\n return getattr(self.exception, \"status_code\", FALLBACK_STATUS)\n return FALLBACK_STATUS\n\n @property\n def text(self):\n if self.debug or isinstance(self.exception, SanicException):\n return str(self.exception)\n return FALLBACK_TEXT\n\n @property\n def title(self):\n status_text = STATUS_CODES.get(self.status, b\"Error Occurred\").decode()\n return f\"{self.status} \u2014 {status_text}\"\n\n def render(self) -> HTTPResponse:\n \"\"\"\n Outputs the exception as a :class:`HTTPResponse`.\n\n :return: The formatted exception\n :rtype: str\n \"\"\"\n output = (\n self.full\n if self.debug and not getattr(self.exception, \"quiet\", False)\n else self.minimal\n )\n return output()\n\n def minimal(self) -> HTTPResponse: # noqa\n \"\"\"\n Provide a formatted message that is meant to not show any sensitive\n data or details.\n \"\"\"\n raise NotImplementedError\n\n def full(self) -> HTTPResponse: # noqa\n \"\"\"\n Provide a formatted message that has all details and is mean to be used\n primarily for debugging and non-production environments.\n \"\"\"\n raise NotImplementedError\n\n\nclass HTMLRenderer(BaseRenderer):\n \"\"\"\n Render an exception as HTML.\n\n The default fallback type.\n \"\"\"\n\n def full(self) -> HTTPResponse:\n page = ErrorPage(\n debug=self.debug,\n title=super().title,\n text=super().text,\n request=self.request,\n exc=self.exception,\n )\n return html(page.render(), status=self.status, headers=self.headers)\n\n def minimal(self) -> HTTPResponse:\n return self.full()\n\n\nclass TextRenderer(BaseRenderer):\n \"\"\"\n Render an exception as plain text.\n \"\"\"\n\n OUTPUT_TEXT = \"{title}\\n{bar}\\n{text}\\n\\n{body}\"\n SPACER = \" \"\n\n def full(self) -> HTTPResponse:\n return text(\n self.OUTPUT_TEXT.format(\n title=self.title,\n text=self.text,\n bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=True),\n ),\n status=self.status,\n )\n\n def minimal(self) -> HTTPResponse:\n return text(\n self.OUTPUT_TEXT.format(\n title=self.title,\n text=self.text,\n 
bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=False),\n ),\n status=self.status,\n headers=self.headers,\n )\n\n @property\n def title(self):\n return f\"\u26a0\ufe0f {super().title}\"\n\n def _generate_body(self, *, full):\n lines = []\n if full:\n _, exc_value, __ = sys.exc_info()\n exceptions = []\n\n lines += [\n f\"{self.exception.__class__.__name__}: {self.exception} while \"\n f\"handling path {self.request.path}\",\n f\"Traceback of {self.request.app.name} \"\n \"(most recent call last):\\n\",\n ]\n\n while exc_value:\n exceptions.append(self._format_exc(exc_value))\n exc_value = exc_value.__cause__\n\n lines += exceptions[::-1]\n\n for attr, display in ((\"context\", True), (\"extra\", bool(full))):\n info = getattr(self.exception, attr, None)\n if info and display:\n lines += self._generate_object_display_list(info, attr)\n\n return \"\\n\".join(lines)\n\n def _format_exc(self, exc):\n frames = \"\\n\\n\".join(\n [\n f\"{self.SPACER * 2}File {frame.filename}, \"\n f\"line {frame.lineno}, in \"\n f\"{frame.name}\\n{self.SPACER * 2}{frame.line}\"\n for frame in extract_tb(exc.__traceback__)\n ]\n )\n return f\"{self.SPACER}{exc.__class__.__name__}: {exc}\\n{frames}\"\n\n def _generate_object_display_list(self, obj, descriptor):\n lines = [f\"\\n{descriptor.title()}\"]\n for key, value in obj.items():\n display = self.dumps(value)\n lines.append(f\"{self.SPACER * 2}{key}: {display}\")\n return lines\n\n\nclass JSONRenderer(BaseRenderer):\n \"\"\"\n Render an exception as JSON.\n \"\"\"\n\n def full(self) -> HTTPResponse:\n output = self._generate_output(full=True)\n return json(output, status=self.status, dumps=self.dumps)\n\n def minimal(self) -> HTTPResponse:\n output = self._generate_output(full=False)\n return json(output, status=self.status, dumps=self.dumps)\n\n def _generate_output(self, *, full):\n output = {\n \"description\": self.title,\n \"status\": self.status,\n \"message\": self.text,\n }\n\n for attr, display in ((\"context\", True), (\"extra\", bool(full))):\n info = getattr(self.exception, attr, None)\n if info and display:\n output[attr] = info\n\n if full:\n _, exc_value, __ = sys.exc_info()\n exceptions = []\n\n while exc_value:\n exceptions.append(\n {\n \"type\": exc_value.__class__.__name__,\n \"exception\": str(exc_value),\n \"frames\": [\n {\n \"file\": frame.filename,\n \"line\": frame.lineno,\n \"name\": frame.name,\n \"src\": frame.line,\n }\n for frame in extract_tb(exc_value.__traceback__)\n ],\n }\n )\n exc_value = exc_value.__cause__\n\n output[\"path\"] = self.request.path\n output[\"args\"] = self.request.args\n output[\"exceptions\"] = exceptions[::-1]\n\n return output\n\n @property\n def title(self):\n return STATUS_CODES.get(self.status, b\"Error Occurred\").decode()\n\n\ndef escape(text):\n \"\"\"\n Minimal HTML escaping, not for attribute values (unlike html.escape).\n \"\"\"\n return f\"{text}\".replace(\"&\", \"&\").replace(\"<\", \"<\")\n\n\nMIME_BY_CONFIG = {\n \"text\": \"text/plain\",\n \"json\": \"application/json\",\n \"html\": \"text/html\",\n}\nCONFIG_BY_MIME = {v: k for k, v in MIME_BY_CONFIG.items()}\nRENDERERS_BY_CONTENT_TYPE = {\n \"text/plain\": TextRenderer,\n \"application/json\": JSONRenderer,\n \"multipart/form-data\": HTMLRenderer,\n \"text/html\": HTMLRenderer,\n}\n\n# Handler source code is checked for which response types it returns with the\n# route error_format=\"auto\" (default) to determine which format to use.\nRESPONSE_MAPPING = {\n \"json\": \"json\",\n \"text\": \"text\",\n \"html\": \"html\",\n 
\"JSONResponse\": \"json\",\n \"text/plain\": \"text\",\n \"text/html\": \"html\",\n \"application/json\": \"json\",\n}\n\n\ndef check_error_format(format):\n if format not in MIME_BY_CONFIG and format != \"auto\":\n raise SanicException(f\"Unknown format: {format}\")\n\n\ndef exception_response(\n request: Request,\n exception: Exception,\n debug: bool,\n fallback: str,\n base: t.Type[BaseRenderer],\n renderer: t.Type[t.Optional[BaseRenderer]] = None,\n) -> HTTPResponse:\n \"\"\"\n Render a response for the default FALLBACK exception handler.\n \"\"\"\n if not renderer:\n mt = guess_mime(request, fallback)\n renderer = RENDERERS_BY_CONTENT_TYPE.get(mt, base)\n\n renderer = t.cast(t.Type[BaseRenderer], renderer)\n return renderer(request, exception, debug).render()\n\n\ndef guess_mime(req: Request, fallback: str) -> str:\n # Attempt to find a suitable MIME format for the response.\n # Insertion-ordered map of formats[\"html\"] = \"source of that suggestion\"\n formats = {}\n name = \"\"\n # Route error_format (by magic from handler code if auto, the default)\n if req.route:\n name = req.route.name\n f = req.route.extra.error_format\n if f in MIME_BY_CONFIG:\n formats[f] = name\n\n if not formats and fallback in MIME_BY_CONFIG:\n formats[fallback] = \"FALLBACK_ERROR_FORMAT\"\n\n # If still not known, check for the request for clues of JSON\n if not formats and fallback == \"auto\" and req.accept.match(JSON):\n if JSON in req.accept: # Literally, not wildcard\n formats[\"json\"] = \"request.accept\"\n elif JSON in req.headers.getone(\"content-type\", \"\"):\n formats[\"json\"] = \"content-type\"\n # DEPRECATION: Remove this block in 24.3\n else:\n c = None\n try:\n c = req.json\n except BadRequest:\n pass\n if c:\n formats[\"json\"] = \"request.json\"\n deprecation(\n \"Response type was determined by the JSON content of \"\n \"the request. This behavior is deprecated and will be \"\n \"removed in v24.3. Please specify the format either by\\n\"\n f' error_format=\"json\" on route {name}, by\\n'\n ' FALLBACK_ERROR_FORMAT = \"json\", or by adding header\\n'\n \" accept: application/json to your requests.\",\n 24.3,\n )\n\n # Any other supported formats\n if fallback == \"auto\":\n for k in MIME_BY_CONFIG:\n if k not in formats:\n formats[k] = \"any\"\n\n mimes = [MIME_BY_CONFIG[k] for k in formats]\n m = req.accept.match(*mimes)\n if m:\n format = CONFIG_BY_MIME[m.mime]\n source = formats[format]\n logger.debug(\n f\"The client accepts {m.header}, using '{format}' from {source}\"\n )\n else:\n logger.debug(f\"No format found, the client accepts {req.accept!r}\")\n return m.mime\n", "path": "sanic/errorpages.py"}], "after_files": [{"content": "\"\"\"\nSanic `provides a pattern\n<https://sanicframework.org/guide/best-practices/exceptions.html#using-sanic-exceptions>`_\nfor providing a response when an exception occurs. However, if you do no handle\nan exception, it will provide a fallback. 
There are three fallback types:\n\n- HTML - *default*\n- Text\n- JSON\n\nSetting ``app.config.FALLBACK_ERROR_FORMAT = \"auto\"`` will enable a switch that\nwill attempt to provide an appropriate response format based upon the\nrequest type.\n\"\"\"\nfrom __future__ import annotations\n\nimport sys\nimport typing as t\n\nfrom functools import partial\nfrom traceback import extract_tb\n\nfrom sanic.exceptions import BadRequest, SanicException\nfrom sanic.helpers import STATUS_CODES\nfrom sanic.log import deprecation, logger\nfrom sanic.pages.error import ErrorPage\nfrom sanic.response import html, json, text\n\n\ndumps: t.Callable[..., str]\ntry:\n from ujson import dumps\n\n dumps = partial(dumps, escape_forward_slashes=False)\nexcept ImportError: # noqa\n from json import dumps\n\nif t.TYPE_CHECKING:\n from sanic import HTTPResponse, Request\n\nDEFAULT_FORMAT = \"auto\"\nFALLBACK_TEXT = \"\"\"\\\nThe application encountered an unexpected error and could not continue.\\\n\"\"\"\nFALLBACK_STATUS = 500\nJSON = \"application/json\"\n\n\nclass BaseRenderer:\n \"\"\"\n Base class that all renderers must inherit from.\n \"\"\"\n\n dumps = staticmethod(dumps)\n\n def __init__(self, request, exception, debug):\n self.request = request\n self.exception = exception\n self.debug = debug\n\n @property\n def headers(self):\n if isinstance(self.exception, SanicException):\n return getattr(self.exception, \"headers\", {})\n return {}\n\n @property\n def status(self):\n if isinstance(self.exception, SanicException):\n return getattr(self.exception, \"status_code\", FALLBACK_STATUS)\n return FALLBACK_STATUS\n\n @property\n def text(self):\n if self.debug or isinstance(self.exception, SanicException):\n return str(self.exception)\n return FALLBACK_TEXT\n\n @property\n def title(self):\n status_text = STATUS_CODES.get(self.status, b\"Error Occurred\").decode()\n return f\"{self.status} \u2014 {status_text}\"\n\n def render(self) -> HTTPResponse:\n \"\"\"\n Outputs the exception as a :class:`HTTPResponse`.\n\n :return: The formatted exception\n :rtype: str\n \"\"\"\n output = (\n self.full\n if self.debug and not getattr(self.exception, \"quiet\", False)\n else self.minimal\n )()\n output.status = self.status\n output.headers.update(self.headers)\n return output\n\n def minimal(self) -> HTTPResponse: # noqa\n \"\"\"\n Provide a formatted message that is meant to not show any sensitive\n data or details.\n \"\"\"\n raise NotImplementedError\n\n def full(self) -> HTTPResponse: # noqa\n \"\"\"\n Provide a formatted message that has all details and is mean to be used\n primarily for debugging and non-production environments.\n \"\"\"\n raise NotImplementedError\n\n\nclass HTMLRenderer(BaseRenderer):\n \"\"\"\n Render an exception as HTML.\n\n The default fallback type.\n \"\"\"\n\n def full(self) -> HTTPResponse:\n page = ErrorPage(\n debug=self.debug,\n title=super().title,\n text=super().text,\n request=self.request,\n exc=self.exception,\n )\n return html(page.render())\n\n def minimal(self) -> HTTPResponse:\n return self.full()\n\n\nclass TextRenderer(BaseRenderer):\n \"\"\"\n Render an exception as plain text.\n \"\"\"\n\n OUTPUT_TEXT = \"{title}\\n{bar}\\n{text}\\n\\n{body}\"\n SPACER = \" \"\n\n def full(self) -> HTTPResponse:\n return text(\n self.OUTPUT_TEXT.format(\n title=self.title,\n text=self.text,\n bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=True),\n )\n )\n\n def minimal(self) -> HTTPResponse:\n return text(\n self.OUTPUT_TEXT.format(\n title=self.title,\n text=self.text,\n 
bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=False),\n )\n )\n\n @property\n def title(self):\n return f\"\u26a0\ufe0f {super().title}\"\n\n def _generate_body(self, *, full):\n lines = []\n if full:\n _, exc_value, __ = sys.exc_info()\n exceptions = []\n\n lines += [\n f\"{self.exception.__class__.__name__}: {self.exception} while \"\n f\"handling path {self.request.path}\",\n f\"Traceback of {self.request.app.name} \"\n \"(most recent call last):\\n\",\n ]\n\n while exc_value:\n exceptions.append(self._format_exc(exc_value))\n exc_value = exc_value.__cause__\n\n lines += exceptions[::-1]\n\n for attr, display in ((\"context\", True), (\"extra\", bool(full))):\n info = getattr(self.exception, attr, None)\n if info and display:\n lines += self._generate_object_display_list(info, attr)\n\n return \"\\n\".join(lines)\n\n def _format_exc(self, exc):\n frames = \"\\n\\n\".join(\n [\n f\"{self.SPACER * 2}File {frame.filename}, \"\n f\"line {frame.lineno}, in \"\n f\"{frame.name}\\n{self.SPACER * 2}{frame.line}\"\n for frame in extract_tb(exc.__traceback__)\n ]\n )\n return f\"{self.SPACER}{exc.__class__.__name__}: {exc}\\n{frames}\"\n\n def _generate_object_display_list(self, obj, descriptor):\n lines = [f\"\\n{descriptor.title()}\"]\n for key, value in obj.items():\n display = self.dumps(value)\n lines.append(f\"{self.SPACER * 2}{key}: {display}\")\n return lines\n\n\nclass JSONRenderer(BaseRenderer):\n \"\"\"\n Render an exception as JSON.\n \"\"\"\n\n def full(self) -> HTTPResponse:\n output = self._generate_output(full=True)\n return json(output, dumps=self.dumps)\n\n def minimal(self) -> HTTPResponse:\n output = self._generate_output(full=False)\n return json(output, dumps=self.dumps)\n\n def _generate_output(self, *, full):\n output = {\n \"description\": self.title,\n \"status\": self.status,\n \"message\": self.text,\n }\n\n for attr, display in ((\"context\", True), (\"extra\", bool(full))):\n info = getattr(self.exception, attr, None)\n if info and display:\n output[attr] = info\n\n if full:\n _, exc_value, __ = sys.exc_info()\n exceptions = []\n\n while exc_value:\n exceptions.append(\n {\n \"type\": exc_value.__class__.__name__,\n \"exception\": str(exc_value),\n \"frames\": [\n {\n \"file\": frame.filename,\n \"line\": frame.lineno,\n \"name\": frame.name,\n \"src\": frame.line,\n }\n for frame in extract_tb(exc_value.__traceback__)\n ],\n }\n )\n exc_value = exc_value.__cause__\n\n output[\"path\"] = self.request.path\n output[\"args\"] = self.request.args\n output[\"exceptions\"] = exceptions[::-1]\n\n return output\n\n @property\n def title(self):\n return STATUS_CODES.get(self.status, b\"Error Occurred\").decode()\n\n\ndef escape(text):\n \"\"\"\n Minimal HTML escaping, not for attribute values (unlike html.escape).\n \"\"\"\n return f\"{text}\".replace(\"&\", \"&\").replace(\"<\", \"<\")\n\n\nMIME_BY_CONFIG = {\n \"text\": \"text/plain\",\n \"json\": \"application/json\",\n \"html\": \"text/html\",\n}\nCONFIG_BY_MIME = {v: k for k, v in MIME_BY_CONFIG.items()}\nRENDERERS_BY_CONTENT_TYPE = {\n \"text/plain\": TextRenderer,\n \"application/json\": JSONRenderer,\n \"multipart/form-data\": HTMLRenderer,\n \"text/html\": HTMLRenderer,\n}\n\n# Handler source code is checked for which response types it returns with the\n# route error_format=\"auto\" (default) to determine which format to use.\nRESPONSE_MAPPING = {\n \"json\": \"json\",\n \"text\": \"text\",\n \"html\": \"html\",\n \"JSONResponse\": \"json\",\n \"text/plain\": \"text\",\n \"text/html\": \"html\",\n 
\"application/json\": \"json\",\n}\n\n\ndef check_error_format(format):\n if format not in MIME_BY_CONFIG and format != \"auto\":\n raise SanicException(f\"Unknown format: {format}\")\n\n\ndef exception_response(\n request: Request,\n exception: Exception,\n debug: bool,\n fallback: str,\n base: t.Type[BaseRenderer],\n renderer: t.Type[t.Optional[BaseRenderer]] = None,\n) -> HTTPResponse:\n \"\"\"\n Render a response for the default FALLBACK exception handler.\n \"\"\"\n if not renderer:\n mt = guess_mime(request, fallback)\n renderer = RENDERERS_BY_CONTENT_TYPE.get(mt, base)\n\n renderer = t.cast(t.Type[BaseRenderer], renderer)\n return renderer(request, exception, debug).render()\n\n\ndef guess_mime(req: Request, fallback: str) -> str:\n # Attempt to find a suitable MIME format for the response.\n # Insertion-ordered map of formats[\"html\"] = \"source of that suggestion\"\n formats = {}\n name = \"\"\n # Route error_format (by magic from handler code if auto, the default)\n if req.route:\n name = req.route.name\n f = req.route.extra.error_format\n if f in MIME_BY_CONFIG:\n formats[f] = name\n\n if not formats and fallback in MIME_BY_CONFIG:\n formats[fallback] = \"FALLBACK_ERROR_FORMAT\"\n\n # If still not known, check for the request for clues of JSON\n if not formats and fallback == \"auto\" and req.accept.match(JSON):\n if JSON in req.accept: # Literally, not wildcard\n formats[\"json\"] = \"request.accept\"\n elif JSON in req.headers.getone(\"content-type\", \"\"):\n formats[\"json\"] = \"content-type\"\n # DEPRECATION: Remove this block in 24.3\n else:\n c = None\n try:\n c = req.json\n except BadRequest:\n pass\n if c:\n formats[\"json\"] = \"request.json\"\n deprecation(\n \"Response type was determined by the JSON content of \"\n \"the request. This behavior is deprecated and will be \"\n \"removed in v24.3. Please specify the format either by\\n\"\n f' error_format=\"json\" on route {name}, by\\n'\n ' FALLBACK_ERROR_FORMAT = \"json\", or by adding header\\n'\n \" accept: application/json to your requests.\",\n 24.3,\n )\n\n # Any other supported formats\n if fallback == \"auto\":\n for k in MIME_BY_CONFIG:\n if k not in formats:\n formats[k] = \"any\"\n\n mimes = [MIME_BY_CONFIG[k] for k in formats]\n m = req.accept.match(*mimes)\n if m:\n format = CONFIG_BY_MIME[m.mime]\n source = formats[format]\n logger.debug(\n f\"The client accepts {m.header}, using '{format}' from {source}\"\n )\n else:\n logger.debug(f\"No format found, the client accepts {req.accept!r}\")\n return m.mime\n", "path": "sanic/errorpages.py"}]} | 3,973 | 423 |
gh_patches_debug_7034 | rasdani/github-patches | git_diff | aws__aws-cli-5019 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for PyYAML 5.3
Closes: https://github.com/aws/aws-cli/issues/4828
Signed-off-by: Igor Raits <[email protected]>
*Issue #, if available:*
*Description of changes:*
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
--- END ISSUE ---
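
For orientation, the change boils down to version-conditional dependency pinning in `setup.py`. The snippet below is a minimal, standalone sketch of that pattern; the package name is a placeholder, and the PyYAML/colorama bounds simply mirror the ones that appear in the patch further down this entry rather than being taken from the real awscli build.

```python
# Illustrative sketch of Python-version-conditional dependency pins.
# The package name is a dummy; the bounds mirror the proposed patch.
import sys

from setuptools import setup

install_requires = [
    's3transfer>=0.3.0,<0.4.0',  # unconditional pins stay in the base list
]

if sys.version_info[:2] == (3, 4):
    # Python 3.4 keeps the stricter upper bounds.
    install_requires.append('PyYAML>=3.10,<5.3')
    install_requires.append('colorama>=0.2.5,<0.4.2')
else:
    # Newer interpreters are allowed to pick up PyYAML 5.3.
    install_requires.append('PyYAML>=3.10,<5.4')
    install_requires.append('colorama>=0.2.5,<0.4.4')

setup(name='example-package',
      version='0.0.0',
      install_requires=install_requires)
```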
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import codecs
3 import os.path
4 import re
5 import sys
6
7 from setuptools import setup, find_packages
8
9
10 here = os.path.abspath(os.path.dirname(__file__))
11
12
13 def read(*parts):
14 return codecs.open(os.path.join(here, *parts), 'r').read()
15
16
17 def find_version(*file_paths):
18 version_file = read(*file_paths)
19 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
20 version_file, re.M)
21 if version_match:
22 return version_match.group(1)
23 raise RuntimeError("Unable to find version string.")
24
25
26 install_requires = [
27 'botocore==1.15.10',
28 'docutils>=0.10,<0.16',
29 'rsa>=3.1.2,<=3.5.0',
30 's3transfer>=0.3.0,<0.4.0',
31 'PyYAML>=3.10,<5.3',
32 ]
33
34
35 if sys.version_info[:2] == (3, 4):
36 install_requires.append('colorama>=0.2.5,<0.4.2')
37 else:
38 install_requires.append('colorama>=0.2.5,<0.4.4')
39
40
41 setup_options = dict(
42 name='awscli',
43 version=find_version("awscli", "__init__.py"),
44 description='Universal Command Line Environment for AWS.',
45 long_description=read('README.rst'),
46 author='Amazon Web Services',
47 url='http://aws.amazon.com/cli/',
48 scripts=['bin/aws', 'bin/aws.cmd',
49 'bin/aws_completer', 'bin/aws_zsh_completer.sh',
50 'bin/aws_bash_completer'],
51 packages=find_packages(exclude=['tests*']),
52 package_data={'awscli': ['data/*.json', 'examples/*/*.rst',
53 'examples/*/*.txt', 'examples/*/*/*.txt',
54 'examples/*/*/*.rst', 'topics/*.rst',
55 'topics/*.json']},
56 install_requires=install_requires,
57 extras_require={},
58 license="Apache License 2.0",
59 classifiers=[
60 'Development Status :: 5 - Production/Stable',
61 'Intended Audience :: Developers',
62 'Intended Audience :: System Administrators',
63 'Natural Language :: English',
64 'License :: OSI Approved :: Apache Software License',
65 'Programming Language :: Python',
66 'Programming Language :: Python :: 2',
67 'Programming Language :: Python :: 2.7',
68 'Programming Language :: Python :: 3',
69 'Programming Language :: Python :: 3.4',
70 'Programming Language :: Python :: 3.5',
71 'Programming Language :: Python :: 3.6',
72 'Programming Language :: Python :: 3.7',
73 'Programming Language :: Python :: 3.8',
74 ],
75 )
76
77
78 if 'py2exe' in sys.argv:
79 # This will actually give us a py2exe command.
80 import py2exe
81 # And we have some py2exe specific options.
82 setup_options['options'] = {
83 'py2exe': {
84 'optimize': 0,
85 'skip_archive': True,
86 'dll_excludes': ['crypt32.dll'],
87 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser',
88 'awscli', 'ConfigParser', 'xml.etree', 'pipes'],
89 }
90 }
91 setup_options['console'] = ['bin/aws']
92
93
94 setup(**setup_options)
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -28,13 +28,14 @@
'docutils>=0.10,<0.16',
'rsa>=3.1.2,<=3.5.0',
's3transfer>=0.3.0,<0.4.0',
- 'PyYAML>=3.10,<5.3',
]
if sys.version_info[:2] == (3, 4):
+ install_requires.append('PyYAML>=3.10,<5.3')
install_requires.append('colorama>=0.2.5,<0.4.2')
else:
+ install_requires.append('PyYAML>=3.10,<5.4')
install_requires.append('colorama>=0.2.5,<0.4.4')
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -28,13 +28,14 @@\n 'docutils>=0.10,<0.16',\n 'rsa>=3.1.2,<=3.5.0',\n 's3transfer>=0.3.0,<0.4.0',\n- 'PyYAML>=3.10,<5.3',\n ]\n \n \n if sys.version_info[:2] == (3, 4):\n+ install_requires.append('PyYAML>=3.10,<5.3')\n install_requires.append('colorama>=0.2.5,<0.4.2')\n else:\n+ install_requires.append('PyYAML>=3.10,<5.4')\n install_requires.append('colorama>=0.2.5,<0.4.4')\n", "issue": "Add support for PyYAML 5.3\nCloses: https://github.com/aws/aws-cli/issues/4828\r\nSigned-off-by: Igor Raits <[email protected]>\r\n\r\n*Issue #, if available:*\r\n\r\n*Description of changes:*\r\n\r\n\r\nBy submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport codecs\nimport os.path\nimport re\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n return codecs.open(os.path.join(here, *parts), 'r').read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\ninstall_requires = [\n 'botocore==1.15.10',\n 'docutils>=0.10,<0.16',\n 'rsa>=3.1.2,<=3.5.0',\n 's3transfer>=0.3.0,<0.4.0',\n 'PyYAML>=3.10,<5.3',\n]\n\n\nif sys.version_info[:2] == (3, 4):\n install_requires.append('colorama>=0.2.5,<0.4.2')\nelse:\n install_requires.append('colorama>=0.2.5,<0.4.4')\n\n\nsetup_options = dict(\n name='awscli',\n version=find_version(\"awscli\", \"__init__.py\"),\n description='Universal Command Line Environment for AWS.',\n long_description=read('README.rst'),\n author='Amazon Web Services',\n url='http://aws.amazon.com/cli/',\n scripts=['bin/aws', 'bin/aws.cmd',\n 'bin/aws_completer', 'bin/aws_zsh_completer.sh',\n 'bin/aws_bash_completer'],\n packages=find_packages(exclude=['tests*']),\n package_data={'awscli': ['data/*.json', 'examples/*/*.rst',\n 'examples/*/*.txt', 'examples/*/*/*.txt',\n 'examples/*/*/*.rst', 'topics/*.rst',\n 'topics/*.json']},\n install_requires=install_requires,\n extras_require={},\n license=\"Apache License 2.0\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n)\n\n\nif 'py2exe' in sys.argv:\n # This will actually give us a py2exe command.\n import py2exe\n # And we have some py2exe specific options.\n setup_options['options'] = {\n 'py2exe': {\n 'optimize': 0,\n 'skip_archive': True,\n 'dll_excludes': ['crypt32.dll'],\n 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser',\n 'awscli', 'ConfigParser', 'xml.etree', 'pipes'],\n }\n }\n setup_options['console'] = ['bin/aws']\n\n\nsetup(**setup_options)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport 
codecs\nimport os.path\nimport re\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n return codecs.open(os.path.join(here, *parts), 'r').read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\ninstall_requires = [\n 'botocore==1.15.10',\n 'docutils>=0.10,<0.16',\n 'rsa>=3.1.2,<=3.5.0',\n 's3transfer>=0.3.0,<0.4.0',\n]\n\n\nif sys.version_info[:2] == (3, 4):\n install_requires.append('PyYAML>=3.10,<5.3')\n install_requires.append('colorama>=0.2.5,<0.4.2')\nelse:\n install_requires.append('PyYAML>=3.10,<5.4')\n install_requires.append('colorama>=0.2.5,<0.4.4')\n\n\nsetup_options = dict(\n name='awscli',\n version=find_version(\"awscli\", \"__init__.py\"),\n description='Universal Command Line Environment for AWS.',\n long_description=read('README.rst'),\n author='Amazon Web Services',\n url='http://aws.amazon.com/cli/',\n scripts=['bin/aws', 'bin/aws.cmd',\n 'bin/aws_completer', 'bin/aws_zsh_completer.sh',\n 'bin/aws_bash_completer'],\n packages=find_packages(exclude=['tests*']),\n package_data={'awscli': ['data/*.json', 'examples/*/*.rst',\n 'examples/*/*.txt', 'examples/*/*/*.txt',\n 'examples/*/*/*.rst', 'topics/*.rst',\n 'topics/*.json']},\n install_requires=install_requires,\n extras_require={},\n license=\"Apache License 2.0\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n)\n\n\nif 'py2exe' in sys.argv:\n # This will actually give us a py2exe command.\n import py2exe\n # And we have some py2exe specific options.\n setup_options['options'] = {\n 'py2exe': {\n 'optimize': 0,\n 'skip_archive': True,\n 'dll_excludes': ['crypt32.dll'],\n 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser',\n 'awscli', 'ConfigParser', 'xml.etree', 'pipes'],\n }\n }\n setup_options['console'] = ['bin/aws']\n\n\nsetup(**setup_options)\n", "path": "setup.py"}]} | 1,288 | 196 |
gh_patches_debug_18897 | rasdani/github-patches | git_diff | quantumlib__Cirq-1897 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Handedness between ISwapPowGate and FSim is different.
The ISwapPowGate and FSim gate rotate in different directions on the |01>, |10> subspace. This feels a bit odd to me.
--- END ISSUE ---
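
To make the sign difference concrete, here is a small numerical check, assuming a cirq version that exposes `cirq.ISWAP`, `cirq.FSimGate`, and `cirq.unitary` (as the file below does). It probes the ⟨01|U|10⟩ matrix element, which is where the opposite rotation sense shows up, and confirms that `FSimGate(theta, 0)` matches `ISWAP` raised to a *negative* exponent.

```python
import numpy as np
import cirq

theta = 0.3  # arbitrary swap angle in radians

# ISwapPowGate convention: exponent 2*theta/pi gives +i*sin(theta)
# on the |01><10| element.
u_iswap = cirq.unitary(cirq.ISWAP ** (2 * theta / np.pi))

# FSimGate convention: the same theta gives -i*sin(theta) there.
u_fsim = cirq.unitary(cirq.FSimGate(theta=theta, phi=0.0))

print(u_iswap[1, 2])  # approximately +0.2955j
print(u_fsim[1, 2])   # approximately -0.2955j

# Equivalently, FSimGate(theta, 0) equals ISWAP**(-2*theta/pi): the two
# gates rotate the span(|01>, |10>) subspace in opposite directions.
np.testing.assert_allclose(
    u_fsim,
    cirq.unitary(cirq.ISWAP ** (-2 * theta / np.pi)),
    atol=1e-8,
)
```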
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/ops/fsim_gate.py`
Content:
```
1 # Copyright 2019 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Defines the fermionic simulation gate family.
15
16 This is the family of two-qubit gates that preserve excitations (number of ON
17 qubits), ignoring single-qubit gates and global phase. For example, when using
18 the second quantized representation of electrons to simulate chemistry, this is
19 a natural gateset because each ON qubit corresponds to an electron and in the
20 context of chemistry the electron count is conserved over time. This property
21 applies more generally to fermions, thus the name of the gate.
22 """
23
24 import cmath
25 import math
26 from typing import Optional
27
28 import numpy as np
29
30 import cirq
31 from cirq import protocols, value
32 from cirq._compat import proper_repr
33 from cirq.ops import gate_features
34
35
36 @value.value_equality(approximate=True)
37 class FSimGate(gate_features.TwoQubitGate,
38 gate_features.InterchangeableQubitsGate):
39 """Fermionic simulation gate family.
40
41 Contains all two qubit interactions that preserve excitations, up to
42 single-qubit rotations and global phase.
43
44 The unitary matrix of this gate is:
45
46 [[1, 0, 0, 0],
47 [0, a, b, 0],
48 [0, b, a, 0],
49 [0, 0, 0, c]]
50
51 where:
52
53 a = cos(theta)
54 b = -i·sin(theta)
55 c = exp(-i·phi)
56 """
57
58 def __init__(self, theta: float, phi: float):
59 """
60 Args:
61 theta: The strength of the XX+YY interaction, in radians.
62 phi: CPhase angle in radians. Determines how much the |11⟩ state is
63 phased.
64 """
65 self.theta = theta
66 self.phi = phi
67
68 def _value_equality_values_(self):
69 return self.theta, self.phi
70
71 def _is_parameterized_(self):
72 return cirq.is_parameterized(self.theta) or cirq.is_parameterized(
73 self.phi)
74
75 def _unitary_(self) -> Optional[np.ndarray]:
76 if cirq.is_parameterized(self):
77 return None
78 a = math.cos(self.theta)
79 b = -1j * math.sin(self.theta)
80 c = cmath.exp(-1j * self.phi)
81 return np.array([
82 [1, 0, 0, 0],
83 [0, a, b, 0],
84 [0, b, a, 0],
85 [0, 0, 0, c],
86 ])
87
88 def _pauli_expansion_(self) -> value.LinearDict[str]:
89 if protocols.is_parameterized(self):
90 return NotImplemented
91 a = math.cos(self.theta)
92 b = -1j * math.sin(self.theta)
93 c = cmath.exp(-1j * self.phi)
94 return value.LinearDict({
95 'II': (1 + c) / 4 + a / 2,
96 'IZ': (1 - c) / 4,
97 'ZI': (1 - c) / 4,
98 'ZZ': (1 + c) / 4 - a / 2,
99 'XX': b / 2,
100 'YY': b / 2,
101 })
102
103 def _resolve_parameters_(self, param_resolver: 'cirq.ParamResolver'
104 ) -> 'cirq.FSimGate':
105 return FSimGate(
106 protocols.resolve_parameters(self.theta, param_resolver),
107 protocols.resolve_parameters(self.phi, param_resolver))
108
109 def _apply_unitary_(self,
110 args: 'cirq.ApplyUnitaryArgs') -> Optional[np.ndarray]:
111 if cirq.is_parameterized(self):
112 return None
113 if self.theta != 0:
114 inner_matrix = protocols.unitary(cirq.Rx(2 * self.theta))
115 oi = args.subspace_index(0b01)
116 io = args.subspace_index(0b10)
117 out = cirq.apply_matrix_to_slices(args.target_tensor,
118 inner_matrix,
119 slices=[oi, io],
120 out=args.available_buffer)
121 else:
122 out = args.target_tensor
123 if self.phi != 0:
124 ii = args.subspace_index(0b11)
125 out[ii] *= cmath.exp(-1j * self.phi)
126 return out
127
128 def _decompose_(self, qubits) -> 'cirq.OP_TREE':
129 a, b = qubits
130 xx = cirq.XXPowGate(exponent=self.theta / np.pi, global_shift=-0.5)
131 yy = cirq.YYPowGate(exponent=self.theta / np.pi, global_shift=-0.5)
132 yield xx(a, b)
133 yield yy(a, b)
134 yield cirq.CZ(a, b)**(-self.phi / np.pi)
135
136 def _circuit_diagram_info_(self, args: 'cirq.CircuitDiagramInfoArgs'):
137 t = _format_rads(args, self.theta)
138 p = _format_rads(args, self.phi)
139 return 'fsim({}, {})'.format(t, p), '#2'
140
141 def __pow__(self, power):
142 return FSimGate(cirq.mul(self.theta, power), cirq.mul(self.phi, power))
143
144 def __repr__(self):
145 return 'cirq.FSimGate(theta={}, phi={})'.format(proper_repr(self.theta),
146 proper_repr(self.phi))
147
148
149 def _format_rads(args: 'cirq.CircuitDiagramInfoArgs', radians: float) -> str:
150 if cirq.is_parameterized(radians):
151 return str(radians)
152 unit = 'π' if args.use_unicode_characters else 'pi'
153 if radians == np.pi:
154 return unit
155 if radians == 0:
156 return '0'
157 if radians == -np.pi:
158 return '-' + unit
159 if args.precision is not None:
160 quantity = '{{:.{}}}'.format(args.precision).format(radians / np.pi)
161 return quantity + unit
162 return repr(radians)
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cirq/ops/fsim_gate.py b/cirq/ops/fsim_gate.py
--- a/cirq/ops/fsim_gate.py
+++ b/cirq/ops/fsim_gate.py
@@ -53,14 +53,23 @@
a = cos(theta)
b = -i·sin(theta)
c = exp(-i·phi)
+
+ Note the difference in sign conventions between FSimGate and the
+ ISWAP and CZPowGate:
+
+ FSimGate(θ, φ) = ISWAP**(-2θ/π) CZPowGate(exponent=-φ/π)
"""
def __init__(self, theta: float, phi: float):
"""
Args:
- theta: The strength of the XX+YY interaction, in radians.
- phi: CPhase angle in radians. Determines how much the |11⟩ state is
- phased.
+ theta: Swap angle on the span(|01⟩, |10⟩) subspace, in radians.
+ Determined by the strength and duration of the XX+YY
+ interaction. Note: uses opposite sign convention to the
+ iSWAP gate.
+ phi: Controlled phase angle, in radians. Determines how much the
+ |11⟩ state is phased. Note: uses opposite sign convention to
+ the CZPowGate.
"""
self.theta = theta
self.phi = phi
| {"golden_diff": "diff --git a/cirq/ops/fsim_gate.py b/cirq/ops/fsim_gate.py\n--- a/cirq/ops/fsim_gate.py\n+++ b/cirq/ops/fsim_gate.py\n@@ -53,14 +53,23 @@\n a = cos(theta)\n b = -i\u00b7sin(theta)\n c = exp(-i\u00b7phi)\n+\n+ Note the difference in sign conventions between FSimGate and the\n+ ISWAP and CZPowGate:\n+\n+ FSimGate(\u03b8, \u03c6) = ISWAP**(-2\u03b8/\u03c0) CZPowGate(exponent=-\u03c6/\u03c0)\n \"\"\"\n \n def __init__(self, theta: float, phi: float):\n \"\"\"\n Args:\n- theta: The strength of the XX+YY interaction, in radians.\n- phi: CPhase angle in radians. Determines how much the |11\u27e9 state is\n- phased.\n+ theta: Swap angle on the span(|01\u27e9, |10\u27e9) subspace, in radians.\n+ Determined by the strength and duration of the XX+YY\n+ interaction. Note: uses opposite sign convention to the\n+ iSWAP gate.\n+ phi: Controlled phase angle, in radians. Determines how much the\n+ |11\u27e9 state is phased. Note: uses opposite sign convention to\n+ the CZPowGate.\n \"\"\"\n self.theta = theta\n self.phi = phi\n", "issue": "Handedness between ISwapPowGate and FSim is different.\nThe ISwapPowGate and FSim gate rotate in different directions on the |01>, |10> subspace. This feels a bit odd to me. \n", "before_files": [{"content": "# Copyright 2019 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Defines the fermionic simulation gate family.\n\nThis is the family of two-qubit gates that preserve excitations (number of ON\nqubits), ignoring single-qubit gates and global phase. For example, when using\nthe second quantized representation of electrons to simulate chemistry, this is\na natural gateset because each ON qubit corresponds to an electron and in the\ncontext of chemistry the electron count is conserved over time. This property\napplies more generally to fermions, thus the name of the gate.\n\"\"\"\n\nimport cmath\nimport math\nfrom typing import Optional\n\nimport numpy as np\n\nimport cirq\nfrom cirq import protocols, value\nfrom cirq._compat import proper_repr\nfrom cirq.ops import gate_features\n\n\[email protected]_equality(approximate=True)\nclass FSimGate(gate_features.TwoQubitGate,\n gate_features.InterchangeableQubitsGate):\n \"\"\"Fermionic simulation gate family.\n\n Contains all two qubit interactions that preserve excitations, up to\n single-qubit rotations and global phase.\n\n The unitary matrix of this gate is:\n\n [[1, 0, 0, 0],\n [0, a, b, 0],\n [0, b, a, 0],\n [0, 0, 0, c]]\n\n where:\n\n a = cos(theta)\n b = -i\u00b7sin(theta)\n c = exp(-i\u00b7phi)\n \"\"\"\n\n def __init__(self, theta: float, phi: float):\n \"\"\"\n Args:\n theta: The strength of the XX+YY interaction, in radians.\n phi: CPhase angle in radians. 
Determines how much the |11\u27e9 state is\n phased.\n \"\"\"\n self.theta = theta\n self.phi = phi\n\n def _value_equality_values_(self):\n return self.theta, self.phi\n\n def _is_parameterized_(self):\n return cirq.is_parameterized(self.theta) or cirq.is_parameterized(\n self.phi)\n\n def _unitary_(self) -> Optional[np.ndarray]:\n if cirq.is_parameterized(self):\n return None\n a = math.cos(self.theta)\n b = -1j * math.sin(self.theta)\n c = cmath.exp(-1j * self.phi)\n return np.array([\n [1, 0, 0, 0],\n [0, a, b, 0],\n [0, b, a, 0],\n [0, 0, 0, c],\n ])\n\n def _pauli_expansion_(self) -> value.LinearDict[str]:\n if protocols.is_parameterized(self):\n return NotImplemented\n a = math.cos(self.theta)\n b = -1j * math.sin(self.theta)\n c = cmath.exp(-1j * self.phi)\n return value.LinearDict({\n 'II': (1 + c) / 4 + a / 2,\n 'IZ': (1 - c) / 4,\n 'ZI': (1 - c) / 4,\n 'ZZ': (1 + c) / 4 - a / 2,\n 'XX': b / 2,\n 'YY': b / 2,\n })\n\n def _resolve_parameters_(self, param_resolver: 'cirq.ParamResolver'\n ) -> 'cirq.FSimGate':\n return FSimGate(\n protocols.resolve_parameters(self.theta, param_resolver),\n protocols.resolve_parameters(self.phi, param_resolver))\n\n def _apply_unitary_(self,\n args: 'cirq.ApplyUnitaryArgs') -> Optional[np.ndarray]:\n if cirq.is_parameterized(self):\n return None\n if self.theta != 0:\n inner_matrix = protocols.unitary(cirq.Rx(2 * self.theta))\n oi = args.subspace_index(0b01)\n io = args.subspace_index(0b10)\n out = cirq.apply_matrix_to_slices(args.target_tensor,\n inner_matrix,\n slices=[oi, io],\n out=args.available_buffer)\n else:\n out = args.target_tensor\n if self.phi != 0:\n ii = args.subspace_index(0b11)\n out[ii] *= cmath.exp(-1j * self.phi)\n return out\n\n def _decompose_(self, qubits) -> 'cirq.OP_TREE':\n a, b = qubits\n xx = cirq.XXPowGate(exponent=self.theta / np.pi, global_shift=-0.5)\n yy = cirq.YYPowGate(exponent=self.theta / np.pi, global_shift=-0.5)\n yield xx(a, b)\n yield yy(a, b)\n yield cirq.CZ(a, b)**(-self.phi / np.pi)\n\n def _circuit_diagram_info_(self, args: 'cirq.CircuitDiagramInfoArgs'):\n t = _format_rads(args, self.theta)\n p = _format_rads(args, self.phi)\n return 'fsim({}, {})'.format(t, p), '#2'\n\n def __pow__(self, power):\n return FSimGate(cirq.mul(self.theta, power), cirq.mul(self.phi, power))\n\n def __repr__(self):\n return 'cirq.FSimGate(theta={}, phi={})'.format(proper_repr(self.theta),\n proper_repr(self.phi))\n\n\ndef _format_rads(args: 'cirq.CircuitDiagramInfoArgs', radians: float) -> str:\n if cirq.is_parameterized(radians):\n return str(radians)\n unit = '\u03c0' if args.use_unicode_characters else 'pi'\n if radians == np.pi:\n return unit\n if radians == 0:\n return '0'\n if radians == -np.pi:\n return '-' + unit\n if args.precision is not None:\n quantity = '{{:.{}}}'.format(args.precision).format(radians / np.pi)\n return quantity + unit\n return repr(radians)\n", "path": "cirq/ops/fsim_gate.py"}], "after_files": [{"content": "# Copyright 2019 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the 
License.\n\"\"\"Defines the fermionic simulation gate family.\n\nThis is the family of two-qubit gates that preserve excitations (number of ON\nqubits), ignoring single-qubit gates and global phase. For example, when using\nthe second quantized representation of electrons to simulate chemistry, this is\na natural gateset because each ON qubit corresponds to an electron and in the\ncontext of chemistry the electron count is conserved over time. This property\napplies more generally to fermions, thus the name of the gate.\n\"\"\"\n\nimport cmath\nimport math\nfrom typing import Optional\n\nimport numpy as np\n\nimport cirq\nfrom cirq import protocols, value\nfrom cirq._compat import proper_repr\nfrom cirq.ops import gate_features\n\n\[email protected]_equality(approximate=True)\nclass FSimGate(gate_features.TwoQubitGate,\n gate_features.InterchangeableQubitsGate):\n \"\"\"Fermionic simulation gate family.\n\n Contains all two qubit interactions that preserve excitations, up to\n single-qubit rotations and global phase.\n\n The unitary matrix of this gate is:\n\n [[1, 0, 0, 0],\n [0, a, b, 0],\n [0, b, a, 0],\n [0, 0, 0, c]]\n\n where:\n\n a = cos(theta)\n b = -i\u00b7sin(theta)\n c = exp(-i\u00b7phi)\n\n Note the difference in sign conventions between FSimGate and the\n ISWAP and CZPowGate:\n\n FSimGate(\u03b8, \u03c6) = ISWAP**(-2\u03b8/\u03c0) CZPowGate(exponent=-\u03c6/\u03c0)\n \"\"\"\n\n def __init__(self, theta: float, phi: float):\n \"\"\"\n Args:\n theta: Swap angle on the span(|01\u27e9, |10\u27e9) subspace, in radians.\n Determined by the strength and duration of the XX+YY\n interaction. Note: uses opposite sign convention to the\n iSWAP gate.\n phi: Controlled phase angle, in radians. Determines how much the\n |11\u27e9 state is phased. Note: uses opposite sign convention to\n the CZPowGate.\n \"\"\"\n self.theta = theta\n self.phi = phi\n\n def _value_equality_values_(self):\n return self.theta, self.phi\n\n def _is_parameterized_(self):\n return cirq.is_parameterized(self.theta) or cirq.is_parameterized(\n self.phi)\n\n def _unitary_(self) -> Optional[np.ndarray]:\n if cirq.is_parameterized(self):\n return None\n a = math.cos(self.theta)\n b = -1j * math.sin(self.theta)\n c = cmath.exp(-1j * self.phi)\n return np.array([\n [1, 0, 0, 0],\n [0, a, b, 0],\n [0, b, a, 0],\n [0, 0, 0, c],\n ])\n\n def _pauli_expansion_(self) -> value.LinearDict[str]:\n if protocols.is_parameterized(self):\n return NotImplemented\n a = math.cos(self.theta)\n b = -1j * math.sin(self.theta)\n c = cmath.exp(-1j * self.phi)\n return value.LinearDict({\n 'II': (1 + c) / 4 + a / 2,\n 'IZ': (1 - c) / 4,\n 'ZI': (1 - c) / 4,\n 'ZZ': (1 + c) / 4 - a / 2,\n 'XX': b / 2,\n 'YY': b / 2,\n })\n\n def _resolve_parameters_(self, param_resolver: 'cirq.ParamResolver'\n ) -> 'cirq.FSimGate':\n return FSimGate(\n protocols.resolve_parameters(self.theta, param_resolver),\n protocols.resolve_parameters(self.phi, param_resolver))\n\n def _apply_unitary_(self,\n args: 'cirq.ApplyUnitaryArgs') -> Optional[np.ndarray]:\n if cirq.is_parameterized(self):\n return None\n if self.theta != 0:\n inner_matrix = protocols.unitary(cirq.Rx(2 * self.theta))\n oi = args.subspace_index(0b01)\n io = args.subspace_index(0b10)\n out = cirq.apply_matrix_to_slices(args.target_tensor,\n inner_matrix,\n slices=[oi, io],\n out=args.available_buffer)\n else:\n out = args.target_tensor\n if self.phi != 0:\n ii = args.subspace_index(0b11)\n out[ii] *= cmath.exp(-1j * self.phi)\n return out\n\n def _decompose_(self, qubits) -> 'cirq.OP_TREE':\n a, b 
= qubits\n xx = cirq.XXPowGate(exponent=self.theta / np.pi, global_shift=-0.5)\n yy = cirq.YYPowGate(exponent=self.theta / np.pi, global_shift=-0.5)\n yield xx(a, b)\n yield yy(a, b)\n yield cirq.CZ(a, b)**(-self.phi / np.pi)\n\n def _circuit_diagram_info_(self, args: 'cirq.CircuitDiagramInfoArgs'):\n t = _format_rads(args, self.theta)\n p = _format_rads(args, self.phi)\n return 'fsim({}, {})'.format(t, p), '#2'\n\n def __pow__(self, power):\n return FSimGate(cirq.mul(self.theta, power), cirq.mul(self.phi, power))\n\n def __repr__(self):\n return 'cirq.FSimGate(theta={}, phi={})'.format(proper_repr(self.theta),\n proper_repr(self.phi))\n\n\ndef _format_rads(args: 'cirq.CircuitDiagramInfoArgs', radians: float) -> str:\n if cirq.is_parameterized(radians):\n return str(radians)\n unit = '\u03c0' if args.use_unicode_characters else 'pi'\n if radians == np.pi:\n return unit\n if radians == 0:\n return '0'\n if radians == -np.pi:\n return '-' + unit\n if args.precision is not None:\n quantity = '{{:.{}}}'.format(args.precision).format(radians / np.pi)\n return quantity + unit\n return repr(radians)\n", "path": "cirq/ops/fsim_gate.py"}]} | 2,120 | 319 |
gh_patches_debug_7774 | rasdani/github-patches | git_diff | aio-libs__aiohttp-4120 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix connection info in ClientConnectorError client exception
The message like `Cannot connect to host www.google.com:443 ssl:None` is misleading.
`ssl:None` means 'default', not 'disabled'
`Cannot connect to host www.google.com:443 ssl:[default]` looks much better
--- END ISSUE ---
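
As a self-contained illustration of the proposed wording, the change amounts to special-casing `None` before interpolating it into the message. The helper below is made up for the example (it is not aiohttp API), and whether the placeholder reads `default` or `[default]` is a cosmetic choice.

```python
from typing import Optional, Union


def format_connector_error(host: str, port: Optional[int],
                           ssl: Union[None, bool, object],
                           strerror: str) -> str:
    # ssl=None means "use the default certificate checks", so render it
    # as 'default' instead of the literal (and misleading) 'None'.
    ssl_text = 'default' if ssl is None else str(ssl)
    return 'Cannot connect to host {}:{} ssl:{} [{}]'.format(
        host, port, ssl_text, strerror)


print(format_connector_error('www.google.com', 443, None, 'oops'))
# Cannot connect to host www.google.com:443 ssl:default [oops]
print(format_connector_error('www.google.com', 443, False, 'oops'))
# Cannot connect to host www.google.com:443 ssl:False [oops]
```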
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aiohttp/client_exceptions.py`
Content:
```
1 """HTTP related errors."""
2
3 import asyncio
4 from typing import TYPE_CHECKING, Any, Optional, Tuple, Union
5
6 from .typedefs import _CIMultiDict
7
8 try:
9 import ssl
10 SSLContext = ssl.SSLContext
11 except ImportError: # pragma: no cover
12 ssl = SSLContext = None # type: ignore
13
14
15 if TYPE_CHECKING: # pragma: no cover
16 from .client_reqrep import (RequestInfo, ClientResponse, ConnectionKey, # noqa
17 Fingerprint)
18 else:
19 RequestInfo = ClientResponse = ConnectionKey = None
20
21
22 __all__ = (
23 'ClientError',
24
25 'ClientConnectionError',
26 'ClientOSError', 'ClientConnectorError', 'ClientProxyConnectionError',
27
28 'ClientSSLError',
29 'ClientConnectorSSLError', 'ClientConnectorCertificateError',
30
31 'ServerConnectionError', 'ServerTimeoutError', 'ServerDisconnectedError',
32 'ServerFingerprintMismatch',
33
34 'ClientResponseError', 'ClientHttpProxyError',
35 'WSServerHandshakeError', 'ContentTypeError',
36
37 'ClientPayloadError', 'InvalidURL')
38
39
40 class ClientError(Exception):
41 """Base class for client connection errors."""
42
43
44 class ClientResponseError(ClientError):
45 """Connection error during reading response.
46
47 request_info: instance of RequestInfo
48 """
49
50 def __init__(self, request_info: RequestInfo,
51 history: Tuple[ClientResponse, ...], *,
52 status: Optional[int]=None,
53 message: str='',
54 headers: Optional[_CIMultiDict]=None) -> None:
55 self.request_info = request_info
56 if status is not None:
57 self.status = status
58 else:
59 self.status = 0
60 self.message = message
61 self.headers = headers
62 self.history = history
63 self.args = (request_info, history)
64
65 def __str__(self) -> str:
66 return ("%s, message=%r, url=%r" %
67 (self.status, self.message, self.request_info.real_url))
68
69 def __repr__(self) -> str:
70 args = "%r, %r" % (self.request_info, self.history)
71 if self.status != 0:
72 args += ", status=%r" % (self.status,)
73 if self.message != '':
74 args += ", message=%r" % (self.message,)
75 if self.headers is not None:
76 args += ", headers=%r" % (self.headers,)
77 return "%s(%s)" % (type(self).__name__, args)
78
79
80 class ContentTypeError(ClientResponseError):
81 """ContentType found is not valid."""
82
83
84 class WSServerHandshakeError(ClientResponseError):
85 """websocket server handshake error."""
86
87
88 class ClientHttpProxyError(ClientResponseError):
89 """HTTP proxy error.
90
91 Raised in :class:`aiohttp.connector.TCPConnector` if
92 proxy responds with status other than ``200 OK``
93 on ``CONNECT`` request.
94 """
95
96
97 class TooManyRedirects(ClientResponseError):
98 """Client was redirected too many times."""
99
100
101 class ClientConnectionError(ClientError):
102 """Base class for client socket errors."""
103
104
105 class ClientOSError(ClientConnectionError, OSError):
106 """OSError error."""
107
108
109 class ClientConnectorError(ClientOSError):
110 """Client connector error.
111
112 Raised in :class:`aiohttp.connector.TCPConnector` if
113 connection to proxy can not be established.
114 """
115 def __init__(self, connection_key: ConnectionKey,
116 os_error: OSError) -> None:
117 self._conn_key = connection_key
118 self._os_error = os_error
119 super().__init__(os_error.errno, os_error.strerror)
120 self.args = (connection_key, os_error)
121
122 @property
123 def os_error(self) -> OSError:
124 return self._os_error
125
126 @property
127 def host(self) -> str:
128 return self._conn_key.host
129
130 @property
131 def port(self) -> Optional[int]:
132 return self._conn_key.port
133
134 @property
135 def ssl(self) -> Union[SSLContext, None, bool, 'Fingerprint']:
136 return self._conn_key.ssl
137
138 def __str__(self) -> str:
139 return ('Cannot connect to host {0.host}:{0.port} ssl:{0.ssl} [{1}]'
140 .format(self, self.strerror))
141
142 # OSError.__reduce__ does too much black magick
143 __reduce__ = BaseException.__reduce__
144
145
146 class ClientProxyConnectionError(ClientConnectorError):
147 """Proxy connection error.
148
149 Raised in :class:`aiohttp.connector.TCPConnector` if
150 connection to proxy can not be established.
151 """
152
153
154 class ServerConnectionError(ClientConnectionError):
155 """Server connection errors."""
156
157
158 class ServerDisconnectedError(ServerConnectionError):
159 """Server disconnected."""
160
161 def __init__(self, message: Optional[str]=None) -> None:
162 self.message = message
163 if message is None:
164 self.args = ()
165 else:
166 self.args = (message,)
167
168
169 class ServerTimeoutError(ServerConnectionError, asyncio.TimeoutError):
170 """Server timeout error."""
171
172
173 class ServerFingerprintMismatch(ServerConnectionError):
174 """SSL certificate does not match expected fingerprint."""
175
176 def __init__(self, expected: bytes, got: bytes,
177 host: str, port: int) -> None:
178 self.expected = expected
179 self.got = got
180 self.host = host
181 self.port = port
182 self.args = (expected, got, host, port)
183
184 def __repr__(self) -> str:
185 return '<{} expected={!r} got={!r} host={!r} port={!r}>'.format(
186 self.__class__.__name__, self.expected, self.got,
187 self.host, self.port)
188
189
190 class ClientPayloadError(ClientError):
191 """Response payload error."""
192
193
194 class InvalidURL(ClientError, ValueError):
195 """Invalid URL.
196
197 URL used for fetching is malformed, e.g. it doesn't contains host
198 part."""
199
200 # Derive from ValueError for backward compatibility
201
202 def __init__(self, url: Any) -> None:
203 # The type of url is not yarl.URL because the exception can be raised
204 # on URL(url) call
205 super().__init__(url)
206
207 @property
208 def url(self) -> Any:
209 return self.args[0]
210
211 def __repr__(self) -> str:
212 return '<{} {}>'.format(self.__class__.__name__, self.url)
213
214
215 class ClientSSLError(ClientConnectorError):
216 """Base error for ssl.*Errors."""
217
218
219 if ssl is not None:
220 cert_errors = (ssl.CertificateError,)
221 cert_errors_bases = (ClientSSLError, ssl.CertificateError,)
222
223 ssl_errors = (ssl.SSLError,)
224 ssl_error_bases = (ClientSSLError, ssl.SSLError)
225 else: # pragma: no cover
226 cert_errors = tuple()
227 cert_errors_bases = (ClientSSLError, ValueError,)
228
229 ssl_errors = tuple()
230 ssl_error_bases = (ClientSSLError,)
231
232
233 class ClientConnectorSSLError(*ssl_error_bases): # type: ignore
234 """Response ssl error."""
235
236
237 class ClientConnectorCertificateError(*cert_errors_bases): # type: ignore
238 """Response certificate error."""
239
240 def __init__(self, connection_key:
241 ConnectionKey, certificate_error: Exception) -> None:
242 self._conn_key = connection_key
243 self._certificate_error = certificate_error
244 self.args = (connection_key, certificate_error)
245
246 @property
247 def certificate_error(self) -> Exception:
248 return self._certificate_error
249
250 @property
251 def host(self) -> str:
252 return self._conn_key.host
253
254 @property
255 def port(self) -> Optional[int]:
256 return self._conn_key.port
257
258 @property
259 def ssl(self) -> bool:
260 return self._conn_key.is_ssl
261
262 def __str__(self) -> str:
263 return ('Cannot connect to host {0.host}:{0.port} ssl:{0.ssl} '
264 '[{0.certificate_error.__class__.__name__}: '
265 '{0.certificate_error.args}]'.format(self))
266
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aiohttp/client_exceptions.py b/aiohttp/client_exceptions.py
--- a/aiohttp/client_exceptions.py
+++ b/aiohttp/client_exceptions.py
@@ -136,8 +136,9 @@
return self._conn_key.ssl
def __str__(self) -> str:
- return ('Cannot connect to host {0.host}:{0.port} ssl:{0.ssl} [{1}]'
- .format(self, self.strerror))
+ return ('Cannot connect to host {0.host}:{0.port} ssl:{1} [{2}]'
+ .format(self, self.ssl if self.ssl is not None else 'default',
+ self.strerror))
# OSError.__reduce__ does too much black magick
__reduce__ = BaseException.__reduce__
| {"golden_diff": "diff --git a/aiohttp/client_exceptions.py b/aiohttp/client_exceptions.py\n--- a/aiohttp/client_exceptions.py\n+++ b/aiohttp/client_exceptions.py\n@@ -136,8 +136,9 @@\n return self._conn_key.ssl\n \n def __str__(self) -> str:\n- return ('Cannot connect to host {0.host}:{0.port} ssl:{0.ssl} [{1}]'\n- .format(self, self.strerror))\n+ return ('Cannot connect to host {0.host}:{0.port} ssl:{1} [{2}]'\n+ .format(self, self.ssl if self.ssl is not None else 'default',\n+ self.strerror))\n \n # OSError.__reduce__ does too much black magick\n __reduce__ = BaseException.__reduce__\n", "issue": "Fix connection info in ClientConnectorError client exception\nThe message like `Cannot connect to host www.google.com:443 ssl:None` is misleading.\r\n`ssl:None` means 'default', not 'disabled'\r\n\r\n`Cannot connect to host www.google.com:443 ssl:[default]` looks much better\n", "before_files": [{"content": "\"\"\"HTTP related errors.\"\"\"\n\nimport asyncio\nfrom typing import TYPE_CHECKING, Any, Optional, Tuple, Union\n\nfrom .typedefs import _CIMultiDict\n\ntry:\n import ssl\n SSLContext = ssl.SSLContext\nexcept ImportError: # pragma: no cover\n ssl = SSLContext = None # type: ignore\n\n\nif TYPE_CHECKING: # pragma: no cover\n from .client_reqrep import (RequestInfo, ClientResponse, ConnectionKey, # noqa\n Fingerprint)\nelse:\n RequestInfo = ClientResponse = ConnectionKey = None\n\n\n__all__ = (\n 'ClientError',\n\n 'ClientConnectionError',\n 'ClientOSError', 'ClientConnectorError', 'ClientProxyConnectionError',\n\n 'ClientSSLError',\n 'ClientConnectorSSLError', 'ClientConnectorCertificateError',\n\n 'ServerConnectionError', 'ServerTimeoutError', 'ServerDisconnectedError',\n 'ServerFingerprintMismatch',\n\n 'ClientResponseError', 'ClientHttpProxyError',\n 'WSServerHandshakeError', 'ContentTypeError',\n\n 'ClientPayloadError', 'InvalidURL')\n\n\nclass ClientError(Exception):\n \"\"\"Base class for client connection errors.\"\"\"\n\n\nclass ClientResponseError(ClientError):\n \"\"\"Connection error during reading response.\n\n request_info: instance of RequestInfo\n \"\"\"\n\n def __init__(self, request_info: RequestInfo,\n history: Tuple[ClientResponse, ...], *,\n status: Optional[int]=None,\n message: str='',\n headers: Optional[_CIMultiDict]=None) -> None:\n self.request_info = request_info\n if status is not None:\n self.status = status\n else:\n self.status = 0\n self.message = message\n self.headers = headers\n self.history = history\n self.args = (request_info, history)\n\n def __str__(self) -> str:\n return (\"%s, message=%r, url=%r\" %\n (self.status, self.message, self.request_info.real_url))\n\n def __repr__(self) -> str:\n args = \"%r, %r\" % (self.request_info, self.history)\n if self.status != 0:\n args += \", status=%r\" % (self.status,)\n if self.message != '':\n args += \", message=%r\" % (self.message,)\n if self.headers is not None:\n args += \", headers=%r\" % (self.headers,)\n return \"%s(%s)\" % (type(self).__name__, args)\n\n\nclass ContentTypeError(ClientResponseError):\n \"\"\"ContentType found is not valid.\"\"\"\n\n\nclass WSServerHandshakeError(ClientResponseError):\n \"\"\"websocket server handshake error.\"\"\"\n\n\nclass ClientHttpProxyError(ClientResponseError):\n \"\"\"HTTP proxy error.\n\n Raised in :class:`aiohttp.connector.TCPConnector` if\n proxy responds with status other than ``200 OK``\n on ``CONNECT`` request.\n \"\"\"\n\n\nclass TooManyRedirects(ClientResponseError):\n \"\"\"Client was redirected too many times.\"\"\"\n\n\nclass 
ClientConnectionError(ClientError):\n \"\"\"Base class for client socket errors.\"\"\"\n\n\nclass ClientOSError(ClientConnectionError, OSError):\n \"\"\"OSError error.\"\"\"\n\n\nclass ClientConnectorError(ClientOSError):\n \"\"\"Client connector error.\n\n Raised in :class:`aiohttp.connector.TCPConnector` if\n connection to proxy can not be established.\n \"\"\"\n def __init__(self, connection_key: ConnectionKey,\n os_error: OSError) -> None:\n self._conn_key = connection_key\n self._os_error = os_error\n super().__init__(os_error.errno, os_error.strerror)\n self.args = (connection_key, os_error)\n\n @property\n def os_error(self) -> OSError:\n return self._os_error\n\n @property\n def host(self) -> str:\n return self._conn_key.host\n\n @property\n def port(self) -> Optional[int]:\n return self._conn_key.port\n\n @property\n def ssl(self) -> Union[SSLContext, None, bool, 'Fingerprint']:\n return self._conn_key.ssl\n\n def __str__(self) -> str:\n return ('Cannot connect to host {0.host}:{0.port} ssl:{0.ssl} [{1}]'\n .format(self, self.strerror))\n\n # OSError.__reduce__ does too much black magick\n __reduce__ = BaseException.__reduce__\n\n\nclass ClientProxyConnectionError(ClientConnectorError):\n \"\"\"Proxy connection error.\n\n Raised in :class:`aiohttp.connector.TCPConnector` if\n connection to proxy can not be established.\n \"\"\"\n\n\nclass ServerConnectionError(ClientConnectionError):\n \"\"\"Server connection errors.\"\"\"\n\n\nclass ServerDisconnectedError(ServerConnectionError):\n \"\"\"Server disconnected.\"\"\"\n\n def __init__(self, message: Optional[str]=None) -> None:\n self.message = message\n if message is None:\n self.args = ()\n else:\n self.args = (message,)\n\n\nclass ServerTimeoutError(ServerConnectionError, asyncio.TimeoutError):\n \"\"\"Server timeout error.\"\"\"\n\n\nclass ServerFingerprintMismatch(ServerConnectionError):\n \"\"\"SSL certificate does not match expected fingerprint.\"\"\"\n\n def __init__(self, expected: bytes, got: bytes,\n host: str, port: int) -> None:\n self.expected = expected\n self.got = got\n self.host = host\n self.port = port\n self.args = (expected, got, host, port)\n\n def __repr__(self) -> str:\n return '<{} expected={!r} got={!r} host={!r} port={!r}>'.format(\n self.__class__.__name__, self.expected, self.got,\n self.host, self.port)\n\n\nclass ClientPayloadError(ClientError):\n \"\"\"Response payload error.\"\"\"\n\n\nclass InvalidURL(ClientError, ValueError):\n \"\"\"Invalid URL.\n\n URL used for fetching is malformed, e.g. 
it doesn't contains host\n part.\"\"\"\n\n # Derive from ValueError for backward compatibility\n\n def __init__(self, url: Any) -> None:\n # The type of url is not yarl.URL because the exception can be raised\n # on URL(url) call\n super().__init__(url)\n\n @property\n def url(self) -> Any:\n return self.args[0]\n\n def __repr__(self) -> str:\n return '<{} {}>'.format(self.__class__.__name__, self.url)\n\n\nclass ClientSSLError(ClientConnectorError):\n \"\"\"Base error for ssl.*Errors.\"\"\"\n\n\nif ssl is not None:\n cert_errors = (ssl.CertificateError,)\n cert_errors_bases = (ClientSSLError, ssl.CertificateError,)\n\n ssl_errors = (ssl.SSLError,)\n ssl_error_bases = (ClientSSLError, ssl.SSLError)\nelse: # pragma: no cover\n cert_errors = tuple()\n cert_errors_bases = (ClientSSLError, ValueError,)\n\n ssl_errors = tuple()\n ssl_error_bases = (ClientSSLError,)\n\n\nclass ClientConnectorSSLError(*ssl_error_bases): # type: ignore\n \"\"\"Response ssl error.\"\"\"\n\n\nclass ClientConnectorCertificateError(*cert_errors_bases): # type: ignore\n \"\"\"Response certificate error.\"\"\"\n\n def __init__(self, connection_key:\n ConnectionKey, certificate_error: Exception) -> None:\n self._conn_key = connection_key\n self._certificate_error = certificate_error\n self.args = (connection_key, certificate_error)\n\n @property\n def certificate_error(self) -> Exception:\n return self._certificate_error\n\n @property\n def host(self) -> str:\n return self._conn_key.host\n\n @property\n def port(self) -> Optional[int]:\n return self._conn_key.port\n\n @property\n def ssl(self) -> bool:\n return self._conn_key.is_ssl\n\n def __str__(self) -> str:\n return ('Cannot connect to host {0.host}:{0.port} ssl:{0.ssl} '\n '[{0.certificate_error.__class__.__name__}: '\n '{0.certificate_error.args}]'.format(self))\n", "path": "aiohttp/client_exceptions.py"}], "after_files": [{"content": "\"\"\"HTTP related errors.\"\"\"\n\nimport asyncio\nfrom typing import TYPE_CHECKING, Any, Optional, Tuple, Union\n\nfrom .typedefs import _CIMultiDict\n\ntry:\n import ssl\n SSLContext = ssl.SSLContext\nexcept ImportError: # pragma: no cover\n ssl = SSLContext = None # type: ignore\n\n\nif TYPE_CHECKING: # pragma: no cover\n from .client_reqrep import (RequestInfo, ClientResponse, ConnectionKey, # noqa\n Fingerprint)\nelse:\n RequestInfo = ClientResponse = ConnectionKey = None\n\n\n__all__ = (\n 'ClientError',\n\n 'ClientConnectionError',\n 'ClientOSError', 'ClientConnectorError', 'ClientProxyConnectionError',\n\n 'ClientSSLError',\n 'ClientConnectorSSLError', 'ClientConnectorCertificateError',\n\n 'ServerConnectionError', 'ServerTimeoutError', 'ServerDisconnectedError',\n 'ServerFingerprintMismatch',\n\n 'ClientResponseError', 'ClientHttpProxyError',\n 'WSServerHandshakeError', 'ContentTypeError',\n\n 'ClientPayloadError', 'InvalidURL')\n\n\nclass ClientError(Exception):\n \"\"\"Base class for client connection errors.\"\"\"\n\n\nclass ClientResponseError(ClientError):\n \"\"\"Connection error during reading response.\n\n request_info: instance of RequestInfo\n \"\"\"\n\n def __init__(self, request_info: RequestInfo,\n history: Tuple[ClientResponse, ...], *,\n status: Optional[int]=None,\n message: str='',\n headers: Optional[_CIMultiDict]=None) -> None:\n self.request_info = request_info\n if status is not None:\n self.status = status\n else:\n self.status = 0\n self.message = message\n self.headers = headers\n self.history = history\n self.args = (request_info, history)\n\n def __str__(self) -> str:\n return (\"%s, message=%r, 
url=%r\" %\n (self.status, self.message, self.request_info.real_url))\n\n def __repr__(self) -> str:\n args = \"%r, %r\" % (self.request_info, self.history)\n if self.status != 0:\n args += \", status=%r\" % (self.status,)\n if self.message != '':\n args += \", message=%r\" % (self.message,)\n if self.headers is not None:\n args += \", headers=%r\" % (self.headers,)\n return \"%s(%s)\" % (type(self).__name__, args)\n\n\nclass ContentTypeError(ClientResponseError):\n \"\"\"ContentType found is not valid.\"\"\"\n\n\nclass WSServerHandshakeError(ClientResponseError):\n \"\"\"websocket server handshake error.\"\"\"\n\n\nclass ClientHttpProxyError(ClientResponseError):\n \"\"\"HTTP proxy error.\n\n Raised in :class:`aiohttp.connector.TCPConnector` if\n proxy responds with status other than ``200 OK``\n on ``CONNECT`` request.\n \"\"\"\n\n\nclass TooManyRedirects(ClientResponseError):\n \"\"\"Client was redirected too many times.\"\"\"\n\n\nclass ClientConnectionError(ClientError):\n \"\"\"Base class for client socket errors.\"\"\"\n\n\nclass ClientOSError(ClientConnectionError, OSError):\n \"\"\"OSError error.\"\"\"\n\n\nclass ClientConnectorError(ClientOSError):\n \"\"\"Client connector error.\n\n Raised in :class:`aiohttp.connector.TCPConnector` if\n connection to proxy can not be established.\n \"\"\"\n def __init__(self, connection_key: ConnectionKey,\n os_error: OSError) -> None:\n self._conn_key = connection_key\n self._os_error = os_error\n super().__init__(os_error.errno, os_error.strerror)\n self.args = (connection_key, os_error)\n\n @property\n def os_error(self) -> OSError:\n return self._os_error\n\n @property\n def host(self) -> str:\n return self._conn_key.host\n\n @property\n def port(self) -> Optional[int]:\n return self._conn_key.port\n\n @property\n def ssl(self) -> Union[SSLContext, None, bool, 'Fingerprint']:\n return self._conn_key.ssl\n\n def __str__(self) -> str:\n return ('Cannot connect to host {0.host}:{0.port} ssl:{1} [{2}]'\n .format(self, self.ssl if self.ssl is not None else 'default',\n self.strerror))\n\n # OSError.__reduce__ does too much black magick\n __reduce__ = BaseException.__reduce__\n\n\nclass ClientProxyConnectionError(ClientConnectorError):\n \"\"\"Proxy connection error.\n\n Raised in :class:`aiohttp.connector.TCPConnector` if\n connection to proxy can not be established.\n \"\"\"\n\n\nclass ServerConnectionError(ClientConnectionError):\n \"\"\"Server connection errors.\"\"\"\n\n\nclass ServerDisconnectedError(ServerConnectionError):\n \"\"\"Server disconnected.\"\"\"\n\n def __init__(self, message: Optional[str]=None) -> None:\n self.message = message\n if message is None:\n self.args = ()\n else:\n self.args = (message,)\n\n\nclass ServerTimeoutError(ServerConnectionError, asyncio.TimeoutError):\n \"\"\"Server timeout error.\"\"\"\n\n\nclass ServerFingerprintMismatch(ServerConnectionError):\n \"\"\"SSL certificate does not match expected fingerprint.\"\"\"\n\n def __init__(self, expected: bytes, got: bytes,\n host: str, port: int) -> None:\n self.expected = expected\n self.got = got\n self.host = host\n self.port = port\n self.args = (expected, got, host, port)\n\n def __repr__(self) -> str:\n return '<{} expected={!r} got={!r} host={!r} port={!r}>'.format(\n self.__class__.__name__, self.expected, self.got,\n self.host, self.port)\n\n\nclass ClientPayloadError(ClientError):\n \"\"\"Response payload error.\"\"\"\n\n\nclass InvalidURL(ClientError, ValueError):\n \"\"\"Invalid URL.\n\n URL used for fetching is malformed, e.g. 
it doesn't contains host\n part.\"\"\"\n\n # Derive from ValueError for backward compatibility\n\n def __init__(self, url: Any) -> None:\n # The type of url is not yarl.URL because the exception can be raised\n # on URL(url) call\n super().__init__(url)\n\n @property\n def url(self) -> Any:\n return self.args[0]\n\n def __repr__(self) -> str:\n return '<{} {}>'.format(self.__class__.__name__, self.url)\n\n\nclass ClientSSLError(ClientConnectorError):\n \"\"\"Base error for ssl.*Errors.\"\"\"\n\n\nif ssl is not None:\n cert_errors = (ssl.CertificateError,)\n cert_errors_bases = (ClientSSLError, ssl.CertificateError,)\n\n ssl_errors = (ssl.SSLError,)\n ssl_error_bases = (ClientSSLError, ssl.SSLError)\nelse: # pragma: no cover\n cert_errors = tuple()\n cert_errors_bases = (ClientSSLError, ValueError,)\n\n ssl_errors = tuple()\n ssl_error_bases = (ClientSSLError,)\n\n\nclass ClientConnectorSSLError(*ssl_error_bases): # type: ignore\n \"\"\"Response ssl error.\"\"\"\n\n\nclass ClientConnectorCertificateError(*cert_errors_bases): # type: ignore\n \"\"\"Response certificate error.\"\"\"\n\n def __init__(self, connection_key:\n ConnectionKey, certificate_error: Exception) -> None:\n self._conn_key = connection_key\n self._certificate_error = certificate_error\n self.args = (connection_key, certificate_error)\n\n @property\n def certificate_error(self) -> Exception:\n return self._certificate_error\n\n @property\n def host(self) -> str:\n return self._conn_key.host\n\n @property\n def port(self) -> Optional[int]:\n return self._conn_key.port\n\n @property\n def ssl(self) -> bool:\n return self._conn_key.is_ssl\n\n def __str__(self) -> str:\n return ('Cannot connect to host {0.host}:{0.port} ssl:{0.ssl} '\n '[{0.certificate_error.__class__.__name__}: '\n '{0.certificate_error.args}]'.format(self))\n", "path": "aiohttp/client_exceptions.py"}]} | 2,790 | 173 |
gh_patches_debug_1002 | rasdani/github-patches | git_diff | optuna__optuna-5306 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feature to determine whether `GridSampler` has exhausted the grid
### Motivation
Currently, there is no good way to determine whether the grid of `GridSampler` is exhausted when resuming a study.
While the method `_get_unvisited_grid_ids()` exists, it is a protected method and should not be used outside of the library.
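For illustration, checking this today means reaching into a protected member (a sketch only; the storage URL and study name below are made up for the example):

```python
import optuna

search_space = {"x": [-50, 0, 50], "y": [-99, 0, 99]}
sampler = optuna.samplers.GridSampler(search_space)
study = optuna.create_study(
    study_name="grid-demo",
    storage="sqlite:///grid.db",
    sampler=sampler,
    load_if_exists=True,
)

# Works, but leans on a protected method of the sampler.
if len(sampler._get_unvisited_grid_ids(study)) == 0:
    print("Grid already exhausted; nothing left to run.")
```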
### Description
I suggest adding a method called something like `is_exhausted()` that is public and simply returns `len(self._get_unvisited_grid_ids()) == 0`
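A minimal sketch of that helper (the existing protected method takes the study, so the public wrapper would presumably need it too):

```python
def is_exhausted(self, study: Study) -> bool:
    # True once no unvisited grid points remain for this study.
    return len(self._get_unvisited_grid_ids(study)) == 0
```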
### Alternatives (optional)
Alternatively, it may be useful to add keyword arguments to be passed to `GridSampler` that give more control over what happens if the grid is already exhausted. For example, one could add a keyword like `reevaluate_when_exhausted: bool`.
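A sketch of how such a flag might read at the call site — `reevaluate_when_exhausted` is hypothetical, not an existing Optuna argument:

```python
import optuna

search_space = {"x": [-50, 0, 50], "y": [-99, 0, 99]}
# Hypothetical flag (not an existing Optuna argument): opt out of silently
# re-evaluating already-visited points once the grid is exhausted.
sampler = optuna.samplers.GridSampler(search_space, reevaluate_when_exhausted=False)
```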
### Additional context (optional)
I arrived at this suggestion after receiving the following warning: "`GridSampler` is re-evaluating a configuration because the grid has been exhausted. This may happen due to a timing issue during distributed optimization or when re-running optimizations on already finished studies."
This issue further builds upon the discussion of https://github.com/optuna/optuna/issues/3256
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/samplers/_grid.py`
Content:
```
1 import itertools
2 from numbers import Real
3 from typing import Any
4 from typing import Dict
5 from typing import List
6 from typing import Mapping
7 from typing import Optional
8 from typing import Sequence
9 from typing import Union
10 import warnings
11
12 import numpy as np
13
14 from optuna.distributions import BaseDistribution
15 from optuna.logging import get_logger
16 from optuna.samplers import BaseSampler
17 from optuna.samplers._lazy_random_state import LazyRandomState
18 from optuna.study import Study
19 from optuna.trial import FrozenTrial
20 from optuna.trial import TrialState
21
22
23 GridValueType = Union[str, float, int, bool, None]
24
25
26 _logger = get_logger(__name__)
27
28
29 class GridSampler(BaseSampler):
30 """Sampler using grid search.
31
32 With :class:`~optuna.samplers.GridSampler`, the trials suggest all combinations of parameters
33 in the given search space during the study.
34
35 Example:
36
37 .. testcode::
38
39 import optuna
40
41
42 def objective(trial):
43 x = trial.suggest_float("x", -100, 100)
44 y = trial.suggest_int("y", -100, 100)
45 return x**2 + y**2
46
47
48 search_space = {"x": [-50, 0, 50], "y": [-99, 0, 99]}
49 study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
50 study.optimize(objective)
51
52 Note:
53
54 This sampler with :ref:`ask_and_tell` raises :exc:`RuntimeError` just after evaluating
55 the final grid. This is because :class:`~optuna.samplers.GridSampler` automatically
56 stops the optimization if all combinations in the passed ``search_space`` have already
57 been evaluated, internally invoking the :func:`~optuna.study.Study.stop` method.
58 As a workaround, we need to handle the error manually as in
59 https://github.com/optuna/optuna/issues/4121#issuecomment-1305289910.
60
61 Note:
62
63 :class:`~optuna.samplers.GridSampler` does not take care of a parameter's quantization
64 specified by discrete suggest methods but just samples one of values specified in the
65 search space. E.g., in the following code snippet, either of ``-0.5`` or ``0.5`` is
66 sampled as ``x`` instead of an integer point.
67
68 .. testcode::
69
70 import optuna
71
72
73 def objective(trial):
74 # The following suggest method specifies integer points between -5 and 5.
75 x = trial.suggest_float("x", -5, 5, step=1)
76 return x**2
77
78
79 # Non-int points are specified in the grid.
80 search_space = {"x": [-0.5, 0.5]}
81 study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
82 study.optimize(objective, n_trials=2)
83
84 Note:
85 A parameter configuration in the grid is not considered finished until its trial is
86 finished. Therefore, during distributed optimization where trials run concurrently,
87 different workers will occasionally suggest the same parameter configuration.
88 The total number of actual trials may therefore exceed the size of the grid.
89
90 Note:
91 All parameters must be specified when using :class:`~optuna.samplers.GridSampler` with
92 :meth:`~optuna.study.Study.enqueue_trial`.
93
94 Args:
95 search_space:
96 A dictionary whose key and value are a parameter name and the corresponding candidates
97 of values, respectively.
98 seed:
99 A seed to fix the order of trials as the grid is randomly shuffled. Please note that
100 it is not recommended using this option in distributed optimization settings since
101 this option cannot ensure the order of trials and may increase the number of duplicate
102 suggestions during distributed optimization.
103 """
104
105 def __init__(
106 self, search_space: Mapping[str, Sequence[GridValueType]], seed: Optional[int] = None
107 ) -> None:
108 for param_name, param_values in search_space.items():
109 for value in param_values:
110 self._check_value(param_name, value)
111
112 self._search_space = {}
113 for param_name, param_values in sorted(search_space.items()):
114 self._search_space[param_name] = list(param_values)
115
116 self._all_grids = list(itertools.product(*self._search_space.values()))
117 self._param_names = sorted(search_space.keys())
118 self._n_min_trials = len(self._all_grids)
119 self._rng = LazyRandomState(seed)
120 self._rng.rng.shuffle(self._all_grids)
121
122 def reseed_rng(self) -> None:
123 self._rng.rng.seed()
124
125 def before_trial(self, study: Study, trial: FrozenTrial) -> None:
126 # Instead of returning param values, GridSampler puts the target grid id as a system attr,
127 # and the values are returned from `sample_independent`. This is because the distribution
128 # object is hard to get at the beginning of trial, while we need the access to the object
129 # to validate the sampled value.
130
131 # When the trial is created by RetryFailedTrialCallback or enqueue_trial, we should not
132 # assign a new grid_id.
133 if "grid_id" in trial.system_attrs or "fixed_params" in trial.system_attrs:
134 return
135
136 if 0 <= trial.number and trial.number < self._n_min_trials:
137 study._storage.set_trial_system_attr(
138 trial._trial_id, "search_space", self._search_space
139 )
140 study._storage.set_trial_system_attr(trial._trial_id, "grid_id", trial.number)
141 return
142
143 target_grids = self._get_unvisited_grid_ids(study)
144
145 if len(target_grids) == 0:
146 # This case may occur with distributed optimization or trial queue. If there is no
147 # target grid, `GridSampler` evaluates a visited, duplicated point with the current
148 # trial. After that, the optimization stops.
149
150 _logger.warning(
151 "`GridSampler` is re-evaluating a configuration because the grid has been "
152 "exhausted. This may happen due to a timing issue during distributed optimization "
153 "or when re-running optimizations on already finished studies."
154 )
155
156 # One of all grids is randomly picked up in this case.
157 target_grids = list(range(len(self._all_grids)))
158
159 # In distributed optimization, multiple workers may simultaneously pick up the same grid.
160 # To make the conflict less frequent, the grid is chosen randomly.
161 grid_id = int(self._rng.rng.choice(target_grids))
162
163 study._storage.set_trial_system_attr(trial._trial_id, "search_space", self._search_space)
164 study._storage.set_trial_system_attr(trial._trial_id, "grid_id", grid_id)
165
166 def infer_relative_search_space(
167 self, study: Study, trial: FrozenTrial
168 ) -> Dict[str, BaseDistribution]:
169 return {}
170
171 def sample_relative(
172 self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]
173 ) -> Dict[str, Any]:
174 return {}
175
176 def sample_independent(
177 self,
178 study: Study,
179 trial: FrozenTrial,
180 param_name: str,
181 param_distribution: BaseDistribution,
182 ) -> Any:
183 if "grid_id" not in trial.system_attrs:
184 message = "All parameters must be specified when using GridSampler with enqueue_trial."
185 raise ValueError(message)
186
187 if param_name not in self._search_space:
188 message = "The parameter name, {}, is not found in the given grid.".format(param_name)
189 raise ValueError(message)
190
191 # TODO(c-bata): Reduce the number of duplicated evaluations on multiple workers.
192 # Current selection logic may evaluate the same parameters multiple times.
193 # See https://gist.github.com/c-bata/f759f64becb24eea2040f4b2e3afce8f for details.
194 grid_id = trial.system_attrs["grid_id"]
195 param_value = self._all_grids[grid_id][self._param_names.index(param_name)]
196 contains = param_distribution._contains(param_distribution.to_internal_repr(param_value))
197 if not contains:
198 warnings.warn(
199 f"The value `{param_value}` is out of range of the parameter `{param_name}`. "
200 f"The value will be used but the actual distribution is: `{param_distribution}`."
201 )
202
203 return param_value
204
205 def after_trial(
206 self,
207 study: Study,
208 trial: FrozenTrial,
209 state: TrialState,
210 values: Optional[Sequence[float]],
211 ) -> None:
212 target_grids = self._get_unvisited_grid_ids(study)
213
214 if len(target_grids) == 0:
215 study.stop()
216 elif len(target_grids) == 1:
217 grid_id = study._storage.get_trial_system_attrs(trial._trial_id)["grid_id"]
218 if grid_id == target_grids[0]:
219 study.stop()
220
221 @staticmethod
222 def _check_value(param_name: str, param_value: Any) -> None:
223 if param_value is None or isinstance(param_value, (str, int, float, bool)):
224 return
225
226 message = (
227 "{} contains a value with the type of {}, which is not supported by "
228 "`GridSampler`. Please make sure a value is `str`, `int`, `float`, `bool`"
229 " or `None` for persistent storage.".format(param_name, type(param_value))
230 )
231 warnings.warn(message)
232
233 def _get_unvisited_grid_ids(self, study: Study) -> List[int]:
234 # List up unvisited grids based on already finished ones.
235 visited_grids = []
236 running_grids = []
237
238 # We directly query the storage to get trials here instead of `study.get_trials`,
239 # since some pruners such as `HyperbandPruner` use the study transformed
240 # to filter trials. See https://github.com/optuna/optuna/issues/2327 for details.
241 trials = study._storage.get_all_trials(study._study_id, deepcopy=False)
242
243 for t in trials:
244 if "grid_id" in t.system_attrs and self._same_search_space(
245 t.system_attrs["search_space"]
246 ):
247 if t.state.is_finished():
248 visited_grids.append(t.system_attrs["grid_id"])
249 elif t.state == TrialState.RUNNING:
250 running_grids.append(t.system_attrs["grid_id"])
251
252 unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids) - set(running_grids)
253
254 # If evaluations for all grids have been started, return grids that have not yet finished
255 # because all grids should be evaluated before stopping the optimization.
256 if len(unvisited_grids) == 0:
257 unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids)
258
259 return list(unvisited_grids)
260
261 @staticmethod
262 def _grid_value_equal(value1: GridValueType, value2: GridValueType) -> bool:
263 value1_is_nan = isinstance(value1, Real) and np.isnan(float(value1))
264 value2_is_nan = isinstance(value2, Real) and np.isnan(float(value2))
265 return (value1 == value2) or (value1_is_nan and value2_is_nan)
266
267 def _same_search_space(self, search_space: Mapping[str, Sequence[GridValueType]]) -> bool:
268 if set(search_space.keys()) != set(self._search_space.keys()):
269 return False
270
271 for param_name in search_space.keys():
272 if len(search_space[param_name]) != len(self._search_space[param_name]):
273 return False
274
275 for i, param_value in enumerate(search_space[param_name]):
276 if not self._grid_value_equal(param_value, self._search_space[param_name][i]):
277 return False
278
279 return True
280
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/optuna/samplers/_grid.py b/optuna/samplers/_grid.py
--- a/optuna/samplers/_grid.py
+++ b/optuna/samplers/_grid.py
@@ -277,3 +277,9 @@
return False
return True
+
+ def is_exhausted(self, study: Study) -> bool:
+ """
+ Return True if all the possible params are evaluated, otherwise return False.
+ """
+ return len(self._get_unvisited_grid_ids(study)) == 0
| {"golden_diff": "diff --git a/optuna/samplers/_grid.py b/optuna/samplers/_grid.py\n--- a/optuna/samplers/_grid.py\n+++ b/optuna/samplers/_grid.py\n@@ -277,3 +277,9 @@\n return False\n \n return True\n+\n+ def is_exhausted(self, study: Study) -> bool:\n+ \"\"\"\n+ Return True if all the possible params are evaluated, otherwise return False.\n+ \"\"\"\n+ return len(self._get_unvisited_grid_ids(study)) == 0\n", "issue": "Feature to determine whether `GridSampler` has exhausted the grid\n### Motivation\r\n\r\nCurrently, there is no good way in which we can determine whether the grid of `GridSampler` is exhausted when resuming a study.\r\nWhile the method `_get_unvisited_grid_ids()` exists, it is a protected method and should not be used outside of the library.\r\n\r\n### Description\r\n\r\nI suggest adding a method called something like `is_exhausted()` that is public and simply returns `len(self._get_unvisited_grid_ids()) == 0`\r\n\r\n### Alternatives (optional)\r\n\r\nAlternatively, it may be useful to add keyword arguments to be passed to `GridSampler` that give more control over what happens if the grid is already exhausted. For example, one could add a keyword like `reevaluate_when_exhausted: bool`.\r\n\r\n### Additional context (optional)\r\n\r\nI encountered this suggestion when receiving the following warning: `GridSampler` is re-evaluating a configuration because the grid has been exhausted. This may happen due to a timing issue during distributed optimization or when re-running optimizations on already finished studies.\"\r\n\r\nThis issue further builds upon the discussion of https://github.com/optuna/optuna/issues/3256\n", "before_files": [{"content": "import itertools\nfrom numbers import Real\nfrom typing import Any\nfrom typing import Dict\nfrom typing import List\nfrom typing import Mapping\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Union\nimport warnings\n\nimport numpy as np\n\nfrom optuna.distributions import BaseDistribution\nfrom optuna.logging import get_logger\nfrom optuna.samplers import BaseSampler\nfrom optuna.samplers._lazy_random_state import LazyRandomState\nfrom optuna.study import Study\nfrom optuna.trial import FrozenTrial\nfrom optuna.trial import TrialState\n\n\nGridValueType = Union[str, float, int, bool, None]\n\n\n_logger = get_logger(__name__)\n\n\nclass GridSampler(BaseSampler):\n \"\"\"Sampler using grid search.\n\n With :class:`~optuna.samplers.GridSampler`, the trials suggest all combinations of parameters\n in the given search space during the study.\n\n Example:\n\n .. testcode::\n\n import optuna\n\n\n def objective(trial):\n x = trial.suggest_float(\"x\", -100, 100)\n y = trial.suggest_int(\"y\", -100, 100)\n return x**2 + y**2\n\n\n search_space = {\"x\": [-50, 0, 50], \"y\": [-99, 0, 99]}\n study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))\n study.optimize(objective)\n\n Note:\n\n This sampler with :ref:`ask_and_tell` raises :exc:`RuntimeError` just after evaluating\n the final grid. 
This is because :class:`~optuna.samplers.GridSampler` automatically\n stops the optimization if all combinations in the passed ``search_space`` have already\n been evaluated, internally invoking the :func:`~optuna.study.Study.stop` method.\n As a workaround, we need to handle the error manually as in\n https://github.com/optuna/optuna/issues/4121#issuecomment-1305289910.\n\n Note:\n\n :class:`~optuna.samplers.GridSampler` does not take care of a parameter's quantization\n specified by discrete suggest methods but just samples one of values specified in the\n search space. E.g., in the following code snippet, either of ``-0.5`` or ``0.5`` is\n sampled as ``x`` instead of an integer point.\n\n .. testcode::\n\n import optuna\n\n\n def objective(trial):\n # The following suggest method specifies integer points between -5 and 5.\n x = trial.suggest_float(\"x\", -5, 5, step=1)\n return x**2\n\n\n # Non-int points are specified in the grid.\n search_space = {\"x\": [-0.5, 0.5]}\n study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))\n study.optimize(objective, n_trials=2)\n\n Note:\n A parameter configuration in the grid is not considered finished until its trial is\n finished. Therefore, during distributed optimization where trials run concurrently,\n different workers will occasionally suggest the same parameter configuration.\n The total number of actual trials may therefore exceed the size of the grid.\n\n Note:\n All parameters must be specified when using :class:`~optuna.samplers.GridSampler` with\n :meth:`~optuna.study.Study.enqueue_trial`.\n\n Args:\n search_space:\n A dictionary whose key and value are a parameter name and the corresponding candidates\n of values, respectively.\n seed:\n A seed to fix the order of trials as the grid is randomly shuffled. Please note that\n it is not recommended using this option in distributed optimization settings since\n this option cannot ensure the order of trials and may increase the number of duplicate\n suggestions during distributed optimization.\n \"\"\"\n\n def __init__(\n self, search_space: Mapping[str, Sequence[GridValueType]], seed: Optional[int] = None\n ) -> None:\n for param_name, param_values in search_space.items():\n for value in param_values:\n self._check_value(param_name, value)\n\n self._search_space = {}\n for param_name, param_values in sorted(search_space.items()):\n self._search_space[param_name] = list(param_values)\n\n self._all_grids = list(itertools.product(*self._search_space.values()))\n self._param_names = sorted(search_space.keys())\n self._n_min_trials = len(self._all_grids)\n self._rng = LazyRandomState(seed)\n self._rng.rng.shuffle(self._all_grids)\n\n def reseed_rng(self) -> None:\n self._rng.rng.seed()\n\n def before_trial(self, study: Study, trial: FrozenTrial) -> None:\n # Instead of returning param values, GridSampler puts the target grid id as a system attr,\n # and the values are returned from `sample_independent`. 
This is because the distribution\n # object is hard to get at the beginning of trial, while we need the access to the object\n # to validate the sampled value.\n\n # When the trial is created by RetryFailedTrialCallback or enqueue_trial, we should not\n # assign a new grid_id.\n if \"grid_id\" in trial.system_attrs or \"fixed_params\" in trial.system_attrs:\n return\n\n if 0 <= trial.number and trial.number < self._n_min_trials:\n study._storage.set_trial_system_attr(\n trial._trial_id, \"search_space\", self._search_space\n )\n study._storage.set_trial_system_attr(trial._trial_id, \"grid_id\", trial.number)\n return\n\n target_grids = self._get_unvisited_grid_ids(study)\n\n if len(target_grids) == 0:\n # This case may occur with distributed optimization or trial queue. If there is no\n # target grid, `GridSampler` evaluates a visited, duplicated point with the current\n # trial. After that, the optimization stops.\n\n _logger.warning(\n \"`GridSampler` is re-evaluating a configuration because the grid has been \"\n \"exhausted. This may happen due to a timing issue during distributed optimization \"\n \"or when re-running optimizations on already finished studies.\"\n )\n\n # One of all grids is randomly picked up in this case.\n target_grids = list(range(len(self._all_grids)))\n\n # In distributed optimization, multiple workers may simultaneously pick up the same grid.\n # To make the conflict less frequent, the grid is chosen randomly.\n grid_id = int(self._rng.rng.choice(target_grids))\n\n study._storage.set_trial_system_attr(trial._trial_id, \"search_space\", self._search_space)\n study._storage.set_trial_system_attr(trial._trial_id, \"grid_id\", grid_id)\n\n def infer_relative_search_space(\n self, study: Study, trial: FrozenTrial\n ) -> Dict[str, BaseDistribution]:\n return {}\n\n def sample_relative(\n self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]\n ) -> Dict[str, Any]:\n return {}\n\n def sample_independent(\n self,\n study: Study,\n trial: FrozenTrial,\n param_name: str,\n param_distribution: BaseDistribution,\n ) -> Any:\n if \"grid_id\" not in trial.system_attrs:\n message = \"All parameters must be specified when using GridSampler with enqueue_trial.\"\n raise ValueError(message)\n\n if param_name not in self._search_space:\n message = \"The parameter name, {}, is not found in the given grid.\".format(param_name)\n raise ValueError(message)\n\n # TODO(c-bata): Reduce the number of duplicated evaluations on multiple workers.\n # Current selection logic may evaluate the same parameters multiple times.\n # See https://gist.github.com/c-bata/f759f64becb24eea2040f4b2e3afce8f for details.\n grid_id = trial.system_attrs[\"grid_id\"]\n param_value = self._all_grids[grid_id][self._param_names.index(param_name)]\n contains = param_distribution._contains(param_distribution.to_internal_repr(param_value))\n if not contains:\n warnings.warn(\n f\"The value `{param_value}` is out of range of the parameter `{param_name}`. 
\"\n f\"The value will be used but the actual distribution is: `{param_distribution}`.\"\n )\n\n return param_value\n\n def after_trial(\n self,\n study: Study,\n trial: FrozenTrial,\n state: TrialState,\n values: Optional[Sequence[float]],\n ) -> None:\n target_grids = self._get_unvisited_grid_ids(study)\n\n if len(target_grids) == 0:\n study.stop()\n elif len(target_grids) == 1:\n grid_id = study._storage.get_trial_system_attrs(trial._trial_id)[\"grid_id\"]\n if grid_id == target_grids[0]:\n study.stop()\n\n @staticmethod\n def _check_value(param_name: str, param_value: Any) -> None:\n if param_value is None or isinstance(param_value, (str, int, float, bool)):\n return\n\n message = (\n \"{} contains a value with the type of {}, which is not supported by \"\n \"`GridSampler`. Please make sure a value is `str`, `int`, `float`, `bool`\"\n \" or `None` for persistent storage.\".format(param_name, type(param_value))\n )\n warnings.warn(message)\n\n def _get_unvisited_grid_ids(self, study: Study) -> List[int]:\n # List up unvisited grids based on already finished ones.\n visited_grids = []\n running_grids = []\n\n # We directly query the storage to get trials here instead of `study.get_trials`,\n # since some pruners such as `HyperbandPruner` use the study transformed\n # to filter trials. See https://github.com/optuna/optuna/issues/2327 for details.\n trials = study._storage.get_all_trials(study._study_id, deepcopy=False)\n\n for t in trials:\n if \"grid_id\" in t.system_attrs and self._same_search_space(\n t.system_attrs[\"search_space\"]\n ):\n if t.state.is_finished():\n visited_grids.append(t.system_attrs[\"grid_id\"])\n elif t.state == TrialState.RUNNING:\n running_grids.append(t.system_attrs[\"grid_id\"])\n\n unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids) - set(running_grids)\n\n # If evaluations for all grids have been started, return grids that have not yet finished\n # because all grids should be evaluated before stopping the optimization.\n if len(unvisited_grids) == 0:\n unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids)\n\n return list(unvisited_grids)\n\n @staticmethod\n def _grid_value_equal(value1: GridValueType, value2: GridValueType) -> bool:\n value1_is_nan = isinstance(value1, Real) and np.isnan(float(value1))\n value2_is_nan = isinstance(value2, Real) and np.isnan(float(value2))\n return (value1 == value2) or (value1_is_nan and value2_is_nan)\n\n def _same_search_space(self, search_space: Mapping[str, Sequence[GridValueType]]) -> bool:\n if set(search_space.keys()) != set(self._search_space.keys()):\n return False\n\n for param_name in search_space.keys():\n if len(search_space[param_name]) != len(self._search_space[param_name]):\n return False\n\n for i, param_value in enumerate(search_space[param_name]):\n if not self._grid_value_equal(param_value, self._search_space[param_name][i]):\n return False\n\n return True\n", "path": "optuna/samplers/_grid.py"}], "after_files": [{"content": "import itertools\nfrom numbers import Real\nfrom typing import Any\nfrom typing import Dict\nfrom typing import List\nfrom typing import Mapping\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Union\nimport warnings\n\nimport numpy as np\n\nfrom optuna.distributions import BaseDistribution\nfrom optuna.logging import get_logger\nfrom optuna.samplers import BaseSampler\nfrom optuna.samplers._lazy_random_state import LazyRandomState\nfrom optuna.study import Study\nfrom optuna.trial import FrozenTrial\nfrom optuna.trial 
import TrialState\n\n\nGridValueType = Union[str, float, int, bool, None]\n\n\n_logger = get_logger(__name__)\n\n\nclass GridSampler(BaseSampler):\n \"\"\"Sampler using grid search.\n\n With :class:`~optuna.samplers.GridSampler`, the trials suggest all combinations of parameters\n in the given search space during the study.\n\n Example:\n\n .. testcode::\n\n import optuna\n\n\n def objective(trial):\n x = trial.suggest_float(\"x\", -100, 100)\n y = trial.suggest_int(\"y\", -100, 100)\n return x**2 + y**2\n\n\n search_space = {\"x\": [-50, 0, 50], \"y\": [-99, 0, 99]}\n study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))\n study.optimize(objective)\n\n Note:\n\n This sampler with :ref:`ask_and_tell` raises :exc:`RuntimeError` just after evaluating\n the final grid. This is because :class:`~optuna.samplers.GridSampler` automatically\n stops the optimization if all combinations in the passed ``search_space`` have already\n been evaluated, internally invoking the :func:`~optuna.study.Study.stop` method.\n As a workaround, we need to handle the error manually as in\n https://github.com/optuna/optuna/issues/4121#issuecomment-1305289910.\n\n Note:\n\n :class:`~optuna.samplers.GridSampler` does not take care of a parameter's quantization\n specified by discrete suggest methods but just samples one of values specified in the\n search space. E.g., in the following code snippet, either of ``-0.5`` or ``0.5`` is\n sampled as ``x`` instead of an integer point.\n\n .. testcode::\n\n import optuna\n\n\n def objective(trial):\n # The following suggest method specifies integer points between -5 and 5.\n x = trial.suggest_float(\"x\", -5, 5, step=1)\n return x**2\n\n\n # Non-int points are specified in the grid.\n search_space = {\"x\": [-0.5, 0.5]}\n study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))\n study.optimize(objective, n_trials=2)\n\n Note:\n A parameter configuration in the grid is not considered finished until its trial is\n finished. Therefore, during distributed optimization where trials run concurrently,\n different workers will occasionally suggest the same parameter configuration.\n The total number of actual trials may therefore exceed the size of the grid.\n\n Note:\n All parameters must be specified when using :class:`~optuna.samplers.GridSampler` with\n :meth:`~optuna.study.Study.enqueue_trial`.\n\n Args:\n search_space:\n A dictionary whose key and value are a parameter name and the corresponding candidates\n of values, respectively.\n seed:\n A seed to fix the order of trials as the grid is randomly shuffled. 
Please note that\n it is not recommended using this option in distributed optimization settings since\n this option cannot ensure the order of trials and may increase the number of duplicate\n suggestions during distributed optimization.\n \"\"\"\n\n def __init__(\n self, search_space: Mapping[str, Sequence[GridValueType]], seed: Optional[int] = None\n ) -> None:\n for param_name, param_values in search_space.items():\n for value in param_values:\n self._check_value(param_name, value)\n\n self._search_space = {}\n for param_name, param_values in sorted(search_space.items()):\n self._search_space[param_name] = list(param_values)\n\n self._all_grids = list(itertools.product(*self._search_space.values()))\n self._param_names = sorted(search_space.keys())\n self._n_min_trials = len(self._all_grids)\n self._rng = LazyRandomState(seed)\n self._rng.rng.shuffle(self._all_grids)\n\n def reseed_rng(self) -> None:\n self._rng.rng.seed()\n\n def before_trial(self, study: Study, trial: FrozenTrial) -> None:\n # Instead of returning param values, GridSampler puts the target grid id as a system attr,\n # and the values are returned from `sample_independent`. This is because the distribution\n # object is hard to get at the beginning of trial, while we need the access to the object\n # to validate the sampled value.\n\n # When the trial is created by RetryFailedTrialCallback or enqueue_trial, we should not\n # assign a new grid_id.\n if \"grid_id\" in trial.system_attrs or \"fixed_params\" in trial.system_attrs:\n return\n\n if 0 <= trial.number and trial.number < self._n_min_trials:\n study._storage.set_trial_system_attr(\n trial._trial_id, \"search_space\", self._search_space\n )\n study._storage.set_trial_system_attr(trial._trial_id, \"grid_id\", trial.number)\n return\n\n target_grids = self._get_unvisited_grid_ids(study)\n\n if len(target_grids) == 0:\n # This case may occur with distributed optimization or trial queue. If there is no\n # target grid, `GridSampler` evaluates a visited, duplicated point with the current\n # trial. After that, the optimization stops.\n\n _logger.warning(\n \"`GridSampler` is re-evaluating a configuration because the grid has been \"\n \"exhausted. 
This may happen due to a timing issue during distributed optimization \"\n \"or when re-running optimizations on already finished studies.\"\n )\n\n # One of all grids is randomly picked up in this case.\n target_grids = list(range(len(self._all_grids)))\n\n # In distributed optimization, multiple workers may simultaneously pick up the same grid.\n # To make the conflict less frequent, the grid is chosen randomly.\n grid_id = int(self._rng.rng.choice(target_grids))\n\n study._storage.set_trial_system_attr(trial._trial_id, \"search_space\", self._search_space)\n study._storage.set_trial_system_attr(trial._trial_id, \"grid_id\", grid_id)\n\n def infer_relative_search_space(\n self, study: Study, trial: FrozenTrial\n ) -> Dict[str, BaseDistribution]:\n return {}\n\n def sample_relative(\n self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]\n ) -> Dict[str, Any]:\n return {}\n\n def sample_independent(\n self,\n study: Study,\n trial: FrozenTrial,\n param_name: str,\n param_distribution: BaseDistribution,\n ) -> Any:\n if \"grid_id\" not in trial.system_attrs:\n message = \"All parameters must be specified when using GridSampler with enqueue_trial.\"\n raise ValueError(message)\n\n if param_name not in self._search_space:\n message = \"The parameter name, {}, is not found in the given grid.\".format(param_name)\n raise ValueError(message)\n\n # TODO(c-bata): Reduce the number of duplicated evaluations on multiple workers.\n # Current selection logic may evaluate the same parameters multiple times.\n # See https://gist.github.com/c-bata/f759f64becb24eea2040f4b2e3afce8f for details.\n grid_id = trial.system_attrs[\"grid_id\"]\n param_value = self._all_grids[grid_id][self._param_names.index(param_name)]\n contains = param_distribution._contains(param_distribution.to_internal_repr(param_value))\n if not contains:\n warnings.warn(\n f\"The value `{param_value}` is out of range of the parameter `{param_name}`. \"\n f\"The value will be used but the actual distribution is: `{param_distribution}`.\"\n )\n\n return param_value\n\n def after_trial(\n self,\n study: Study,\n trial: FrozenTrial,\n state: TrialState,\n values: Optional[Sequence[float]],\n ) -> None:\n target_grids = self._get_unvisited_grid_ids(study)\n\n if len(target_grids) == 0:\n study.stop()\n elif len(target_grids) == 1:\n grid_id = study._storage.get_trial_system_attrs(trial._trial_id)[\"grid_id\"]\n if grid_id == target_grids[0]:\n study.stop()\n\n @staticmethod\n def _check_value(param_name: str, param_value: Any) -> None:\n if param_value is None or isinstance(param_value, (str, int, float, bool)):\n return\n\n message = (\n \"{} contains a value with the type of {}, which is not supported by \"\n \"`GridSampler`. Please make sure a value is `str`, `int`, `float`, `bool`\"\n \" or `None` for persistent storage.\".format(param_name, type(param_value))\n )\n warnings.warn(message)\n\n def _get_unvisited_grid_ids(self, study: Study) -> List[int]:\n # List up unvisited grids based on already finished ones.\n visited_grids = []\n running_grids = []\n\n # We directly query the storage to get trials here instead of `study.get_trials`,\n # since some pruners such as `HyperbandPruner` use the study transformed\n # to filter trials. 
See https://github.com/optuna/optuna/issues/2327 for details.\n trials = study._storage.get_all_trials(study._study_id, deepcopy=False)\n\n for t in trials:\n if \"grid_id\" in t.system_attrs and self._same_search_space(\n t.system_attrs[\"search_space\"]\n ):\n if t.state.is_finished():\n visited_grids.append(t.system_attrs[\"grid_id\"])\n elif t.state == TrialState.RUNNING:\n running_grids.append(t.system_attrs[\"grid_id\"])\n\n unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids) - set(running_grids)\n\n # If evaluations for all grids have been started, return grids that have not yet finished\n # because all grids should be evaluated before stopping the optimization.\n if len(unvisited_grids) == 0:\n unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids)\n\n return list(unvisited_grids)\n\n @staticmethod\n def _grid_value_equal(value1: GridValueType, value2: GridValueType) -> bool:\n value1_is_nan = isinstance(value1, Real) and np.isnan(float(value1))\n value2_is_nan = isinstance(value2, Real) and np.isnan(float(value2))\n return (value1 == value2) or (value1_is_nan and value2_is_nan)\n\n def _same_search_space(self, search_space: Mapping[str, Sequence[GridValueType]]) -> bool:\n if set(search_space.keys()) != set(self._search_space.keys()):\n return False\n\n for param_name in search_space.keys():\n if len(search_space[param_name]) != len(self._search_space[param_name]):\n return False\n\n for i, param_value in enumerate(search_space[param_name]):\n if not self._grid_value_equal(param_value, self._search_space[param_name][i]):\n return False\n\n return True\n\n def is_exhausted(self, study: Study) -> bool:\n \"\"\"\n Return True if all the possible params are evaluated, otherwise return False.\n \"\"\"\n return len(self._get_unvisited_grid_ids(study)) == 0\n", "path": "optuna/samplers/_grid.py"}]} | 3,822 | 123 |
gh_patches_debug_26432 | rasdani/github-patches | git_diff | stephenmcd__mezzanine-846 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnicodeDecodeError ('ascii' codec can't decode) when memcached enabled
I can reproduce this on Ubuntu 12 and CentOS 6.
memcached installed and enabled.
Install new virtualenv and activate
~/venv/workingvirtualenv
source ~/venv/workingvirtualenv/bin/activate
pip install mezzanine
pip install python-memcached
mezzanine-project testproject
Add the live_settings.py cache settings to local_settings.py (an example cache block is sketched at the end of this report).
Now create the DB
python manage.py createdb
Select all defaults and answer yes to the questions, including creating the demo data.
I can browse the demo pages, including /gallery, with no errors, and caching appears to be working.
Now create and activate a new virtualenv and install everything exactly the same, only this time install mezzanine from GitHub.
git clone https://github.com/stephenmcd/mezzanine.git
cd mezzanine
python setup.py install
Now when you try to browse the pages, they will work until you go to /gallery/, where you will get the following error.
The error is:
"UnicodeDecodeError: 'ascii' codec can't decode byte 0xcc in position 31: ordinal not in range(128)"
If I install the DB with caching not configured in local_settings.py, I can browse to /gallery with no error. Then when I try to enable caching and browse to /gallery, I get the error again. I also tried recreating the DB in the new virtualenv with the newer version of mezzanine. The problem still occurs.
I can reproduce all this when I use MySQL instead of SQLite as well. Also occurs on blogposts. My live site is using this github version of Mezzanine because my theme uses bootstrap3. Just want to get caching working now.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mezzanine/core/middleware.py`
Content:
```
1 from __future__ import unicode_literals
2
3 from future.utils import native_str
4
5 from django.contrib import admin
6 from django.contrib.auth import logout
7 from django.contrib.redirects.models import Redirect
8 from django.core.exceptions import MiddlewareNotUsed
9 from django.core.urlresolvers import reverse
10 from django.http import (HttpResponse, HttpResponseRedirect,
11 HttpResponsePermanentRedirect, HttpResponseGone)
12 from django.utils.cache import get_max_age
13 from django.template import Template, RequestContext
14 from django.middleware.csrf import CsrfViewMiddleware, get_token
15
16 from mezzanine.conf import settings
17 from mezzanine.core.models import SitePermission
18 from mezzanine.utils.cache import (cache_key_prefix, nevercache_token,
19 cache_get, cache_set, cache_installed)
20 from mezzanine.utils.device import templates_for_device
21 from mezzanine.utils.sites import current_site_id, templates_for_host
22 from mezzanine.utils.urls import next_url
23
24
25 _deprecated = {
26 "AdminLoginInterfaceSelector": "AdminLoginInterfaceSelectorMiddleware",
27 "DeviceAwareUpdateCacheMiddleware": "UpdateCacheMiddleware",
28 "DeviceAwareFetchFromCacheMiddleware": "FetchFromCacheMiddleware",
29 }
30
31
32 class _Deprecated(object):
33 def __init__(self, *args, **kwargs):
34 from warnings import warn
35 msg = "mezzanine.core.middleware.%s is deprecated." % self.old
36 if self.new:
37 msg += (" Please change the MIDDLEWARE_CLASSES setting to use "
38 "mezzanine.core.middleware.%s" % self.new)
39 warn(msg)
40
41 for old, new in _deprecated.items():
42 globals()[old] = type(native_str(old),
43 (_Deprecated,),
44 {"old": old, "new": new})
45
46
47 class AdminLoginInterfaceSelectorMiddleware(object):
48 """
49 Checks for a POST from the admin login view and if authentication is
50 successful and the "site" interface is selected, redirect to the site.
51 """
52 def process_view(self, request, view_func, view_args, view_kwargs):
53 login_type = request.POST.get("mezzanine_login_interface")
54 if login_type and not request.user.is_authenticated():
55 response = view_func(request, *view_args, **view_kwargs)
56 if request.user.is_authenticated():
57 if login_type == "admin":
58 next = request.get_full_path()
59 else:
60 next = next_url(request) or "/"
61 return HttpResponseRedirect(next)
62 else:
63 return response
64 return None
65
66
67 class SitePermissionMiddleware(object):
68 """
69 Marks the current user with a ``has_site_permission`` which is
70 used in place of ``user.is_staff`` to achieve per-site staff
71 access.
72 """
73 def process_view(self, request, view_func, view_args, view_kwargs):
74 has_site_permission = False
75 if request.user.is_superuser:
76 has_site_permission = True
77 elif request.user.is_staff:
78 lookup = {"user": request.user, "sites": current_site_id()}
79 try:
80 SitePermission.objects.get(**lookup)
81 except SitePermission.DoesNotExist:
82 admin_index = reverse("admin:index")
83 if request.path.startswith(admin_index):
84 logout(request)
85 view_func = admin.site.login
86 extra_context = {"no_site_permission": True}
87 return view_func(request, extra_context=extra_context)
88 else:
89 has_site_permission = True
90 request.user.has_site_permission = has_site_permission
91
92
93 class TemplateForDeviceMiddleware(object):
94 """
95 Inserts device-specific templates to the template list.
96 """
97 def process_template_response(self, request, response):
98 if hasattr(response, "template_name"):
99 if not isinstance(response.template_name, Template):
100 templates = templates_for_device(request,
101 response.template_name)
102 response.template_name = templates
103 return response
104
105
106 class TemplateForHostMiddleware(object):
107 """
108 Inserts host-specific templates to the template list.
109 """
110 def process_template_response(self, request, response):
111 if hasattr(response, "template_name"):
112 if not isinstance(response.template_name, Template):
113 templates = templates_for_host(request,
114 response.template_name)
115 response.template_name = templates
116 return response
117
118
119 class UpdateCacheMiddleware(object):
120 """
121 Response phase for Mezzanine's cache middleware. Handles caching
122 the response, and then performing the second phase of rendering,
123 for content enclosed by the ``nevercache`` tag.
124 """
125
126 def process_response(self, request, response):
127
128 # Cache the response if all the required conditions are met.
129 # Response must be marked for updating by the
130 # ``FetchFromCacheMiddleware`` having a cache get miss, the
131 # user must not be authenticated, the HTTP status must be OK
132 # and the response mustn't include an expiry age, incicating it
133 # shouldn't be cached.
134 marked_for_update = getattr(request, "_update_cache", False)
135 anon = hasattr(request, "user") and not request.user.is_authenticated()
136 valid_status = response.status_code == 200
137 timeout = get_max_age(response)
138 if timeout is None:
139 timeout = settings.CACHE_MIDDLEWARE_SECONDS
140 if anon and valid_status and marked_for_update and timeout:
141 cache_key = cache_key_prefix(request) + request.get_full_path()
142 _cache_set = lambda r: cache_set(cache_key, r.content, timeout)
143 if callable(getattr(response, "render", None)):
144 response.add_post_render_callback(_cache_set)
145 else:
146 _cache_set(response)
147
148 # Second phase rendering for non-cached template code and
149 # content. Split on the delimiter the ``nevercache`` tag
150 # wrapped its contents in, and render only the content
151 # enclosed by it, to avoid possible template code injection.
152 parts = response.content.split(nevercache_token())
153 content_type = response.get("content-type", "")
154 if content_type.startswith("text") and len(parts) > 1:
155 # Restore csrf token from cookie - check the response
156 # first as it may be being set for the first time.
157 csrf_token = None
158 try:
159 csrf_token = response.cookies[settings.CSRF_COOKIE_NAME].value
160 except KeyError:
161 try:
162 csrf_token = request.COOKIES[settings.CSRF_COOKIE_NAME]
163 except KeyError:
164 pass
165 if csrf_token:
166 request.META["CSRF_COOKIE"] = csrf_token
167 context = RequestContext(request)
168 for i, part in enumerate(parts):
169 if i % 2:
170 part = Template(part).render(context).encode("utf-8")
171 parts[i] = part
172 response.content = "".join(parts)
173 response["Content-Length"] = len(response.content)
174 if hasattr(request, '_messages'):
175 # Required to clear out user messages.
176 request._messages.update(response)
177 return response
178
179
180 class FetchFromCacheMiddleware(object):
181 """
182 Request phase for Mezzanine cache middleware. Return a response
183 from cache if found, othwerwise mark the request for updating
184 the cache in ``UpdateCacheMiddleware``.
185 """
186
187 def process_request(self, request):
188 if (cache_installed() and request.method == "GET" and
189 not request.user.is_authenticated()):
190 cache_key = cache_key_prefix(request) + request.get_full_path()
191 response = cache_get(cache_key)
192 # We need to force a csrf token here, as new sessions
193 # won't receieve one on their first request, with cache
194 # middleware running.
195 csrf_mw_name = "django.middleware.csrf.CsrfViewMiddleware"
196 if csrf_mw_name in settings.MIDDLEWARE_CLASSES:
197 csrf_mw = CsrfViewMiddleware()
198 csrf_mw.process_view(request, lambda x: None, None, None)
199 get_token(request)
200 if response is None:
201 request._update_cache = True
202 else:
203 return HttpResponse(response)
204
205
206 class SSLRedirectMiddleware(object):
207 """
208 Handles redirections required for SSL when ``SSL_ENABLED`` is ``True``.
209
210 If ``SSL_FORCE_HOST`` is ``True``, and is not the current host,
211 redirect to it.
212
213 Also ensure URLs defined by ``SSL_FORCE_URL_PREFIXES`` are redirect
214 to HTTPS, and redirect all other URLs to HTTP if on HTTPS.
215 """
216 def process_request(self, request):
217 settings.use_editable()
218 force_host = settings.SSL_FORCE_HOST
219 if force_host and request.get_host().split(":")[0] != force_host:
220 url = "http://%s%s" % (force_host, request.get_full_path())
221 return HttpResponsePermanentRedirect(url)
222 if settings.SSL_ENABLED and not settings.DEV_SERVER:
223 url = "%s%s" % (request.get_host(), request.get_full_path())
224 if request.path.startswith(settings.SSL_FORCE_URL_PREFIXES):
225 if not request.is_secure():
226 return HttpResponseRedirect("https://%s" % url)
227 elif request.is_secure() and settings.SSL_FORCED_PREFIXES_ONLY:
228 return HttpResponseRedirect("http://%s" % url)
229
230
231 class RedirectFallbackMiddleware(object):
232 """
233 Port of Django's ``RedirectFallbackMiddleware`` that uses
234 Mezzanine's approach for determining the current site.
235 """
236
237 def __init__(self):
238 if "django.contrib.redirects" not in settings.INSTALLED_APPS:
239 raise MiddlewareNotUsed
240
241 def process_response(self, request, response):
242 if response.status_code == 404:
243 lookup = {
244 "site_id": current_site_id(),
245 "old_path": request.get_full_path(),
246 }
247 try:
248 redirect = Redirect.objects.get(**lookup)
249 except Redirect.DoesNotExist:
250 pass
251 else:
252 if not redirect.new_path:
253 response = HttpResponseGone()
254 else:
255 response = HttpResponseRedirect(redirect.new_path)
256 return response
257
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mezzanine/core/middleware.py b/mezzanine/core/middleware.py
--- a/mezzanine/core/middleware.py
+++ b/mezzanine/core/middleware.py
@@ -149,7 +149,12 @@
# content. Split on the delimiter the ``nevercache`` tag
# wrapped its contents in, and render only the content
# enclosed by it, to avoid possible template code injection.
- parts = response.content.split(nevercache_token())
+ token = nevercache_token()
+ try:
+ token = token.encode('utf-8')
+ except AttributeError:
+ pass
+ parts = response.content.split(token)
content_type = response.get("content-type", "")
if content_type.startswith("text") and len(parts) > 1:
# Restore csrf token from cookie - check the response
@@ -169,7 +174,7 @@
if i % 2:
part = Template(part).render(context).encode("utf-8")
parts[i] = part
- response.content = "".join(parts)
+ response.content = b"".join(parts)
response["Content-Length"] = len(response.content)
if hasattr(request, '_messages'):
# Required to clear out user messages.
| {"golden_diff": "diff --git a/mezzanine/core/middleware.py b/mezzanine/core/middleware.py\n--- a/mezzanine/core/middleware.py\n+++ b/mezzanine/core/middleware.py\n@@ -149,7 +149,12 @@\n # content. Split on the delimiter the ``nevercache`` tag\n # wrapped its contents in, and render only the content\n # enclosed by it, to avoid possible template code injection.\n- parts = response.content.split(nevercache_token())\n+ token = nevercache_token()\n+ try:\n+ token = token.encode('utf-8')\n+ except AttributeError:\n+ pass\n+ parts = response.content.split(token)\n content_type = response.get(\"content-type\", \"\")\n if content_type.startswith(\"text\") and len(parts) > 1:\n # Restore csrf token from cookie - check the response\n@@ -169,7 +174,7 @@\n if i % 2:\n part = Template(part).render(context).encode(\"utf-8\")\n parts[i] = part\n- response.content = \"\".join(parts)\n+ response.content = b\"\".join(parts)\n response[\"Content-Length\"] = len(response.content)\n if hasattr(request, '_messages'):\n # Required to clear out user messages.\n", "issue": "UnicodeDecodeError ('ascii' codec can't decode) when memcached enabled\nI can reproduce this on Ubuntu 12 and CentOS 6. \n\nmemcached installed and enabled. \n\nInstall new virtualenv and activate\n~/venv/workingvirtualenv\nsource ~/venv/workingvirtulaenv/bin/activate\n\npip install mezzanine\npip install python-memcached\nmezzanine-project testproject\n\nadd the live_settings.py cache settings to local_settings.py\n\nNow create the DB\npython manage.py createdb\n\nSelect all defaults and answer yes to questions including creating demo data..\nCan browse demo pages including /gallery with no errors and caching appears to be working.\n\nNow create and activate a new virtualenv and install everything exactly the same only this time install mezzanine from github.\ngit clone https://github.com/stephenmcd/mezzanine.git\ncd mezzanine\npython setup.py install\n\nNow when you try browse the pages they will work until you go to /gallery/ and you will get the following error.\n\nThe error is:\n\"UnicodeDecodeError: 'ascii' codec can't decode byte 0xcc in position 31: ordinal not in range(128)\"\n\nIf I install the DB with caching not configured in local_settings.py I can browse to /gallery with no error. Then when I try enable caching and browse to /gallery I get the error again. I also tried recreating the DB in the new virtualenv with the newer version of mezzanine. The problem still occurs.\n\nI can reproduce all this when I use MySQL instead of SQLite as well. Also occurs on blogposts. My live site is using this github version of Mezzanine because my theme uses bootstrap3. 
Just want to get caching working now.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom future.utils import native_str\n\nfrom django.contrib import admin\nfrom django.contrib.auth import logout\nfrom django.contrib.redirects.models import Redirect\nfrom django.core.exceptions import MiddlewareNotUsed\nfrom django.core.urlresolvers import reverse\nfrom django.http import (HttpResponse, HttpResponseRedirect,\n HttpResponsePermanentRedirect, HttpResponseGone)\nfrom django.utils.cache import get_max_age\nfrom django.template import Template, RequestContext\nfrom django.middleware.csrf import CsrfViewMiddleware, get_token\n\nfrom mezzanine.conf import settings\nfrom mezzanine.core.models import SitePermission\nfrom mezzanine.utils.cache import (cache_key_prefix, nevercache_token,\n cache_get, cache_set, cache_installed)\nfrom mezzanine.utils.device import templates_for_device\nfrom mezzanine.utils.sites import current_site_id, templates_for_host\nfrom mezzanine.utils.urls import next_url\n\n\n_deprecated = {\n \"AdminLoginInterfaceSelector\": \"AdminLoginInterfaceSelectorMiddleware\",\n \"DeviceAwareUpdateCacheMiddleware\": \"UpdateCacheMiddleware\",\n \"DeviceAwareFetchFromCacheMiddleware\": \"FetchFromCacheMiddleware\",\n}\n\n\nclass _Deprecated(object):\n def __init__(self, *args, **kwargs):\n from warnings import warn\n msg = \"mezzanine.core.middleware.%s is deprecated.\" % self.old\n if self.new:\n msg += (\" Please change the MIDDLEWARE_CLASSES setting to use \"\n \"mezzanine.core.middleware.%s\" % self.new)\n warn(msg)\n\nfor old, new in _deprecated.items():\n globals()[old] = type(native_str(old),\n (_Deprecated,),\n {\"old\": old, \"new\": new})\n\n\nclass AdminLoginInterfaceSelectorMiddleware(object):\n \"\"\"\n Checks for a POST from the admin login view and if authentication is\n successful and the \"site\" interface is selected, redirect to the site.\n \"\"\"\n def process_view(self, request, view_func, view_args, view_kwargs):\n login_type = request.POST.get(\"mezzanine_login_interface\")\n if login_type and not request.user.is_authenticated():\n response = view_func(request, *view_args, **view_kwargs)\n if request.user.is_authenticated():\n if login_type == \"admin\":\n next = request.get_full_path()\n else:\n next = next_url(request) or \"/\"\n return HttpResponseRedirect(next)\n else:\n return response\n return None\n\n\nclass SitePermissionMiddleware(object):\n \"\"\"\n Marks the current user with a ``has_site_permission`` which is\n used in place of ``user.is_staff`` to achieve per-site staff\n access.\n \"\"\"\n def process_view(self, request, view_func, view_args, view_kwargs):\n has_site_permission = False\n if request.user.is_superuser:\n has_site_permission = True\n elif request.user.is_staff:\n lookup = {\"user\": request.user, \"sites\": current_site_id()}\n try:\n SitePermission.objects.get(**lookup)\n except SitePermission.DoesNotExist:\n admin_index = reverse(\"admin:index\")\n if request.path.startswith(admin_index):\n logout(request)\n view_func = admin.site.login\n extra_context = {\"no_site_permission\": True}\n return view_func(request, extra_context=extra_context)\n else:\n has_site_permission = True\n request.user.has_site_permission = has_site_permission\n\n\nclass TemplateForDeviceMiddleware(object):\n \"\"\"\n Inserts device-specific templates to the template list.\n \"\"\"\n def process_template_response(self, request, response):\n if hasattr(response, \"template_name\"):\n if not isinstance(response.template_name, 
Template):\n templates = templates_for_device(request,\n response.template_name)\n response.template_name = templates\n return response\n\n\nclass TemplateForHostMiddleware(object):\n \"\"\"\n Inserts host-specific templates to the template list.\n \"\"\"\n def process_template_response(self, request, response):\n if hasattr(response, \"template_name\"):\n if not isinstance(response.template_name, Template):\n templates = templates_for_host(request,\n response.template_name)\n response.template_name = templates\n return response\n\n\nclass UpdateCacheMiddleware(object):\n \"\"\"\n Response phase for Mezzanine's cache middleware. Handles caching\n the response, and then performing the second phase of rendering,\n for content enclosed by the ``nevercache`` tag.\n \"\"\"\n\n def process_response(self, request, response):\n\n # Cache the response if all the required conditions are met.\n # Response must be marked for updating by the\n # ``FetchFromCacheMiddleware`` having a cache get miss, the\n # user must not be authenticated, the HTTP status must be OK\n # and the response mustn't include an expiry age, incicating it\n # shouldn't be cached.\n marked_for_update = getattr(request, \"_update_cache\", False)\n anon = hasattr(request, \"user\") and not request.user.is_authenticated()\n valid_status = response.status_code == 200\n timeout = get_max_age(response)\n if timeout is None:\n timeout = settings.CACHE_MIDDLEWARE_SECONDS\n if anon and valid_status and marked_for_update and timeout:\n cache_key = cache_key_prefix(request) + request.get_full_path()\n _cache_set = lambda r: cache_set(cache_key, r.content, timeout)\n if callable(getattr(response, \"render\", None)):\n response.add_post_render_callback(_cache_set)\n else:\n _cache_set(response)\n\n # Second phase rendering for non-cached template code and\n # content. Split on the delimiter the ``nevercache`` tag\n # wrapped its contents in, and render only the content\n # enclosed by it, to avoid possible template code injection.\n parts = response.content.split(nevercache_token())\n content_type = response.get(\"content-type\", \"\")\n if content_type.startswith(\"text\") and len(parts) > 1:\n # Restore csrf token from cookie - check the response\n # first as it may be being set for the first time.\n csrf_token = None\n try:\n csrf_token = response.cookies[settings.CSRF_COOKIE_NAME].value\n except KeyError:\n try:\n csrf_token = request.COOKIES[settings.CSRF_COOKIE_NAME]\n except KeyError:\n pass\n if csrf_token:\n request.META[\"CSRF_COOKIE\"] = csrf_token\n context = RequestContext(request)\n for i, part in enumerate(parts):\n if i % 2:\n part = Template(part).render(context).encode(\"utf-8\")\n parts[i] = part\n response.content = \"\".join(parts)\n response[\"Content-Length\"] = len(response.content)\n if hasattr(request, '_messages'):\n # Required to clear out user messages.\n request._messages.update(response)\n return response\n\n\nclass FetchFromCacheMiddleware(object):\n \"\"\"\n Request phase for Mezzanine cache middleware. 
Return a response\n from cache if found, othwerwise mark the request for updating\n the cache in ``UpdateCacheMiddleware``.\n \"\"\"\n\n def process_request(self, request):\n if (cache_installed() and request.method == \"GET\" and\n not request.user.is_authenticated()):\n cache_key = cache_key_prefix(request) + request.get_full_path()\n response = cache_get(cache_key)\n # We need to force a csrf token here, as new sessions\n # won't receieve one on their first request, with cache\n # middleware running.\n csrf_mw_name = \"django.middleware.csrf.CsrfViewMiddleware\"\n if csrf_mw_name in settings.MIDDLEWARE_CLASSES:\n csrf_mw = CsrfViewMiddleware()\n csrf_mw.process_view(request, lambda x: None, None, None)\n get_token(request)\n if response is None:\n request._update_cache = True\n else:\n return HttpResponse(response)\n\n\nclass SSLRedirectMiddleware(object):\n \"\"\"\n Handles redirections required for SSL when ``SSL_ENABLED`` is ``True``.\n\n If ``SSL_FORCE_HOST`` is ``True``, and is not the current host,\n redirect to it.\n\n Also ensure URLs defined by ``SSL_FORCE_URL_PREFIXES`` are redirect\n to HTTPS, and redirect all other URLs to HTTP if on HTTPS.\n \"\"\"\n def process_request(self, request):\n settings.use_editable()\n force_host = settings.SSL_FORCE_HOST\n if force_host and request.get_host().split(\":\")[0] != force_host:\n url = \"http://%s%s\" % (force_host, request.get_full_path())\n return HttpResponsePermanentRedirect(url)\n if settings.SSL_ENABLED and not settings.DEV_SERVER:\n url = \"%s%s\" % (request.get_host(), request.get_full_path())\n if request.path.startswith(settings.SSL_FORCE_URL_PREFIXES):\n if not request.is_secure():\n return HttpResponseRedirect(\"https://%s\" % url)\n elif request.is_secure() and settings.SSL_FORCED_PREFIXES_ONLY:\n return HttpResponseRedirect(\"http://%s\" % url)\n\n\nclass RedirectFallbackMiddleware(object):\n \"\"\"\n Port of Django's ``RedirectFallbackMiddleware`` that uses\n Mezzanine's approach for determining the current site.\n \"\"\"\n\n def __init__(self):\n if \"django.contrib.redirects\" not in settings.INSTALLED_APPS:\n raise MiddlewareNotUsed\n\n def process_response(self, request, response):\n if response.status_code == 404:\n lookup = {\n \"site_id\": current_site_id(),\n \"old_path\": request.get_full_path(),\n }\n try:\n redirect = Redirect.objects.get(**lookup)\n except Redirect.DoesNotExist:\n pass\n else:\n if not redirect.new_path:\n response = HttpResponseGone()\n else:\n response = HttpResponseRedirect(redirect.new_path)\n return response\n", "path": "mezzanine/core/middleware.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nfrom future.utils import native_str\n\nfrom django.contrib import admin\nfrom django.contrib.auth import logout\nfrom django.contrib.redirects.models import Redirect\nfrom django.core.exceptions import MiddlewareNotUsed\nfrom django.core.urlresolvers import reverse\nfrom django.http import (HttpResponse, HttpResponseRedirect,\n HttpResponsePermanentRedirect, HttpResponseGone)\nfrom django.utils.cache import get_max_age\nfrom django.template import Template, RequestContext\nfrom django.middleware.csrf import CsrfViewMiddleware, get_token\n\nfrom mezzanine.conf import settings\nfrom mezzanine.core.models import SitePermission\nfrom mezzanine.utils.cache import (cache_key_prefix, nevercache_token,\n cache_get, cache_set, cache_installed)\nfrom mezzanine.utils.device import templates_for_device\nfrom mezzanine.utils.sites import current_site_id, templates_for_host\nfrom 
mezzanine.utils.urls import next_url\n\n\n_deprecated = {\n \"AdminLoginInterfaceSelector\": \"AdminLoginInterfaceSelectorMiddleware\",\n \"DeviceAwareUpdateCacheMiddleware\": \"UpdateCacheMiddleware\",\n \"DeviceAwareFetchFromCacheMiddleware\": \"FetchFromCacheMiddleware\",\n}\n\n\nclass _Deprecated(object):\n def __init__(self, *args, **kwargs):\n from warnings import warn\n msg = \"mezzanine.core.middleware.%s is deprecated.\" % self.old\n if self.new:\n msg += (\" Please change the MIDDLEWARE_CLASSES setting to use \"\n \"mezzanine.core.middleware.%s\" % self.new)\n warn(msg)\n\nfor old, new in _deprecated.items():\n globals()[old] = type(native_str(old),\n (_Deprecated,),\n {\"old\": old, \"new\": new})\n\n\nclass AdminLoginInterfaceSelectorMiddleware(object):\n \"\"\"\n Checks for a POST from the admin login view and if authentication is\n successful and the \"site\" interface is selected, redirect to the site.\n \"\"\"\n def process_view(self, request, view_func, view_args, view_kwargs):\n login_type = request.POST.get(\"mezzanine_login_interface\")\n if login_type and not request.user.is_authenticated():\n response = view_func(request, *view_args, **view_kwargs)\n if request.user.is_authenticated():\n if login_type == \"admin\":\n next = request.get_full_path()\n else:\n next = next_url(request) or \"/\"\n return HttpResponseRedirect(next)\n else:\n return response\n return None\n\n\nclass SitePermissionMiddleware(object):\n \"\"\"\n Marks the current user with a ``has_site_permission`` which is\n used in place of ``user.is_staff`` to achieve per-site staff\n access.\n \"\"\"\n def process_view(self, request, view_func, view_args, view_kwargs):\n has_site_permission = False\n if request.user.is_superuser:\n has_site_permission = True\n elif request.user.is_staff:\n lookup = {\"user\": request.user, \"sites\": current_site_id()}\n try:\n SitePermission.objects.get(**lookup)\n except SitePermission.DoesNotExist:\n admin_index = reverse(\"admin:index\")\n if request.path.startswith(admin_index):\n logout(request)\n view_func = admin.site.login\n extra_context = {\"no_site_permission\": True}\n return view_func(request, extra_context=extra_context)\n else:\n has_site_permission = True\n request.user.has_site_permission = has_site_permission\n\n\nclass TemplateForDeviceMiddleware(object):\n \"\"\"\n Inserts device-specific templates to the template list.\n \"\"\"\n def process_template_response(self, request, response):\n if hasattr(response, \"template_name\"):\n if not isinstance(response.template_name, Template):\n templates = templates_for_device(request,\n response.template_name)\n response.template_name = templates\n return response\n\n\nclass TemplateForHostMiddleware(object):\n \"\"\"\n Inserts host-specific templates to the template list.\n \"\"\"\n def process_template_response(self, request, response):\n if hasattr(response, \"template_name\"):\n if not isinstance(response.template_name, Template):\n templates = templates_for_host(request,\n response.template_name)\n response.template_name = templates\n return response\n\n\nclass UpdateCacheMiddleware(object):\n \"\"\"\n Response phase for Mezzanine's cache middleware. 
Handles caching\n the response, and then performing the second phase of rendering,\n for content enclosed by the ``nevercache`` tag.\n \"\"\"\n\n def process_response(self, request, response):\n\n # Cache the response if all the required conditions are met.\n # Response must be marked for updating by the\n # ``FetchFromCacheMiddleware`` having a cache get miss, the\n # user must not be authenticated, the HTTP status must be OK\n # and the response mustn't include an expiry age, incicating it\n # shouldn't be cached.\n marked_for_update = getattr(request, \"_update_cache\", False)\n anon = hasattr(request, \"user\") and not request.user.is_authenticated()\n valid_status = response.status_code == 200\n timeout = get_max_age(response)\n if timeout is None:\n timeout = settings.CACHE_MIDDLEWARE_SECONDS\n if anon and valid_status and marked_for_update and timeout:\n cache_key = cache_key_prefix(request) + request.get_full_path()\n _cache_set = lambda r: cache_set(cache_key, r.content, timeout)\n if callable(getattr(response, \"render\", None)):\n response.add_post_render_callback(_cache_set)\n else:\n _cache_set(response)\n\n # Second phase rendering for non-cached template code and\n # content. Split on the delimiter the ``nevercache`` tag\n # wrapped its contents in, and render only the content\n # enclosed by it, to avoid possible template code injection.\n token = nevercache_token()\n try:\n token = token.encode('utf-8')\n except AttributeError:\n pass\n parts = response.content.split(token)\n content_type = response.get(\"content-type\", \"\")\n if content_type.startswith(\"text\") and len(parts) > 1:\n # Restore csrf token from cookie - check the response\n # first as it may be being set for the first time.\n csrf_token = None\n try:\n csrf_token = response.cookies[settings.CSRF_COOKIE_NAME].value\n except KeyError:\n try:\n csrf_token = request.COOKIES[settings.CSRF_COOKIE_NAME]\n except KeyError:\n pass\n if csrf_token:\n request.META[\"CSRF_COOKIE\"] = csrf_token\n context = RequestContext(request)\n for i, part in enumerate(parts):\n if i % 2:\n part = Template(part).render(context).encode(\"utf-8\")\n parts[i] = part\n response.content = b\"\".join(parts)\n response[\"Content-Length\"] = len(response.content)\n if hasattr(request, '_messages'):\n # Required to clear out user messages.\n request._messages.update(response)\n return response\n\n\nclass FetchFromCacheMiddleware(object):\n \"\"\"\n Request phase for Mezzanine cache middleware. 
Return a response\n from cache if found, othwerwise mark the request for updating\n the cache in ``UpdateCacheMiddleware``.\n \"\"\"\n\n def process_request(self, request):\n if (cache_installed() and request.method == \"GET\" and\n not request.user.is_authenticated()):\n cache_key = cache_key_prefix(request) + request.get_full_path()\n response = cache_get(cache_key)\n # We need to force a csrf token here, as new sessions\n # won't receieve one on their first request, with cache\n # middleware running.\n csrf_mw_name = \"django.middleware.csrf.CsrfViewMiddleware\"\n if csrf_mw_name in settings.MIDDLEWARE_CLASSES:\n csrf_mw = CsrfViewMiddleware()\n csrf_mw.process_view(request, lambda x: None, None, None)\n get_token(request)\n if response is None:\n request._update_cache = True\n else:\n return HttpResponse(response)\n\n\nclass SSLRedirectMiddleware(object):\n \"\"\"\n Handles redirections required for SSL when ``SSL_ENABLED`` is ``True``.\n\n If ``SSL_FORCE_HOST`` is ``True``, and is not the current host,\n redirect to it.\n\n Also ensure URLs defined by ``SSL_FORCE_URL_PREFIXES`` are redirect\n to HTTPS, and redirect all other URLs to HTTP if on HTTPS.\n \"\"\"\n def process_request(self, request):\n settings.use_editable()\n force_host = settings.SSL_FORCE_HOST\n if force_host and request.get_host().split(\":\")[0] != force_host:\n url = \"http://%s%s\" % (force_host, request.get_full_path())\n return HttpResponsePermanentRedirect(url)\n if settings.SSL_ENABLED and not settings.DEV_SERVER:\n url = \"%s%s\" % (request.get_host(), request.get_full_path())\n if request.path.startswith(settings.SSL_FORCE_URL_PREFIXES):\n if not request.is_secure():\n return HttpResponseRedirect(\"https://%s\" % url)\n elif request.is_secure() and settings.SSL_FORCED_PREFIXES_ONLY:\n return HttpResponseRedirect(\"http://%s\" % url)\n\n\nclass RedirectFallbackMiddleware(object):\n \"\"\"\n Port of Django's ``RedirectFallbackMiddleware`` that uses\n Mezzanine's approach for determining the current site.\n \"\"\"\n\n def __init__(self):\n if \"django.contrib.redirects\" not in settings.INSTALLED_APPS:\n raise MiddlewareNotUsed\n\n def process_response(self, request, response):\n if response.status_code == 404:\n lookup = {\n \"site_id\": current_site_id(),\n \"old_path\": request.get_full_path(),\n }\n try:\n redirect = Redirect.objects.get(**lookup)\n except Redirect.DoesNotExist:\n pass\n else:\n if not redirect.new_path:\n response = HttpResponseGone()\n else:\n response = HttpResponseRedirect(redirect.new_path)\n return response\n", "path": "mezzanine/core/middleware.py"}]} | 3,356 | 278 |
gh_patches_debug_14458 | rasdani/github-patches | git_diff | kovidgoyal__kitty-5211 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MacOS: Launch OS Window via Remote Control
**Describe the bug**
Ref: https://github.com/kovidgoyal/kitty/issues/45#issuecomment-915753960
Remote control via socket doesn't work for opening a new OS window unless there is an existing window open already.
**To Reproduce**
Steps to reproduce the behavior:
1. Launch kitty without window:
````
kitty --config NONE --listen-on=unix:/tmp/scratch -o allow_remote_control=yes -o macos_quit_when_last_window_closed=no -1 --instance-group scratch false &
````
2. Attempt to open OS Window using remote control:
````
kitty @ --to unix:/tmp/scratch launch --type=os-window --title=test
````
3. No window opens up
4. Right click icon -> Open OS Window
5. Reattempt remote control:
````
kitty @ --to unix:/tmp/scratch launch --type=os-window --title=test
````
6. Window opens up fine with title "test"
**Environment details**
```
kitty 0.25.2 created by Kovid Goyal
Darwin gtd.lan 21.5.0 Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:22 PDT 2022; root:xnu-8020.121.3~4/RELEASE_X86_64 x86_64
ProductName: macOS ProductVersion: 12.4 BuildVersion: 21F79
Frozen: True
Paths:
kitty: /Applications/kitty.app/Contents/MacOS/kitty
base dir: /Applications/kitty.app/Contents/Resources/kitty
extensions dir: /Applications/kitty.app/Contents/Resources/Python/lib/kitty-extensions
system shell: /bin/zsh
Loaded config overrides:
allow_remote_control yes
macos_quit_when_last_window_closed no
Config options different from defaults:
allow_remote_control y
Important environment variables seen by the kitty process:
PATH /usr/local/opt/coreutils/libexec/gnubin:/Users/hars/.config/bin:/Users/hars/.dwm/statusbar:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/usr/local/opt/coreutils/libexec/gnubin:/Users/hars/.config/bin:/Users/hars/.dwm/statusbar:/Applications/kitty.app/Contents/MacOS:/Users/hars/.local/share/sheldon/repos/github.com/kazhala/dotbare:/usr/local/opt/fzf/bin
LANG en_AU.UTF-8
VISUAL nvim
EDITOR nvim
SHELL /bin/zsh
USER hars
XDG_CONFIG_HOME /Users/hars/.config
XDG_CACHE_HOME /Users/hars/.cache
```
**Additional context**
Also tried ``new-window --window-type=os``
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kitty/rc/launch.py`
Content:
```
1 #!/usr/bin/env python
2 # License: GPLv3 Copyright: 2020, Kovid Goyal <kovid at kovidgoyal.net>
3
4
5 from typing import TYPE_CHECKING, Optional
6
7 from kitty.cli_stub import LaunchCLIOptions
8 from kitty.launch import (
9 launch as do_launch, options_spec as launch_options_spec,
10 parse_launch_args
11 )
12
13 from .base import (
14 MATCH_TAB_OPTION, ArgsType, Boss, PayloadGetType, PayloadType, RCOptions,
15 RemoteCommand, ResponseType, Window
16 )
17
18 if TYPE_CHECKING:
19 from kitty.cli_stub import LaunchRCOptions as CLIOptions
20
21
22 class Launch(RemoteCommand):
23
24 '''
25 args+: The command line to run in the new window, as a list, use an empty list to run the default shell
26 match: The tab to open the new window in
27 window_title: Title for the new window
28 cwd: Working directory for the new window
29 env: List of environment variables of the form NAME=VALUE
30 tab_title: Title for the new tab
31 type: The type of window to open
32 keep_focus: Boolean indicating whether the current window should retain focus or not
33 copy_colors: Boolean indicating whether to copy the colors from the current window
34 copy_cmdline: Boolean indicating whether to copy the cmdline from the current window
35 copy_env: Boolean indicating whether to copy the environ from the current window
36 hold: Boolean indicating whether to keep window open after cmd exits
37 location: Where in the tab to open the new window
38 allow_remote_control: Boolean indicating whether to allow remote control from the new window
39 stdin_source: Where to get stdin for thew process from
40 stdin_add_formatting: Boolean indicating whether to add formatting codes to stdin
41 stdin_add_line_wrap_markers: Boolean indicating whether to add line wrap markers to stdin
42 no_response: Boolean indicating whether to send back the window id
43 marker: Specification for marker for new window, for example: "text 1 ERROR"
44 logo: Path to window logo
45 logo_position: Window logo position as string or empty string to use default
46 logo_alpha: Window logo alpha or -1 to use default
47 self: Boolean, if True use tab the command was run in
48 '''
49
50 short_desc = 'Run an arbitrary process in a new window/tab'
51 desc = (
52 'Prints out the id of the newly opened window. Any command line arguments'
53 ' are assumed to be the command line used to run in the new window, if none'
54 ' are provided, the default shell is run. For example:'
55 ' :code:`kitty @ launch --title=Email mutt`.'
56 )
57 options_spec = MATCH_TAB_OPTION + '\n\n' + '''\
58 --no-response
59 type=bool-set
60 Do not print out the id of the newly created window.
61
62
63 --self
64 type=bool-set
65 If specified the tab containing the window this command is run in is used
66 instead of the active tab
67 ''' + '\n\n' + launch_options_spec().replace(':option:`launch', ':option:`kitty @ launch')
68 argspec = '[CMD ...]'
69
70 def message_to_kitty(self, global_opts: RCOptions, opts: 'CLIOptions', args: ArgsType) -> PayloadType:
71 ans = {'args': args or []}
72 for attr, val in opts.__dict__.items():
73 ans[attr] = val
74 return ans
75
76 def response_from_kitty(self, boss: Boss, window: Optional[Window], payload_get: PayloadGetType) -> ResponseType:
77 default_opts = parse_launch_args()[0]
78 opts = LaunchCLIOptions()
79 for key, default_value in default_opts.__dict__.items():
80 val = payload_get(key)
81 if val is None:
82 val = default_value
83 setattr(opts, key, val)
84 tabs = self.tabs_for_match_payload(boss, window, payload_get)
85 if tabs and tabs[0]:
86 w = do_launch(boss, opts, payload_get('args') or [], target_tab=tabs[0])
87 return None if payload_get('no_response') else str(getattr(w, 'id', 0))
88 return None
89
90
91 launch = Launch()
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kitty/rc/launch.py b/kitty/rc/launch.py
--- a/kitty/rc/launch.py
+++ b/kitty/rc/launch.py
@@ -81,11 +81,14 @@
if val is None:
val = default_value
setattr(opts, key, val)
+ target_tab = None
tabs = self.tabs_for_match_payload(boss, window, payload_get)
if tabs and tabs[0]:
- w = do_launch(boss, opts, payload_get('args') or [], target_tab=tabs[0])
- return None if payload_get('no_response') else str(getattr(w, 'id', 0))
- return None
+ target_tab = tabs[0]
+ elif payload_get('type') not in ('os-window', 'background'):
+ return None
+ w = do_launch(boss, opts, payload_get('args') or [], target_tab=target_tab)
+ return None if payload_get('no_response') else str(getattr(w, 'id', 0))
launch = Launch()
| {"golden_diff": "diff --git a/kitty/rc/launch.py b/kitty/rc/launch.py\n--- a/kitty/rc/launch.py\n+++ b/kitty/rc/launch.py\n@@ -81,11 +81,14 @@\n if val is None:\n val = default_value\n setattr(opts, key, val)\n+ target_tab = None\n tabs = self.tabs_for_match_payload(boss, window, payload_get)\n if tabs and tabs[0]:\n- w = do_launch(boss, opts, payload_get('args') or [], target_tab=tabs[0])\n- return None if payload_get('no_response') else str(getattr(w, 'id', 0))\n- return None\n+ target_tab = tabs[0]\n+ elif payload_get('type') not in ('os-window', 'background'):\n+ return None\n+ w = do_launch(boss, opts, payload_get('args') or [], target_tab=target_tab)\n+ return None if payload_get('no_response') else str(getattr(w, 'id', 0))\n \n \n launch = Launch()\n", "issue": "MacOS: Launch OS Window via Remote Control \n**Describe the bug** \r\n\r\nRef: https://github.com/kovidgoyal/kitty/issues/45#issuecomment-915753960\r\n\r\nRemote control via socket doesn't work opening a new OS window unless there is an existing window open already. \r\n\r\n**To Reproduce** \r\n\r\nSteps to reproduce the behavior:\r\n1. Launch kitty without window:\r\n````\r\nkitty --config NONE --listen-on=unix:/tmp/scratch -o allow_remote_control=yes -o macos_quit_when_last_window_closed=no -1 --instance-group scratch false &\r\n````\r\n2. Attempt to open OS Window using remote control:\r\n````\r\nkitty @ --to unix:/tmp/scratch launch --type=os-window --title=test\r\n````\r\n3. No window opens up\r\n\r\n4. Right click icon -> Open OS Window\r\n\r\n5. Reattempt remote control:\r\n````\r\nkitty @ --to unix:/tmp/scratch launch --type=os-window --title=test\r\n````\r\n6. Window opens up fine with title \"test\"\r\n\r\n**Environment details**\r\n```\r\nkitty 0.25.2 created by Kovid Goyal\r\nDarwin gtd.lan 21.5.0 Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:22 PDT 2022; root:xnu-8020.121.3~4/RELEASE_X86_64 x86_64\r\nProductName:\tmacOS ProductVersion:\t12.4 BuildVersion:\t21F79\r\nFrozen: True\r\nPaths:\r\n kitty: /Applications/kitty.app/Contents/MacOS/kitty\r\n base dir: /Applications/kitty.app/Contents/Resources/kitty\r\n extensions dir: /Applications/kitty.app/Contents/Resources/Python/lib/kitty-extensions\r\n system shell: /bin/zsh\r\nLoaded config overrides:\r\n allow_remote_control yes\r\n macos_quit_when_last_window_closed no\r\n\r\nConfig options different from defaults:\r\nallow_remote_control y\r\n\r\nImportant environment variables seen by the kitty process:\r\n\tPATH /usr/local/opt/coreutils/libexec/gnubin:/Users/hars/.config/bin:/Users/hars/.dwm/statusbar:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/usr/local/opt/coreutils/libexec/gnubin:/Users/hars/.config/bin:/Users/hars/.dwm/statusbar:/Applications/kitty.app/Contents/MacOS:/Users/hars/.local/share/sheldon/repos/github.com/kazhala/dotbare:/usr/local/opt/fzf/bin\r\n\tLANG en_AU.UTF-8\r\n\tVISUAL nvim\r\n\tEDITOR nvim\r\n\tSHELL /bin/zsh\r\n\tUSER hars\r\n\tXDG_CONFIG_HOME /Users/hars/.config\r\n\tXDG_CACHE_HOME /Users/hars/.cache\r\n\r\n```\r\n**Additional context**\r\n\r\nAlso tried ``new-window --window-type=os``\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# License: GPLv3 Copyright: 2020, Kovid Goyal <kovid at kovidgoyal.net>\n\n\nfrom typing import TYPE_CHECKING, Optional\n\nfrom kitty.cli_stub import LaunchCLIOptions\nfrom kitty.launch import (\n launch as do_launch, options_spec as launch_options_spec,\n parse_launch_args\n)\n\nfrom .base import (\n MATCH_TAB_OPTION, ArgsType, Boss, PayloadGetType, PayloadType, 
RCOptions,\n RemoteCommand, ResponseType, Window\n)\n\nif TYPE_CHECKING:\n from kitty.cli_stub import LaunchRCOptions as CLIOptions\n\n\nclass Launch(RemoteCommand):\n\n '''\n args+: The command line to run in the new window, as a list, use an empty list to run the default shell\n match: The tab to open the new window in\n window_title: Title for the new window\n cwd: Working directory for the new window\n env: List of environment variables of the form NAME=VALUE\n tab_title: Title for the new tab\n type: The type of window to open\n keep_focus: Boolean indicating whether the current window should retain focus or not\n copy_colors: Boolean indicating whether to copy the colors from the current window\n copy_cmdline: Boolean indicating whether to copy the cmdline from the current window\n copy_env: Boolean indicating whether to copy the environ from the current window\n hold: Boolean indicating whether to keep window open after cmd exits\n location: Where in the tab to open the new window\n allow_remote_control: Boolean indicating whether to allow remote control from the new window\n stdin_source: Where to get stdin for thew process from\n stdin_add_formatting: Boolean indicating whether to add formatting codes to stdin\n stdin_add_line_wrap_markers: Boolean indicating whether to add line wrap markers to stdin\n no_response: Boolean indicating whether to send back the window id\n marker: Specification for marker for new window, for example: \"text 1 ERROR\"\n logo: Path to window logo\n logo_position: Window logo position as string or empty string to use default\n logo_alpha: Window logo alpha or -1 to use default\n self: Boolean, if True use tab the command was run in\n '''\n\n short_desc = 'Run an arbitrary process in a new window/tab'\n desc = (\n 'Prints out the id of the newly opened window. Any command line arguments'\n ' are assumed to be the command line used to run in the new window, if none'\n ' are provided, the default shell is run. 
For example:'\n ' :code:`kitty @ launch --title=Email mutt`.'\n )\n options_spec = MATCH_TAB_OPTION + '\\n\\n' + '''\\\n--no-response\ntype=bool-set\nDo not print out the id of the newly created window.\n\n\n--self\ntype=bool-set\nIf specified the tab containing the window this command is run in is used\ninstead of the active tab\n ''' + '\\n\\n' + launch_options_spec().replace(':option:`launch', ':option:`kitty @ launch')\n argspec = '[CMD ...]'\n\n def message_to_kitty(self, global_opts: RCOptions, opts: 'CLIOptions', args: ArgsType) -> PayloadType:\n ans = {'args': args or []}\n for attr, val in opts.__dict__.items():\n ans[attr] = val\n return ans\n\n def response_from_kitty(self, boss: Boss, window: Optional[Window], payload_get: PayloadGetType) -> ResponseType:\n default_opts = parse_launch_args()[0]\n opts = LaunchCLIOptions()\n for key, default_value in default_opts.__dict__.items():\n val = payload_get(key)\n if val is None:\n val = default_value\n setattr(opts, key, val)\n tabs = self.tabs_for_match_payload(boss, window, payload_get)\n if tabs and tabs[0]:\n w = do_launch(boss, opts, payload_get('args') or [], target_tab=tabs[0])\n return None if payload_get('no_response') else str(getattr(w, 'id', 0))\n return None\n\n\nlaunch = Launch()\n", "path": "kitty/rc/launch.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# License: GPLv3 Copyright: 2020, Kovid Goyal <kovid at kovidgoyal.net>\n\n\nfrom typing import TYPE_CHECKING, Optional\n\nfrom kitty.cli_stub import LaunchCLIOptions\nfrom kitty.launch import (\n launch as do_launch, options_spec as launch_options_spec,\n parse_launch_args\n)\n\nfrom .base import (\n MATCH_TAB_OPTION, ArgsType, Boss, PayloadGetType, PayloadType, RCOptions,\n RemoteCommand, ResponseType, Window\n)\n\nif TYPE_CHECKING:\n from kitty.cli_stub import LaunchRCOptions as CLIOptions\n\n\nclass Launch(RemoteCommand):\n\n '''\n args+: The command line to run in the new window, as a list, use an empty list to run the default shell\n match: The tab to open the new window in\n window_title: Title for the new window\n cwd: Working directory for the new window\n env: List of environment variables of the form NAME=VALUE\n tab_title: Title for the new tab\n type: The type of window to open\n keep_focus: Boolean indicating whether the current window should retain focus or not\n copy_colors: Boolean indicating whether to copy the colors from the current window\n copy_cmdline: Boolean indicating whether to copy the cmdline from the current window\n copy_env: Boolean indicating whether to copy the environ from the current window\n hold: Boolean indicating whether to keep window open after cmd exits\n location: Where in the tab to open the new window\n allow_remote_control: Boolean indicating whether to allow remote control from the new window\n stdin_source: Where to get stdin for thew process from\n stdin_add_formatting: Boolean indicating whether to add formatting codes to stdin\n stdin_add_line_wrap_markers: Boolean indicating whether to add line wrap markers to stdin\n no_response: Boolean indicating whether to send back the window id\n marker: Specification for marker for new window, for example: \"text 1 ERROR\"\n logo: Path to window logo\n logo_position: Window logo position as string or empty string to use default\n logo_alpha: Window logo alpha or -1 to use default\n self: Boolean, if True use tab the command was run in\n '''\n\n short_desc = 'Run an arbitrary process in a new window/tab'\n desc = (\n 'Prints out the id of the newly opened window. 
Any command line arguments'\n ' are assumed to be the command line used to run in the new window, if none'\n ' are provided, the default shell is run. For example:'\n ' :code:`kitty @ launch --title=Email mutt`.'\n )\n options_spec = MATCH_TAB_OPTION + '\\n\\n' + '''\\\n--no-response\ntype=bool-set\nDo not print out the id of the newly created window.\n\n\n--self\ntype=bool-set\nIf specified the tab containing the window this command is run in is used\ninstead of the active tab\n ''' + '\\n\\n' + launch_options_spec().replace(':option:`launch', ':option:`kitty @ launch')\n argspec = '[CMD ...]'\n\n def message_to_kitty(self, global_opts: RCOptions, opts: 'CLIOptions', args: ArgsType) -> PayloadType:\n ans = {'args': args or []}\n for attr, val in opts.__dict__.items():\n ans[attr] = val\n return ans\n\n def response_from_kitty(self, boss: Boss, window: Optional[Window], payload_get: PayloadGetType) -> ResponseType:\n default_opts = parse_launch_args()[0]\n opts = LaunchCLIOptions()\n for key, default_value in default_opts.__dict__.items():\n val = payload_get(key)\n if val is None:\n val = default_value\n setattr(opts, key, val)\n target_tab = None\n tabs = self.tabs_for_match_payload(boss, window, payload_get)\n if tabs and tabs[0]:\n target_tab = tabs[0]\n elif payload_get('type') not in ('os-window', 'background'):\n return None\n w = do_launch(boss, opts, payload_get('args') or [], target_tab=target_tab)\n return None if payload_get('no_response') else str(getattr(w, 'id', 0))\n\n\nlaunch = Launch()\n", "path": "kitty/rc/launch.py"}]} | 1,990 | 236 |
gh_patches_debug_23025 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-862 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Empty hook file causes cryptic error
If you have a pre_gen_project.sh or a post_gen_project.sh file with no data in it, cookiecutter fails with an unhelpful traceback.
```
Traceback (most recent call last):
File "/usr/local/bin/cookiecutter", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 716, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 696, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 889, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 534, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/cookiecutter/cli.py", line 100, in main
config_file=user_config
File "/usr/local/lib/python2.7/site-packages/cookiecutter/main.py", line 140, in cookiecutter
output_dir=output_dir
File "/usr/local/lib/python2.7/site-packages/cookiecutter/generate.py", line 273, in generate_files
_run_hook_from_repo_dir(repo_dir, 'pre_gen_project', project_dir, context)
File "/usr/local/lib/python2.7/site-packages/cookiecutter/generate.py", line 232, in _run_hook_from_repo_dir
run_hook(hook_name, project_dir, context)
File "/usr/local/lib/python2.7/site-packages/cookiecutter/hooks.py", line 116, in run_hook
run_script_with_context(script, project_dir, context)
File "/usr/local/lib/python2.7/site-packages/cookiecutter/hooks.py", line 101, in run_script_with_context
run_script(temp.name, cwd)
File "/usr/local/lib/python2.7/site-packages/cookiecutter/hooks.py", line 73, in run_script
cwd=cwd
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 656, in __init__
_cleanup()
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 8] Exec format error
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/hooks.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Functions for discovering and executing various cookiecutter hooks."""
4
5 import io
6 import logging
7 import os
8 import subprocess
9 import sys
10 import tempfile
11
12 from jinja2 import Template
13
14 from cookiecutter import utils
15 from .exceptions import FailedHookException
16
17 logger = logging.getLogger(__name__)
18
19 _HOOKS = [
20 'pre_gen_project',
21 'post_gen_project',
22 ]
23 EXIT_SUCCESS = 0
24
25
26 def valid_hook(hook_file, hook_name):
27 """Determine if a hook file is valid.
28
29 :param hook_file: The hook file to consider for validity
30 :param hook_name: The hook to find
31 :return: The hook file validity
32 """
33 filename = os.path.basename(hook_file)
34 basename = os.path.splitext(filename)[0]
35
36 matching_hook = basename == hook_name
37 supported_hook = basename in _HOOKS
38 backup_file = filename.endswith('~')
39
40 return matching_hook and supported_hook and not backup_file
41
42
43 def find_hook(hook_name, hooks_dir='hooks'):
44 """Return a dict of all hook scripts provided.
45
46 Must be called with the project template as the current working directory.
47 Dict's key will be the hook/script's name, without extension, while values
48 will be the absolute path to the script. Missing scripts will not be
49 included in the returned dict.
50
51 :param hook_name: The hook to find
52 :param hooks_dir: The hook directory in the template
53 :return: The absolute path to the hook script or None
54 """
55 logger.debug('hooks_dir is {}'.format(os.path.abspath(hooks_dir)))
56
57 if not os.path.isdir(hooks_dir):
58 logger.debug('No hooks/ dir in template_dir')
59 return None
60
61 for hook_file in os.listdir(hooks_dir):
62 if valid_hook(hook_file, hook_name):
63 return os.path.abspath(os.path.join(hooks_dir, hook_file))
64
65 return None
66
67
68 def run_script(script_path, cwd='.'):
69 """Execute a script from a working directory.
70
71 :param script_path: Absolute path to the script to run.
72 :param cwd: The directory to run the script from.
73 """
74 run_thru_shell = sys.platform.startswith('win')
75 if script_path.endswith('.py'):
76 script_command = [sys.executable, script_path]
77 else:
78 script_command = [script_path]
79
80 utils.make_executable(script_path)
81
82 proc = subprocess.Popen(
83 script_command,
84 shell=run_thru_shell,
85 cwd=cwd
86 )
87 exit_status = proc.wait()
88 if exit_status != EXIT_SUCCESS:
89 raise FailedHookException(
90 "Hook script failed (exit status: %d)" % exit_status)
91
92
93 def run_script_with_context(script_path, cwd, context):
94 """Execute a script after rendering it with Jinja.
95
96 :param script_path: Absolute path to the script to run.
97 :param cwd: The directory to run the script from.
98 :param context: Cookiecutter project template context.
99 """
100 _, extension = os.path.splitext(script_path)
101
102 contents = io.open(script_path, 'r', encoding='utf-8').read()
103
104 with tempfile.NamedTemporaryFile(
105 delete=False,
106 mode='wb',
107 suffix=extension
108 ) as temp:
109 output = Template(contents).render(**context)
110 temp.write(output.encode('utf-8'))
111
112 run_script(temp.name, cwd)
113
114
115 def run_hook(hook_name, project_dir, context):
116 """
117 Try to find and execute a hook from the specified project directory.
118
119 :param hook_name: The hook to execute.
120 :param project_dir: The directory to execute the script from.
121 :param context: Cookiecutter project context.
122 """
123 script = find_hook(hook_name)
124 if script is None:
125 logger.debug('No hooks found')
126 return
127 logger.debug('Running hook {}'.format(hook_name))
128 run_script_with_context(script, project_dir, context)
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cookiecutter/hooks.py b/cookiecutter/hooks.py
--- a/cookiecutter/hooks.py
+++ b/cookiecutter/hooks.py
@@ -2,6 +2,7 @@
"""Functions for discovering and executing various cookiecutter hooks."""
+import errno
import io
import logging
import os
@@ -79,15 +80,26 @@
utils.make_executable(script_path)
- proc = subprocess.Popen(
- script_command,
- shell=run_thru_shell,
- cwd=cwd
- )
- exit_status = proc.wait()
- if exit_status != EXIT_SUCCESS:
+ try:
+ proc = subprocess.Popen(
+ script_command,
+ shell=run_thru_shell,
+ cwd=cwd
+ )
+ exit_status = proc.wait()
+ if exit_status != EXIT_SUCCESS:
+ raise FailedHookException(
+ 'Hook script failed (exit status: {})'.format(exit_status)
+ )
+ except OSError as os_error:
+ if os_error.errno == errno.ENOEXEC:
+ raise FailedHookException(
+ 'Hook script failed, might be an '
+ 'empty file or missing a shebang'
+ )
raise FailedHookException(
- "Hook script failed (exit status: %d)" % exit_status)
+ 'Hook script failed (error: {})'.format(os_error)
+ )
def run_script_with_context(script_path, cwd, context):
| {"golden_diff": "diff --git a/cookiecutter/hooks.py b/cookiecutter/hooks.py\n--- a/cookiecutter/hooks.py\n+++ b/cookiecutter/hooks.py\n@@ -2,6 +2,7 @@\n \n \"\"\"Functions for discovering and executing various cookiecutter hooks.\"\"\"\n \n+import errno\n import io\n import logging\n import os\n@@ -79,15 +80,26 @@\n \n utils.make_executable(script_path)\n \n- proc = subprocess.Popen(\n- script_command,\n- shell=run_thru_shell,\n- cwd=cwd\n- )\n- exit_status = proc.wait()\n- if exit_status != EXIT_SUCCESS:\n+ try:\n+ proc = subprocess.Popen(\n+ script_command,\n+ shell=run_thru_shell,\n+ cwd=cwd\n+ )\n+ exit_status = proc.wait()\n+ if exit_status != EXIT_SUCCESS:\n+ raise FailedHookException(\n+ 'Hook script failed (exit status: {})'.format(exit_status)\n+ )\n+ except OSError as os_error:\n+ if os_error.errno == errno.ENOEXEC:\n+ raise FailedHookException(\n+ 'Hook script failed, might be an '\n+ 'empty file or missing a shebang'\n+ )\n raise FailedHookException(\n- \"Hook script failed (exit status: %d)\" % exit_status)\n+ 'Hook script failed (error: {})'.format(os_error)\n+ )\n \n \n def run_script_with_context(script_path, cwd, context):\n", "issue": "Empty hook file causes cryptic error\nIf you have a pre_gen_project.sh or a post_gen_project.sh file with no data in it, cookiecutter fails with an unhelpful traceback.\n\n```\nTraceback (most recent call last):\n File \"/usr/local/bin/cookiecutter\", line 11, in <module>\n sys.exit(main())\n File \"/usr/local/lib/python2.7/site-packages/click/core.py\", line 716, in __call__\n return self.main(*args, **kwargs)\n File \"/usr/local/lib/python2.7/site-packages/click/core.py\", line 696, in main\n rv = self.invoke(ctx)\n File \"/usr/local/lib/python2.7/site-packages/click/core.py\", line 889, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File \"/usr/local/lib/python2.7/site-packages/click/core.py\", line 534, in invoke\n return callback(*args, **kwargs)\n File \"/usr/local/lib/python2.7/site-packages/cookiecutter/cli.py\", line 100, in main\n config_file=user_config\n File \"/usr/local/lib/python2.7/site-packages/cookiecutter/main.py\", line 140, in cookiecutter\n output_dir=output_dir\n File \"/usr/local/lib/python2.7/site-packages/cookiecutter/generate.py\", line 273, in generate_files\n _run_hook_from_repo_dir(repo_dir, 'pre_gen_project', project_dir, context)\n File \"/usr/local/lib/python2.7/site-packages/cookiecutter/generate.py\", line 232, in _run_hook_from_repo_dir\n run_hook(hook_name, project_dir, context)\n File \"/usr/local/lib/python2.7/site-packages/cookiecutter/hooks.py\", line 116, in run_hook\n run_script_with_context(script, project_dir, context)\n File \"/usr/local/lib/python2.7/site-packages/cookiecutter/hooks.py\", line 101, in run_script_with_context\n run_script(temp.name, cwd)\n File \"/usr/local/lib/python2.7/site-packages/cookiecutter/hooks.py\", line 73, in run_script\n cwd=cwd\n File \"/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py\", line 656, in __init__\n _cleanup()\n File \"/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py\", line 1335, in _execute_child\n raise child_exception\nOSError: [Errno 8] Exec format error\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Functions for discovering and executing various cookiecutter hooks.\"\"\"\n\nimport io\nimport logging\nimport os\nimport subprocess\nimport sys\nimport tempfile\n\nfrom jinja2 import Template\n\nfrom 
cookiecutter import utils\nfrom .exceptions import FailedHookException\n\nlogger = logging.getLogger(__name__)\n\n_HOOKS = [\n 'pre_gen_project',\n 'post_gen_project',\n]\nEXIT_SUCCESS = 0\n\n\ndef valid_hook(hook_file, hook_name):\n \"\"\"Determine if a hook file is valid.\n\n :param hook_file: The hook file to consider for validity\n :param hook_name: The hook to find\n :return: The hook file validity\n \"\"\"\n filename = os.path.basename(hook_file)\n basename = os.path.splitext(filename)[0]\n\n matching_hook = basename == hook_name\n supported_hook = basename in _HOOKS\n backup_file = filename.endswith('~')\n\n return matching_hook and supported_hook and not backup_file\n\n\ndef find_hook(hook_name, hooks_dir='hooks'):\n \"\"\"Return a dict of all hook scripts provided.\n\n Must be called with the project template as the current working directory.\n Dict's key will be the hook/script's name, without extension, while values\n will be the absolute path to the script. Missing scripts will not be\n included in the returned dict.\n\n :param hook_name: The hook to find\n :param hooks_dir: The hook directory in the template\n :return: The absolute path to the hook script or None\n \"\"\"\n logger.debug('hooks_dir is {}'.format(os.path.abspath(hooks_dir)))\n\n if not os.path.isdir(hooks_dir):\n logger.debug('No hooks/ dir in template_dir')\n return None\n\n for hook_file in os.listdir(hooks_dir):\n if valid_hook(hook_file, hook_name):\n return os.path.abspath(os.path.join(hooks_dir, hook_file))\n\n return None\n\n\ndef run_script(script_path, cwd='.'):\n \"\"\"Execute a script from a working directory.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n \"\"\"\n run_thru_shell = sys.platform.startswith('win')\n if script_path.endswith('.py'):\n script_command = [sys.executable, script_path]\n else:\n script_command = [script_path]\n\n utils.make_executable(script_path)\n\n proc = subprocess.Popen(\n script_command,\n shell=run_thru_shell,\n cwd=cwd\n )\n exit_status = proc.wait()\n if exit_status != EXIT_SUCCESS:\n raise FailedHookException(\n \"Hook script failed (exit status: %d)\" % exit_status)\n\n\ndef run_script_with_context(script_path, cwd, context):\n \"\"\"Execute a script after rendering it with Jinja.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n :param context: Cookiecutter project template context.\n \"\"\"\n _, extension = os.path.splitext(script_path)\n\n contents = io.open(script_path, 'r', encoding='utf-8').read()\n\n with tempfile.NamedTemporaryFile(\n delete=False,\n mode='wb',\n suffix=extension\n ) as temp:\n output = Template(contents).render(**context)\n temp.write(output.encode('utf-8'))\n\n run_script(temp.name, cwd)\n\n\ndef run_hook(hook_name, project_dir, context):\n \"\"\"\n Try to find and execute a hook from the specified project directory.\n\n :param hook_name: The hook to execute.\n :param project_dir: The directory to execute the script from.\n :param context: Cookiecutter project context.\n \"\"\"\n script = find_hook(hook_name)\n if script is None:\n logger.debug('No hooks found')\n return\n logger.debug('Running hook {}'.format(hook_name))\n run_script_with_context(script, project_dir, context)\n", "path": "cookiecutter/hooks.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Functions for discovering and executing various cookiecutter hooks.\"\"\"\n\nimport errno\nimport io\nimport logging\nimport os\nimport 
subprocess\nimport sys\nimport tempfile\n\nfrom jinja2 import Template\n\nfrom cookiecutter import utils\nfrom .exceptions import FailedHookException\n\nlogger = logging.getLogger(__name__)\n\n_HOOKS = [\n 'pre_gen_project',\n 'post_gen_project',\n]\nEXIT_SUCCESS = 0\n\n\ndef valid_hook(hook_file, hook_name):\n \"\"\"Determine if a hook file is valid.\n\n :param hook_file: The hook file to consider for validity\n :param hook_name: The hook to find\n :return: The hook file validity\n \"\"\"\n filename = os.path.basename(hook_file)\n basename = os.path.splitext(filename)[0]\n\n matching_hook = basename == hook_name\n supported_hook = basename in _HOOKS\n backup_file = filename.endswith('~')\n\n return matching_hook and supported_hook and not backup_file\n\n\ndef find_hook(hook_name, hooks_dir='hooks'):\n \"\"\"Return a dict of all hook scripts provided.\n\n Must be called with the project template as the current working directory.\n Dict's key will be the hook/script's name, without extension, while values\n will be the absolute path to the script. Missing scripts will not be\n included in the returned dict.\n\n :param hook_name: The hook to find\n :param hooks_dir: The hook directory in the template\n :return: The absolute path to the hook script or None\n \"\"\"\n logger.debug('hooks_dir is {}'.format(os.path.abspath(hooks_dir)))\n\n if not os.path.isdir(hooks_dir):\n logger.debug('No hooks/ dir in template_dir')\n return None\n\n for hook_file in os.listdir(hooks_dir):\n if valid_hook(hook_file, hook_name):\n return os.path.abspath(os.path.join(hooks_dir, hook_file))\n\n return None\n\n\ndef run_script(script_path, cwd='.'):\n \"\"\"Execute a script from a working directory.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n \"\"\"\n run_thru_shell = sys.platform.startswith('win')\n if script_path.endswith('.py'):\n script_command = [sys.executable, script_path]\n else:\n script_command = [script_path]\n\n utils.make_executable(script_path)\n\n try:\n proc = subprocess.Popen(\n script_command,\n shell=run_thru_shell,\n cwd=cwd\n )\n exit_status = proc.wait()\n if exit_status != EXIT_SUCCESS:\n raise FailedHookException(\n 'Hook script failed (exit status: {})'.format(exit_status)\n )\n except OSError as os_error:\n if os_error.errno == errno.ENOEXEC:\n raise FailedHookException(\n 'Hook script failed, might be an '\n 'empty file or missing a shebang'\n )\n raise FailedHookException(\n 'Hook script failed (error: {})'.format(os_error)\n )\n\n\ndef run_script_with_context(script_path, cwd, context):\n \"\"\"Execute a script after rendering it with Jinja.\n\n :param script_path: Absolute path to the script to run.\n :param cwd: The directory to run the script from.\n :param context: Cookiecutter project template context.\n \"\"\"\n _, extension = os.path.splitext(script_path)\n\n contents = io.open(script_path, 'r', encoding='utf-8').read()\n\n with tempfile.NamedTemporaryFile(\n delete=False,\n mode='wb',\n suffix=extension\n ) as temp:\n output = Template(contents).render(**context)\n temp.write(output.encode('utf-8'))\n\n run_script(temp.name, cwd)\n\n\ndef run_hook(hook_name, project_dir, context):\n \"\"\"\n Try to find and execute a hook from the specified project directory.\n\n :param hook_name: The hook to execute.\n :param project_dir: The directory to execute the script from.\n :param context: Cookiecutter project context.\n \"\"\"\n script = find_hook(hook_name)\n if script is None:\n logger.debug('No hooks found')\n 
return\n logger.debug('Running hook {}'.format(hook_name))\n run_script_with_context(script, project_dir, context)\n", "path": "cookiecutter/hooks.py"}]} | 2,021 | 328 |
gh_patches_debug_14431 | rasdani/github-patches | git_diff | tiangolo__fastapi-637 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
JSON output reordered by pydantic @validator
This isn't a big problem, but I'm wondering if it's possible to keep the JSON output key ordering unaffected by the presence of a Pydantic `@validator(...)`? I like consistency. 🤷♂️
I have this pydantic model structure:
```
class IngredientBase(BaseIngredientMixin):
name: str = None
unit: str = None
unit_price: decimal.Decimal = None
flavour_warning: bool = None
@validator('unit')
def valid_units_only(cls, v):
choices = set(SingleIngredientTable.unit_choices.keys())
if v and v not in choices:
raise ValueError(f'must be one of {choices}')
return v
class IngredientIn(IngredientBase):
pass
class IngredientNew(IngredientIn):
name: str = ...
unit: str = ...
unit_price: decimal.Decimal = ...
class IngredientOut(IngredientBase, TestModelMixin, TimestampModelMixin, IDModelMixin):
pass
```
This will produce:
```
{
"unit": "IU",
"id": "sing_qOYSyqZhfOcJKHsfVz7tnfP6M",
"created": 1571357369,
"updated": 1571418480,
"is_test": false,
"name": "Vitamin C",
"unit_price": 10.568536363534536,
"flavour_warning": false
}
```
Without the validator, it produces what I would expect:
```
{
"id": "sing_qOYSyqZhfOcJKHsfVz7tnfP6M",
"created": 1571357369,
"updated": 1571418480,
"is_test": false,
"name": "Vitamin C",
"unit": "IU",
"unit_price": 10.568536363534536,
"flavour_warning": false
}
```
How can I continue to use `@validator`s and maintain the expected JSON key ordering?
**Edit**: I can pull out the validator into a 'mixin' class that I apply ONLY to the `IngredientIn` and `IngredientNew` models (leaving `IngredientOut` untouched). But is that my only path here?
--- END ISSUE ---
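Before the code segments, a hedged sketch of the mixin workaround the reporter mentions in the closing edit: keep the `@validator` on a mixin that only the input-facing models inherit, so the response model is cloned without any validator and its declared field order (hence the JSON key order) is preserved. The names `UnitValidatorMixin` and `ALLOWED_UNITS` are illustrative, and `check_fields=False` assumes a pydantic version that allows a mixin to declare a validator for a field defined only on subclasses.

```python
from pydantic import BaseModel, validator

ALLOWED_UNITS = {"IU", "mg", "g"}  # stand-in for SingleIngredientTable.unit_choices


class UnitValidatorMixin(BaseModel):
    # check_fields=False: the mixin itself does not declare `unit`.
    @validator("unit", check_fields=False)
    def valid_units_only(cls, v):
        if v and v not in ALLOWED_UNITS:
            raise ValueError(f"must be one of {ALLOWED_UNITS}")
        return v


class IngredientBase(BaseModel):
    name: str = None
    unit: str = None


class IngredientIn(UnitValidatorMixin, IngredientBase):
    pass  # input model: gets the unit check


class IngredientOut(IngredientBase):
    pass  # response model: no validator, so its key order is left untouched
```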
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fastapi/utils.py`
Content:
```
1 import re
2 from dataclasses import is_dataclass
3 from typing import Any, Dict, List, Sequence, Set, Type, cast
4
5 from fastapi import routing
6 from fastapi.openapi.constants import REF_PREFIX
7 from pydantic import BaseConfig, BaseModel, Schema, create_model
8 from pydantic.fields import Field
9 from pydantic.schema import get_flat_models_from_fields, model_process_schema
10 from pydantic.utils import lenient_issubclass
11 from starlette.routing import BaseRoute
12
13
14 def get_flat_models_from_routes(routes: Sequence[BaseRoute]) -> Set[Type[BaseModel]]:
15 body_fields_from_routes: List[Field] = []
16 responses_from_routes: List[Field] = []
17 for route in routes:
18 if getattr(route, "include_in_schema", None) and isinstance(
19 route, routing.APIRoute
20 ):
21 if route.body_field:
22 assert isinstance(
23 route.body_field, Field
24 ), "A request body must be a Pydantic Field"
25 body_fields_from_routes.append(route.body_field)
26 if route.response_field:
27 responses_from_routes.append(route.response_field)
28 if route.response_fields:
29 responses_from_routes.extend(route.response_fields.values())
30 flat_models = get_flat_models_from_fields(
31 body_fields_from_routes + responses_from_routes, known_models=set()
32 )
33 return flat_models
34
35
36 def get_model_definitions(
37 *, flat_models: Set[Type[BaseModel]], model_name_map: Dict[Type[BaseModel], str]
38 ) -> Dict[str, Any]:
39 definitions: Dict[str, Dict] = {}
40 for model in flat_models:
41 m_schema, m_definitions, m_nested_models = model_process_schema(
42 model, model_name_map=model_name_map, ref_prefix=REF_PREFIX
43 )
44 definitions.update(m_definitions)
45 model_name = model_name_map[model]
46 definitions[model_name] = m_schema
47 return definitions
48
49
50 def get_path_param_names(path: str) -> Set[str]:
51 return {item.strip("{}") for item in re.findall("{[^}]*}", path)}
52
53
54 def create_cloned_field(field: Field) -> Field:
55 original_type = field.type_
56 if is_dataclass(original_type) and hasattr(original_type, "__pydantic_model__"):
57 original_type = original_type.__pydantic_model__ # type: ignore
58 use_type = original_type
59 if lenient_issubclass(original_type, BaseModel):
60 original_type = cast(Type[BaseModel], original_type)
61 use_type = create_model(
62 original_type.__name__,
63 __config__=original_type.__config__,
64 __validators__=original_type.__validators__, # type: ignore
65 )
66 for f in original_type.__fields__.values():
67 use_type.__fields__[f.name] = f
68 new_field = Field(
69 name=field.name,
70 type_=use_type,
71 class_validators={},
72 default=None,
73 required=False,
74 model_config=BaseConfig,
75 schema=Schema(None),
76 )
77 new_field.has_alias = field.has_alias
78 new_field.alias = field.alias
79 new_field.class_validators = field.class_validators
80 new_field.default = field.default
81 new_field.required = field.required
82 new_field.model_config = field.model_config
83 new_field.schema = field.schema
84 new_field.allow_none = field.allow_none
85 new_field.validate_always = field.validate_always
86 if field.sub_fields:
87 new_field.sub_fields = [
88 create_cloned_field(sub_field) for sub_field in field.sub_fields
89 ]
90 if field.key_field:
91 new_field.key_field = create_cloned_field(field.key_field)
92 new_field.validators = field.validators
93 new_field.whole_pre_validators = field.whole_pre_validators
94 new_field.whole_post_validators = field.whole_post_validators
95 new_field.parse_json = field.parse_json
96 new_field.shape = field.shape
97 new_field._populate_validators()
98 return new_field
99
100
101 def generate_operation_id_for_path(*, name: str, path: str, method: str) -> str:
102 operation_id = name + path
103 operation_id = operation_id.replace("{", "_").replace("}", "_").replace("/", "_")
104 operation_id = operation_id + "_" + method.lower()
105 return operation_id
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/fastapi/utils.py b/fastapi/utils.py
--- a/fastapi/utils.py
+++ b/fastapi/utils.py
@@ -59,12 +59,11 @@
if lenient_issubclass(original_type, BaseModel):
original_type = cast(Type[BaseModel], original_type)
use_type = create_model(
- original_type.__name__,
- __config__=original_type.__config__,
- __validators__=original_type.__validators__, # type: ignore
+ original_type.__name__, __config__=original_type.__config__
)
for f in original_type.__fields__.values():
use_type.__fields__[f.name] = f
+ use_type.__validators__ = original_type.__validators__
new_field = Field(
name=field.name,
type_=use_type,
| {"golden_diff": "diff --git a/fastapi/utils.py b/fastapi/utils.py\n--- a/fastapi/utils.py\n+++ b/fastapi/utils.py\n@@ -59,12 +59,11 @@\n if lenient_issubclass(original_type, BaseModel):\n original_type = cast(Type[BaseModel], original_type)\n use_type = create_model(\n- original_type.__name__,\n- __config__=original_type.__config__,\n- __validators__=original_type.__validators__, # type: ignore\n+ original_type.__name__, __config__=original_type.__config__\n )\n for f in original_type.__fields__.values():\n use_type.__fields__[f.name] = f\n+ use_type.__validators__ = original_type.__validators__\n new_field = Field(\n name=field.name,\n type_=use_type,\n", "issue": "JSON output reordered by pydantic @validator\nThis isn't a big problem, but I'm wondering if it's possible to keep the JSON output key ordering unaffected by the presence of a Pydantic `@validator(...)`? I like consistency. \ud83e\udd37\u200d\u2642\ufe0f\r\n\r\nI have this pydantic model structure:\r\n```\r\nclass IngredientBase(BaseIngredientMixin):\r\n name: str = None\r\n unit: str = None\r\n unit_price: decimal.Decimal = None\r\n flavour_warning: bool = None\r\n\r\n @validator('unit')\r\n def valid_units_only(cls, v):\r\n choices = set(SingleIngredientTable.unit_choices.keys())\r\n if v and v not in choices:\r\n raise ValueError(f'must be one of {choices}')\r\n return v\r\n\r\n\r\nclass IngredientIn(IngredientBase):\r\n pass\r\n\r\n\r\nclass IngredientNew(IngredientIn):\r\n name: str = ...\r\n unit: str = ...\r\n unit_price: decimal.Decimal = ...\r\n\r\n\r\nclass IngredientOut(IngredientBase, TestModelMixin, TimestampModelMixin, IDModelMixin):\r\n pass\r\n```\r\n\r\nThis will produce:\r\n```\r\n{\r\n \"unit\": \"IU\",\r\n \"id\": \"sing_qOYSyqZhfOcJKHsfVz7tnfP6M\",\r\n \"created\": 1571357369,\r\n \"updated\": 1571418480,\r\n \"is_test\": false,\r\n \"name\": \"Vitamin C\",\r\n \"unit_price\": 10.568536363534536,\r\n \"flavour_warning\": false\r\n}\r\n```\r\nWithout the validator, it produces what I would expect:\r\n```\r\n{\r\n \"id\": \"sing_qOYSyqZhfOcJKHsfVz7tnfP6M\",\r\n \"created\": 1571357369,\r\n \"updated\": 1571418480,\r\n \"is_test\": false,\r\n \"name\": \"Vitamin C\",\r\n \"unit\": \"IU\",\r\n \"unit_price\": 10.568536363534536,\r\n \"flavour_warning\": false\r\n}\r\n```\r\n\r\nHow can I continue to use `@validator`s and maintain the expected JSON key ordering?\r\n\r\n**Edit**: I can pull out the validator into a 'mixin' class that I apply ONLY to the `IngredientIn` and `IngredientNew` models (leaving `IngredientOut` untouched). But is that my only path here?\nJSON output reordered by pydantic @validator\nThis isn't a big problem, but I'm wondering if it's possible to keep the JSON output key ordering unaffected by the presence of a Pydantic `@validator(...)`? I like consistency. 
\ud83e\udd37\u200d\u2642\ufe0f\r\n\r\nI have this pydantic model structure:\r\n```\r\nclass IngredientBase(BaseIngredientMixin):\r\n name: str = None\r\n unit: str = None\r\n unit_price: decimal.Decimal = None\r\n flavour_warning: bool = None\r\n\r\n @validator('unit')\r\n def valid_units_only(cls, v):\r\n choices = set(SingleIngredientTable.unit_choices.keys())\r\n if v and v not in choices:\r\n raise ValueError(f'must be one of {choices}')\r\n return v\r\n\r\n\r\nclass IngredientIn(IngredientBase):\r\n pass\r\n\r\n\r\nclass IngredientNew(IngredientIn):\r\n name: str = ...\r\n unit: str = ...\r\n unit_price: decimal.Decimal = ...\r\n\r\n\r\nclass IngredientOut(IngredientBase, TestModelMixin, TimestampModelMixin, IDModelMixin):\r\n pass\r\n```\r\n\r\nThis will produce:\r\n```\r\n{\r\n \"unit\": \"IU\",\r\n \"id\": \"sing_qOYSyqZhfOcJKHsfVz7tnfP6M\",\r\n \"created\": 1571357369,\r\n \"updated\": 1571418480,\r\n \"is_test\": false,\r\n \"name\": \"Vitamin C\",\r\n \"unit_price\": 10.568536363534536,\r\n \"flavour_warning\": false\r\n}\r\n```\r\nWithout the validator, it produces what I would expect:\r\n```\r\n{\r\n \"id\": \"sing_qOYSyqZhfOcJKHsfVz7tnfP6M\",\r\n \"created\": 1571357369,\r\n \"updated\": 1571418480,\r\n \"is_test\": false,\r\n \"name\": \"Vitamin C\",\r\n \"unit\": \"IU\",\r\n \"unit_price\": 10.568536363534536,\r\n \"flavour_warning\": false\r\n}\r\n```\r\n\r\nHow can I continue to use `@validator`s and maintain the expected JSON key ordering?\r\n\r\n**Edit**: I can pull out the validator into a 'mixin' class that I apply ONLY to the `IngredientIn` and `IngredientNew` models (leaving `IngredientOut` untouched). But is that my only path here?\n", "before_files": [{"content": "import re\nfrom dataclasses import is_dataclass\nfrom typing import Any, Dict, List, Sequence, Set, Type, cast\n\nfrom fastapi import routing\nfrom fastapi.openapi.constants import REF_PREFIX\nfrom pydantic import BaseConfig, BaseModel, Schema, create_model\nfrom pydantic.fields import Field\nfrom pydantic.schema import get_flat_models_from_fields, model_process_schema\nfrom pydantic.utils import lenient_issubclass\nfrom starlette.routing import BaseRoute\n\n\ndef get_flat_models_from_routes(routes: Sequence[BaseRoute]) -> Set[Type[BaseModel]]:\n body_fields_from_routes: List[Field] = []\n responses_from_routes: List[Field] = []\n for route in routes:\n if getattr(route, \"include_in_schema\", None) and isinstance(\n route, routing.APIRoute\n ):\n if route.body_field:\n assert isinstance(\n route.body_field, Field\n ), \"A request body must be a Pydantic Field\"\n body_fields_from_routes.append(route.body_field)\n if route.response_field:\n responses_from_routes.append(route.response_field)\n if route.response_fields:\n responses_from_routes.extend(route.response_fields.values())\n flat_models = get_flat_models_from_fields(\n body_fields_from_routes + responses_from_routes, known_models=set()\n )\n return flat_models\n\n\ndef get_model_definitions(\n *, flat_models: Set[Type[BaseModel]], model_name_map: Dict[Type[BaseModel], str]\n) -> Dict[str, Any]:\n definitions: Dict[str, Dict] = {}\n for model in flat_models:\n m_schema, m_definitions, m_nested_models = model_process_schema(\n model, model_name_map=model_name_map, ref_prefix=REF_PREFIX\n )\n definitions.update(m_definitions)\n model_name = model_name_map[model]\n definitions[model_name] = m_schema\n return definitions\n\n\ndef get_path_param_names(path: str) -> Set[str]:\n return {item.strip(\"{}\") for item in re.findall(\"{[^}]*}\", 
path)}\n\n\ndef create_cloned_field(field: Field) -> Field:\n original_type = field.type_\n if is_dataclass(original_type) and hasattr(original_type, \"__pydantic_model__\"):\n original_type = original_type.__pydantic_model__ # type: ignore\n use_type = original_type\n if lenient_issubclass(original_type, BaseModel):\n original_type = cast(Type[BaseModel], original_type)\n use_type = create_model(\n original_type.__name__,\n __config__=original_type.__config__,\n __validators__=original_type.__validators__, # type: ignore\n )\n for f in original_type.__fields__.values():\n use_type.__fields__[f.name] = f\n new_field = Field(\n name=field.name,\n type_=use_type,\n class_validators={},\n default=None,\n required=False,\n model_config=BaseConfig,\n schema=Schema(None),\n )\n new_field.has_alias = field.has_alias\n new_field.alias = field.alias\n new_field.class_validators = field.class_validators\n new_field.default = field.default\n new_field.required = field.required\n new_field.model_config = field.model_config\n new_field.schema = field.schema\n new_field.allow_none = field.allow_none\n new_field.validate_always = field.validate_always\n if field.sub_fields:\n new_field.sub_fields = [\n create_cloned_field(sub_field) for sub_field in field.sub_fields\n ]\n if field.key_field:\n new_field.key_field = create_cloned_field(field.key_field)\n new_field.validators = field.validators\n new_field.whole_pre_validators = field.whole_pre_validators\n new_field.whole_post_validators = field.whole_post_validators\n new_field.parse_json = field.parse_json\n new_field.shape = field.shape\n new_field._populate_validators()\n return new_field\n\n\ndef generate_operation_id_for_path(*, name: str, path: str, method: str) -> str:\n operation_id = name + path\n operation_id = operation_id.replace(\"{\", \"_\").replace(\"}\", \"_\").replace(\"/\", \"_\")\n operation_id = operation_id + \"_\" + method.lower()\n return operation_id\n", "path": "fastapi/utils.py"}], "after_files": [{"content": "import re\nfrom dataclasses import is_dataclass\nfrom typing import Any, Dict, List, Sequence, Set, Type, cast\n\nfrom fastapi import routing\nfrom fastapi.openapi.constants import REF_PREFIX\nfrom pydantic import BaseConfig, BaseModel, Schema, create_model\nfrom pydantic.fields import Field\nfrom pydantic.schema import get_flat_models_from_fields, model_process_schema\nfrom pydantic.utils import lenient_issubclass\nfrom starlette.routing import BaseRoute\n\n\ndef get_flat_models_from_routes(routes: Sequence[BaseRoute]) -> Set[Type[BaseModel]]:\n body_fields_from_routes: List[Field] = []\n responses_from_routes: List[Field] = []\n for route in routes:\n if getattr(route, \"include_in_schema\", None) and isinstance(\n route, routing.APIRoute\n ):\n if route.body_field:\n assert isinstance(\n route.body_field, Field\n ), \"A request body must be a Pydantic Field\"\n body_fields_from_routes.append(route.body_field)\n if route.response_field:\n responses_from_routes.append(route.response_field)\n if route.response_fields:\n responses_from_routes.extend(route.response_fields.values())\n flat_models = get_flat_models_from_fields(\n body_fields_from_routes + responses_from_routes, known_models=set()\n )\n return flat_models\n\n\ndef get_model_definitions(\n *, flat_models: Set[Type[BaseModel]], model_name_map: Dict[Type[BaseModel], str]\n) -> Dict[str, Any]:\n definitions: Dict[str, Dict] = {}\n for model in flat_models:\n m_schema, m_definitions, m_nested_models = model_process_schema(\n model, model_name_map=model_name_map, 
ref_prefix=REF_PREFIX\n )\n definitions.update(m_definitions)\n model_name = model_name_map[model]\n definitions[model_name] = m_schema\n return definitions\n\n\ndef get_path_param_names(path: str) -> Set[str]:\n return {item.strip(\"{}\") for item in re.findall(\"{[^}]*}\", path)}\n\n\ndef create_cloned_field(field: Field) -> Field:\n original_type = field.type_\n if is_dataclass(original_type) and hasattr(original_type, \"__pydantic_model__\"):\n original_type = original_type.__pydantic_model__ # type: ignore\n use_type = original_type\n if lenient_issubclass(original_type, BaseModel):\n original_type = cast(Type[BaseModel], original_type)\n use_type = create_model(\n original_type.__name__, __config__=original_type.__config__\n )\n for f in original_type.__fields__.values():\n use_type.__fields__[f.name] = f\n use_type.__validators__ = original_type.__validators__\n new_field = Field(\n name=field.name,\n type_=use_type,\n class_validators={},\n default=None,\n required=False,\n model_config=BaseConfig,\n schema=Schema(None),\n )\n new_field.has_alias = field.has_alias\n new_field.alias = field.alias\n new_field.class_validators = field.class_validators\n new_field.default = field.default\n new_field.required = field.required\n new_field.model_config = field.model_config\n new_field.schema = field.schema\n new_field.allow_none = field.allow_none\n new_field.validate_always = field.validate_always\n if field.sub_fields:\n new_field.sub_fields = [\n create_cloned_field(sub_field) for sub_field in field.sub_fields\n ]\n if field.key_field:\n new_field.key_field = create_cloned_field(field.key_field)\n new_field.validators = field.validators\n new_field.whole_pre_validators = field.whole_pre_validators\n new_field.whole_post_validators = field.whole_post_validators\n new_field.parse_json = field.parse_json\n new_field.shape = field.shape\n new_field._populate_validators()\n return new_field\n\n\ndef generate_operation_id_for_path(*, name: str, path: str, method: str) -> str:\n operation_id = name + path\n operation_id = operation_id.replace(\"{\", \"_\").replace(\"}\", \"_\").replace(\"/\", \"_\")\n operation_id = operation_id + \"_\" + method.lower()\n return operation_id\n", "path": "fastapi/utils.py"}]} | 2,479 | 182 |
gh_patches_debug_6280 | rasdani/github-patches | git_diff | PennyLaneAI__pennylane-2060 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Raise exception or warning when `qml.adjoint` is used on operation list instead of function.
The newer `qml.adjoint` function does not have any effect when acting on operation lists like in the following example:
```python
params = list(range(4))
qml.adjoint(qml.templates.AngleEmbedding(params))
```
Users might try this, because it worked like this with `qml.inv` which `qml.adjoint` is replacing. Therefore, we should raise
an exception whenever this is attempted by checking for the input to be `callable`. Alternatively, a warning could be raised, but
the behaviour differs fundamentally from the expected, an exception seems more appropriate.
--- END ISSUE ---
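A minimal sketch of the guard the issue proposes (and the shape of the accepted patch reproduced later in this record): check `callable` up front and raise with a message pointing at the likely misuse. The exact wording of the message is an assumption.

```python
def adjoint(fn):
    """Sketch: reject non-callable inputs such as instantiated templates or op lists."""
    if not callable(fn):
        raise ValueError(
            f"{fn} of type {type(fn)} is not callable; qml.adjoint must be applied "
            "to a function, template or operation class, not to a list of operations."
        )

    def wrapper(*args, **kwargs):
        ...  # build the reversed, adjointed operations as in the original transform

    return wrapper
```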
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pennylane/transforms/adjoint.py`
Content:
```
1 # Copyright 2018-2021 Xanadu Quantum Technologies Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Code for the adjoint transform."""
15
16 from functools import wraps
17 from pennylane.tape import QuantumTape, stop_recording
18
19
20 def adjoint(fn):
21 """Create a function that applies the adjoint (inverse) of the provided operation or template.
22
23 This transform can be used to apply the adjoint of an arbitrary sequence of operations.
24
25 Args:
26 fn (function): A quantum function that applies quantum operations.
27
28 Returns:
29 function: A new function that will apply the same operations but adjointed and in reverse order.
30
31 **Example**
32
33 The adjoint transforms can be used within a QNode to apply the adjoint of
34 any quantum function. Consider the following quantum function, that applies two
35 operations:
36
37 .. code-block:: python3
38
39 def my_ops(a, b, wire):
40 qml.RX(a, wires=wire)
41 qml.RY(b, wires=wire)
42
43 We can create a QNode that applies this quantum function,
44 followed by the adjoint of this function:
45
46 .. code-block:: python3
47
48 dev = qml.device('default.qubit', wires=1)
49
50 @qml.qnode(dev)
51 def circuit(a, b):
52 my_ops(a, b, wire=0)
53 qml.adjoint(my_ops)(a, b, wire=0)
54 return qml.expval(qml.PauliZ(0))
55
56 Printing this out, we can see that the inverse quantum
57 function has indeed been applied:
58
59 >>> print(qml.draw(circuit)(0.2, 0.5))
60 0: ──RX(0.2)──RY(0.5)──RY(-0.5)──RX(-0.2)──┤ ⟨Z⟩
61
62 The adjoint function can also be applied directly to templates and operations:
63
64 >>> qml.adjoint(qml.RX)(0.123, wires=0)
65 >>> qml.adjoint(qml.templates.StronglyEntanglingLayers)(weights, wires=[0, 1])
66
67 .. UsageDetails::
68
69 **Adjoint of a function**
70
71 Here, we apply the ``subroutine`` function, and then apply its inverse.
72 Notice that in addition to adjointing all of the operations, they are also
73 applied in reverse construction order.
74
75 .. code-block:: python3
76
77 def subroutine(wire):
78 qml.RX(0.123, wires=wire)
79 qml.RY(0.456, wires=wire)
80
81 dev = qml.device('default.qubit', wires=1)
82 @qml.qnode(dev)
83 def circuit():
84 subroutine(0)
85 qml.adjoint(subroutine)(0)
86 return qml.expval(qml.PauliZ(0))
87
88 This creates the following circuit:
89
90 >>> print(qml.draw(circuit)())
91 0: --RX(0.123)--RY(0.456)--RY(-0.456)--RX(-0.123)--| <Z>
92
93 **Single operation**
94
95 You can also easily adjoint a single operation just by wrapping it with ``adjoint``:
96
97 .. code-block:: python3
98
99 dev = qml.device('default.qubit', wires=1)
100 @qml.qnode(dev)
101 def circuit():
102 qml.RX(0.123, wires=0)
103 qml.adjoint(qml.RX)(0.123, wires=0)
104 return qml.expval(qml.PauliZ(0))
105
106 This creates the following circuit:
107
108 >>> print(qml.draw(circuit)())
109 0: --RX(0.123)--RX(-0.123)--| <Z>
110 """
111
112 @wraps(fn)
113 def wrapper(*args, **kwargs):
114 with stop_recording(), QuantumTape() as tape:
115 fn(*args, **kwargs)
116
117 if not tape.operations:
118 # we called op.expand(): get the outputted tape
119 tape = fn(*args, **kwargs)
120
121 adjoint_ops = []
122 for op in reversed(tape.operations):
123 try:
124 new_op = op.adjoint()
125 adjoint_ops.append(new_op)
126 except NotImplementedError:
127 # Expand the operation and adjoint the result.
128 new_ops = adjoint(op.expand)()
129
130 if isinstance(new_ops, QuantumTape):
131 new_ops = new_ops.operations
132
133 adjoint_ops.extend(new_ops)
134
135 if len(adjoint_ops) == 1:
136 adjoint_ops = adjoint_ops[0]
137
138 return adjoint_ops
139
140 return wrapper
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pennylane/transforms/adjoint.py b/pennylane/transforms/adjoint.py
--- a/pennylane/transforms/adjoint.py
+++ b/pennylane/transforms/adjoint.py
@@ -108,6 +108,12 @@
>>> print(qml.draw(circuit)())
0: --RX(0.123)--RX(-0.123)--| <Z>
"""
+ if not callable(fn):
+ raise ValueError(
+ f"The object {fn} of type {type(fn)} is not callable. "
+ "This error might occur if you apply adjoint to a list "
+ "of operations instead of a function or template."
+ )
@wraps(fn)
def wrapper(*args, **kwargs):
| {"golden_diff": "diff --git a/pennylane/transforms/adjoint.py b/pennylane/transforms/adjoint.py\n--- a/pennylane/transforms/adjoint.py\n+++ b/pennylane/transforms/adjoint.py\n@@ -108,6 +108,12 @@\n >>> print(qml.draw(circuit)())\r\n 0: --RX(0.123)--RX(-0.123)--| <Z>\r\n \"\"\"\r\n+ if not callable(fn):\r\n+ raise ValueError(\r\n+ f\"The object {fn} of type {type(fn)} is not callable. \"\r\n+ \"This error might occur if you apply adjoint to a list \"\r\n+ \"of operations instead of a function or template.\"\r\n+ )\r\n \r\n @wraps(fn)\r\n def wrapper(*args, **kwargs):\n", "issue": "Raise exception or warning when `qml.adjoint` is used on operation list instead of function.\nThe newer `qml.adjoint` function does not have any effect when acting on operation lists like in the following example:\r\n```python\r\nparams = list(range(4))\r\nqml.adjoint(qml.templates.AngleEmbedding(params))\r\n```\r\nUsers might try this, because it worked like this with `qml.inv` which `qml.adjoint` is replacing. Therefore, we should raise\r\nan exception whenever this is attempted by checking for the input to be `callable`. Alternatively, a warning could be raised, but \r\nthe behaviour differs fundamentally from the expected, an exception seems more appropriate.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\r\n\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\"\"\"Code for the adjoint transform.\"\"\"\r\n\r\nfrom functools import wraps\r\nfrom pennylane.tape import QuantumTape, stop_recording\r\n\r\n\r\ndef adjoint(fn):\r\n \"\"\"Create a function that applies the adjoint (inverse) of the provided operation or template.\r\n\r\n This transform can be used to apply the adjoint of an arbitrary sequence of operations.\r\n\r\n Args:\r\n fn (function): A quantum function that applies quantum operations.\r\n\r\n Returns:\r\n function: A new function that will apply the same operations but adjointed and in reverse order.\r\n\r\n **Example**\r\n\r\n The adjoint transforms can be used within a QNode to apply the adjoint of\r\n any quantum function. Consider the following quantum function, that applies two\r\n operations:\r\n\r\n .. code-block:: python3\r\n\r\n def my_ops(a, b, wire):\r\n qml.RX(a, wires=wire)\r\n qml.RY(b, wires=wire)\r\n\r\n We can create a QNode that applies this quantum function,\r\n followed by the adjoint of this function:\r\n\r\n .. 
code-block:: python3\r\n\r\n dev = qml.device('default.qubit', wires=1)\r\n\r\n @qml.qnode(dev)\r\n def circuit(a, b):\r\n my_ops(a, b, wire=0)\r\n qml.adjoint(my_ops)(a, b, wire=0)\r\n return qml.expval(qml.PauliZ(0))\r\n\r\n Printing this out, we can see that the inverse quantum\r\n function has indeed been applied:\r\n\r\n >>> print(qml.draw(circuit)(0.2, 0.5))\r\n 0: \u2500\u2500RX(0.2)\u2500\u2500RY(0.5)\u2500\u2500RY(-0.5)\u2500\u2500RX(-0.2)\u2500\u2500\u2524 \u27e8Z\u27e9\r\n\r\n The adjoint function can also be applied directly to templates and operations:\r\n\r\n >>> qml.adjoint(qml.RX)(0.123, wires=0)\r\n >>> qml.adjoint(qml.templates.StronglyEntanglingLayers)(weights, wires=[0, 1])\r\n\r\n .. UsageDetails::\r\n\r\n **Adjoint of a function**\r\n\r\n Here, we apply the ``subroutine`` function, and then apply its inverse.\r\n Notice that in addition to adjointing all of the operations, they are also\r\n applied in reverse construction order.\r\n\r\n .. code-block:: python3\r\n\r\n def subroutine(wire):\r\n qml.RX(0.123, wires=wire)\r\n qml.RY(0.456, wires=wire)\r\n\r\n dev = qml.device('default.qubit', wires=1)\r\n @qml.qnode(dev)\r\n def circuit():\r\n subroutine(0)\r\n qml.adjoint(subroutine)(0)\r\n return qml.expval(qml.PauliZ(0))\r\n\r\n This creates the following circuit:\r\n\r\n >>> print(qml.draw(circuit)())\r\n 0: --RX(0.123)--RY(0.456)--RY(-0.456)--RX(-0.123)--| <Z>\r\n\r\n **Single operation**\r\n\r\n You can also easily adjoint a single operation just by wrapping it with ``adjoint``:\r\n\r\n .. code-block:: python3\r\n\r\n dev = qml.device('default.qubit', wires=1)\r\n @qml.qnode(dev)\r\n def circuit():\r\n qml.RX(0.123, wires=0)\r\n qml.adjoint(qml.RX)(0.123, wires=0)\r\n return qml.expval(qml.PauliZ(0))\r\n\r\n This creates the following circuit:\r\n\r\n >>> print(qml.draw(circuit)())\r\n 0: --RX(0.123)--RX(-0.123)--| <Z>\r\n \"\"\"\r\n\r\n @wraps(fn)\r\n def wrapper(*args, **kwargs):\r\n with stop_recording(), QuantumTape() as tape:\r\n fn(*args, **kwargs)\r\n\r\n if not tape.operations:\r\n # we called op.expand(): get the outputted tape\r\n tape = fn(*args, **kwargs)\r\n\r\n adjoint_ops = []\r\n for op in reversed(tape.operations):\r\n try:\r\n new_op = op.adjoint()\r\n adjoint_ops.append(new_op)\r\n except NotImplementedError:\r\n # Expand the operation and adjoint the result.\r\n new_ops = adjoint(op.expand)()\r\n\r\n if isinstance(new_ops, QuantumTape):\r\n new_ops = new_ops.operations\r\n\r\n adjoint_ops.extend(new_ops)\r\n\r\n if len(adjoint_ops) == 1:\r\n adjoint_ops = adjoint_ops[0]\r\n\r\n return adjoint_ops\r\n\r\n return wrapper\r\n", "path": "pennylane/transforms/adjoint.py"}], "after_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\r\n\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\"\"\"Code for the adjoint transform.\"\"\"\r\n\r\nfrom functools import wraps\r\nfrom pennylane.tape import QuantumTape, stop_recording\r\n\r\n\r\ndef adjoint(fn):\r\n \"\"\"Create a function that applies the adjoint (inverse) 
of the provided operation or template.\r\n\r\n This transform can be used to apply the adjoint of an arbitrary sequence of operations.\r\n\r\n Args:\r\n fn (function): A quantum function that applies quantum operations.\r\n\r\n Returns:\r\n function: A new function that will apply the same operations but adjointed and in reverse order.\r\n\r\n **Example**\r\n\r\n The adjoint transforms can be used within a QNode to apply the adjoint of\r\n any quantum function. Consider the following quantum function, that applies two\r\n operations:\r\n\r\n .. code-block:: python3\r\n\r\n def my_ops(a, b, wire):\r\n qml.RX(a, wires=wire)\r\n qml.RY(b, wires=wire)\r\n\r\n We can create a QNode that applies this quantum function,\r\n followed by the adjoint of this function:\r\n\r\n .. code-block:: python3\r\n\r\n dev = qml.device('default.qubit', wires=1)\r\n\r\n @qml.qnode(dev)\r\n def circuit(a, b):\r\n my_ops(a, b, wire=0)\r\n qml.adjoint(my_ops)(a, b, wire=0)\r\n return qml.expval(qml.PauliZ(0))\r\n\r\n Printing this out, we can see that the inverse quantum\r\n function has indeed been applied:\r\n\r\n >>> print(qml.draw(circuit)(0.2, 0.5))\r\n 0: \u2500\u2500RX(0.2)\u2500\u2500RY(0.5)\u2500\u2500RY(-0.5)\u2500\u2500RX(-0.2)\u2500\u2500\u2524 \u27e8Z\u27e9\r\n\r\n The adjoint function can also be applied directly to templates and operations:\r\n\r\n >>> qml.adjoint(qml.RX)(0.123, wires=0)\r\n >>> qml.adjoint(qml.templates.StronglyEntanglingLayers)(weights, wires=[0, 1])\r\n\r\n .. UsageDetails::\r\n\r\n **Adjoint of a function**\r\n\r\n Here, we apply the ``subroutine`` function, and then apply its inverse.\r\n Notice that in addition to adjointing all of the operations, they are also\r\n applied in reverse construction order.\r\n\r\n .. code-block:: python3\r\n\r\n def subroutine(wire):\r\n qml.RX(0.123, wires=wire)\r\n qml.RY(0.456, wires=wire)\r\n\r\n dev = qml.device('default.qubit', wires=1)\r\n @qml.qnode(dev)\r\n def circuit():\r\n subroutine(0)\r\n qml.adjoint(subroutine)(0)\r\n return qml.expval(qml.PauliZ(0))\r\n\r\n This creates the following circuit:\r\n\r\n >>> print(qml.draw(circuit)())\r\n 0: --RX(0.123)--RY(0.456)--RY(-0.456)--RX(-0.123)--| <Z>\r\n\r\n **Single operation**\r\n\r\n You can also easily adjoint a single operation just by wrapping it with ``adjoint``:\r\n\r\n .. code-block:: python3\r\n\r\n dev = qml.device('default.qubit', wires=1)\r\n @qml.qnode(dev)\r\n def circuit():\r\n qml.RX(0.123, wires=0)\r\n qml.adjoint(qml.RX)(0.123, wires=0)\r\n return qml.expval(qml.PauliZ(0))\r\n\r\n This creates the following circuit:\r\n\r\n >>> print(qml.draw(circuit)())\r\n 0: --RX(0.123)--RX(-0.123)--| <Z>\r\n \"\"\"\r\n if not callable(fn):\r\n raise ValueError(\r\n f\"The object {fn} of type {type(fn)} is not callable. 
\"\r\n \"This error might occur if you apply adjoint to a list \"\r\n \"of operations instead of a function or template.\"\r\n )\r\n\r\n @wraps(fn)\r\n def wrapper(*args, **kwargs):\r\n with stop_recording(), QuantumTape() as tape:\r\n fn(*args, **kwargs)\r\n\r\n if not tape.operations:\r\n # we called op.expand(): get the outputted tape\r\n tape = fn(*args, **kwargs)\r\n\r\n adjoint_ops = []\r\n for op in reversed(tape.operations):\r\n try:\r\n new_op = op.adjoint()\r\n adjoint_ops.append(new_op)\r\n except NotImplementedError:\r\n # Expand the operation and adjoint the result.\r\n new_ops = adjoint(op.expand)()\r\n\r\n if isinstance(new_ops, QuantumTape):\r\n new_ops = new_ops.operations\r\n\r\n adjoint_ops.extend(new_ops)\r\n\r\n if len(adjoint_ops) == 1:\r\n adjoint_ops = adjoint_ops[0]\r\n\r\n return adjoint_ops\r\n\r\n return wrapper\r\n", "path": "pennylane/transforms/adjoint.py"}]} | 1,883 | 182 |
gh_patches_debug_14172 | rasdani/github-patches | git_diff | gammapy__gammapy-3905 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FluxEstimator._set_norm_parameter crashes
**Gammapy version**
0.19
**Bug description**
Any code passing through the "scan" branch of `FluxPointsEstimator._nan_result()` will crash due to the invocation of `_set_norm_parameter` without the argument `scaled_parameter`.
**Expected behavior**
Either `_nan_result` passing the correct arguments, or `_set_norm_parameter` properly handling its default arguments.
--- END ISSUE ---
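As an illustration of the second option mentioned under *Expected behavior* (not necessarily the fix that was adopted), `_set_norm_parameter` could tolerate its default `scaled_parameter=None` instead of dereferencing it unconditionally. The fragment below is a hypothetical drop-in variant of the method shown in `gammapy/estimators/flux.py` further down.

```python
from gammapy.modeling import Parameter  # same import as flux.py


def _set_norm_parameter(self, norm=None, scaled_parameter=None):
    """Sketch: only rescale the norm limits when a scaled_parameter is provided."""
    if norm is None:
        norm = Parameter("norm", 1, unit="", interp="log")

    norm.value = 1.0
    norm.frozen = False

    if scaled_parameter is not None:
        norm.min = scaled_parameter.min / scaled_parameter.value
        norm.max = scaled_parameter.max / scaled_parameter.value
        norm.interp = scaled_parameter.interp

    norm.scan_values = self.norm_values
    norm.scan_min = self.norm_min
    norm.scan_max = self.norm_max
    norm.scan_n_values = self.norm_n_values
    return norm
```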
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gammapy/estimators/flux.py`
Content:
```
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 import logging
3 import numpy as np
4 from gammapy.datasets import Datasets
5 from gammapy.estimators.parameter import ParameterEstimator
6 from gammapy.maps import Map, MapAxis
7 from gammapy.modeling import Parameter
8 from gammapy.modeling.models import ScaleSpectralModel
9
10 log = logging.getLogger(__name__)
11
12
13 class FluxEstimator(ParameterEstimator):
14 """Flux estimator.
15
16 Estimates flux for a given list of datasets with their model in a given energy range.
17
18 To estimate the model flux the amplitude of the reference spectral model is
19 fitted within the energy range. The amplitude is re-normalized using the "norm" parameter,
20 which specifies the deviation of the flux from the reference model in this
21 energy range.
22
23 Parameters
24 ----------
25 source : str or int
26 For which source in the model to compute the flux.
27 norm_min : float
28 Minimum value for the norm used for the fit statistic profile evaluation.
29 norm_max : float
30 Maximum value for the norm used for the fit statistic profile evaluation.
31 norm_n_values : int
32 Number of norm values used for the fit statistic profile.
33 norm_values : `numpy.ndarray`
34 Array of norm values to be used for the fit statistic profile.
35 n_sigma : int
36 Sigma to use for asymmetric error computation.
37 n_sigma_ul : int
38 Sigma to use for upper limit computation.
39 selection_optional : list of str
40 Which additional quantities to estimate. Available options are:
41
42 * "all": all the optional steps are executed
43 * "errn-errp": estimate asymmetric errors.
44 * "ul": estimate upper limits.
45 * "scan": estimate fit statistic profiles.
46
47 Default is None so the optional steps are not executed.
48 fit : `Fit`
49 Fit instance specifying the backend and fit options.
50 reoptimize : bool
51 Re-optimize other free model parameters. Default is False.
52 """
53
54 tag = "FluxEstimator"
55
56 def __init__(
57 self,
58 source=0,
59 norm_min=0.2,
60 norm_max=5,
61 norm_n_values=11,
62 norm_values=None,
63 n_sigma=1,
64 n_sigma_ul=2,
65 selection_optional=None,
66 fit=None,
67 reoptimize=False,
68 ):
69 self.norm_values = norm_values
70 self.norm_min = norm_min
71 self.norm_max = norm_max
72 self.norm_n_values = norm_n_values
73 self.source = source
74 super().__init__(
75 null_value=0,
76 n_sigma=n_sigma,
77 n_sigma_ul=n_sigma_ul,
78 selection_optional=selection_optional,
79 fit=fit,
80 reoptimize=reoptimize,
81 )
82
83 def _set_norm_parameter(self, norm=None, scaled_parameter=None):
84 """Define properties of the norm spectral parameter."""
85 if norm is None:
86 norm = Parameter("norm", 1, unit="", interp="log")
87
88 norm.value = 1.0
89 norm.frozen = False
90
91 norm.min = scaled_parameter.min / scaled_parameter.value
92 norm.max = scaled_parameter.max / scaled_parameter.value
93 norm.interp = scaled_parameter.interp
94 norm.scan_values = self.norm_values
95 norm.scan_min = self.norm_min
96 norm.scan_max = self.norm_max
97 norm.scan_n_values = self.norm_n_values
98 return norm
99
100 def get_scale_model(self, models):
101 """Set scale model
102
103 Parameters
104 ----------
105 models : `Models`
106 Models
107
108 Returns
109 -------
110 model : `ScaleSpectralModel`
111 Scale spectral model
112 """
113 ref_model = models[self.source].spectral_model
114 scale_model = ScaleSpectralModel(ref_model)
115
116 if "amplitude" in ref_model.parameters.names:
117 scaled_parameter = ref_model.parameters["amplitude"]
118 else:
119 scaled_parameter = ref_model.parameters["norm"]
120
121 scale_model.norm = self._set_norm_parameter(scale_model.norm, scaled_parameter)
122 return scale_model
123
124 def estimate_npred_excess(self, datasets):
125 """Estimate npred excess for the source.
126
127 Parameters
128 ----------
129 datasets : Datasets
130 Datasets
131
132 Returns
133 -------
134 result : dict
135 Dict with an array with one entry per dataset with the sum of the
136 masked npred excess.
137 """
138 npred_excess = []
139
140 for dataset in datasets:
141 name = datasets.models[self.source].name
142 npred_signal = dataset.npred_signal(model_name=name)
143 npred = Map.from_geom(dataset.mask.geom)
144 npred.stack(npred_signal)
145 npred_excess.append(npred.data[dataset.mask].sum())
146
147 return {"npred_excess": np.array(npred_excess), "datasets": datasets.names}
148
149 def run(self, datasets):
150 """Estimate flux for a given energy range.
151
152 Parameters
153 ----------
154 datasets : list of `~gammapy.datasets.SpectrumDataset`
155 Spectrum datasets.
156
157 Returns
158 -------
159 result : dict
160 Dict with results for the flux point.
161 """
162 datasets = Datasets(datasets)
163 models = datasets.models.copy()
164
165 model = self.get_scale_model(models)
166
167 energy_min, energy_max = datasets.energy_ranges
168 energy_axis = MapAxis.from_energy_edges([energy_min.min(), energy_max.max()])
169
170 with np.errstate(invalid="ignore", divide="ignore"):
171 result = model.reference_fluxes(energy_axis=energy_axis)
172 # convert to scalar values
173 result = {key: value.item() for key, value in result.items()}
174
175 models[self.source].spectral_model = model
176 datasets.models = models
177 result.update(super().run(datasets, model.norm))
178
179 # TODO: find a cleaner way of including the npred_excess info
180 datasets.models[self.source].spectral_model.norm.value = result["norm"]
181 result.update(self.estimate_npred_excess(datasets=datasets))
182 return result
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gammapy/estimators/flux.py b/gammapy/estimators/flux.py
--- a/gammapy/estimators/flux.py
+++ b/gammapy/estimators/flux.py
@@ -113,10 +113,12 @@
ref_model = models[self.source].spectral_model
scale_model = ScaleSpectralModel(ref_model)
- if "amplitude" in ref_model.parameters.names:
- scaled_parameter = ref_model.parameters["amplitude"]
+ for scaled_parameter in ref_model.parameters:
+ if scaled_parameter.is_norm:
+ break
else:
- scaled_parameter = ref_model.parameters["norm"]
+ raise ValueError(f"{self.tag} requires a 'norm' or 'amplitude' parameter"
+ " in the model to run")
scale_model.norm = self._set_norm_parameter(scale_model.norm, scaled_parameter)
return scale_model
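The replacement above hinges on Python's `for`/`else` construct: the `else` branch runs only when the loop finishes without hitting `break`, which is what turns "no norm-like parameter found" into a `ValueError`. A minimal standalone sketch of that control flow — `Param` here is a made-up stand-in, not a gammapy class:

```python
class Param:
    """Toy parameter with the one attribute the patched loop inspects."""
    def __init__(self, name, is_norm):
        self.name = name
        self.is_norm = is_norm


def find_norm_parameter(parameters):
    # Stop at the first parameter flagged as a norm; if the loop completes
    # without a break, the else branch raises, mirroring the patch above.
    for param in parameters:
        if param.is_norm:
            break
    else:
        raise ValueError("requires a 'norm' or 'amplitude' parameter to run")
    return param


print(find_norm_parameter([Param("index", False), Param("amplitude", True)]).name)
```

Calling it with a list that has no `is_norm=True` entry raises instead of silently picking the wrong parameter, which is the behaviour change the diff introduces.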
| {"golden_diff": "diff --git a/gammapy/estimators/flux.py b/gammapy/estimators/flux.py\n--- a/gammapy/estimators/flux.py\n+++ b/gammapy/estimators/flux.py\n@@ -113,10 +113,12 @@\n ref_model = models[self.source].spectral_model\n scale_model = ScaleSpectralModel(ref_model)\n \n- if \"amplitude\" in ref_model.parameters.names:\n- scaled_parameter = ref_model.parameters[\"amplitude\"]\n+ for scaled_parameter in ref_model.parameters:\n+ if scaled_parameter.is_norm:\n+ break\n else:\n- scaled_parameter = ref_model.parameters[\"norm\"]\n+ raise ValueError(f\"{self.tag} requires a 'norm' or 'amplitude' parameter\"\n+ \" in the model to run\")\n \n scale_model.norm = self._set_norm_parameter(scale_model.norm, scaled_parameter)\n return scale_model\n", "issue": "FluxEstimator._set_norm_parameter crashes\n**Gammapy version**\r\n0.19\r\n\r\n**Bug description**\r\nAny code passing trough the \"scan\" branch of `FluxPointsEstimator._nan_result()` will crash due the invocation of `_set_norm_parameter` without the argument `scaled_parameter`.\r\n\r\n**Expected behavior**\r\nEither `_nan_result` passing the correct arguments, or `_set_norm_parameter` properly handling its default arguments.\r\n\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\nimport logging\nimport numpy as np\nfrom gammapy.datasets import Datasets\nfrom gammapy.estimators.parameter import ParameterEstimator\nfrom gammapy.maps import Map, MapAxis\nfrom gammapy.modeling import Parameter\nfrom gammapy.modeling.models import ScaleSpectralModel\n\nlog = logging.getLogger(__name__)\n\n\nclass FluxEstimator(ParameterEstimator):\n \"\"\"Flux estimator.\n\n Estimates flux for a given list of datasets with their model in a given energy range.\n\n To estimate the model flux the amplitude of the reference spectral model is\n fitted within the energy range. The amplitude is re-normalized using the \"norm\" parameter,\n which specifies the deviation of the flux from the reference model in this\n energy range.\n\n Parameters\n ----------\n source : str or int\n For which source in the model to compute the flux.\n norm_min : float\n Minimum value for the norm used for the fit statistic profile evaluation.\n norm_max : float\n Maximum value for the norm used for the fit statistic profile evaluation.\n norm_n_values : int\n Number of norm values used for the fit statistic profile.\n norm_values : `numpy.ndarray`\n Array of norm values to be used for the fit statistic profile.\n n_sigma : int\n Sigma to use for asymmetric error computation.\n n_sigma_ul : int\n Sigma to use for upper limit computation.\n selection_optional : list of str\n Which additional quantities to estimate. Available options are:\n\n * \"all\": all the optional steps are executed\n * \"errn-errp\": estimate asymmetric errors.\n * \"ul\": estimate upper limits.\n * \"scan\": estimate fit statistic profiles.\n\n Default is None so the optional steps are not executed.\n fit : `Fit`\n Fit instance specifying the backend and fit options.\n reoptimize : bool\n Re-optimize other free model parameters. 
Default is False.\n \"\"\"\n\n tag = \"FluxEstimator\"\n\n def __init__(\n self,\n source=0,\n norm_min=0.2,\n norm_max=5,\n norm_n_values=11,\n norm_values=None,\n n_sigma=1,\n n_sigma_ul=2,\n selection_optional=None,\n fit=None,\n reoptimize=False,\n ):\n self.norm_values = norm_values\n self.norm_min = norm_min\n self.norm_max = norm_max\n self.norm_n_values = norm_n_values\n self.source = source\n super().__init__(\n null_value=0,\n n_sigma=n_sigma,\n n_sigma_ul=n_sigma_ul,\n selection_optional=selection_optional,\n fit=fit,\n reoptimize=reoptimize,\n )\n\n def _set_norm_parameter(self, norm=None, scaled_parameter=None):\n \"\"\"Define properties of the norm spectral parameter.\"\"\"\n if norm is None:\n norm = Parameter(\"norm\", 1, unit=\"\", interp=\"log\")\n\n norm.value = 1.0\n norm.frozen = False\n\n norm.min = scaled_parameter.min / scaled_parameter.value\n norm.max = scaled_parameter.max / scaled_parameter.value\n norm.interp = scaled_parameter.interp\n norm.scan_values = self.norm_values\n norm.scan_min = self.norm_min\n norm.scan_max = self.norm_max\n norm.scan_n_values = self.norm_n_values\n return norm\n\n def get_scale_model(self, models):\n \"\"\"Set scale model\n\n Parameters\n ----------\n models : `Models`\n Models\n\n Returns\n -------\n model : `ScaleSpectralModel`\n Scale spectral model\n \"\"\"\n ref_model = models[self.source].spectral_model\n scale_model = ScaleSpectralModel(ref_model)\n\n if \"amplitude\" in ref_model.parameters.names:\n scaled_parameter = ref_model.parameters[\"amplitude\"]\n else:\n scaled_parameter = ref_model.parameters[\"norm\"]\n\n scale_model.norm = self._set_norm_parameter(scale_model.norm, scaled_parameter)\n return scale_model\n\n def estimate_npred_excess(self, datasets):\n \"\"\"Estimate npred excess for the source.\n\n Parameters\n ----------\n datasets : Datasets\n Datasets\n\n Returns\n -------\n result : dict\n Dict with an array with one entry per dataset with the sum of the\n masked npred excess.\n \"\"\"\n npred_excess = []\n\n for dataset in datasets:\n name = datasets.models[self.source].name\n npred_signal = dataset.npred_signal(model_name=name)\n npred = Map.from_geom(dataset.mask.geom)\n npred.stack(npred_signal)\n npred_excess.append(npred.data[dataset.mask].sum())\n\n return {\"npred_excess\": np.array(npred_excess), \"datasets\": datasets.names}\n\n def run(self, datasets):\n \"\"\"Estimate flux for a given energy range.\n\n Parameters\n ----------\n datasets : list of `~gammapy.datasets.SpectrumDataset`\n Spectrum datasets.\n\n Returns\n -------\n result : dict\n Dict with results for the flux point.\n \"\"\"\n datasets = Datasets(datasets)\n models = datasets.models.copy()\n\n model = self.get_scale_model(models)\n\n energy_min, energy_max = datasets.energy_ranges\n energy_axis = MapAxis.from_energy_edges([energy_min.min(), energy_max.max()])\n\n with np.errstate(invalid=\"ignore\", divide=\"ignore\"):\n result = model.reference_fluxes(energy_axis=energy_axis)\n # convert to scalar values\n result = {key: value.item() for key, value in result.items()}\n\n models[self.source].spectral_model = model\n datasets.models = models\n result.update(super().run(datasets, model.norm))\n\n # TODO: find a cleaner way of including the npred_excess info\n datasets.models[self.source].spectral_model.norm.value = result[\"norm\"]\n result.update(self.estimate_npred_excess(datasets=datasets))\n return result\n", "path": "gammapy/estimators/flux.py"}], "after_files": [{"content": "# Licensed under a 3-clause BSD style license - see 
LICENSE.rst\nimport logging\nimport numpy as np\nfrom gammapy.datasets import Datasets\nfrom gammapy.estimators.parameter import ParameterEstimator\nfrom gammapy.maps import Map, MapAxis\nfrom gammapy.modeling import Parameter\nfrom gammapy.modeling.models import ScaleSpectralModel\n\nlog = logging.getLogger(__name__)\n\n\nclass FluxEstimator(ParameterEstimator):\n \"\"\"Flux estimator.\n\n Estimates flux for a given list of datasets with their model in a given energy range.\n\n To estimate the model flux the amplitude of the reference spectral model is\n fitted within the energy range. The amplitude is re-normalized using the \"norm\" parameter,\n which specifies the deviation of the flux from the reference model in this\n energy range.\n\n Parameters\n ----------\n source : str or int\n For which source in the model to compute the flux.\n norm_min : float\n Minimum value for the norm used for the fit statistic profile evaluation.\n norm_max : float\n Maximum value for the norm used for the fit statistic profile evaluation.\n norm_n_values : int\n Number of norm values used for the fit statistic profile.\n norm_values : `numpy.ndarray`\n Array of norm values to be used for the fit statistic profile.\n n_sigma : int\n Sigma to use for asymmetric error computation.\n n_sigma_ul : int\n Sigma to use for upper limit computation.\n selection_optional : list of str\n Which additional quantities to estimate. Available options are:\n\n * \"all\": all the optional steps are executed\n * \"errn-errp\": estimate asymmetric errors.\n * \"ul\": estimate upper limits.\n * \"scan\": estimate fit statistic profiles.\n\n Default is None so the optional steps are not executed.\n fit : `Fit`\n Fit instance specifying the backend and fit options.\n reoptimize : bool\n Re-optimize other free model parameters. 
Default is False.\n \"\"\"\n\n tag = \"FluxEstimator\"\n\n def __init__(\n self,\n source=0,\n norm_min=0.2,\n norm_max=5,\n norm_n_values=11,\n norm_values=None,\n n_sigma=1,\n n_sigma_ul=2,\n selection_optional=None,\n fit=None,\n reoptimize=False,\n ):\n self.norm_values = norm_values\n self.norm_min = norm_min\n self.norm_max = norm_max\n self.norm_n_values = norm_n_values\n self.source = source\n super().__init__(\n null_value=0,\n n_sigma=n_sigma,\n n_sigma_ul=n_sigma_ul,\n selection_optional=selection_optional,\n fit=fit,\n reoptimize=reoptimize,\n )\n\n def _set_norm_parameter(self, norm=None, scaled_parameter=None):\n \"\"\"Define properties of the norm spectral parameter.\"\"\"\n if norm is None:\n norm = Parameter(\"norm\", 1, unit=\"\", interp=\"log\")\n\n norm.value = 1.0\n norm.frozen = False\n\n norm.min = scaled_parameter.min / scaled_parameter.value\n norm.max = scaled_parameter.max / scaled_parameter.value\n norm.interp = scaled_parameter.interp\n norm.scan_values = self.norm_values\n norm.scan_min = self.norm_min\n norm.scan_max = self.norm_max\n norm.scan_n_values = self.norm_n_values\n return norm\n\n def get_scale_model(self, models):\n \"\"\"Set scale model\n\n Parameters\n ----------\n models : `Models`\n Models\n\n Returns\n -------\n model : `ScaleSpectralModel`\n Scale spectral model\n \"\"\"\n ref_model = models[self.source].spectral_model\n scale_model = ScaleSpectralModel(ref_model)\n\n for scaled_parameter in ref_model.parameters:\n if scaled_parameter.is_norm:\n break\n else:\n raise ValueError(f\"{self.tag} requires a 'norm' or 'amplitude' parameter\"\n \" in the model to run\")\n\n scale_model.norm = self._set_norm_parameter(scale_model.norm, scaled_parameter)\n return scale_model\n\n def estimate_npred_excess(self, datasets):\n \"\"\"Estimate npred excess for the source.\n\n Parameters\n ----------\n datasets : Datasets\n Datasets\n\n Returns\n -------\n result : dict\n Dict with an array with one entry per dataset with the sum of the\n masked npred excess.\n \"\"\"\n npred_excess = []\n\n for dataset in datasets:\n name = datasets.models[self.source].name\n npred_signal = dataset.npred_signal(model_name=name)\n npred = Map.from_geom(dataset.mask.geom)\n npred.stack(npred_signal)\n npred_excess.append(npred.data[dataset.mask].sum())\n\n return {\"npred_excess\": np.array(npred_excess), \"datasets\": datasets.names}\n\n def run(self, datasets):\n \"\"\"Estimate flux for a given energy range.\n\n Parameters\n ----------\n datasets : list of `~gammapy.datasets.SpectrumDataset`\n Spectrum datasets.\n\n Returns\n -------\n result : dict\n Dict with results for the flux point.\n \"\"\"\n datasets = Datasets(datasets)\n models = datasets.models.copy()\n\n model = self.get_scale_model(models)\n\n energy_min, energy_max = datasets.energy_ranges\n energy_axis = MapAxis.from_energy_edges([energy_min.min(), energy_max.max()])\n\n with np.errstate(invalid=\"ignore\", divide=\"ignore\"):\n result = model.reference_fluxes(energy_axis=energy_axis)\n # convert to scalar values\n result = {key: value.item() for key, value in result.items()}\n\n models[self.source].spectral_model = model\n datasets.models = models\n result.update(super().run(datasets, model.norm))\n\n # TODO: find a cleaner way of including the npred_excess info\n datasets.models[self.source].spectral_model.norm.value = result[\"norm\"]\n result.update(self.estimate_npred_excess(datasets=datasets))\n return result\n", "path": "gammapy/estimators/flux.py"}]} | 2,091 | 206 |
gh_patches_debug_21471 | rasdani/github-patches | git_diff | pymedusa__Medusa-9939 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[APP SUBMITTED]: AttributeError: 'NoneType' object has no attribute 'json'
### INFO
**Python Version**: `3.8.5 (default, Aug 2 2020, 15:09:07) [GCC 9.3.0]`
**Operating System**: `Linux-4.9.35-v7+-armv7l-with-glibc2.29`
**Locale**: `UTF-8`
**Branch**: [develop](../tree/develop)
**Database**: `44.18`
**Commit**: pymedusa/Medusa@5c8597b932e7975c5619c0ecc906efce625202dc
**Link to Log**: https://gist.github.com/85c8565a27efbcc61281cbd34dec0a9f
### ERROR
<pre>
2021-04-19 03:55:20 ERROR SNATCHQUEUE-SNATCH-75978 :: [5c8597b] Transmission: Failed Sending Torrent
Traceback (most recent call last):
File "/home/pi/Medusa/<a href="../blob/5c8597b932e7975c5619c0ecc906efce625202dc/medusa/clients/torrent/generic.py#L257">medusa/clients/torrent/generic.py</a>", line 257, in send_torrent
r_code = self._add_torrent_file(result)
File "/home/pi/Medusa/<a href="../blob/5c8597b932e7975c5619c0ecc906efce625202dc/medusa/clients/torrent/transmission.py#L128">medusa/clients/torrent/transmission.py</a>", line 128, in _add_torrent_file
return self.check_response()
File "/home/pi/Medusa/<a href="../blob/5c8597b932e7975c5619c0ecc906efce625202dc/medusa/clients/torrent/transmission.py#L47">medusa/clients/torrent/transmission.py</a>", line 47, in check_response
return self.response.json()['result'] == 'success'
AttributeError: 'NoneType' object has no attribute 'json'
</pre>
---
_STAFF NOTIFIED_: @pymedusa/support @pymedusa/moderators
--- END ISSUE ---
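The traceback bottoms out in `self.response.json()` while `self.response` is `None`, which is exactly what the `'NoneType' object has no attribute 'json'` message means: no `Response` object was ever stored before `check_response` ran. A minimal sketch of that failure mode and one defensive pattern — the helper below is illustrative only, not the Medusa client code:

```python
def check_response(response):
    # `response` is None when the HTTP call never produced a Response object,
    # e.g. because the request itself failed before a reply was received.
    if response is None:
        return False
    try:
        return response.json().get('result') == 'success'
    except ValueError:  # body was present but not valid JSON
        return False


print(check_response(None))  # prints False instead of raising AttributeError
```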
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `medusa/providers/torrent/json/privatehd.py`
Content:
```
1 # coding=utf-8
2
3 """Provider code for PrivateHD."""
4
5 from __future__ import unicode_literals
6
7 import logging
8
9 from medusa import tv
10 from medusa.helper.common import convert_size
11 from medusa.helper.exceptions import AuthException
12 from medusa.logger.adapters.style import BraceAdapter
13 from medusa.providers.torrent.torrent_provider import TorrentProvider
14
15 from requests.compat import urljoin
16
17 log = BraceAdapter(logging.getLogger(__name__))
18 log.logger.addHandler(logging.NullHandler())
19
20
21 class PrivateHDProvider(TorrentProvider):
22 """PrivateHD Torrent provider."""
23
24 def __init__(self):
25 """Initialize the class."""
26 super(PrivateHDProvider, self).__init__('PrivateHD')
27
28 # Credentials
29 self.username = None
30 self.password = None
31 self.pid = None
32 self._token = None
33
34 # URLs
35 self.url = 'https://privatehd.to'
36 self.urls = {
37 'login': urljoin(self.url, 'api/v1/jackett/auth'),
38 'search': urljoin(self.url, 'api/v1/jackett/torrents'),
39 }
40
41 # Proper Strings
42 self.proper_strings = ['PROPER', 'REPACK', 'REAL', 'RERIP']
43
44 # Miscellaneous Options
45 self.freeleech = False
46
47 # Torrent Stats
48
49 # Cache
50 self.cache = tv.Cache(self) # only poll PrivateHD every 10 minutes max
51
52 def search(self, search_strings, age=0, ep_obj=None, **kwargs):
53 """
54 Search a provider and parse the results.
55
56 :param search_strings: A dict with mode (key) and the search value (value)
57 :param age: Not used
58 :param ep_obj: Not used
59 :returns: A list of search results (structure)
60 """
61 results = []
62 if not self.login():
63 return results
64
65 for mode in search_strings:
66 log.debug('Search mode: {0}', mode)
67
68 for search_string in search_strings[mode]:
69 if mode != 'RSS':
70 log.debug('Search string: {search}',
71 {'search': search_string})
72
73 search_params = {
74 'in': 1,
75 'search': search_string,
76 'type': 2,
77 'discount[]': 1 if self.freeleech else None,
78 'tv_type[]': {'episode': 1, 'season': 2}.get(mode.lower())
79 }
80
81 if not search_string:
82 del search_params['search']
83
84 headers = {
85 'Authorization': f'Bearer {self._token}'
86 }
87
88 response = self.session.get(self.urls['search'], params=search_params, headers=headers)
89 try:
90 jdata = response.json()
91 if not jdata.get('data') or not len(jdata['data']):
92 log.debug('No data returned from provider')
93 continue
94 except ValueError:
95 log.debug('No data returned from provider')
96 continue
97
98 results += self.parse(jdata, mode)
99
100 return results
101
102 def parse(self, data, mode):
103 """
104 Parse search results for items.
105
106 :param data: The raw response from a search
107 :param mode: The current mode used to search, e.g. RSS
108
109 :return: A list of items found
110 """
111 items = []
112 json_data = data.get('data', [])
113
114 for row in json_data:
115 try:
116 title = row.pop('file_name')
117 download_url = row.pop('download')
118 if not all([title, download_url]):
119 continue
120
121 seeders = row.pop('seed', 0)
122 leechers = row.pop('leech', 0)
123
124 # Filter unseeded torrent
125 if seeders < self.minseed:
126 if mode != 'RSS':
127 log.debug("Discarding torrent because it doesn't meet the"
128 ' minimum seeders: {0}. Seeders: {1}',
129 title, seeders)
130 continue
131
132 size = convert_size(row.pop('file_size', None), default=-1)
133 pubdate_raw = row.pop('created_at')
134 pubdate = self.parse_pubdate(pubdate_raw)
135
136 item = {
137 'title': title,
138 'link': download_url,
139 'size': size,
140 'seeders': seeders,
141 'leechers': leechers,
142 'pubdate': pubdate,
143 }
144 if mode != 'RSS':
145 log.debug('Found result: {0} with {1} seeders and {2} leechers',
146 title, seeders, leechers)
147
148 items.append(item)
149 except (AttributeError, TypeError, KeyError, ValueError, IndexError):
150 log.exception('Failed parsing provider.')
151 return items
152
153 def login(self):
154 """Login method used for logging in before doing search and torrent downloads."""
155 login_params = {
156 'pid': self.pid,
157 'username': self.username,
158 'password': self.password
159 }
160
161 response = self.session.post(self.urls['login'], data=login_params)
162 try:
163 jdata = response.json()
164 if 'message' in jdata:
165 raise AuthException(f"Error trying to auth, {jdata['message']}")
166 except ValueError:
167 log.debug('No data returned from provider')
168 raise AuthException('Could not get auth token')
169
170 if 'token' in jdata:
171 self._token = jdata['token']
172 return True
173
174 return False
175
176
177 provider = PrivateHDProvider()
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/medusa/providers/torrent/json/privatehd.py b/medusa/providers/torrent/json/privatehd.py
--- a/medusa/providers/torrent/json/privatehd.py
+++ b/medusa/providers/torrent/json/privatehd.py
@@ -91,7 +91,7 @@
if not jdata.get('data') or not len(jdata['data']):
log.debug('No data returned from provider')
continue
- except ValueError:
+ except (AttributeError, ValueError):
log.debug('No data returned from provider')
continue
@@ -163,7 +163,7 @@
jdata = response.json()
if 'message' in jdata:
raise AuthException(f"Error trying to auth, {jdata['message']}")
- except ValueError:
+ except (AttributeError, ValueError):
log.debug('No data returned from provider')
raise AuthException('Could not get auth token')
| {"golden_diff": "diff --git a/medusa/providers/torrent/json/privatehd.py b/medusa/providers/torrent/json/privatehd.py\n--- a/medusa/providers/torrent/json/privatehd.py\n+++ b/medusa/providers/torrent/json/privatehd.py\n@@ -91,7 +91,7 @@\n if not jdata.get('data') or not len(jdata['data']):\n log.debug('No data returned from provider')\n continue\n- except ValueError:\n+ except (AttributeError, ValueError):\n log.debug('No data returned from provider')\n continue\n \n@@ -163,7 +163,7 @@\n jdata = response.json()\n if 'message' in jdata:\n raise AuthException(f\"Error trying to auth, {jdata['message']}\")\n- except ValueError:\n+ except (AttributeError, ValueError):\n log.debug('No data returned from provider')\n raise AuthException('Could not get auth token')\n", "issue": "[APP SUBMITTED]: AttributeError: 'NoneType' object has no attribute 'json'\n\n### INFO\n**Python Version**: `3.8.5 (default, Aug 2 2020, 15:09:07) [GCC 9.3.0]`\n**Operating System**: `Linux-4.9.35-v7+-armv7l-with-glibc2.29`\n**Locale**: `UTF-8`\n**Branch**: [develop](../tree/develop)\n**Database**: `44.18`\n**Commit**: pymedusa/Medusa@5c8597b932e7975c5619c0ecc906efce625202dc\n**Link to Log**: https://gist.github.com/85c8565a27efbcc61281cbd34dec0a9f\n### ERROR\n<pre>\n2021-04-19 03:55:20 ERROR SNATCHQUEUE-SNATCH-75978 :: [5c8597b] Transmission: Failed Sending Torrent\nTraceback (most recent call last):\n File \"/home/pi/Medusa/<a href=\"../blob/5c8597b932e7975c5619c0ecc906efce625202dc/medusa/clients/torrent/generic.py#L257\">medusa/clients/torrent/generic.py</a>\", line 257, in send_torrent\n r_code = self._add_torrent_file(result)\n File \"/home/pi/Medusa/<a href=\"../blob/5c8597b932e7975c5619c0ecc906efce625202dc/medusa/clients/torrent/transmission.py#L128\">medusa/clients/torrent/transmission.py</a>\", line 128, in _add_torrent_file\n return self.check_response()\n File \"/home/pi/Medusa/<a href=\"../blob/5c8597b932e7975c5619c0ecc906efce625202dc/medusa/clients/torrent/transmission.py#L47\">medusa/clients/torrent/transmission.py</a>\", line 47, in check_response\n return self.response.json()['result'] == 'success'\nAttributeError: 'NoneType' object has no attribute 'json'\n</pre>\n---\n_STAFF NOTIFIED_: @pymedusa/support @pymedusa/moderators\n\n", "before_files": [{"content": "# coding=utf-8\n\n\"\"\"Provider code for PrivateHD.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\n\nfrom medusa import tv\nfrom medusa.helper.common import convert_size\nfrom medusa.helper.exceptions import AuthException\nfrom medusa.logger.adapters.style import BraceAdapter\nfrom medusa.providers.torrent.torrent_provider import TorrentProvider\n\nfrom requests.compat import urljoin\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\n\nclass PrivateHDProvider(TorrentProvider):\n \"\"\"PrivateHD Torrent provider.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize the class.\"\"\"\n super(PrivateHDProvider, self).__init__('PrivateHD')\n\n # Credentials\n self.username = None\n self.password = None\n self.pid = None\n self._token = None\n\n # URLs\n self.url = 'https://privatehd.to'\n self.urls = {\n 'login': urljoin(self.url, 'api/v1/jackett/auth'),\n 'search': urljoin(self.url, 'api/v1/jackett/torrents'),\n }\n\n # Proper Strings\n self.proper_strings = ['PROPER', 'REPACK', 'REAL', 'RERIP']\n\n # Miscellaneous Options\n self.freeleech = False\n\n # Torrent Stats\n\n # Cache\n self.cache = tv.Cache(self) # only poll PrivateHD every 10 minutes max\n\n def search(self, 
search_strings, age=0, ep_obj=None, **kwargs):\n \"\"\"\n Search a provider and parse the results.\n\n :param search_strings: A dict with mode (key) and the search value (value)\n :param age: Not used\n :param ep_obj: Not used\n :returns: A list of search results (structure)\n \"\"\"\n results = []\n if not self.login():\n return results\n\n for mode in search_strings:\n log.debug('Search mode: {0}', mode)\n\n for search_string in search_strings[mode]:\n if mode != 'RSS':\n log.debug('Search string: {search}',\n {'search': search_string})\n\n search_params = {\n 'in': 1,\n 'search': search_string,\n 'type': 2,\n 'discount[]': 1 if self.freeleech else None,\n 'tv_type[]': {'episode': 1, 'season': 2}.get(mode.lower())\n }\n\n if not search_string:\n del search_params['search']\n\n headers = {\n 'Authorization': f'Bearer {self._token}'\n }\n\n response = self.session.get(self.urls['search'], params=search_params, headers=headers)\n try:\n jdata = response.json()\n if not jdata.get('data') or not len(jdata['data']):\n log.debug('No data returned from provider')\n continue\n except ValueError:\n log.debug('No data returned from provider')\n continue\n\n results += self.parse(jdata, mode)\n\n return results\n\n def parse(self, data, mode):\n \"\"\"\n Parse search results for items.\n\n :param data: The raw response from a search\n :param mode: The current mode used to search, e.g. RSS\n\n :return: A list of items found\n \"\"\"\n items = []\n json_data = data.get('data', [])\n\n for row in json_data:\n try:\n title = row.pop('file_name')\n download_url = row.pop('download')\n if not all([title, download_url]):\n continue\n\n seeders = row.pop('seed', 0)\n leechers = row.pop('leech', 0)\n\n # Filter unseeded torrent\n if seeders < self.minseed:\n if mode != 'RSS':\n log.debug(\"Discarding torrent because it doesn't meet the\"\n ' minimum seeders: {0}. 
Seeders: {1}',\n title, seeders)\n continue\n\n size = convert_size(row.pop('file_size', None), default=-1)\n pubdate_raw = row.pop('created_at')\n pubdate = self.parse_pubdate(pubdate_raw)\n\n item = {\n 'title': title,\n 'link': download_url,\n 'size': size,\n 'seeders': seeders,\n 'leechers': leechers,\n 'pubdate': pubdate,\n }\n if mode != 'RSS':\n log.debug('Found result: {0} with {1} seeders and {2} leechers',\n title, seeders, leechers)\n\n items.append(item)\n except (AttributeError, TypeError, KeyError, ValueError, IndexError):\n log.exception('Failed parsing provider.')\n return items\n\n def login(self):\n \"\"\"Login method used for logging in before doing search and torrent downloads.\"\"\"\n login_params = {\n 'pid': self.pid,\n 'username': self.username,\n 'password': self.password\n }\n\n response = self.session.post(self.urls['login'], data=login_params)\n try:\n jdata = response.json()\n if 'message' in jdata:\n raise AuthException(f\"Error trying to auth, {jdata['message']}\")\n except ValueError:\n log.debug('No data returned from provider')\n raise AuthException('Could not get auth token')\n\n if 'token' in jdata:\n self._token = jdata['token']\n return True\n\n return False\n\n\nprovider = PrivateHDProvider()\n", "path": "medusa/providers/torrent/json/privatehd.py"}], "after_files": [{"content": "# coding=utf-8\n\n\"\"\"Provider code for PrivateHD.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\n\nfrom medusa import tv\nfrom medusa.helper.common import convert_size\nfrom medusa.helper.exceptions import AuthException\nfrom medusa.logger.adapters.style import BraceAdapter\nfrom medusa.providers.torrent.torrent_provider import TorrentProvider\n\nfrom requests.compat import urljoin\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\n\nclass PrivateHDProvider(TorrentProvider):\n \"\"\"PrivateHD Torrent provider.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize the class.\"\"\"\n super(PrivateHDProvider, self).__init__('PrivateHD')\n\n # Credentials\n self.username = None\n self.password = None\n self.pid = None\n self._token = None\n\n # URLs\n self.url = 'https://privatehd.to'\n self.urls = {\n 'login': urljoin(self.url, 'api/v1/jackett/auth'),\n 'search': urljoin(self.url, 'api/v1/jackett/torrents'),\n }\n\n # Proper Strings\n self.proper_strings = ['PROPER', 'REPACK', 'REAL', 'RERIP']\n\n # Miscellaneous Options\n self.freeleech = False\n\n # Torrent Stats\n\n # Cache\n self.cache = tv.Cache(self) # only poll PrivateHD every 10 minutes max\n\n def search(self, search_strings, age=0, ep_obj=None, **kwargs):\n \"\"\"\n Search a provider and parse the results.\n\n :param search_strings: A dict with mode (key) and the search value (value)\n :param age: Not used\n :param ep_obj: Not used\n :returns: A list of search results (structure)\n \"\"\"\n results = []\n if not self.login():\n return results\n\n for mode in search_strings:\n log.debug('Search mode: {0}', mode)\n\n for search_string in search_strings[mode]:\n if mode != 'RSS':\n log.debug('Search string: {search}',\n {'search': search_string})\n\n search_params = {\n 'in': 1,\n 'search': search_string,\n 'type': 2,\n 'discount[]': 1 if self.freeleech else None,\n 'tv_type[]': {'episode': 1, 'season': 2}.get(mode.lower())\n }\n\n if not search_string:\n del search_params['search']\n\n headers = {\n 'Authorization': f'Bearer {self._token}'\n }\n\n response = self.session.get(self.urls['search'], params=search_params, headers=headers)\n try:\n jdata = 
response.json()\n if not jdata.get('data') or not len(jdata['data']):\n log.debug('No data returned from provider')\n continue\n except (AttributeError, ValueError):\n log.debug('No data returned from provider')\n continue\n\n results += self.parse(jdata, mode)\n\n return results\n\n def parse(self, data, mode):\n \"\"\"\n Parse search results for items.\n\n :param data: The raw response from a search\n :param mode: The current mode used to search, e.g. RSS\n\n :return: A list of items found\n \"\"\"\n items = []\n json_data = data.get('data', [])\n\n for row in json_data:\n try:\n title = row.pop('file_name')\n download_url = row.pop('download')\n if not all([title, download_url]):\n continue\n\n seeders = row.pop('seed', 0)\n leechers = row.pop('leech', 0)\n\n # Filter unseeded torrent\n if seeders < self.minseed:\n if mode != 'RSS':\n log.debug(\"Discarding torrent because it doesn't meet the\"\n ' minimum seeders: {0}. Seeders: {1}',\n title, seeders)\n continue\n\n size = convert_size(row.pop('file_size', None), default=-1)\n pubdate_raw = row.pop('created_at')\n pubdate = self.parse_pubdate(pubdate_raw)\n\n item = {\n 'title': title,\n 'link': download_url,\n 'size': size,\n 'seeders': seeders,\n 'leechers': leechers,\n 'pubdate': pubdate,\n }\n if mode != 'RSS':\n log.debug('Found result: {0} with {1} seeders and {2} leechers',\n title, seeders, leechers)\n\n items.append(item)\n except (AttributeError, TypeError, KeyError, ValueError, IndexError):\n log.exception('Failed parsing provider.')\n return items\n\n def login(self):\n \"\"\"Login method used for logging in before doing search and torrent downloads.\"\"\"\n login_params = {\n 'pid': self.pid,\n 'username': self.username,\n 'password': self.password\n }\n\n response = self.session.post(self.urls['login'], data=login_params)\n try:\n jdata = response.json()\n if 'message' in jdata:\n raise AuthException(f\"Error trying to auth, {jdata['message']}\")\n except (AttributeError, ValueError):\n log.debug('No data returned from provider')\n raise AuthException('Could not get auth token')\n\n if 'token' in jdata:\n self._token = jdata['token']\n return True\n\n return False\n\n\nprovider = PrivateHDProvider()\n", "path": "medusa/providers/torrent/json/privatehd.py"}]} | 2,478 | 204 |
gh_patches_debug_15701 | rasdani/github-patches | git_diff | aio-libs__aiohttp-3055 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
3.3 socket read timeouts not working correctly
@asvetlov looks like the new `sock_read` timeout implementation is not working, see testcase:
```python
import multiprocessing
import asyncio
import aiohttp
import socket
host = "localhost"
def get_free_tcp_port():
sckt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sckt.bind((host, 0))
addr, port = sckt.getsockname()
sckt.close()
return port
class AIOServer(multiprocessing.Process):
"""
This is a mock AWS service which will 5 seconds before returning
a response to test socket timeouts.
"""
def __init__(self):
super().__init__(target=self._run)
self._loop = None
self._port = get_free_tcp_port()
self.endpoint_url = 'http://{}:{}'.format(host, self._port)
self.daemon = True # die when parent dies
def _run(self):
asyncio.set_event_loop(asyncio.new_event_loop())
app = aiohttp.web.Application()
app.router.add_route('*', '/ok', self.ok)
app.router.add_route('*', '/{anything:.*}', self.stream_handler)
try:
aiohttp.web.run_app(app, host=host, port=self._port,
handle_signals=False)
except BaseException:
pytest.fail('unable to start and connect to aiohttp server')
raise
async def __aenter__(self):
self.start()
await self._wait_until_up()
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
try:
self.terminate()
except:
pytest.fail("Unable to shut down server")
raise
async def ok(self, request):
return aiohttp.web.Response()
async def stream_handler(self, request):
# Without the Content-Type, most (all?) browsers will not render
# partially downloaded content. Note, the response type is
# StreamResponse not Response.
resp = aiohttp.web.StreamResponse(status=200, reason='OK',
headers={'Content-Type': 'text/html'})
await resp.prepare(request)
await asyncio.sleep(5, loop=self._loop)
await resp.drain()
return resp
async def _wait_until_up(self):
async with aiohttp.ClientSession() as session:
for i in range(0, 30):
if self.exitcode is not None:
pytest.fail('unable to start and connect to aiohttp server')
return
try:
# we need to bypass the proxies due to monkey patches
await session.get(self.endpoint_url + '/ok', timeout=0.5)
return
except (aiohttp.ClientConnectionError, asyncio.TimeoutError):
await asyncio.sleep(0.5)
except BaseException:
pytest.fail('unable to start and connect to aiohttp server')
raise
pytest.fail('unable to start and connect to aiohttp server')
async def main():
async with AIOServer() as server, \
aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(sock_connect=1, sock_read=1)) as session:
try:
response = await session.get(server.endpoint_url)
await response.read()
except asyncio.TimeoutError:
print("Success")
print("failure")
if __name__ == '__main__':
asyncio.get_event_loop().run_until_complete(main())
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aiohttp/client_proto.py`
Content:
```
1 from contextlib import suppress
2
3 from .base_protocol import BaseProtocol
4 from .client_exceptions import (ClientOSError, ClientPayloadError,
5 ServerDisconnectedError, ServerTimeoutError)
6 from .http import HttpResponseParser
7 from .streams import EMPTY_PAYLOAD, DataQueue
8
9
10 class ResponseHandler(BaseProtocol, DataQueue):
11 """Helper class to adapt between Protocol and StreamReader."""
12
13 def __init__(self, *, loop=None):
14 BaseProtocol.__init__(self, loop=loop)
15 DataQueue.__init__(self, loop=loop)
16
17 self._should_close = False
18
19 self._payload = None
20 self._skip_payload = False
21 self._payload_parser = None
22 self._reading_paused = False
23
24 self._timer = None
25
26 self._tail = b''
27 self._upgraded = False
28 self._parser = None
29
30 self._read_timeout = None
31 self._read_timeout_handle = None
32
33 @property
34 def upgraded(self):
35 return self._upgraded
36
37 @property
38 def should_close(self):
39 if (self._payload is not None and
40 not self._payload.is_eof() or self._upgraded):
41 return True
42
43 return (self._should_close or self._upgraded or
44 self.exception() is not None or
45 self._payload_parser is not None or
46 len(self) or self._tail)
47
48 def close(self):
49 transport = self.transport
50 if transport is not None:
51 transport.close()
52 self.transport = None
53 self._payload = None
54 return transport
55
56 def is_connected(self):
57 return self.transport is not None
58
59 def connection_lost(self, exc):
60 self._drop_timeout()
61
62 if self._payload_parser is not None:
63 with suppress(Exception):
64 self._payload_parser.feed_eof()
65
66 try:
67 uncompleted = self._parser.feed_eof()
68 except Exception:
69 uncompleted = None
70 if self._payload is not None:
71 self._payload.set_exception(
72 ClientPayloadError('Response payload is not completed'))
73
74 if not self.is_eof():
75 if isinstance(exc, OSError):
76 exc = ClientOSError(*exc.args)
77 if exc is None:
78 exc = ServerDisconnectedError(uncompleted)
79 # assigns self._should_close to True as side effect,
80 # we do it anyway below
81 self.set_exception(exc)
82
83 self._should_close = True
84 self._parser = None
85 self._payload = None
86 self._payload_parser = None
87 self._reading_paused = False
88
89 super().connection_lost(exc)
90
91 def eof_received(self):
92 # should call parser.feed_eof() most likely
93 self._drop_timeout()
94
95 def pause_reading(self):
96 if not self._reading_paused:
97 try:
98 self.transport.pause_reading()
99 except (AttributeError, NotImplementedError, RuntimeError):
100 pass
101 self._reading_paused = True
102 self._drop_timeout()
103
104 def resume_reading(self):
105 if self._reading_paused:
106 try:
107 self.transport.resume_reading()
108 except (AttributeError, NotImplementedError, RuntimeError):
109 pass
110 self._reading_paused = False
111 self._reschedule_timeout()
112
113 def set_exception(self, exc):
114 self._should_close = True
115 self._drop_timeout()
116 super().set_exception(exc)
117
118 def set_parser(self, parser, payload):
119 self._payload = payload
120 self._payload_parser = parser
121
122 self._drop_timeout()
123
124 if self._tail:
125 data, self._tail = self._tail, b''
126 self.data_received(data)
127
128 def set_response_params(self, *, timer=None,
129 skip_payload=False,
130 read_until_eof=False,
131 auto_decompress=True,
132 read_timeout=None):
133 self._skip_payload = skip_payload
134
135 self._read_timeout = read_timeout
136 self._reschedule_timeout()
137
138 self._parser = HttpResponseParser(
139 self, self._loop, timer=timer,
140 payload_exception=ClientPayloadError,
141 read_until_eof=read_until_eof,
142 auto_decompress=auto_decompress)
143
144 if self._tail:
145 data, self._tail = self._tail, b''
146 self.data_received(data)
147
148 def _drop_timeout(self):
149 if self._read_timeout_handle is not None:
150 self._read_timeout_handle.cancel()
151 self._read_timeout_handle = None
152
153 def _reschedule_timeout(self):
154 timeout = self._read_timeout
155 if self._read_timeout_handle is not None:
156 self._read_timeout_handle.cancel()
157
158 if timeout:
159 self._read_timeout_handle = self._loop.call_later(
160 timeout, self._on_read_timeout)
161 else:
162 self._read_timeout_handle = None
163
164 def _on_read_timeout(self):
165 self.set_exception(
166 ServerTimeoutError("Timeout on reading data from socket"))
167
168 def data_received(self, data):
169 if not data:
170 return
171
172 # custom payload parser
173 if self._payload_parser is not None:
174 eof, tail = self._payload_parser.feed_data(data)
175 if eof:
176 self._payload = None
177 self._payload_parser = None
178
179 if tail:
180 self.data_received(tail)
181 return
182 else:
183 if self._upgraded or self._parser is None:
184 # i.e. websocket connection, websocket parser is not set yet
185 self._tail += data
186 else:
187 # parse http messages
188 try:
189 messages, upgraded, tail = self._parser.feed_data(data)
190 except BaseException as exc:
191 self.transport.close()
192 # should_close is True after the call
193 self.set_exception(exc)
194 return
195
196 self._upgraded = upgraded
197
198 payload = None
199 for message, payload in messages:
200 if message.should_close:
201 self._should_close = True
202
203 self._payload = payload
204
205 if self._skip_payload or message.code in (204, 304):
206 self.feed_data((message, EMPTY_PAYLOAD), 0)
207 else:
208 self.feed_data((message, payload), 0)
209 if payload is not None:
210 # new message(s) was processed
211 # register timeout handler unsubscribing
212 # either on end-of-stream or immediatelly for
213 # EMPTY_PAYLOAD
214 if payload is not EMPTY_PAYLOAD:
215 payload.on_eof(self._drop_timeout)
216 else:
217 self._drop_timeout()
218
219 if tail:
220 if upgraded:
221 self.data_received(tail)
222 else:
223 self._tail = tail
224
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aiohttp/client_proto.py b/aiohttp/client_proto.py
--- a/aiohttp/client_proto.py
+++ b/aiohttp/client_proto.py
@@ -51,6 +51,7 @@
transport.close()
self.transport = None
self._payload = None
+ self._drop_timeout()
return transport
def is_connected(self):
@@ -162,8 +163,10 @@
self._read_timeout_handle = None
def _on_read_timeout(self):
- self.set_exception(
- ServerTimeoutError("Timeout on reading data from socket"))
+ exc = ServerTimeoutError("Timeout on reading data from socket")
+ self.set_exception(exc)
+ if self._payload is not None:
+ self._payload.set_exception(exc)
def data_received(self, data):
if not data:
| {"golden_diff": "diff --git a/aiohttp/client_proto.py b/aiohttp/client_proto.py\n--- a/aiohttp/client_proto.py\n+++ b/aiohttp/client_proto.py\n@@ -51,6 +51,7 @@\n transport.close()\n self.transport = None\n self._payload = None\n+ self._drop_timeout()\n return transport\n \n def is_connected(self):\n@@ -162,8 +163,10 @@\n self._read_timeout_handle = None\n \n def _on_read_timeout(self):\n- self.set_exception(\n- ServerTimeoutError(\"Timeout on reading data from socket\"))\n+ exc = ServerTimeoutError(\"Timeout on reading data from socket\")\n+ self.set_exception(exc)\n+ if self._payload is not None:\n+ self._payload.set_exception(exc)\n \n def data_received(self, data):\n if not data:\n", "issue": "3.3 socket read timeouts not working correctly\n@asvetlov looks like the new `sock_read` timeout implementation is not working, see testcase:\r\n\r\n```python\r\nimport multiprocessing\r\nimport asyncio\r\nimport aiohttp\r\nimport socket\r\n\r\nhost = \"localhost\"\r\n\r\n\r\ndef get_free_tcp_port():\r\n sckt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\r\n sckt.bind((host, 0))\r\n addr, port = sckt.getsockname()\r\n sckt.close()\r\n return port\r\n\r\n\r\nclass AIOServer(multiprocessing.Process):\r\n \"\"\"\r\n This is a mock AWS service which will 5 seconds before returning\r\n a response to test socket timeouts.\r\n \"\"\"\r\n def __init__(self):\r\n super().__init__(target=self._run)\r\n self._loop = None\r\n self._port = get_free_tcp_port()\r\n self.endpoint_url = 'http://{}:{}'.format(host, self._port)\r\n self.daemon = True # die when parent dies\r\n\r\n def _run(self):\r\n asyncio.set_event_loop(asyncio.new_event_loop())\r\n app = aiohttp.web.Application()\r\n app.router.add_route('*', '/ok', self.ok)\r\n app.router.add_route('*', '/{anything:.*}', self.stream_handler)\r\n\r\n try:\r\n aiohttp.web.run_app(app, host=host, port=self._port,\r\n handle_signals=False)\r\n except BaseException:\r\n pytest.fail('unable to start and connect to aiohttp server')\r\n raise\r\n\r\n async def __aenter__(self):\r\n self.start()\r\n await self._wait_until_up()\r\n return self\r\n\r\n async def __aexit__(self, exc_type, exc_val, exc_tb):\r\n try:\r\n self.terminate()\r\n except:\r\n pytest.fail(\"Unable to shut down server\")\r\n raise\r\n\r\n async def ok(self, request):\r\n return aiohttp.web.Response()\r\n\r\n async def stream_handler(self, request):\r\n # Without the Content-Type, most (all?) browsers will not render\r\n # partially downloaded content. 
Note, the response type is\r\n # StreamResponse not Response.\r\n resp = aiohttp.web.StreamResponse(status=200, reason='OK',\r\n headers={'Content-Type': 'text/html'})\r\n\r\n await resp.prepare(request)\r\n await asyncio.sleep(5, loop=self._loop)\r\n await resp.drain()\r\n return resp\r\n\r\n async def _wait_until_up(self):\r\n async with aiohttp.ClientSession() as session:\r\n for i in range(0, 30):\r\n if self.exitcode is not None:\r\n pytest.fail('unable to start and connect to aiohttp server')\r\n return\r\n\r\n try:\r\n # we need to bypass the proxies due to monkey patches\r\n await session.get(self.endpoint_url + '/ok', timeout=0.5)\r\n return\r\n except (aiohttp.ClientConnectionError, asyncio.TimeoutError):\r\n await asyncio.sleep(0.5)\r\n except BaseException:\r\n pytest.fail('unable to start and connect to aiohttp server')\r\n raise\r\n\r\n pytest.fail('unable to start and connect to aiohttp server')\r\n\r\n\r\nasync def main():\r\n async with AIOServer() as server, \\\r\n aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(sock_connect=1, sock_read=1)) as session:\r\n\r\n try:\r\n response = await session.get(server.endpoint_url)\r\n await response.read()\r\n except asyncio.TimeoutError:\r\n print(\"Success\")\r\n\r\n print(\"failure\")\r\n\r\n\r\nif __name__ == '__main__':\r\n asyncio.get_event_loop().run_until_complete(main())\r\n```\n", "before_files": [{"content": "from contextlib import suppress\n\nfrom .base_protocol import BaseProtocol\nfrom .client_exceptions import (ClientOSError, ClientPayloadError,\n ServerDisconnectedError, ServerTimeoutError)\nfrom .http import HttpResponseParser\nfrom .streams import EMPTY_PAYLOAD, DataQueue\n\n\nclass ResponseHandler(BaseProtocol, DataQueue):\n \"\"\"Helper class to adapt between Protocol and StreamReader.\"\"\"\n\n def __init__(self, *, loop=None):\n BaseProtocol.__init__(self, loop=loop)\n DataQueue.__init__(self, loop=loop)\n\n self._should_close = False\n\n self._payload = None\n self._skip_payload = False\n self._payload_parser = None\n self._reading_paused = False\n\n self._timer = None\n\n self._tail = b''\n self._upgraded = False\n self._parser = None\n\n self._read_timeout = None\n self._read_timeout_handle = None\n\n @property\n def upgraded(self):\n return self._upgraded\n\n @property\n def should_close(self):\n if (self._payload is not None and\n not self._payload.is_eof() or self._upgraded):\n return True\n\n return (self._should_close or self._upgraded or\n self.exception() is not None or\n self._payload_parser is not None or\n len(self) or self._tail)\n\n def close(self):\n transport = self.transport\n if transport is not None:\n transport.close()\n self.transport = None\n self._payload = None\n return transport\n\n def is_connected(self):\n return self.transport is not None\n\n def connection_lost(self, exc):\n self._drop_timeout()\n\n if self._payload_parser is not None:\n with suppress(Exception):\n self._payload_parser.feed_eof()\n\n try:\n uncompleted = self._parser.feed_eof()\n except Exception:\n uncompleted = None\n if self._payload is not None:\n self._payload.set_exception(\n ClientPayloadError('Response payload is not completed'))\n\n if not self.is_eof():\n if isinstance(exc, OSError):\n exc = ClientOSError(*exc.args)\n if exc is None:\n exc = ServerDisconnectedError(uncompleted)\n # assigns self._should_close to True as side effect,\n # we do it anyway below\n self.set_exception(exc)\n\n self._should_close = True\n self._parser = None\n self._payload = None\n self._payload_parser = None\n 
self._reading_paused = False\n\n super().connection_lost(exc)\n\n def eof_received(self):\n # should call parser.feed_eof() most likely\n self._drop_timeout()\n\n def pause_reading(self):\n if not self._reading_paused:\n try:\n self.transport.pause_reading()\n except (AttributeError, NotImplementedError, RuntimeError):\n pass\n self._reading_paused = True\n self._drop_timeout()\n\n def resume_reading(self):\n if self._reading_paused:\n try:\n self.transport.resume_reading()\n except (AttributeError, NotImplementedError, RuntimeError):\n pass\n self._reading_paused = False\n self._reschedule_timeout()\n\n def set_exception(self, exc):\n self._should_close = True\n self._drop_timeout()\n super().set_exception(exc)\n\n def set_parser(self, parser, payload):\n self._payload = payload\n self._payload_parser = parser\n\n self._drop_timeout()\n\n if self._tail:\n data, self._tail = self._tail, b''\n self.data_received(data)\n\n def set_response_params(self, *, timer=None,\n skip_payload=False,\n read_until_eof=False,\n auto_decompress=True,\n read_timeout=None):\n self._skip_payload = skip_payload\n\n self._read_timeout = read_timeout\n self._reschedule_timeout()\n\n self._parser = HttpResponseParser(\n self, self._loop, timer=timer,\n payload_exception=ClientPayloadError,\n read_until_eof=read_until_eof,\n auto_decompress=auto_decompress)\n\n if self._tail:\n data, self._tail = self._tail, b''\n self.data_received(data)\n\n def _drop_timeout(self):\n if self._read_timeout_handle is not None:\n self._read_timeout_handle.cancel()\n self._read_timeout_handle = None\n\n def _reschedule_timeout(self):\n timeout = self._read_timeout\n if self._read_timeout_handle is not None:\n self._read_timeout_handle.cancel()\n\n if timeout:\n self._read_timeout_handle = self._loop.call_later(\n timeout, self._on_read_timeout)\n else:\n self._read_timeout_handle = None\n\n def _on_read_timeout(self):\n self.set_exception(\n ServerTimeoutError(\"Timeout on reading data from socket\"))\n\n def data_received(self, data):\n if not data:\n return\n\n # custom payload parser\n if self._payload_parser is not None:\n eof, tail = self._payload_parser.feed_data(data)\n if eof:\n self._payload = None\n self._payload_parser = None\n\n if tail:\n self.data_received(tail)\n return\n else:\n if self._upgraded or self._parser is None:\n # i.e. 
websocket connection, websocket parser is not set yet\n self._tail += data\n else:\n # parse http messages\n try:\n messages, upgraded, tail = self._parser.feed_data(data)\n except BaseException as exc:\n self.transport.close()\n # should_close is True after the call\n self.set_exception(exc)\n return\n\n self._upgraded = upgraded\n\n payload = None\n for message, payload in messages:\n if message.should_close:\n self._should_close = True\n\n self._payload = payload\n\n if self._skip_payload or message.code in (204, 304):\n self.feed_data((message, EMPTY_PAYLOAD), 0)\n else:\n self.feed_data((message, payload), 0)\n if payload is not None:\n # new message(s) was processed\n # register timeout handler unsubscribing\n # either on end-of-stream or immediatelly for\n # EMPTY_PAYLOAD\n if payload is not EMPTY_PAYLOAD:\n payload.on_eof(self._drop_timeout)\n else:\n self._drop_timeout()\n\n if tail:\n if upgraded:\n self.data_received(tail)\n else:\n self._tail = tail\n", "path": "aiohttp/client_proto.py"}], "after_files": [{"content": "from contextlib import suppress\n\nfrom .base_protocol import BaseProtocol\nfrom .client_exceptions import (ClientOSError, ClientPayloadError,\n ServerDisconnectedError, ServerTimeoutError)\nfrom .http import HttpResponseParser\nfrom .streams import EMPTY_PAYLOAD, DataQueue\n\n\nclass ResponseHandler(BaseProtocol, DataQueue):\n \"\"\"Helper class to adapt between Protocol and StreamReader.\"\"\"\n\n def __init__(self, *, loop=None):\n BaseProtocol.__init__(self, loop=loop)\n DataQueue.__init__(self, loop=loop)\n\n self._should_close = False\n\n self._payload = None\n self._skip_payload = False\n self._payload_parser = None\n self._reading_paused = False\n\n self._timer = None\n\n self._tail = b''\n self._upgraded = False\n self._parser = None\n\n self._read_timeout = None\n self._read_timeout_handle = None\n\n @property\n def upgraded(self):\n return self._upgraded\n\n @property\n def should_close(self):\n if (self._payload is not None and\n not self._payload.is_eof() or self._upgraded):\n return True\n\n return (self._should_close or self._upgraded or\n self.exception() is not None or\n self._payload_parser is not None or\n len(self) or self._tail)\n\n def close(self):\n transport = self.transport\n if transport is not None:\n transport.close()\n self.transport = None\n self._payload = None\n self._drop_timeout()\n return transport\n\n def is_connected(self):\n return self.transport is not None\n\n def connection_lost(self, exc):\n self._drop_timeout()\n\n if self._payload_parser is not None:\n with suppress(Exception):\n self._payload_parser.feed_eof()\n\n try:\n uncompleted = self._parser.feed_eof()\n except Exception:\n uncompleted = None\n if self._payload is not None:\n self._payload.set_exception(\n ClientPayloadError('Response payload is not completed'))\n\n if not self.is_eof():\n if isinstance(exc, OSError):\n exc = ClientOSError(*exc.args)\n if exc is None:\n exc = ServerDisconnectedError(uncompleted)\n # assigns self._should_close to True as side effect,\n # we do it anyway below\n self.set_exception(exc)\n\n self._should_close = True\n self._parser = None\n self._payload = None\n self._payload_parser = None\n self._reading_paused = False\n\n super().connection_lost(exc)\n\n def eof_received(self):\n # should call parser.feed_eof() most likely\n self._drop_timeout()\n\n def pause_reading(self):\n if not self._reading_paused:\n try:\n self.transport.pause_reading()\n except (AttributeError, NotImplementedError, RuntimeError):\n pass\n 
self._reading_paused = True\n self._drop_timeout()\n\n def resume_reading(self):\n if self._reading_paused:\n try:\n self.transport.resume_reading()\n except (AttributeError, NotImplementedError, RuntimeError):\n pass\n self._reading_paused = False\n self._reschedule_timeout()\n\n def set_exception(self, exc):\n self._should_close = True\n self._drop_timeout()\n super().set_exception(exc)\n\n def set_parser(self, parser, payload):\n self._payload = payload\n self._payload_parser = parser\n\n self._drop_timeout()\n\n if self._tail:\n data, self._tail = self._tail, b''\n self.data_received(data)\n\n def set_response_params(self, *, timer=None,\n skip_payload=False,\n read_until_eof=False,\n auto_decompress=True,\n read_timeout=None):\n self._skip_payload = skip_payload\n\n self._read_timeout = read_timeout\n self._reschedule_timeout()\n\n self._parser = HttpResponseParser(\n self, self._loop, timer=timer,\n payload_exception=ClientPayloadError,\n read_until_eof=read_until_eof,\n auto_decompress=auto_decompress)\n\n if self._tail:\n data, self._tail = self._tail, b''\n self.data_received(data)\n\n def _drop_timeout(self):\n if self._read_timeout_handle is not None:\n self._read_timeout_handle.cancel()\n self._read_timeout_handle = None\n\n def _reschedule_timeout(self):\n timeout = self._read_timeout\n if self._read_timeout_handle is not None:\n self._read_timeout_handle.cancel()\n\n if timeout:\n self._read_timeout_handle = self._loop.call_later(\n timeout, self._on_read_timeout)\n else:\n self._read_timeout_handle = None\n\n def _on_read_timeout(self):\n exc = ServerTimeoutError(\"Timeout on reading data from socket\")\n self.set_exception(exc)\n if self._payload is not None:\n self._payload.set_exception(exc)\n\n def data_received(self, data):\n if not data:\n return\n\n # custom payload parser\n if self._payload_parser is not None:\n eof, tail = self._payload_parser.feed_data(data)\n if eof:\n self._payload = None\n self._payload_parser = None\n\n if tail:\n self.data_received(tail)\n return\n else:\n if self._upgraded or self._parser is None:\n # i.e. websocket connection, websocket parser is not set yet\n self._tail += data\n else:\n # parse http messages\n try:\n messages, upgraded, tail = self._parser.feed_data(data)\n except BaseException as exc:\n self.transport.close()\n # should_close is True after the call\n self.set_exception(exc)\n return\n\n self._upgraded = upgraded\n\n payload = None\n for message, payload in messages:\n if message.should_close:\n self._should_close = True\n\n self._payload = payload\n\n if self._skip_payload or message.code in (204, 304):\n self.feed_data((message, EMPTY_PAYLOAD), 0)\n else:\n self.feed_data((message, payload), 0)\n if payload is not None:\n # new message(s) was processed\n # register timeout handler unsubscribing\n # either on end-of-stream or immediatelly for\n # EMPTY_PAYLOAD\n if payload is not EMPTY_PAYLOAD:\n payload.on_eof(self._drop_timeout)\n else:\n self._drop_timeout()\n\n if tail:\n if upgraded:\n self.data_received(tail)\n else:\n self._tail = tail\n", "path": "aiohttp/client_proto.py"}]} | 2,976 | 188 |
gh_patches_debug_37620 | rasdani/github-patches | git_diff | great-expectations__great_expectations-627 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SparkDFDatasource can't use DatabricksTableGenerator
I'm trying to use `SparkDFDatasource` with a `DatabricksTableGenerator`. To me, it seems multiple problems are happening:
* `Datasource.add_generator` doesn't pass `type_`, although `DatabricksTableGenerator` requires it:
https://github.com/great-expectations/great_expectations/blob/master/great_expectations/datasource/datasource.py#L202
* `Datasource.get_batch` takes `batch_kwargs` as a `dict`, passing it to `SparkDFDatasource._get_data_asset`, which then tries `batch_kwargs.query`, rather than `batch_kwargs["query"]`:
https://github.com/great-expectations/great_expectations/blob/master/great_expectations/datasource/spark_source.py#L97
* In general, I'm not sure why `DatabricksTableGenerator._get_iterator()` returns a `dict_keyiterator` object. Shouldn't it give back the batch_kwargs dict (see the sketch after this list)?
https://github.com/great-expectations/great_expectations/blob/master/great_expectations/datasource/generator/databricks_generator.py#L28
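
A minimal sketch of the shape of fix these three points suggest (a hypothetical simplification based on the files below, not the actual great_expectations code): the generator supplies its own `type_`, `_get_iterator` yields complete batch_kwargs dicts, and the datasource uses dict access for `"query"`.

```python
import time

from great_expectations.datasource.generator.batch_generator import BatchGenerator


class DatabricksTableGenerator(BatchGenerator):
    def __init__(self, name="default", datasource=None, database="default"):
        # supply type_ here so Datasource.add_generator does not have to pass it
        super(DatabricksTableGenerator, self).__init__(
            name, type_="databricks", datasource=datasource)
        self.database = database

    def _get_iterator(self, data_asset_name, **kwargs):
        query = "select * from {}.{}".format(self.database, data_asset_name)
        # iterate over a list of batch_kwargs dicts, not over a dict's keys
        return iter([{"query": query, "timestamp": time.time()}])


# ...and in SparkDFDatasource._get_data_asset, use dict access:
#     df = self.spark.sql(batch_kwargs["query"])   # not batch_kwargs.query
```

Wrapping the single batch_kwargs dict in a list before calling `iter` is what makes the generator yield dicts instead of dict keys.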
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `great_expectations/datasource/spark_source.py`
Content:
```
1 import logging
2 import time
3 from six import string_types
4
5 from ..exceptions import BatchKwargsError
6
7 from .datasource import Datasource, ReaderMethods
8 from great_expectations.datasource.generator.filesystem_path_generator import SubdirReaderGenerator
9 from great_expectations.datasource.generator.databricks_generator import DatabricksTableGenerator
10 from great_expectations.datasource.generator.in_memory_generator import InMemoryGenerator
11
12 from great_expectations.data_context.types import ClassConfig
13
14 logger = logging.getLogger(__name__)
15
16 try:
17 from great_expectations.dataset.sparkdf_dataset import SparkDFDataset
18 from pyspark.sql import SparkSession, DataFrame
19 except ImportError:
20 # TODO: review logging more detail here
21 logger.debug("Unable to load pyspark; install optional spark dependency for support.")
22
23
24 class SparkDFDatasource(Datasource):
25 """The SparkDFDatasource produces SparkDFDatasets and supports generators capable of interacting with local
26 filesystem (the default subdir_reader generator) and databricks notebooks.
27 """
28
29 def __init__(self, name="default", data_context=None, data_asset_type=None, generators=None, **kwargs):
30 if generators is None:
31 # Provide a gentle way to build a datasource with a sane default,
32 # including ability to specify the base_directory
33 base_directory = kwargs.pop("base_directory", "/data")
34 reader_options = kwargs.pop("reader_options", {})
35 generators = {
36 "default": {
37 "type": "subdir_reader",
38 "base_directory": base_directory,
39 "reader_options": reader_options
40 }
41 }
42
43 if data_asset_type is None:
44 data_asset_type = ClassConfig(
45 class_name="SparkDFDataset"
46 )
47 else:
48 try:
49 data_asset_type = ClassConfig(**data_asset_type)
50 except TypeError:
51 # In this case, we allow the passed config, for now, in case they're using a legacy string-only config
52 pass
53
54 super(SparkDFDatasource, self).__init__(name, type_="spark",
55 data_context=data_context,
56 data_asset_type=data_asset_type,
57 generators=generators)
58 try:
59 self.spark = SparkSession.builder.getOrCreate()
60 except Exception:
61 logger.error("Unable to load spark context; install optional spark dependency for support.")
62 self.spark = None
63
64 self._build_generators()
65
66 def _get_generator_class(self, type_):
67 if type_ == "subdir_reader":
68 return SubdirReaderGenerator
69 elif type_ == "databricks":
70 return DatabricksTableGenerator
71 elif type_ == "memory":
72 return InMemoryGenerator
73 else:
74 raise ValueError("Unrecognized BatchGenerator type %s" % type_)
75
76 def _get_data_asset(self, batch_kwargs, expectation_suite, caching=True, **kwargs):
77 """class-private implementation of get_data_asset"""
78 if self.spark is None:
79 logger.error("No spark session available")
80 return None
81
82 batch_kwargs.update(kwargs)
83 reader_options = batch_kwargs.copy()
84
85 if "data_asset_type" in reader_options:
86 data_asset_type_config = reader_options.pop("data_asset_type") # Get and remove the config
87 try:
88 data_asset_type_config = ClassConfig(**data_asset_type_config)
89 except TypeError:
90 # We tried; we'll pass the config downstream, probably as a string, and handle an error later
91 pass
92 else:
93 data_asset_type_config = self._data_asset_type
94
95 data_asset_type = self._get_data_asset_class(data_asset_type_config)
96 if not issubclass(data_asset_type, SparkDFDataset):
97 raise ValueError("SparkDFDatasource cannot instantiate batch with data_asset_type: '%s'. It "
98 "must be a subclass of SparkDFDataset." % data_asset_type.__name__)
99
100 if "path" in batch_kwargs:
101 path = reader_options.pop("path") # We remove this so it is not used as a reader option
102 reader_options.pop("timestamp", "") # ditto timestamp (but missing ok)
103 reader_method = reader_options.pop("reader_method", None)
104 if reader_method is None:
105 reader_method = self._guess_reader_method_from_path(path)
106 if reader_method is None:
107 raise BatchKwargsError("Unable to determine reader for path: %s" % path, batch_kwargs)
108 else:
109 try:
110 reader_method = ReaderMethods[reader_method]
111 except KeyError:
112 raise BatchKwargsError("Unknown reader method: %s" % reader_method, batch_kwargs)
113
114 reader = self.spark.read
115
116 for option in reader_options.items():
117 reader = reader.option(*option)
118
119 if reader_method == ReaderMethods.CSV:
120 df = reader.csv(path)
121 elif reader_method == ReaderMethods.parquet:
122 df = reader.parquet(path)
123 elif reader_method == ReaderMethods.delta:
124 df = reader.format("delta").load(path)
125 else:
126 raise BatchKwargsError("Unsupported reader: %s" % reader_method.name, batch_kwargs)
127
128 elif "query" in batch_kwargs:
129 df = self.spark.sql(batch_kwargs.query)
130
131 elif "df" in batch_kwargs and isinstance(batch_kwargs["df"], (DataFrame, SparkDFDataset)):
132 df = batch_kwargs.pop("df") # We don't want to store the actual DataFrame in kwargs
133 if isinstance(df, SparkDFDataset):
134 # Grab just the spark_df reference, since we want to override everything else
135 df = df.spark_df
136 batch_kwargs["SparkDFRef"] = True
137 else:
138 raise BatchKwargsError("Unrecognized batch_kwargs for spark_source", batch_kwargs)
139
140 return data_asset_type(df,
141 expectation_suite=expectation_suite,
142 data_context=self._data_context,
143 batch_kwargs=batch_kwargs,
144 caching=caching)
145
146 def build_batch_kwargs(self, *args, **kwargs):
147 if len(args) > 0:
148 if isinstance(args[0], (DataFrame, SparkDFDataset)):
149 kwargs.update({
150 "df": args[0],
151 "timestamp": time.time()
152 })
153 elif isinstance(args[0], string_types):
154 kwargs.update({
155 "path": args[0],
156 "timestamp": time.time()
157 })
158 else:
159 kwargs.update({
160 "timestamp": time.time()
161 })
162 return kwargs
163
```
Path: `great_expectations/datasource/generator/databricks_generator.py`
Content:
```
1 import time
2 import logging
3
4 from great_expectations.datasource.generator.batch_generator import BatchGenerator
5
6 logger = logging.getLogger(__name__)
7
8 try:
9 from pyspark.sql import SparkSession
10 except ImportError:
11 logger.debug("Unable to load spark context; install optional spark dependency for support.")
12
13
14 class DatabricksTableGenerator(BatchGenerator):
15 """Meant to be used in a Databricks notebook
16 """
17
18 def __init__(self, name, type_, datasource, database):
19 super(DatabricksTableGenerator, self).__init__(name, type_, datasource)
20 # this should grab the already instantiated SparkSession available on Databricks notebooks
21 self.spark = datasource.spark
22 self.database = database
23
24 def get_available_data_asset_names(self):
25 tables = self.spark.sql('show tables in {}'.format(self.database))
26 return set([row.tableName for row in tables.collect()])
27
28 def _get_iterator(self, data_asset_name, **kwargs):
29 query = 'select * from {}.{}'.format(self.database, data_asset_name)
30 if kwargs.get('partition'):
31 if not kwargs.get('date_field'):
32 raise Exception('Must specify date_field when using partition.')
33 query += ' where {} = "{}"'.format(kwargs.get('date_field'), kwargs.get('partition'))
34 return iter(
35 {
36 "query": query,
37 "timestamp": time.time()
38 }
39 )
40
```
Path: `great_expectations/datasource/generator/__init__.py`
Content:
```
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/great_expectations/datasource/generator/__init__.py b/great_expectations/datasource/generator/__init__.py
--- a/great_expectations/datasource/generator/__init__.py
+++ b/great_expectations/datasource/generator/__init__.py
@@ -0,0 +1 @@
+from .databricks_generator import DatabricksTableGenerator
\ No newline at end of file
diff --git a/great_expectations/datasource/generator/databricks_generator.py b/great_expectations/datasource/generator/databricks_generator.py
--- a/great_expectations/datasource/generator/databricks_generator.py
+++ b/great_expectations/datasource/generator/databricks_generator.py
@@ -15,13 +15,22 @@
"""Meant to be used in a Databricks notebook
"""
- def __init__(self, name, type_, datasource, database):
- super(DatabricksTableGenerator, self).__init__(name, type_, datasource)
- # this should grab the already instantiated SparkSession available on Databricks notebooks
- self.spark = datasource.spark
+ def __init__(self, name="default",
+ datasource=None,
+ database="default"):
+ super(DatabricksTableGenerator, self).__init__(name, type_="databricks", datasource=datasource)
self.database = database
+ try:
+ self.spark = SparkSession.builder.getOrCreate()
+ except Exception:
+ logger.error("Unable to load spark context; install optional spark dependency for support.")
+ self.spark = None
def get_available_data_asset_names(self):
+ if self.spark is None:
+ logger.warning("No sparkSession available to query for tables.")
+ return set()
+
tables = self.spark.sql('show tables in {}'.format(self.database))
return set([row.tableName for row in tables.collect()])
@@ -31,9 +40,9 @@
if not kwargs.get('date_field'):
raise Exception('Must specify date_field when using partition.')
query += ' where {} = "{}"'.format(kwargs.get('date_field'), kwargs.get('partition'))
- return iter(
+ return iter([
{
"query": query,
"timestamp": time.time()
}
- )
+ ])
diff --git a/great_expectations/datasource/spark_source.py b/great_expectations/datasource/spark_source.py
--- a/great_expectations/datasource/spark_source.py
+++ b/great_expectations/datasource/spark_source.py
@@ -126,7 +126,7 @@
raise BatchKwargsError("Unsupported reader: %s" % reader_method.name, batch_kwargs)
elif "query" in batch_kwargs:
- df = self.spark.sql(batch_kwargs.query)
+ df = self.spark.sql(batch_kwargs["query"])
elif "df" in batch_kwargs and isinstance(batch_kwargs["df"], (DataFrame, SparkDFDataset)):
df = batch_kwargs.pop("df") # We don't want to store the actual DataFrame in kwargs
gh_patches_debug_35249 | rasdani/github-patches | git_diff | sktime__sktime-911 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Prophet adapter fails to clone after setting parameters
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
Using sklearn's `clone` on `Prophet` after any of `seasonality_prior_scale`, `changepoint_prior_scale`, or `holidays_prior_scale` has been set to a non-float value through `set_params` raises a `RuntimeError` during cloning. This is due to the `float` conversion happening in `__init__`, which violates what sklearn expects during cloning: all values passed as arguments to `__init__` must be set as object attributes without being modified in any way.
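
Roughly, sklearn's `clone` performs a parameter round-trip check along these lines (a simplified sketch, not the actual sklearn source), which is why the conversion trips it up:

```python
# Simplified sketch of the identity check sklearn.base.clone performs.
def clone_sketch(estimator):
    params = estimator.get_params(deep=False)       # e.g. {'changepoint_prior_scale': 1, ...}
    new_estimator = estimator.__class__(**params)   # __init__ stores float(1) == 1.0
    new_params = new_estimator.get_params(deep=False)
    for name, value in params.items():
        if new_params[name] is not value:           # 1.0 is not 1 -> check fails
            raise RuntimeError(
                "Cannot clone object: the constructor either does not "
                "set or modifies parameter %s" % name)
    return new_estimator
```

`set_params(changepoint_prior_scale=1)` stores the int `1` directly via `setattr`, but the re-constructed copy stores `float(1)`, so the identity check fails even though the values are numerically equal.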
**To Reproduce**
<!--
Add a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com
-->
```python
from sktime.forecasting.fbprophet import Prophet
from sklearn.base import clone
model = Prophet()
model.set_params(changepoint_prior_scale=1)
clone(model)
```
This will raise a sklearn `RuntimeError`.
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
No exception is raised.
**Additional context**
<!--
Add any other context about the problem here.
-->
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
```
System:
python: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0]
executable: /home/baum/anaconda3/bin/python
machine: Linux-5.11.0-7614-generic-x86_64-with-glibc2.10
Python dependencies:
pip: 20.2.4
setuptools: 50.3.1.post20201107
sklearn: 0.23.2
sktime: 0.5.3
statsmodels: 0.12.2
numpy: 1.20.1
scipy: 1.6.1
Cython: 0.29.17
pandas: 1.2.3
matplotlib: 3.4.1
joblib: 1.0.1
numba: 0.51.2
pmdarima: 1.8.0
tsfresh: None
```
</details>
<!-- Thanks for contributing! -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/forecasting/fbprophet.py`
Content:
```
1 #!/usr/bin/env python3 -u
2 # -*- coding: utf-8 -*-
3 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
4
5 __author__ = ["Martin Walter"]
6 __all__ = ["Prophet"]
7
8 from sktime.forecasting.base._base import DEFAULT_ALPHA
9 from sktime.forecasting.base.adapters import _ProphetAdapter
10 from sktime.utils.validation._dependencies import _check_soft_dependencies
11
12 _check_soft_dependencies("fbprophet")
13
14
15 class Prophet(_ProphetAdapter):
16 """Prophet forecaster by wrapping fbprophet.
17 Parameters
18 ----------
19 freq: String of DatetimeIndex frequency. See here for possible values:
20 https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
21 #timeseries-offset-aliases
22 add_seasonality: Dict with args for Prophet.add_seasonality().
23 Dict can have the following keys/values:
24 name: string name of the seasonality component.
25 period: float number of days in one period.
26 fourier_order: int number of Fourier components to use.
27 prior_scale: optional float prior scale for this component.
28 mode: optional 'additive' or 'multiplicative'
29 condition_name: string name of the seasonality condition.
30 add_country_holidays: Dict with args for Prophet.add_country_holidays().
31 Dict can have the following keys/values:
32 country_name: Name of the country, like 'UnitedStates' or 'US'
33 growth: String 'linear' or 'logistic' to specify a linear or logistic
34 trend.
35 changepoints: List of dates at which to include potential changepoints. If
36 not specified, potential changepoints are selected automatically.
37 n_changepoints: Number of potential changepoints to include. Not used
38 if input `changepoints` is supplied. If `changepoints` is not supplied,
39 then n_changepoints potential changepoints are selected uniformly from
40 the first `changepoint_range` proportion of the history.
41 changepoint_range: Proportion of history in which trend changepoints will
42 be estimated. Defaults to 0.8 for the first 80%. Not used if
43 `changepoints` is specified.
44 yearly_seasonality: Fit yearly seasonality.
45 Can be 'auto', True, False, or a number of Fourier terms to generate.
46 weekly_seasonality: Fit weekly seasonality.
47 Can be 'auto', True, False, or a number of Fourier terms to generate.
48 daily_seasonality: Fit daily seasonality.
49 Can be 'auto', True, False, or a number of Fourier terms to generate.
50 holidays: pd.DataFrame with columns holiday (string) and ds (date type)
51 and optionally columns lower_window and upper_window which specify a
52 range of days around the date to be included as holidays.
53 lower_window=-2 will include 2 days prior to the date as holidays. Also
54 optionally can have a column prior_scale specifying the prior scale for
55 that holiday.
56 seasonality_mode: 'additive' (default) or 'multiplicative'.
57 seasonality_prior_scale: Parameter modulating the strength of the
58 seasonality model. Larger values allow the model to fit larger seasonal
59 fluctuations, smaller values dampen the seasonality. Can be specified
60 for individual seasonalities using add_seasonality.
61 holidays_prior_scale: Parameter modulating the strength of the holiday
62 components model, unless overridden in the holidays input.
63 changepoint_prior_scale: Parameter modulating the flexibility of the
64 automatic changepoint selection. Large values will allow many
65 changepoints, small values will allow few changepoints.
66 mcmc_samples: Integer, if greater than 0, will do full Bayesian inference
67 with the specified number of MCMC samples. If 0, will do MAP
68 estimation.
69 alpha: Float, width of the uncertainty intervals provided
70 for the forecast. If mcmc_samples=0, this will be only the uncertainty
71 in the trend using the MAP estimate of the extrapolated generative
72 model. If mcmc.samples>0, this will be integrated over all model
73 parameters, which will include uncertainty in seasonality.
74 uncertainty_samples: Number of simulated draws used to estimate
75 uncertainty intervals. Settings this value to 0 or False will disable
76 uncertainty estimation and speed up the calculation.
77 stan_backend: str as defined in StanBackendEnum default: None - will try to
78 iterate over all available backends and find the working one
79
80 References
81 ----------
82 https://facebook.github.io/prophet
83 https://github.com/facebook/prophet
84
85 Example
86 ----------
87 >>> from sktime.datasets import load_airline
88 >>> from sktime.forecasting.fbprophet import Prophet
89 >>> # Prophet requires to have data with a pandas.DatetimeIndex
90 >>> y = load_airline().to_timestamp(freq='M')
91 >>> forecaster = Prophet(
92 ... seasonality_mode='multiplicative',
93 ... n_changepoints=int(len(y) / 12),
94 ... add_country_holidays={'country_name': 'Germany'},
95 ... yearly_seasonality=True)
96 >>> forecaster.fit(y)
97 Prophet(...)
98 >>> y_pred = forecaster.predict(fh=[1,2,3])
99 """
100
101 def __init__(
102 self,
103 # Args due to wrapping
104 freq=None,
105 add_seasonality=None,
106 add_country_holidays=None,
107 # Args of fbprophet
108 growth="linear",
109 changepoints=None,
110 n_changepoints=25,
111 changepoint_range=0.8,
112 yearly_seasonality="auto",
113 weekly_seasonality="auto",
114 daily_seasonality="auto",
115 holidays=None,
116 seasonality_mode="additive",
117 seasonality_prior_scale=10.0,
118 holidays_prior_scale=10.0,
119 changepoint_prior_scale=0.05,
120 mcmc_samples=0,
121 alpha=DEFAULT_ALPHA,
122 uncertainty_samples=1000,
123 stan_backend=None,
124 verbose=0,
125 ):
126 self.freq = freq
127 self.add_seasonality = add_seasonality
128 self.add_country_holidays = add_country_holidays
129
130 self.growth = growth
131 self.changepoints = changepoints
132 self.n_changepoints = n_changepoints
133 self.changepoint_range = changepoint_range
134 self.yearly_seasonality = yearly_seasonality
135 self.weekly_seasonality = weekly_seasonality
136 self.daily_seasonality = daily_seasonality
137 self.holidays = holidays
138 self.seasonality_mode = seasonality_mode
139 self.seasonality_prior_scale = float(seasonality_prior_scale)
140 self.changepoint_prior_scale = float(changepoint_prior_scale)
141 self.holidays_prior_scale = float(holidays_prior_scale)
142 self.mcmc_samples = mcmc_samples
143 self.alpha = alpha
144 self.uncertainty_samples = uncertainty_samples
145 self.stan_backend = stan_backend
146 self.verbose = verbose
147
148 # import inside method to avoid hard dependency
149 from fbprophet.forecaster import Prophet as _Prophet
150
151 self._ModelClass = _Prophet
152
153 super(Prophet, self).__init__()
154
155 def _instantiate_model(self):
156 self._forecaster = self._ModelClass(
157 growth=self.growth,
158 changepoints=self.changepoints,
159 n_changepoints=self.n_changepoints,
160 changepoint_range=self.changepoint_range,
161 yearly_seasonality=self.yearly_seasonality,
162 weekly_seasonality=self.weekly_seasonality,
163 daily_seasonality=self.daily_seasonality,
164 holidays=self.holidays,
165 seasonality_mode=self.seasonality_mode,
166 seasonality_prior_scale=self.seasonality_prior_scale,
167 holidays_prior_scale=self.holidays_prior_scale,
168 changepoint_prior_scale=self.changepoint_prior_scale,
169 mcmc_samples=self.mcmc_samples,
170 interval_width=1 - self.alpha,
171 uncertainty_samples=self.uncertainty_samples,
172 stan_backend=self.stan_backend,
173 )
174 return self
175
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sktime/forecasting/fbprophet.py b/sktime/forecasting/fbprophet.py
--- a/sktime/forecasting/fbprophet.py
+++ b/sktime/forecasting/fbprophet.py
@@ -2,6 +2,8 @@
# -*- coding: utf-8 -*-
# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
+"""Prophet forecaster by wrapping fbprophet."""
+
__author__ = ["Martin Walter"]
__all__ = ["Prophet"]
@@ -14,6 +16,7 @@
class Prophet(_ProphetAdapter):
"""Prophet forecaster by wrapping fbprophet.
+
Parameters
----------
freq: String of DatetimeIndex frequency. See here for possible values:
@@ -136,9 +139,9 @@
self.daily_seasonality = daily_seasonality
self.holidays = holidays
self.seasonality_mode = seasonality_mode
- self.seasonality_prior_scale = float(seasonality_prior_scale)
- self.changepoint_prior_scale = float(changepoint_prior_scale)
- self.holidays_prior_scale = float(holidays_prior_scale)
+ self.seasonality_prior_scale = seasonality_prior_scale
+ self.changepoint_prior_scale = changepoint_prior_scale
+ self.holidays_prior_scale = holidays_prior_scale
self.mcmc_samples = mcmc_samples
self.alpha = alpha
self.uncertainty_samples = uncertainty_samples
@@ -163,9 +166,9 @@
daily_seasonality=self.daily_seasonality,
holidays=self.holidays,
seasonality_mode=self.seasonality_mode,
- seasonality_prior_scale=self.seasonality_prior_scale,
- holidays_prior_scale=self.holidays_prior_scale,
- changepoint_prior_scale=self.changepoint_prior_scale,
+ seasonality_prior_scale=float(self.seasonality_prior_scale),
+ holidays_prior_scale=float(self.holidays_prior_scale),
+ changepoint_prior_scale=float(self.changepoint_prior_scale),
mcmc_samples=self.mcmc_samples,
interval_width=1 - self.alpha,
uncertainty_samples=self.uncertainty_samples,
gh_patches_debug_19776 | rasdani/github-patches | git_diff | azavea__raster-vision-1484 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Local runner should write makefile to temporary dir
Instead, it writes it to the `root_uri`, which might be an S3 URI, and `make`, which is used by the local runner, cannot handle that.
Makefile error when `root_uri` is an S3 path
## 🐛 Bug
When running training command and having `root_uri` set to an S3 folder, this error shows up:
```
make: s3://<random_bucket_name>/predictions/Makefile: No such file or directory
make: *** No rule to make target 's3://<random_bucket_name>/predictions/Makefile'. Stop.
```
This error disappears when `root_uri` is a local path. The AWS config is correct, as it is able to read and write the files.
## To Reproduce
Steps to reproduce the behavior:
1. I ran the following command inside the container:
`python -m rastervision.pipeline.cli run local code/local_exp.py -a raw_uri s3://<random_bucket_name>/datafortesting/data/ -a root_uri s3://<random_bucket_name>/predictions -a test False`
<!-- Please provide the command executed, source of the get_config() function, error messages, and/or full stack traces if at all possible -->
## Expected behavior
It should run normally, as it does when `root_uri` is a local path.
## Environment
Running with docker. **Image**: quay.io/azavea/raster-vision:pytorch-v0.13.1
## Additional context
This might be a relevant issue: #991
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rastervision_pipeline/rastervision/pipeline/runner/local_runner.py`
Content:
```
1 import sys
2 from os.path import dirname, join
3 from subprocess import Popen
4
5 from rastervision.pipeline.file_system import str_to_file
6 from rastervision.pipeline.runner.runner import Runner
7 from rastervision.pipeline.utils import terminate_at_exit
8
9 LOCAL = 'local'
10
11
12 class LocalRunner(Runner):
13 """Runs each command locally using different processes for each command/split.
14
15 This is implemented by generating a Makefile and then running it using make.
16 """
17
18 def run(self,
19 cfg_json_uri,
20 pipeline,
21 commands,
22 num_splits=1,
23 pipeline_run_name: str = 'raster-vision'):
24 num_commands = 0
25 for command in commands:
26 if command in pipeline.split_commands and num_splits > 1:
27 num_commands += num_splits
28 else:
29 num_commands += 1
30
31 makefile = '.PHONY: '
32 makefile += ' '.join([str(ci) for ci in range(num_commands)])
33 makefile += '\n\n'
34
35 makefile += 'all: '
36 makefile += ' '.join([str(ci) for ci in range(num_commands)])
37 makefile += '\n\n'
38
39 prev_command_inds = []
40 curr_command_ind = 0
41 for command in commands:
42
43 curr_command_inds = []
44 if command in pipeline.split_commands and num_splits > 1:
45 for split_ind in range(num_splits):
46 makefile += '{}: '.format(curr_command_ind)
47 makefile += ' '.join([str(ci) for ci in prev_command_inds])
48 makefile += '\n'
49 invocation = (
50 'python -m rastervision.pipeline.cli run_command '
51 '{} {} --split-ind {} --num-splits {}').format(
52 cfg_json_uri, command, split_ind, num_splits)
53 makefile += '\t{}\n\n'.format(invocation)
54 curr_command_inds.append(curr_command_ind)
55 curr_command_ind += 1
56 else:
57 makefile += '{}: '.format(curr_command_ind)
58 makefile += ' '.join([str(ci) for ci in prev_command_inds])
59 makefile += '\n'
60 invocation = (
61 'python -m rastervision.pipeline.cli run_command '
62 '{} {}'.format(cfg_json_uri, command))
63 makefile += '\t{}\n\n'.format(invocation)
64 curr_command_inds.append(curr_command_ind)
65 curr_command_ind += 1
66
67 prev_command_inds = curr_command_inds
68
69 makefile_path = join(dirname(cfg_json_uri), 'Makefile')
70 str_to_file(makefile, makefile_path)
71 process = Popen(['make', '-j', '-f', makefile_path])
72 terminate_at_exit(process)
73 exitcode = process.wait()
74 if exitcode != 0:
75 sys.exit(exitcode)
76 else:
77 return 0
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rastervision_pipeline/rastervision/pipeline/runner/local_runner.py b/rastervision_pipeline/rastervision/pipeline/runner/local_runner.py
--- a/rastervision_pipeline/rastervision/pipeline/runner/local_runner.py
+++ b/rastervision_pipeline/rastervision/pipeline/runner/local_runner.py
@@ -2,7 +2,7 @@
from os.path import dirname, join
from subprocess import Popen
-from rastervision.pipeline.file_system import str_to_file
+from rastervision.pipeline.file_system import str_to_file, download_if_needed
from rastervision.pipeline.runner.runner import Runner
from rastervision.pipeline.utils import terminate_at_exit
@@ -68,7 +68,8 @@
makefile_path = join(dirname(cfg_json_uri), 'Makefile')
str_to_file(makefile, makefile_path)
- process = Popen(['make', '-j', '-f', makefile_path])
+ makefile_path_local = download_if_needed(makefile_path)
+ process = Popen(['make', '-j', '-f', makefile_path_local])
terminate_at_exit(process)
exitcode = process.wait()
if exitcode != 0:
| {"golden_diff": "diff --git a/rastervision_pipeline/rastervision/pipeline/runner/local_runner.py b/rastervision_pipeline/rastervision/pipeline/runner/local_runner.py\n--- a/rastervision_pipeline/rastervision/pipeline/runner/local_runner.py\n+++ b/rastervision_pipeline/rastervision/pipeline/runner/local_runner.py\n@@ -2,7 +2,7 @@\n from os.path import dirname, join\n from subprocess import Popen\n \n-from rastervision.pipeline.file_system import str_to_file\n+from rastervision.pipeline.file_system import str_to_file, download_if_needed\n from rastervision.pipeline.runner.runner import Runner\n from rastervision.pipeline.utils import terminate_at_exit\n \n@@ -68,7 +68,8 @@\n \n makefile_path = join(dirname(cfg_json_uri), 'Makefile')\n str_to_file(makefile, makefile_path)\n- process = Popen(['make', '-j', '-f', makefile_path])\n+ makefile_path_local = download_if_needed(makefile_path)\n+ process = Popen(['make', '-j', '-f', makefile_path_local])\n terminate_at_exit(process)\n exitcode = process.wait()\n if exitcode != 0:\n", "issue": "Local runner should write makefile to temporary dir\nInstead, it writes it to the `root_uri` which might be an S3 URI, and `make`, which is used by the local runner cannot handle that.\nMakefile error when `root_uri` is an S3 path\n## \ud83d\udc1b Bug\r\nWhen running training command and having `root_uri` set to an S3 folder, this error shows up:\r\n```\r\nmake: s3://<random_bucket_name>/predictions/Makefile: No such file or directory\r\nmake: *** No rule to make target 's3://<random_bucket_name>/predictions/Makefile'. Stop.\r\n```\r\n\r\nThis error disappears when `root_uri` is a local path. AWS config is right as it is able to read and write the files.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. I ran the following command inside the container:\r\n`python -m rastervision.pipeline.cli run local code/local_exp.py -a raw_uri s3://<random_bucket_name>/datafortesting/data/ -a root_uri s3://<random_bucket_name>/predictions -a test False`\r\n\r\n<!-- Please provide the command executed, source of the get_config() function, error messages, and/or full stack traces if at all possible -->\r\n\r\n## Expected behavior\r\n\r\nIt should run normally like it is running when `root_uri` is a local path.\r\n\r\n## Environment\r\n\r\nRunning with docker. 
**Image**: quay.io/azavea/raster-vision:pytorch-v0.13.1\r\n\r\n## Additional context\r\n\r\nThis might be a relevant issue: #991 \r\n\n", "before_files": [{"content": "import sys\nfrom os.path import dirname, join\nfrom subprocess import Popen\n\nfrom rastervision.pipeline.file_system import str_to_file\nfrom rastervision.pipeline.runner.runner import Runner\nfrom rastervision.pipeline.utils import terminate_at_exit\n\nLOCAL = 'local'\n\n\nclass LocalRunner(Runner):\n \"\"\"Runs each command locally using different processes for each command/split.\n\n This is implemented by generating a Makefile and then running it using make.\n \"\"\"\n\n def run(self,\n cfg_json_uri,\n pipeline,\n commands,\n num_splits=1,\n pipeline_run_name: str = 'raster-vision'):\n num_commands = 0\n for command in commands:\n if command in pipeline.split_commands and num_splits > 1:\n num_commands += num_splits\n else:\n num_commands += 1\n\n makefile = '.PHONY: '\n makefile += ' '.join([str(ci) for ci in range(num_commands)])\n makefile += '\\n\\n'\n\n makefile += 'all: '\n makefile += ' '.join([str(ci) for ci in range(num_commands)])\n makefile += '\\n\\n'\n\n prev_command_inds = []\n curr_command_ind = 0\n for command in commands:\n\n curr_command_inds = []\n if command in pipeline.split_commands and num_splits > 1:\n for split_ind in range(num_splits):\n makefile += '{}: '.format(curr_command_ind)\n makefile += ' '.join([str(ci) for ci in prev_command_inds])\n makefile += '\\n'\n invocation = (\n 'python -m rastervision.pipeline.cli run_command '\n '{} {} --split-ind {} --num-splits {}').format(\n cfg_json_uri, command, split_ind, num_splits)\n makefile += '\\t{}\\n\\n'.format(invocation)\n curr_command_inds.append(curr_command_ind)\n curr_command_ind += 1\n else:\n makefile += '{}: '.format(curr_command_ind)\n makefile += ' '.join([str(ci) for ci in prev_command_inds])\n makefile += '\\n'\n invocation = (\n 'python -m rastervision.pipeline.cli run_command '\n '{} {}'.format(cfg_json_uri, command))\n makefile += '\\t{}\\n\\n'.format(invocation)\n curr_command_inds.append(curr_command_ind)\n curr_command_ind += 1\n\n prev_command_inds = curr_command_inds\n\n makefile_path = join(dirname(cfg_json_uri), 'Makefile')\n str_to_file(makefile, makefile_path)\n process = Popen(['make', '-j', '-f', makefile_path])\n terminate_at_exit(process)\n exitcode = process.wait()\n if exitcode != 0:\n sys.exit(exitcode)\n else:\n return 0\n", "path": "rastervision_pipeline/rastervision/pipeline/runner/local_runner.py"}], "after_files": [{"content": "import sys\nfrom os.path import dirname, join\nfrom subprocess import Popen\n\nfrom rastervision.pipeline.file_system import str_to_file, download_if_needed\nfrom rastervision.pipeline.runner.runner import Runner\nfrom rastervision.pipeline.utils import terminate_at_exit\n\nLOCAL = 'local'\n\n\nclass LocalRunner(Runner):\n \"\"\"Runs each command locally using different processes for each command/split.\n\n This is implemented by generating a Makefile and then running it using make.\n \"\"\"\n\n def run(self,\n cfg_json_uri,\n pipeline,\n commands,\n num_splits=1,\n pipeline_run_name: str = 'raster-vision'):\n num_commands = 0\n for command in commands:\n if command in pipeline.split_commands and num_splits > 1:\n num_commands += num_splits\n else:\n num_commands += 1\n\n makefile = '.PHONY: '\n makefile += ' '.join([str(ci) for ci in range(num_commands)])\n makefile += '\\n\\n'\n\n makefile += 'all: '\n makefile += ' '.join([str(ci) for ci in range(num_commands)])\n makefile += 
'\\n\\n'\n\n prev_command_inds = []\n curr_command_ind = 0\n for command in commands:\n\n curr_command_inds = []\n if command in pipeline.split_commands and num_splits > 1:\n for split_ind in range(num_splits):\n makefile += '{}: '.format(curr_command_ind)\n makefile += ' '.join([str(ci) for ci in prev_command_inds])\n makefile += '\\n'\n invocation = (\n 'python -m rastervision.pipeline.cli run_command '\n '{} {} --split-ind {} --num-splits {}').format(\n cfg_json_uri, command, split_ind, num_splits)\n makefile += '\\t{}\\n\\n'.format(invocation)\n curr_command_inds.append(curr_command_ind)\n curr_command_ind += 1\n else:\n makefile += '{}: '.format(curr_command_ind)\n makefile += ' '.join([str(ci) for ci in prev_command_inds])\n makefile += '\\n'\n invocation = (\n 'python -m rastervision.pipeline.cli run_command '\n '{} {}'.format(cfg_json_uri, command))\n makefile += '\\t{}\\n\\n'.format(invocation)\n curr_command_inds.append(curr_command_ind)\n curr_command_ind += 1\n\n prev_command_inds = curr_command_inds\n\n makefile_path = join(dirname(cfg_json_uri), 'Makefile')\n str_to_file(makefile, makefile_path)\n makefile_path_local = download_if_needed(makefile_path)\n process = Popen(['make', '-j', '-f', makefile_path_local])\n terminate_at_exit(process)\n exitcode = process.wait()\n if exitcode != 0:\n sys.exit(exitcode)\n else:\n return 0\n", "path": "rastervision_pipeline/rastervision/pipeline/runner/local_runner.py"}]} | 1,363 | 267 |
gh_patches_debug_34217 | rasdani/github-patches | git_diff | mne-tools__mne-bids-pipeline-647 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Print heartbeat log messages
Some steps take quite a while to complete without printing log messages. This causes issues with CircleCI, which may believe the job is stuck; but it's also an issue for users, who cannot be sure that we're still operational either.
The least we should do is periodically print some heartbeat messages.
Better yet would be some useful status info.
Affected steps are e.g. the time-by-time decoding with our ds000246 configuration.
cc @larsoner
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mne_bids_pipeline/scripts/sensor/_03_decoding_time_by_time.py`
Content:
```
1 """Decode time-by-time using a "sliding" estimator.
2
3 A sliding estimator fits a separate logistic regression model for every time
4 point. The end result is an averaging effect across sensors.
5
6 This approach is different from the one taken in the decoding script for
7 entire epochs. Here, the classifier is traines on the entire epoch, and hence
8 can learn about the entire time course of the signal.
9 """
10
11 ###############################################################################
12 # Let us first import the libraries
13
14 import os.path as op
15 import itertools
16 from types import SimpleNamespace
17
18 import numpy as np
19 import pandas as pd
20 from scipy.io import savemat
21
22 import mne
23 from mne.decoding import (
24 GeneralizingEstimator, SlidingEstimator, cross_val_multiscore
25 )
26
27 from mne_bids import BIDSPath
28
29 from sklearn.preprocessing import StandardScaler
30 from sklearn.pipeline import make_pipeline
31 from sklearn.model_selection import StratifiedKFold
32
33 from ..._config_utils import (
34 get_sessions, get_subjects, get_task, get_datatype, get_eeg_reference,
35 get_deriv_root, _restrict_analyze_channels, get_decoding_contrasts,
36 )
37 from ..._decoding import LogReg
38 from ..._logging import gen_log_kwargs, logger
39 from ..._run import failsafe_run, save_logs
40 from ..._parallel import get_parallel_backend, get_n_jobs
41
42
43 def get_input_fnames_time_decoding(**kwargs):
44 cfg = kwargs.pop('cfg')
45 subject = kwargs.pop('subject')
46 session = kwargs.pop('session')
47 # TODO: Somehow remove these?
48 del kwargs['condition1']
49 del kwargs['condition2']
50 assert len(kwargs) == 0, kwargs.keys()
51 del kwargs
52 # TODO: Shouldn't this at least use the PTP-rejected epochs if available?
53 fname_epochs = BIDSPath(subject=subject,
54 session=session,
55 task=cfg.task,
56 acquisition=cfg.acq,
57 run=None,
58 recording=cfg.rec,
59 space=cfg.space,
60 suffix='epo',
61 extension='.fif',
62 datatype=cfg.datatype,
63 root=cfg.deriv_root,
64 check=False)
65 in_files = dict()
66 in_files['epochs'] = fname_epochs
67 return in_files
68
69
70 @failsafe_run(script_path=__file__,
71 get_input_fnames=get_input_fnames_time_decoding)
72 def run_time_decoding(*, cfg, subject, condition1, condition2, session,
73 in_files):
74 msg = f'Contrasting conditions: {condition1} – {condition2}'
75 logger.info(**gen_log_kwargs(message=msg, subject=subject,
76 session=session))
77 out_files = dict()
78 bids_path = in_files['epochs'].copy()
79
80 epochs = mne.read_epochs(in_files.pop('epochs'))
81 _restrict_analyze_channels(epochs, cfg)
82
83 # We define the epochs and the labels
84 if isinstance(cfg.conditions, dict):
85 epochs_conds = [cfg.conditions[condition1],
86 cfg.conditions[condition2]]
87 cond_names = [condition1, condition2]
88 else:
89 epochs_conds = cond_names = [condition1, condition2]
90 epochs_conds = [condition1, condition2]
91
92 # We have to use this approach because the conditions could be based on
93 # metadata selection, so simply using epochs[conds[0], conds[1]] would
94 # not work.
95 epochs = mne.concatenate_epochs([epochs[epochs_conds[0]],
96 epochs[epochs_conds[1]]])
97 n_cond1 = len(epochs[epochs_conds[0]])
98 n_cond2 = len(epochs[epochs_conds[1]])
99
100 decim = cfg.decoding_time_generalization_decim
101 if cfg.decoding_time_generalization and decim > 1:
102 epochs.decimate(decim, verbose='error')
103
104 X = epochs.get_data()
105 y = np.r_[np.ones(n_cond1), np.zeros(n_cond2)]
106 with get_parallel_backend(cfg):
107 clf = make_pipeline(
108 StandardScaler(),
109 LogReg(
110 solver='liblinear', # much faster than the default
111 random_state=cfg.random_state,
112 n_jobs=1,
113 )
114 )
115 cv = StratifiedKFold(
116 shuffle=True,
117 random_state=cfg.random_state,
118 n_splits=cfg.decoding_n_splits,
119 )
120
121 if cfg.decoding_time_generalization:
122 estimator = GeneralizingEstimator(
123 clf,
124 scoring=cfg.decoding_metric,
125 n_jobs=cfg.n_jobs,
126 )
127 cv_scoring_n_jobs = 1
128 else:
129 estimator = SlidingEstimator(
130 clf,
131 scoring=cfg.decoding_metric,
132 n_jobs=1,
133 )
134 cv_scoring_n_jobs = cfg.n_jobs
135
136 scores = cross_val_multiscore(
137 estimator, X=X, y=y, cv=cv, n_jobs=cv_scoring_n_jobs
138 )
139
140 # let's save the scores now
141 a_vs_b = f'{cond_names[0]}+{cond_names[1]}'.replace(op.sep, '')
142 processing = f'{a_vs_b}+TimeByTime+{cfg.decoding_metric}'
143 processing = processing.replace('_', '-').replace('-', '')
144 mat_key = f'mat_{processing}'
145 out_files[mat_key] = bids_path.copy().update(
146 suffix='decoding', processing=processing, extension='.mat')
147 savemat(
148 out_files[mat_key],
149 {
150 'scores': scores,
151 'times': epochs.times,
152 'decim': decim,
153 }
154 )
155
156 if cfg.decoding_time_generalization:
157 # Only store the mean scores for the diagonal in the TSV file –
158 # we still have all time generalization results in the MAT file
159 # we just saved.
160 mean_crossval_score = np.diag(scores.mean(axis=0))
161 else:
162 mean_crossval_score = scores.mean(axis=0)
163
164 out_files[f'tsv_{processing}'] = out_files[mat_key].copy().update(
165 extension='.tsv')
166 tabular_data = pd.DataFrame(
167 dict(cond_1=[cond_names[0]] * len(epochs.times),
168 cond_2=[cond_names[1]] * len(epochs.times),
169 time=epochs.times,
170 mean_crossval_score=mean_crossval_score,
171 metric=[cfg.decoding_metric] * len(epochs.times))
172 )
173 tabular_data.to_csv(
174 out_files[f'tsv_{processing}'], sep='\t', index=False)
175 assert len(in_files) == 0, in_files.keys()
176 return out_files
177
178
179 def get_config(
180 *,
181 config,
182 ) -> SimpleNamespace:
183 cfg = SimpleNamespace(
184 task=get_task(config),
185 datatype=get_datatype(config),
186 acq=config.acq,
187 rec=config.rec,
188 space=config.space,
189 deriv_root=get_deriv_root(config),
190 conditions=config.conditions,
191 contrasts=get_decoding_contrasts(config),
192 decode=config.decode,
193 decoding_metric=config.decoding_metric,
194 decoding_n_splits=config.decoding_n_splits,
195 decoding_time_generalization=config.decoding_time_generalization,
196 decoding_time_generalization_decim=config.decoding_time_generalization_decim, # noqa: E501
197 random_state=config.random_state,
198 analyze_channels=config.analyze_channels,
199 ch_types=config.ch_types,
200 eeg_reference=get_eeg_reference(config),
201 n_jobs=get_n_jobs(config),
202 parallel_backend=config.parallel_backend,
203 interactive=config.interactive,
204 N_JOBS=config.N_JOBS,
205 )
206 return cfg
207
208
209 def main():
210 """Run time-by-time decoding."""
211 import config
212 if not config.contrasts:
213 msg = 'No contrasts specified; not performing decoding.'
214 logger.info(**gen_log_kwargs(message=msg))
215 return
216
217 if not config.decode:
218 msg = 'No decoding requested by user.'
219 logger.info(**gen_log_kwargs(message=msg))
220 return
221
222 # Here we go parallel inside the :class:`mne.decoding.SlidingEstimator`
223 # so we don't dispatch manually to multiple jobs.
224 logs = []
225 for subject, session, (cond_1, cond_2) in itertools.product(
226 get_subjects(config),
227 get_sessions(config),
228 get_decoding_contrasts(config)
229 ):
230 log = run_time_decoding(
231 cfg=get_config(config=config),
232 subject=subject,
233 condition1=cond_1,
234 condition2=cond_2,
235 session=session,
236 )
237 logs.append(log)
238 save_logs(config=config, logs=logs)
239
240
241 if __name__ == '__main__':
242 main()
243
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mne_bids_pipeline/scripts/sensor/_03_decoding_time_by_time.py b/mne_bids_pipeline/scripts/sensor/_03_decoding_time_by_time.py
--- a/mne_bids_pipeline/scripts/sensor/_03_decoding_time_by_time.py
+++ b/mne_bids_pipeline/scripts/sensor/_03_decoding_time_by_time.py
@@ -37,7 +37,8 @@
from ..._decoding import LogReg
from ..._logging import gen_log_kwargs, logger
from ..._run import failsafe_run, save_logs
-from ..._parallel import get_parallel_backend, get_n_jobs
+from ..._parallel import (
+ get_parallel_backend, get_n_jobs, get_parallel_backend_name)
def get_input_fnames_time_decoding(**kwargs):
@@ -71,7 +72,11 @@
get_input_fnames=get_input_fnames_time_decoding)
def run_time_decoding(*, cfg, subject, condition1, condition2, session,
in_files):
- msg = f'Contrasting conditions: {condition1} – {condition2}'
+ if cfg.decoding_time_generalization:
+ kind = 'time generalization'
+ else:
+ kind = 'sliding estimator'
+ msg = f'Contrasting conditions ({kind}): {condition1} – {condition2}'
logger.info(**gen_log_kwargs(message=msg, subject=subject,
session=session))
out_files = dict()
@@ -103,6 +108,8 @@
X = epochs.get_data()
y = np.r_[np.ones(n_cond1), np.zeros(n_cond2)]
+ # ProgressBar does not work on dask, so only enable it if not using dask
+ verbose = get_parallel_backend_name(cfg) != "dask"
with get_parallel_backend(cfg):
clf = make_pipeline(
StandardScaler(),
@@ -134,7 +141,8 @@
cv_scoring_n_jobs = cfg.n_jobs
scores = cross_val_multiscore(
- estimator, X=X, y=y, cv=cv, n_jobs=cv_scoring_n_jobs
+ estimator, X=X, y=y, cv=cv, n_jobs=cv_scoring_n_jobs,
+ verbose=verbose, # ensure ProgressBar is shown (can be slow)
)
# let's save the scores now
| {"golden_diff": "diff --git a/mne_bids_pipeline/scripts/sensor/_03_decoding_time_by_time.py b/mne_bids_pipeline/scripts/sensor/_03_decoding_time_by_time.py\n--- a/mne_bids_pipeline/scripts/sensor/_03_decoding_time_by_time.py\n+++ b/mne_bids_pipeline/scripts/sensor/_03_decoding_time_by_time.py\n@@ -37,7 +37,8 @@\n from ..._decoding import LogReg\n from ..._logging import gen_log_kwargs, logger\n from ..._run import failsafe_run, save_logs\n-from ..._parallel import get_parallel_backend, get_n_jobs\n+from ..._parallel import (\n+ get_parallel_backend, get_n_jobs, get_parallel_backend_name)\n \n \n def get_input_fnames_time_decoding(**kwargs):\n@@ -71,7 +72,11 @@\n get_input_fnames=get_input_fnames_time_decoding)\n def run_time_decoding(*, cfg, subject, condition1, condition2, session,\n in_files):\n- msg = f'Contrasting conditions: {condition1} \u2013 {condition2}'\n+ if cfg.decoding_time_generalization:\n+ kind = 'time generalization'\n+ else:\n+ kind = 'sliding estimator'\n+ msg = f'Contrasting conditions ({kind}): {condition1} \u2013 {condition2}'\n logger.info(**gen_log_kwargs(message=msg, subject=subject,\n session=session))\n out_files = dict()\n@@ -103,6 +108,8 @@\n \n X = epochs.get_data()\n y = np.r_[np.ones(n_cond1), np.zeros(n_cond2)]\n+ # ProgressBar does not work on dask, so only enable it if not using dask\n+ verbose = get_parallel_backend_name(cfg) != \"dask\"\n with get_parallel_backend(cfg):\n clf = make_pipeline(\n StandardScaler(),\n@@ -134,7 +141,8 @@\n cv_scoring_n_jobs = cfg.n_jobs\n \n scores = cross_val_multiscore(\n- estimator, X=X, y=y, cv=cv, n_jobs=cv_scoring_n_jobs\n+ estimator, X=X, y=y, cv=cv, n_jobs=cv_scoring_n_jobs,\n+ verbose=verbose, # ensure ProgressBar is shown (can be slow)\n )\n \n # let's save the scores now\n", "issue": "Print heartbeat log messages\nSome steps take quite a while to complete without printing log messages. This causes issues with CircleCI, which may believe the job is stuck; but it's also an issue for users, who cannot be sure that we're still operational either.\r\n\r\nThe least we should do is periodically print some heartbeat messages.\r\n\r\nBetter yet would be some useful status info.\r\n\r\nAffected steps are e.g. the time-by-time decoding with our ds000246 configuration.\r\n\r\ncc @larsoner \n", "before_files": [{"content": "\"\"\"Decode time-by-time using a \"sliding\" estimator.\n\nA sliding estimator fits a separate logistic regression model for every time\npoint. The end result is an averaging effect across sensors.\n\nThis approach is different from the one taken in the decoding script for\nentire epochs. 
Here, the classifier is traines on the entire epoch, and hence\ncan learn about the entire time course of the signal.\n\"\"\"\n\n###############################################################################\n# Let us first import the libraries\n\nimport os.path as op\nimport itertools\nfrom types import SimpleNamespace\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.io import savemat\n\nimport mne\nfrom mne.decoding import (\n GeneralizingEstimator, SlidingEstimator, cross_val_multiscore\n)\n\nfrom mne_bids import BIDSPath\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.model_selection import StratifiedKFold\n\nfrom ..._config_utils import (\n get_sessions, get_subjects, get_task, get_datatype, get_eeg_reference,\n get_deriv_root, _restrict_analyze_channels, get_decoding_contrasts,\n)\nfrom ..._decoding import LogReg\nfrom ..._logging import gen_log_kwargs, logger\nfrom ..._run import failsafe_run, save_logs\nfrom ..._parallel import get_parallel_backend, get_n_jobs\n\n\ndef get_input_fnames_time_decoding(**kwargs):\n cfg = kwargs.pop('cfg')\n subject = kwargs.pop('subject')\n session = kwargs.pop('session')\n # TODO: Somehow remove these?\n del kwargs['condition1']\n del kwargs['condition2']\n assert len(kwargs) == 0, kwargs.keys()\n del kwargs\n # TODO: Shouldn't this at least use the PTP-rejected epochs if available?\n fname_epochs = BIDSPath(subject=subject,\n session=session,\n task=cfg.task,\n acquisition=cfg.acq,\n run=None,\n recording=cfg.rec,\n space=cfg.space,\n suffix='epo',\n extension='.fif',\n datatype=cfg.datatype,\n root=cfg.deriv_root,\n check=False)\n in_files = dict()\n in_files['epochs'] = fname_epochs\n return in_files\n\n\n@failsafe_run(script_path=__file__,\n get_input_fnames=get_input_fnames_time_decoding)\ndef run_time_decoding(*, cfg, subject, condition1, condition2, session,\n in_files):\n msg = f'Contrasting conditions: {condition1} \u2013 {condition2}'\n logger.info(**gen_log_kwargs(message=msg, subject=subject,\n session=session))\n out_files = dict()\n bids_path = in_files['epochs'].copy()\n\n epochs = mne.read_epochs(in_files.pop('epochs'))\n _restrict_analyze_channels(epochs, cfg)\n\n # We define the epochs and the labels\n if isinstance(cfg.conditions, dict):\n epochs_conds = [cfg.conditions[condition1],\n cfg.conditions[condition2]]\n cond_names = [condition1, condition2]\n else:\n epochs_conds = cond_names = [condition1, condition2]\n epochs_conds = [condition1, condition2]\n\n # We have to use this approach because the conditions could be based on\n # metadata selection, so simply using epochs[conds[0], conds[1]] would\n # not work.\n epochs = mne.concatenate_epochs([epochs[epochs_conds[0]],\n epochs[epochs_conds[1]]])\n n_cond1 = len(epochs[epochs_conds[0]])\n n_cond2 = len(epochs[epochs_conds[1]])\n\n decim = cfg.decoding_time_generalization_decim\n if cfg.decoding_time_generalization and decim > 1:\n epochs.decimate(decim, verbose='error')\n\n X = epochs.get_data()\n y = np.r_[np.ones(n_cond1), np.zeros(n_cond2)]\n with get_parallel_backend(cfg):\n clf = make_pipeline(\n StandardScaler(),\n LogReg(\n solver='liblinear', # much faster than the default\n random_state=cfg.random_state,\n n_jobs=1,\n )\n )\n cv = StratifiedKFold(\n shuffle=True,\n random_state=cfg.random_state,\n n_splits=cfg.decoding_n_splits,\n )\n\n if cfg.decoding_time_generalization:\n estimator = GeneralizingEstimator(\n clf,\n scoring=cfg.decoding_metric,\n n_jobs=cfg.n_jobs,\n )\n cv_scoring_n_jobs = 1\n 
else:\n estimator = SlidingEstimator(\n clf,\n scoring=cfg.decoding_metric,\n n_jobs=1,\n )\n cv_scoring_n_jobs = cfg.n_jobs\n\n scores = cross_val_multiscore(\n estimator, X=X, y=y, cv=cv, n_jobs=cv_scoring_n_jobs\n )\n\n # let's save the scores now\n a_vs_b = f'{cond_names[0]}+{cond_names[1]}'.replace(op.sep, '')\n processing = f'{a_vs_b}+TimeByTime+{cfg.decoding_metric}'\n processing = processing.replace('_', '-').replace('-', '')\n mat_key = f'mat_{processing}'\n out_files[mat_key] = bids_path.copy().update(\n suffix='decoding', processing=processing, extension='.mat')\n savemat(\n out_files[mat_key],\n {\n 'scores': scores,\n 'times': epochs.times,\n 'decim': decim,\n }\n )\n\n if cfg.decoding_time_generalization:\n # Only store the mean scores for the diagonal in the TSV file \u2013\n # we still have all time generalization results in the MAT file\n # we just saved.\n mean_crossval_score = np.diag(scores.mean(axis=0))\n else:\n mean_crossval_score = scores.mean(axis=0)\n\n out_files[f'tsv_{processing}'] = out_files[mat_key].copy().update(\n extension='.tsv')\n tabular_data = pd.DataFrame(\n dict(cond_1=[cond_names[0]] * len(epochs.times),\n cond_2=[cond_names[1]] * len(epochs.times),\n time=epochs.times,\n mean_crossval_score=mean_crossval_score,\n metric=[cfg.decoding_metric] * len(epochs.times))\n )\n tabular_data.to_csv(\n out_files[f'tsv_{processing}'], sep='\\t', index=False)\n assert len(in_files) == 0, in_files.keys()\n return out_files\n\n\ndef get_config(\n *,\n config,\n) -> SimpleNamespace:\n cfg = SimpleNamespace(\n task=get_task(config),\n datatype=get_datatype(config),\n acq=config.acq,\n rec=config.rec,\n space=config.space,\n deriv_root=get_deriv_root(config),\n conditions=config.conditions,\n contrasts=get_decoding_contrasts(config),\n decode=config.decode,\n decoding_metric=config.decoding_metric,\n decoding_n_splits=config.decoding_n_splits,\n decoding_time_generalization=config.decoding_time_generalization,\n decoding_time_generalization_decim=config.decoding_time_generalization_decim, # noqa: E501\n random_state=config.random_state,\n analyze_channels=config.analyze_channels,\n ch_types=config.ch_types,\n eeg_reference=get_eeg_reference(config),\n n_jobs=get_n_jobs(config),\n parallel_backend=config.parallel_backend,\n interactive=config.interactive,\n N_JOBS=config.N_JOBS,\n )\n return cfg\n\n\ndef main():\n \"\"\"Run time-by-time decoding.\"\"\"\n import config\n if not config.contrasts:\n msg = 'No contrasts specified; not performing decoding.'\n logger.info(**gen_log_kwargs(message=msg))\n return\n\n if not config.decode:\n msg = 'No decoding requested by user.'\n logger.info(**gen_log_kwargs(message=msg))\n return\n\n # Here we go parallel inside the :class:`mne.decoding.SlidingEstimator`\n # so we don't dispatch manually to multiple jobs.\n logs = []\n for subject, session, (cond_1, cond_2) in itertools.product(\n get_subjects(config),\n get_sessions(config),\n get_decoding_contrasts(config)\n ):\n log = run_time_decoding(\n cfg=get_config(config=config),\n subject=subject,\n condition1=cond_1,\n condition2=cond_2,\n session=session,\n )\n logs.append(log)\n save_logs(config=config, logs=logs)\n\n\nif __name__ == '__main__':\n main()\n", "path": "mne_bids_pipeline/scripts/sensor/_03_decoding_time_by_time.py"}], "after_files": [{"content": "\"\"\"Decode time-by-time using a \"sliding\" estimator.\n\nA sliding estimator fits a separate logistic regression model for every time\npoint. 
The end result is an averaging effect across sensors.\n\nThis approach is different from the one taken in the decoding script for\nentire epochs. Here, the classifier is traines on the entire epoch, and hence\ncan learn about the entire time course of the signal.\n\"\"\"\n\n###############################################################################\n# Let us first import the libraries\n\nimport os.path as op\nimport itertools\nfrom types import SimpleNamespace\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.io import savemat\n\nimport mne\nfrom mne.decoding import (\n GeneralizingEstimator, SlidingEstimator, cross_val_multiscore\n)\n\nfrom mne_bids import BIDSPath\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.model_selection import StratifiedKFold\n\nfrom ..._config_utils import (\n get_sessions, get_subjects, get_task, get_datatype, get_eeg_reference,\n get_deriv_root, _restrict_analyze_channels, get_decoding_contrasts,\n)\nfrom ..._decoding import LogReg\nfrom ..._logging import gen_log_kwargs, logger\nfrom ..._run import failsafe_run, save_logs\nfrom ..._parallel import (\n get_parallel_backend, get_n_jobs, get_parallel_backend_name)\n\n\ndef get_input_fnames_time_decoding(**kwargs):\n cfg = kwargs.pop('cfg')\n subject = kwargs.pop('subject')\n session = kwargs.pop('session')\n # TODO: Somehow remove these?\n del kwargs['condition1']\n del kwargs['condition2']\n assert len(kwargs) == 0, kwargs.keys()\n del kwargs\n # TODO: Shouldn't this at least use the PTP-rejected epochs if available?\n fname_epochs = BIDSPath(subject=subject,\n session=session,\n task=cfg.task,\n acquisition=cfg.acq,\n run=None,\n recording=cfg.rec,\n space=cfg.space,\n suffix='epo',\n extension='.fif',\n datatype=cfg.datatype,\n root=cfg.deriv_root,\n check=False)\n in_files = dict()\n in_files['epochs'] = fname_epochs\n return in_files\n\n\n@failsafe_run(script_path=__file__,\n get_input_fnames=get_input_fnames_time_decoding)\ndef run_time_decoding(*, cfg, subject, condition1, condition2, session,\n in_files):\n if cfg.decoding_time_generalization:\n kind = 'time generalization'\n else:\n kind = 'sliding estimator'\n msg = f'Contrasting conditions ({kind}): {condition1} \u2013 {condition2}'\n logger.info(**gen_log_kwargs(message=msg, subject=subject,\n session=session))\n out_files = dict()\n bids_path = in_files['epochs'].copy()\n\n epochs = mne.read_epochs(in_files.pop('epochs'))\n _restrict_analyze_channels(epochs, cfg)\n\n # We define the epochs and the labels\n if isinstance(cfg.conditions, dict):\n epochs_conds = [cfg.conditions[condition1],\n cfg.conditions[condition2]]\n cond_names = [condition1, condition2]\n else:\n epochs_conds = cond_names = [condition1, condition2]\n epochs_conds = [condition1, condition2]\n\n # We have to use this approach because the conditions could be based on\n # metadata selection, so simply using epochs[conds[0], conds[1]] would\n # not work.\n epochs = mne.concatenate_epochs([epochs[epochs_conds[0]],\n epochs[epochs_conds[1]]])\n n_cond1 = len(epochs[epochs_conds[0]])\n n_cond2 = len(epochs[epochs_conds[1]])\n\n decim = cfg.decoding_time_generalization_decim\n if cfg.decoding_time_generalization and decim > 1:\n epochs.decimate(decim, verbose='error')\n\n X = epochs.get_data()\n y = np.r_[np.ones(n_cond1), np.zeros(n_cond2)]\n # ProgressBar does not work on dask, so only enable it if not using dask\n verbose = get_parallel_backend_name(cfg) != \"dask\"\n with get_parallel_backend(cfg):\n clf = 
make_pipeline(\n StandardScaler(),\n LogReg(\n solver='liblinear', # much faster than the default\n random_state=cfg.random_state,\n n_jobs=1,\n )\n )\n cv = StratifiedKFold(\n shuffle=True,\n random_state=cfg.random_state,\n n_splits=cfg.decoding_n_splits,\n )\n\n if cfg.decoding_time_generalization:\n estimator = GeneralizingEstimator(\n clf,\n scoring=cfg.decoding_metric,\n n_jobs=cfg.n_jobs,\n )\n cv_scoring_n_jobs = 1\n else:\n estimator = SlidingEstimator(\n clf,\n scoring=cfg.decoding_metric,\n n_jobs=1,\n )\n cv_scoring_n_jobs = cfg.n_jobs\n\n scores = cross_val_multiscore(\n estimator, X=X, y=y, cv=cv, n_jobs=cv_scoring_n_jobs,\n verbose=verbose, # ensure ProgressBar is shown (can be slow)\n )\n\n # let's save the scores now\n a_vs_b = f'{cond_names[0]}+{cond_names[1]}'.replace(op.sep, '')\n processing = f'{a_vs_b}+TimeByTime+{cfg.decoding_metric}'\n processing = processing.replace('_', '-').replace('-', '')\n mat_key = f'mat_{processing}'\n out_files[mat_key] = bids_path.copy().update(\n suffix='decoding', processing=processing, extension='.mat')\n savemat(\n out_files[mat_key],\n {\n 'scores': scores,\n 'times': epochs.times,\n 'decim': decim,\n }\n )\n\n if cfg.decoding_time_generalization:\n # Only store the mean scores for the diagonal in the TSV file \u2013\n # we still have all time generalization results in the MAT file\n # we just saved.\n mean_crossval_score = np.diag(scores.mean(axis=0))\n else:\n mean_crossval_score = scores.mean(axis=0)\n\n out_files[f'tsv_{processing}'] = out_files[mat_key].copy().update(\n extension='.tsv')\n tabular_data = pd.DataFrame(\n dict(cond_1=[cond_names[0]] * len(epochs.times),\n cond_2=[cond_names[1]] * len(epochs.times),\n time=epochs.times,\n mean_crossval_score=mean_crossval_score,\n metric=[cfg.decoding_metric] * len(epochs.times))\n )\n tabular_data.to_csv(\n out_files[f'tsv_{processing}'], sep='\\t', index=False)\n assert len(in_files) == 0, in_files.keys()\n return out_files\n\n\ndef get_config(\n *,\n config,\n) -> SimpleNamespace:\n cfg = SimpleNamespace(\n task=get_task(config),\n datatype=get_datatype(config),\n acq=config.acq,\n rec=config.rec,\n space=config.space,\n deriv_root=get_deriv_root(config),\n conditions=config.conditions,\n contrasts=get_decoding_contrasts(config),\n decode=config.decode,\n decoding_metric=config.decoding_metric,\n decoding_n_splits=config.decoding_n_splits,\n decoding_time_generalization=config.decoding_time_generalization,\n decoding_time_generalization_decim=config.decoding_time_generalization_decim, # noqa: E501\n random_state=config.random_state,\n analyze_channels=config.analyze_channels,\n ch_types=config.ch_types,\n eeg_reference=get_eeg_reference(config),\n n_jobs=get_n_jobs(config),\n parallel_backend=config.parallel_backend,\n interactive=config.interactive,\n N_JOBS=config.N_JOBS,\n )\n return cfg\n\n\ndef main():\n \"\"\"Run time-by-time decoding.\"\"\"\n import config\n if not config.contrasts:\n msg = 'No contrasts specified; not performing decoding.'\n logger.info(**gen_log_kwargs(message=msg))\n return\n\n if not config.decode:\n msg = 'No decoding requested by user.'\n logger.info(**gen_log_kwargs(message=msg))\n return\n\n # Here we go parallel inside the :class:`mne.decoding.SlidingEstimator`\n # so we don't dispatch manually to multiple jobs.\n logs = []\n for subject, session, (cond_1, cond_2) in itertools.product(\n get_subjects(config),\n get_sessions(config),\n get_decoding_contrasts(config)\n ):\n log = run_time_decoding(\n cfg=get_config(config=config),\n 
subject=subject,\n condition1=cond_1,\n condition2=cond_2,\n session=session,\n )\n logs.append(log)\n save_logs(config=config, logs=logs)\n\n\nif __name__ == '__main__':\n main()\n", "path": "mne_bids_pipeline/scripts/sensor/_03_decoding_time_by_time.py"}]} | 2,841 | 514 |
gh_patches_debug_26415 | rasdani/github-patches | git_diff | bokeh__bokeh-9163 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update dataset for parallel coords plot examples
The example currently does this:
```
df = pd.read_csv("https://raw.githubusercontent.com/bcdunbar/datasets/master/parcoords_data.csv")
```
This is not really great. We should add a dataset to `sampledata` and use that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/custom/parallel_plot/parallel_plot.py`
Content:
```
1 import numpy as np
2 import pandas as pd
3
4 from bokeh.plotting import figure
5 from bokeh.layouts import column
6 from bokeh.models import (Range1d, ColumnDataSource, Div, LinearAxis,
7 LinearColorMapper, MultiLine,
8 FixedTicker, BasicTickFormatter, FuncTickFormatter)
9
10 from parallel_selection_tool import ParallelSelectionTool
11 from parallel_reset import ParallelResetTool
12
13
14 def parallel_plot(df, color=None, palette=None):
15 """From a dataframe create a parallel coordinate plot
16 """
17 npts = df.shape[0]
18 ndims = len(df.columns)
19
20 if color is None:
21 color = np.ones(npts)
22 if palette is None:
23 palette = ['#ff0000']
24
25 cmap = LinearColorMapper(high=color.min(),
26 low=color.max(),
27 palette=palette)
28
29 data_source = ColumnDataSource(dict(
30 xs=np.arange(ndims)[None, :].repeat(npts, axis=0).tolist(),
31 ys=np.array((df-df.min())/(df.max()-df.min())).tolist(),
32 color=color))
33
34 p = figure(x_range=(-1, ndims),
35 y_range=(0, 1),
36 width=1000,
37 tools="pan, box_zoom")
38
39 # Create x axis ticks from columns contained in dataframe
40 fixed_x_ticks = FixedTicker(
41 ticks=np.arange(ndims), minor_ticks=[])
42 formatter_x_ticks = FuncTickFormatter(
43 code="return columns[index]", args={"columns": df.columns})
44 p.xaxis.ticker = fixed_x_ticks
45 p.xaxis.formatter = formatter_x_ticks
46
47 p.yaxis.visible = False
48 p.y_range.start = 0
49 p.y_range.end = 1
50 p.y_range.bounds = (-0.1, 1.1) # add a little padding around y axis
51 p.xgrid.visible = False
52 p.ygrid.visible = False
53
54 # Create extra y axis for each dataframe column
55 tickformatter = BasicTickFormatter(precision=1)
56 for index, col in enumerate(df.columns):
57 start = df[col].min()
58 end = df[col].max()
59 bound_min = start + abs(end-start) * (p.y_range.bounds[0] - p.y_range.start)
60 bound_max = end + abs(end-start) * (p.y_range.bounds[1] - p.y_range.end)
61 p.extra_y_ranges.update(
62 {col: Range1d(start=bound_min, end=bound_max, bounds=(bound_min, bound_max))})
63
64 fixedticks = FixedTicker(
65 ticks=np.linspace(start, end, 8), minor_ticks=[])
66
67 p.add_layout(LinearAxis(fixed_location=index, y_range_name=col,
68 ticker=fixedticks, formatter=tickformatter), 'right')
69
70 # create the data renderer ( MultiLine )
71 # specify selected and non selected style
72 non_selected_line_style = dict(line_color='grey', line_width=0.1, line_alpha=0.5)
73
74 selected_line_style = dict(line_color={'field': 'color', 'transform': cmap}, line_width=1)
75
76 parallel_renderer = p.multi_line(
77 xs="xs", ys="ys", source=data_source, **non_selected_line_style)
78
79 # Specify selection style
80 selected_lines = MultiLine(**selected_line_style)
81
82 # Specify non selection style
83 nonselected_lines = MultiLine(**non_selected_line_style)
84
85 parallel_renderer.selection_glyph = selected_lines
86 parallel_renderer.nonselection_glyph = nonselected_lines
87 p.y_range.start = p.y_range.bounds[0]
88 p.y_range.end = p.y_range.bounds[1]
89
90 rect_source = ColumnDataSource({
91 'x': [], 'y': [], 'width': [], 'height': []
92 })
93
94 # add rectangle selections
95 selection_renderer = p.rect(x='x', y='y', width='width', height='height',
96 source=rect_source,
97 fill_alpha=0.7, fill_color='#009933')
98 selection_tool = ParallelSelectionTool(
99 renderer_select=selection_renderer, renderer_data=parallel_renderer,
100 box_width=10)
101 # custom resets (reset only axes not selections)
102 reset_axes = ParallelResetTool()
103
104 # add tools and activate selection ones
105 p.add_tools(selection_tool, reset_axes)
106 p.toolbar.active_drag = selection_tool
107 return p
108
109 if __name__ == '__main__':
110 from bokeh.palettes import Viridis256
111 from bokeh.io import show
112 df = pd.read_csv("https://raw.githubusercontent.com/bcdunbar/datasets/master/parcoords_data.csv")
113 p = parallel_plot(df=df, color=df[df.columns[0]], palette=Viridis256)
114 div = Div(text="Select up and down column grid lines to define filters. Double click a filter to reset it.")
115 show(column(div, p))
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/custom/parallel_plot/parallel_plot.py b/examples/custom/parallel_plot/parallel_plot.py
--- a/examples/custom/parallel_plot/parallel_plot.py
+++ b/examples/custom/parallel_plot/parallel_plot.py
@@ -1,11 +1,11 @@
import numpy as np
-import pandas as pd
from bokeh.plotting import figure
from bokeh.layouts import column
from bokeh.models import (Range1d, ColumnDataSource, Div, LinearAxis,
LinearColorMapper, MultiLine,
FixedTicker, BasicTickFormatter, FuncTickFormatter)
+from bokeh.sampledata.autompg import autompg_clean as df
from parallel_selection_tool import ParallelSelectionTool
from parallel_reset import ParallelResetTool
@@ -109,7 +109,9 @@
if __name__ == '__main__':
from bokeh.palettes import Viridis256
from bokeh.io import show
- df = pd.read_csv("https://raw.githubusercontent.com/bcdunbar/datasets/master/parcoords_data.csv")
+ del df['origin']
+ del df['mfr']
+ del df['name']
p = parallel_plot(df=df, color=df[df.columns[0]], palette=Viridis256)
div = Div(text="Select up and down column grid lines to define filters. Double click a filter to reset it.")
show(column(div, p))
| {"golden_diff": "diff --git a/examples/custom/parallel_plot/parallel_plot.py b/examples/custom/parallel_plot/parallel_plot.py\n--- a/examples/custom/parallel_plot/parallel_plot.py\n+++ b/examples/custom/parallel_plot/parallel_plot.py\n@@ -1,11 +1,11 @@\n import numpy as np\n-import pandas as pd\n \n from bokeh.plotting import figure\n from bokeh.layouts import column\n from bokeh.models import (Range1d, ColumnDataSource, Div, LinearAxis,\n LinearColorMapper, MultiLine,\n FixedTicker, BasicTickFormatter, FuncTickFormatter)\n+from bokeh.sampledata.autompg import autompg_clean as df\n \n from parallel_selection_tool import ParallelSelectionTool\n from parallel_reset import ParallelResetTool\n@@ -109,7 +109,9 @@\n if __name__ == '__main__':\n from bokeh.palettes import Viridis256\n from bokeh.io import show\n- df = pd.read_csv(\"https://raw.githubusercontent.com/bcdunbar/datasets/master/parcoords_data.csv\")\n+ del df['origin']\n+ del df['mfr']\n+ del df['name']\n p = parallel_plot(df=df, color=df[df.columns[0]], palette=Viridis256)\n div = Div(text=\"Select up and down column grid lines to define filters. Double click a filter to reset it.\")\n show(column(div, p))\n", "issue": "Update dataset for parallel coords plot examples\nThe example currently does this:\r\n```\r\ndf = pd.read_csv(\"https://raw.githubusercontent.com/bcdunbar/datasets/master/parcoords_data.csv\")\r\n```\r\n\r\nWhich is not really great. We should add a data set to `sampledata` and use that. \r\n\n", "before_files": [{"content": "import numpy as np\nimport pandas as pd\n\nfrom bokeh.plotting import figure\nfrom bokeh.layouts import column\nfrom bokeh.models import (Range1d, ColumnDataSource, Div, LinearAxis,\n LinearColorMapper, MultiLine,\n FixedTicker, BasicTickFormatter, FuncTickFormatter)\n\nfrom parallel_selection_tool import ParallelSelectionTool\nfrom parallel_reset import ParallelResetTool\n\n\ndef parallel_plot(df, color=None, palette=None):\n \"\"\"From a dataframe create a parallel coordinate plot\n \"\"\"\n npts = df.shape[0]\n ndims = len(df.columns)\n\n if color is None:\n color = np.ones(npts)\n if palette is None:\n palette = ['#ff0000']\n\n cmap = LinearColorMapper(high=color.min(),\n low=color.max(),\n palette=palette)\n\n data_source = ColumnDataSource(dict(\n xs=np.arange(ndims)[None, :].repeat(npts, axis=0).tolist(),\n ys=np.array((df-df.min())/(df.max()-df.min())).tolist(),\n color=color))\n\n p = figure(x_range=(-1, ndims),\n y_range=(0, 1),\n width=1000,\n tools=\"pan, box_zoom\")\n\n # Create x axis ticks from columns contained in dataframe\n fixed_x_ticks = FixedTicker(\n ticks=np.arange(ndims), minor_ticks=[])\n formatter_x_ticks = FuncTickFormatter(\n code=\"return columns[index]\", args={\"columns\": df.columns})\n p.xaxis.ticker = fixed_x_ticks\n p.xaxis.formatter = formatter_x_ticks\n\n p.yaxis.visible = False\n p.y_range.start = 0\n p.y_range.end = 1\n p.y_range.bounds = (-0.1, 1.1) # add a little padding around y axis\n p.xgrid.visible = False\n p.ygrid.visible = False\n\n # Create extra y axis for each dataframe column\n tickformatter = BasicTickFormatter(precision=1)\n for index, col in enumerate(df.columns):\n start = df[col].min()\n end = df[col].max()\n bound_min = start + abs(end-start) * (p.y_range.bounds[0] - p.y_range.start)\n bound_max = end + abs(end-start) * (p.y_range.bounds[1] - p.y_range.end)\n p.extra_y_ranges.update(\n {col: Range1d(start=bound_min, end=bound_max, bounds=(bound_min, bound_max))})\n\n fixedticks = FixedTicker(\n ticks=np.linspace(start, end, 8), 
minor_ticks=[])\n\n p.add_layout(LinearAxis(fixed_location=index, y_range_name=col,\n ticker=fixedticks, formatter=tickformatter), 'right')\n\n # create the data renderer ( MultiLine )\n # specify selected and non selected style\n non_selected_line_style = dict(line_color='grey', line_width=0.1, line_alpha=0.5)\n\n selected_line_style = dict(line_color={'field': 'color', 'transform': cmap}, line_width=1)\n\n parallel_renderer = p.multi_line(\n xs=\"xs\", ys=\"ys\", source=data_source, **non_selected_line_style)\n\n # Specify selection style\n selected_lines = MultiLine(**selected_line_style)\n\n # Specify non selection style\n nonselected_lines = MultiLine(**non_selected_line_style)\n\n parallel_renderer.selection_glyph = selected_lines\n parallel_renderer.nonselection_glyph = nonselected_lines\n p.y_range.start = p.y_range.bounds[0]\n p.y_range.end = p.y_range.bounds[1]\n\n rect_source = ColumnDataSource({\n 'x': [], 'y': [], 'width': [], 'height': []\n })\n\n # add rectangle selections\n selection_renderer = p.rect(x='x', y='y', width='width', height='height',\n source=rect_source,\n fill_alpha=0.7, fill_color='#009933')\n selection_tool = ParallelSelectionTool(\n renderer_select=selection_renderer, renderer_data=parallel_renderer,\n box_width=10)\n # custom resets (reset only axes not selections)\n reset_axes = ParallelResetTool()\n\n # add tools and activate selection ones\n p.add_tools(selection_tool, reset_axes)\n p.toolbar.active_drag = selection_tool\n return p\n\nif __name__ == '__main__':\n from bokeh.palettes import Viridis256\n from bokeh.io import show\n df = pd.read_csv(\"https://raw.githubusercontent.com/bcdunbar/datasets/master/parcoords_data.csv\")\n p = parallel_plot(df=df, color=df[df.columns[0]], palette=Viridis256)\n div = Div(text=\"Select up and down column grid lines to define filters. 
Double click a filter to reset it.\")\n show(column(div, p))\n", "path": "examples/custom/parallel_plot/parallel_plot.py"}], "after_files": [{"content": "import numpy as np\n\nfrom bokeh.plotting import figure\nfrom bokeh.layouts import column\nfrom bokeh.models import (Range1d, ColumnDataSource, Div, LinearAxis,\n LinearColorMapper, MultiLine,\n FixedTicker, BasicTickFormatter, FuncTickFormatter)\nfrom bokeh.sampledata.autompg import autompg_clean as df\n\nfrom parallel_selection_tool import ParallelSelectionTool\nfrom parallel_reset import ParallelResetTool\n\n\ndef parallel_plot(df, color=None, palette=None):\n \"\"\"From a dataframe create a parallel coordinate plot\n \"\"\"\n npts = df.shape[0]\n ndims = len(df.columns)\n\n if color is None:\n color = np.ones(npts)\n if palette is None:\n palette = ['#ff0000']\n\n cmap = LinearColorMapper(high=color.min(),\n low=color.max(),\n palette=palette)\n\n data_source = ColumnDataSource(dict(\n xs=np.arange(ndims)[None, :].repeat(npts, axis=0).tolist(),\n ys=np.array((df-df.min())/(df.max()-df.min())).tolist(),\n color=color))\n\n p = figure(x_range=(-1, ndims),\n y_range=(0, 1),\n width=1000,\n tools=\"pan, box_zoom\")\n\n # Create x axis ticks from columns contained in dataframe\n fixed_x_ticks = FixedTicker(\n ticks=np.arange(ndims), minor_ticks=[])\n formatter_x_ticks = FuncTickFormatter(\n code=\"return columns[index]\", args={\"columns\": df.columns})\n p.xaxis.ticker = fixed_x_ticks\n p.xaxis.formatter = formatter_x_ticks\n\n p.yaxis.visible = False\n p.y_range.start = 0\n p.y_range.end = 1\n p.y_range.bounds = (-0.1, 1.1) # add a little padding around y axis\n p.xgrid.visible = False\n p.ygrid.visible = False\n\n # Create extra y axis for each dataframe column\n tickformatter = BasicTickFormatter(precision=1)\n for index, col in enumerate(df.columns):\n start = df[col].min()\n end = df[col].max()\n bound_min = start + abs(end-start) * (p.y_range.bounds[0] - p.y_range.start)\n bound_max = end + abs(end-start) * (p.y_range.bounds[1] - p.y_range.end)\n p.extra_y_ranges.update(\n {col: Range1d(start=bound_min, end=bound_max, bounds=(bound_min, bound_max))})\n\n fixedticks = FixedTicker(\n ticks=np.linspace(start, end, 8), minor_ticks=[])\n\n p.add_layout(LinearAxis(fixed_location=index, y_range_name=col,\n ticker=fixedticks, formatter=tickformatter), 'right')\n\n # create the data renderer ( MultiLine )\n # specify selected and non selected style\n non_selected_line_style = dict(line_color='grey', line_width=0.1, line_alpha=0.5)\n\n selected_line_style = dict(line_color={'field': 'color', 'transform': cmap}, line_width=1)\n\n parallel_renderer = p.multi_line(\n xs=\"xs\", ys=\"ys\", source=data_source, **non_selected_line_style)\n\n # Specify selection style\n selected_lines = MultiLine(**selected_line_style)\n\n # Specify non selection style\n nonselected_lines = MultiLine(**non_selected_line_style)\n\n parallel_renderer.selection_glyph = selected_lines\n parallel_renderer.nonselection_glyph = nonselected_lines\n p.y_range.start = p.y_range.bounds[0]\n p.y_range.end = p.y_range.bounds[1]\n\n rect_source = ColumnDataSource({\n 'x': [], 'y': [], 'width': [], 'height': []\n })\n\n # add rectangle selections\n selection_renderer = p.rect(x='x', y='y', width='width', height='height',\n source=rect_source,\n fill_alpha=0.7, fill_color='#009933')\n selection_tool = ParallelSelectionTool(\n renderer_select=selection_renderer, renderer_data=parallel_renderer,\n box_width=10)\n # custom resets (reset only axes not selections)\n reset_axes = 
ParallelResetTool()\n\n # add tools and activate selection ones\n p.add_tools(selection_tool, reset_axes)\n p.toolbar.active_drag = selection_tool\n return p\n\nif __name__ == '__main__':\n from bokeh.palettes import Viridis256\n from bokeh.io import show\n del df['origin']\n del df['mfr']\n del df['name']\n p = parallel_plot(df=df, color=df[df.columns[0]], palette=Viridis256)\n div = Div(text=\"Select up and down column grid lines to define filters. Double click a filter to reset it.\")\n show(column(div, p))\n", "path": "examples/custom/parallel_plot/parallel_plot.py"}]} | 1,624 | 302 |
gh_patches_debug_19448 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-2886 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KS failing since at least 2019-02-24
KS has been failing since 2019-02-24
Based on automated runs it appears that KS has not run successfully in 2 days (2019-02-24).
```
06:02:22 INFO pupa: save jurisdiction Kansas as jurisdiction_ocd-jurisdiction-country:us-state:ks-government.json
06:02:22 INFO pupa: save organization Kansas State Legislature as organization_ea95774a-38c2-11e9-b4ad-021e9c8c6450.json
06:02:22 INFO pupa: save organization Senate as organization_ea957a1a-38c2-11e9-b4ad-021e9c8c6450.json
06:02:22 INFO pupa: save organization House as organization_ea957bbe-38c2-11e9-b4ad-021e9c8c6450.json
06:02:22 INFO pupa: no session specified, using 2019-2020
06:02:22 INFO scrapelib: GET - http://www.kslegislature.org/li/api/v11/rev-1/bill_status/
06:02:23 WARNING pupa: unknown action code on SB9: misc_he_200 Hearing: Tuesday, January 22, 2019, 10:30 AM Room 548-S
06:02:23 WARNING pupa: unknown action code on SB9: misc_he_200 Hearing: Tuesday, February 12, 2019, 9:00 AM Room 112-N
06:02:23 INFO scrapelib: GET - http://www.kslegislature.org/li/b2019_20/measures/sb9/
06:02:27 INFO scrapelib: GET - http://www.kslegislature.org/li/b2019_20/measures/vote_view/je_20190222111337_914158/
06:02:32 WARNING scrapelib: sleeping for 10 seconds before retry
06:02:42 WARNING scrapelib: sleeping for 20 seconds before retry
06:03:03 WARNING scrapelib: sleeping for 40 seconds before retry
```
Visit http://bobsled.openstates.org for more info.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/ks/bills.py`
Content:
```
1 import re
2 import json
3 import datetime
4
5 import lxml.html
6 from pupa.scrape import Scraper, Bill, VoteEvent
7
8 from . import ksapi
9
10
11 def _clean_spaces(title):
12 return re.sub(r'\s+', ' ', title)
13
14
15 class KSBillScraper(Scraper):
16 def scrape(self, chamber=None, session=None):
17 if session is None:
18 session = self.latest_session()
19 self.info('no session specified, using %s', session)
20
21 chambers = [chamber] if chamber is not None else ['upper', 'lower']
22
23 for chamber in chambers:
24 yield from self.scrape_chamber(chamber, session)
25
26 def scrape_chamber(self, chamber, session):
27 chamber_name = 'Senate' if chamber == 'upper' else 'House'
28 chamber_letter = chamber_name[0]
29 # perhaps we should save this data so we can make one request for both?
30 bill_request = self.get(ksapi.url + 'bill_status/').text
31 bill_request_json = json.loads(bill_request)
32 bills = bill_request_json['content']
33 for bill_data in bills:
34
35 bill_id = bill_data['BILLNO']
36
37 # filter other chambers
38 if not bill_id.startswith(chamber_letter):
39 continue
40
41 if 'CR' in bill_id:
42 btype = 'concurrent resolution'
43 elif 'R' in bill_id:
44 btype = 'resolution'
45 elif 'B' in bill_id:
46 btype = 'bill'
47
48 title = bill_data['SHORTTITLE'] or bill_data['LONGTITLE']
49
50 # main
51 bill = Bill(
52 bill_id,
53 session,
54 title,
55 chamber=chamber,
56 classification=btype,
57 )
58 bill.extras = {'status': bill_data['STATUS']}
59
60 bill.add_source(ksapi.url + 'bill_status/' + bill_id.lower())
61
62 if (bill_data['LONGTITLE'] and
63 bill_data['LONGTITLE'] != bill.title):
64 bill.add_title(bill_data['LONGTITLE'])
65
66 # An "original sponsor" is the API's expression of "primary sponsor"
67 for primary_sponsor in bill_data['ORIGINAL_SPONSOR']:
68 bill.add_sponsorship(
69 name=primary_sponsor,
70 entity_type='organization' if "committee" in primary_sponsor.lower()
71 else 'person',
72 primary=True,
73 classification="original sponsor"
74 )
75 for sponsor in bill_data['SPONSOR_NAMES']:
76 if sponsor in bill_data['ORIGINAL_SPONSOR']:
77 continue
78 bill.add_sponsorship(
79 name=sponsor,
80 entity_type='organization' if "committee" in sponsor.lower() else 'person',
81 primary=False,
82 classification='cosponsor',
83 )
84
85 # history is backwards
86 for event in reversed(bill_data['HISTORY']):
87 actor = ('upper' if event['chamber'] == 'Senate'
88 else 'lower')
89
90 date = event['session_date']
91 # append committee names if present
92 if 'committee_names' in event:
93 action = (event['status'] + ' ' +
94 ' and '.join(event['committee_names']))
95 else:
96 action = event['status']
97
98 if event['action_code'] not in ksapi.action_codes:
99 self.warning('unknown action code on %s: %s %s' %
100 (bill_id, event['action_code'],
101 event['status']))
102 atype = None
103 else:
104 atype = ksapi.action_codes[event['action_code']]
105 bill.add_action(
106 action, date, chamber=actor, classification=atype)
107
108 # Versions are exposed in `bill_data['versions'],
109 # but lack any descriptive text or identifiers;
110 # continue to scrape these from the HTML
111 yield from self.scrape_html(bill, session)
112
113 yield bill
114
115 def scrape_html(self, bill, session):
116 meta = next(
117 each for each in self.jurisdiction.legislative_sessions
118 if each['identifier'] == session
119 )
120 slug = meta['_scraped_name']
121 # we have to go to the HTML for the versions & votes
122 base_url = 'http://www.kslegislature.org/li/%s/measures/' % slug
123 if 'resolution' in bill.classification:
124 base_url = 'http://www.kslegislature.org/li/%s/year1/measures/' % slug
125
126 url = base_url + bill.identifier.lower() + '/'
127 doc = lxml.html.fromstring(self.get(url).text)
128 doc.make_links_absolute(url)
129
130 bill.add_source(url)
131
132 # versions & notes
133 version_rows = doc.xpath('//tbody[starts-with(@id, "version-tab")]/tr')
134 for row in version_rows:
135 # version, docs, sn, fn
136 tds = row.getchildren()
137 title = _clean_spaces(tds[0].text_content().strip())
138 doc_url = get_doc_link(tds[1])
139 if doc_url:
140 bill.add_version_link(title, doc_url, media_type='application/pdf')
141 if len(tds) > 2:
142 sn_url = get_doc_link(tds[2])
143 if sn_url:
144 bill.add_document_link(
145 title + ' - Supplementary Note', sn_url,
146 on_duplicate='ignore'
147 )
148 if len(tds) > 3:
149 if sn_url:
150 bill.add_document_link(title + ' - Fiscal Note', sn_url,
151 on_duplicate='ignore'
152 )
153
154 all_links = doc.xpath(
155 "//table[@class='bottom']/tbody[@class='tab-content-sub']/tr/td/a/@href"
156 )
157 vote_members_urls = []
158 for i in all_links:
159 if "vote_view" in i:
160 vote_members_urls.append(str(i))
161 if len(vote_members_urls) > 0:
162 for link in vote_members_urls:
163 yield from self.parse_vote(bill, link)
164
165 history_rows = doc.xpath('//tbody[starts-with(@id, "history-tab")]/tr')
166 for row in history_rows:
167 row_text = row.xpath('.//td[3]')[0].text_content()
168 # amendments & reports
169 amendment = get_doc_link(row.xpath('.//td[4]')[0])
170 if amendment:
171 if 'Motion to Amend' in row_text:
172 _, offered_by = row_text.split('Motion to Amend -')
173 amendment_name = 'Amendment ' + offered_by.strip()
174 elif 'Conference committee report now available' in row_text:
175 amendment_name = 'Conference Committee Report'
176 else:
177 amendment_name = row_text.strip()
178 bill.add_document_link(_clean_spaces(amendment_name), amendment,
179 on_duplicate='ignore')
180
181 def parse_vote(self, bill, link):
182 text = self.get(link).text
183 if 'Page Not Found' in text:
184 self.warning("missing vote, skipping")
185 return
186 member_doc = lxml.html.fromstring(text)
187 motion = member_doc.xpath("//div[@id='main_content']/h4/text()")
188 chamber_date_line = ''.join(member_doc.xpath("//div[@id='main_content']/h3[1]//text()"))
189 chamber_date_line_words = chamber_date_line.split()
190 vote_chamber = chamber_date_line_words[0]
191 vote_date = datetime.datetime.strptime(chamber_date_line_words[-1], '%m/%d/%Y')
192 vote_status = " ".join(chamber_date_line_words[2:-2])
193
194 opinions = member_doc.xpath("//div[@id='main_content']/h3[position() > 1]/text()")
195 if len(opinions) > 0:
196 vote_status = vote_status if vote_status.strip() else motion[0]
197 vote_chamber = 'upper' if vote_chamber == 'Senate' else 'lower'
198
199 for i in opinions:
200 try:
201 count = int(i[i.find("(") + 1:i.find(")")])
202 except ValueError:
203 # This is likely not a vote-count text chunk
204 # It's probably '`On roll call the vote was:`
205 pass
206 else:
207 if "yea" in i.lower():
208 yes_count = count
209 elif "nay" in i.lower():
210 no_count = count
211 elif "present" in i.lower():
212 p_count = count
213 elif "absent" in i.lower():
214 a_count = count
215
216 vote = VoteEvent(
217 bill=bill,
218 start_date=vote_date.strftime('%Y-%m-%d'),
219 chamber=vote_chamber,
220 motion_text=vote_status,
221 result='pass' if yes_count > no_count else 'fail',
222 classification='passage',
223 )
224 vote.pupa_id = link
225
226 vote.set_count('yes', yes_count)
227 vote.set_count('no', no_count)
228 vote.set_count('abstain', p_count)
229 vote.set_count('absent', a_count)
230
231 vote.add_source(link)
232
233 a_links = member_doc.xpath("//div[@id='main_content']/a/text()")
234 for i in range(1, len(a_links)):
235 if i <= yes_count:
236 vote.vote('yes', re.sub(',', '', a_links[i]).split()[0])
237 elif no_count != 0 and i > yes_count and i <= yes_count + no_count:
238 vote.vote('no', re.sub(',', '', a_links[i]).split()[0])
239 else:
240 vote.vote('other', re.sub(',', '', a_links[i]).split()[0])
241 yield vote
242 else:
243 self.warning("No Votes for: %s", link)
244
245
246 def get_doc_link(elem):
247 # try ODT then PDF
248 link = elem.xpath('.//a[contains(@href, ".odt")]/@href')
249 if link:
250 return link[0]
251 link = elem.xpath('.//a[contains(@href, ".pdf")]/@href')
252 if link:
253 return link[0]
254
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openstates/ks/bills.py b/openstates/ks/bills.py
--- a/openstates/ks/bills.py
+++ b/openstates/ks/bills.py
@@ -1,6 +1,7 @@
import re
import json
import datetime
+import requests
import lxml.html
from pupa.scrape import Scraper, Bill, VoteEvent
@@ -179,8 +180,16 @@
on_duplicate='ignore')
def parse_vote(self, bill, link):
- text = self.get(link).text
- if 'Page Not Found' in text:
+ # Server sometimes sends proper error headers,
+ # sometimes not
+ try:
+ self.info("Get {}".format(link))
+ text = requests.get(link).text
+ except requests.exceptions.HTTPError as err:
+ self.warning("{} fetching vote {}, skipping".format(err, link))
+ return
+
+ if 'Page Not Found' in text or 'Page Unavailable' in text:
self.warning("missing vote, skipping")
return
member_doc = lxml.html.fromstring(text)
| {"golden_diff": "diff --git a/openstates/ks/bills.py b/openstates/ks/bills.py\n--- a/openstates/ks/bills.py\n+++ b/openstates/ks/bills.py\n@@ -1,6 +1,7 @@\n import re\n import json\n import datetime\n+import requests\n \n import lxml.html\n from pupa.scrape import Scraper, Bill, VoteEvent\n@@ -179,8 +180,16 @@\n on_duplicate='ignore')\n \n def parse_vote(self, bill, link):\n- text = self.get(link).text\n- if 'Page Not Found' in text:\n+ # Server sometimes sends proper error headers,\n+ # sometimes not\n+ try:\n+ self.info(\"Get {}\".format(link))\n+ text = requests.get(link).text\n+ except requests.exceptions.HTTPError as err:\n+ self.warning(\"{} fetching vote {}, skipping\".format(err, link))\n+ return\n+\n+ if 'Page Not Found' in text or 'Page Unavailable' in text:\n self.warning(\"missing vote, skipping\")\n return\n member_doc = lxml.html.fromstring(text)\n", "issue": "KS failing since at least 2019-02-24\nKS has been failing since 2019-02-24\n\nBased on automated runs it appears that KS has not run successfully in 2 days (2019-02-24).\n\n\n```\n 06:02:22 INFO pupa: save jurisdiction Kansas as jurisdiction_ocd-jurisdiction-country:us-state:ks-government.json\n06:02:22 INFO pupa: save organization Kansas State Legislature as organization_ea95774a-38c2-11e9-b4ad-021e9c8c6450.json\n06:02:22 INFO pupa: save organization Senate as organization_ea957a1a-38c2-11e9-b4ad-021e9c8c6450.json\n06:02:22 INFO pupa: save organization House as organization_ea957bbe-38c2-11e9-b4ad-021e9c8c6450.json\n06:02:22 INFO pupa: no session specified, using 2019-2020\n06:02:22 INFO scrapelib: GET - http://www.kslegislature.org/li/api/v11/rev-1/bill_status/\n06:02:23 WARNING pupa: unknown action code on SB9: misc_he_200 Hearing: Tuesday, January 22, 2019, 10:30 AM Room 548-S\n06:02:23 WARNING pupa: unknown action code on SB9: misc_he_200 Hearing: Tuesday, February 12, 2019, 9:00 AM Room 112-N\n06:02:23 INFO scrapelib: GET - http://www.kslegislature.org/li/b2019_20/measures/sb9/\n06:02:27 INFO scrapelib: GET - http://www.kslegislature.org/li/b2019_20/measures/vote_view/je_20190222111337_914158/\n06:02:32 WARNING scrapelib: sleeping for 10 seconds before retry\n06:02:42 WARNING scrapelib: sleeping for 20 seconds before retry\n06:03:03 WARNING scrapelib: sleeping for 40 seconds before retry\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "import re\nimport json\nimport datetime\n\nimport lxml.html\nfrom pupa.scrape import Scraper, Bill, VoteEvent\n\nfrom . 
import ksapi\n\n\ndef _clean_spaces(title):\n return re.sub(r'\\s+', ' ', title)\n\n\nclass KSBillScraper(Scraper):\n def scrape(self, chamber=None, session=None):\n if session is None:\n session = self.latest_session()\n self.info('no session specified, using %s', session)\n\n chambers = [chamber] if chamber is not None else ['upper', 'lower']\n\n for chamber in chambers:\n yield from self.scrape_chamber(chamber, session)\n\n def scrape_chamber(self, chamber, session):\n chamber_name = 'Senate' if chamber == 'upper' else 'House'\n chamber_letter = chamber_name[0]\n # perhaps we should save this data so we can make one request for both?\n bill_request = self.get(ksapi.url + 'bill_status/').text\n bill_request_json = json.loads(bill_request)\n bills = bill_request_json['content']\n for bill_data in bills:\n\n bill_id = bill_data['BILLNO']\n\n # filter other chambers\n if not bill_id.startswith(chamber_letter):\n continue\n\n if 'CR' in bill_id:\n btype = 'concurrent resolution'\n elif 'R' in bill_id:\n btype = 'resolution'\n elif 'B' in bill_id:\n btype = 'bill'\n\n title = bill_data['SHORTTITLE'] or bill_data['LONGTITLE']\n\n # main\n bill = Bill(\n bill_id,\n session,\n title,\n chamber=chamber,\n classification=btype,\n )\n bill.extras = {'status': bill_data['STATUS']}\n\n bill.add_source(ksapi.url + 'bill_status/' + bill_id.lower())\n\n if (bill_data['LONGTITLE'] and\n bill_data['LONGTITLE'] != bill.title):\n bill.add_title(bill_data['LONGTITLE'])\n\n # An \"original sponsor\" is the API's expression of \"primary sponsor\"\n for primary_sponsor in bill_data['ORIGINAL_SPONSOR']:\n bill.add_sponsorship(\n name=primary_sponsor,\n entity_type='organization' if \"committee\" in primary_sponsor.lower()\n else 'person',\n primary=True,\n classification=\"original sponsor\"\n )\n for sponsor in bill_data['SPONSOR_NAMES']:\n if sponsor in bill_data['ORIGINAL_SPONSOR']:\n continue\n bill.add_sponsorship(\n name=sponsor,\n entity_type='organization' if \"committee\" in sponsor.lower() else 'person',\n primary=False,\n classification='cosponsor',\n )\n\n # history is backwards\n for event in reversed(bill_data['HISTORY']):\n actor = ('upper' if event['chamber'] == 'Senate'\n else 'lower')\n\n date = event['session_date']\n # append committee names if present\n if 'committee_names' in event:\n action = (event['status'] + ' ' +\n ' and '.join(event['committee_names']))\n else:\n action = event['status']\n\n if event['action_code'] not in ksapi.action_codes:\n self.warning('unknown action code on %s: %s %s' %\n (bill_id, event['action_code'],\n event['status']))\n atype = None\n else:\n atype = ksapi.action_codes[event['action_code']]\n bill.add_action(\n action, date, chamber=actor, classification=atype)\n\n # Versions are exposed in `bill_data['versions'],\n # but lack any descriptive text or identifiers;\n # continue to scrape these from the HTML\n yield from self.scrape_html(bill, session)\n\n yield bill\n\n def scrape_html(self, bill, session):\n meta = next(\n each for each in self.jurisdiction.legislative_sessions\n if each['identifier'] == session\n )\n slug = meta['_scraped_name']\n # we have to go to the HTML for the versions & votes\n base_url = 'http://www.kslegislature.org/li/%s/measures/' % slug\n if 'resolution' in bill.classification:\n base_url = 'http://www.kslegislature.org/li/%s/year1/measures/' % slug\n\n url = base_url + bill.identifier.lower() + '/'\n doc = lxml.html.fromstring(self.get(url).text)\n doc.make_links_absolute(url)\n\n bill.add_source(url)\n\n # versions & notes\n 
version_rows = doc.xpath('//tbody[starts-with(@id, \"version-tab\")]/tr')\n for row in version_rows:\n # version, docs, sn, fn\n tds = row.getchildren()\n title = _clean_spaces(tds[0].text_content().strip())\n doc_url = get_doc_link(tds[1])\n if doc_url:\n bill.add_version_link(title, doc_url, media_type='application/pdf')\n if len(tds) > 2:\n sn_url = get_doc_link(tds[2])\n if sn_url:\n bill.add_document_link(\n title + ' - Supplementary Note', sn_url,\n on_duplicate='ignore'\n )\n if len(tds) > 3:\n if sn_url:\n bill.add_document_link(title + ' - Fiscal Note', sn_url,\n on_duplicate='ignore'\n )\n\n all_links = doc.xpath(\n \"//table[@class='bottom']/tbody[@class='tab-content-sub']/tr/td/a/@href\"\n )\n vote_members_urls = []\n for i in all_links:\n if \"vote_view\" in i:\n vote_members_urls.append(str(i))\n if len(vote_members_urls) > 0:\n for link in vote_members_urls:\n yield from self.parse_vote(bill, link)\n\n history_rows = doc.xpath('//tbody[starts-with(@id, \"history-tab\")]/tr')\n for row in history_rows:\n row_text = row.xpath('.//td[3]')[0].text_content()\n # amendments & reports\n amendment = get_doc_link(row.xpath('.//td[4]')[0])\n if amendment:\n if 'Motion to Amend' in row_text:\n _, offered_by = row_text.split('Motion to Amend -')\n amendment_name = 'Amendment ' + offered_by.strip()\n elif 'Conference committee report now available' in row_text:\n amendment_name = 'Conference Committee Report'\n else:\n amendment_name = row_text.strip()\n bill.add_document_link(_clean_spaces(amendment_name), amendment,\n on_duplicate='ignore')\n\n def parse_vote(self, bill, link):\n text = self.get(link).text\n if 'Page Not Found' in text:\n self.warning(\"missing vote, skipping\")\n return\n member_doc = lxml.html.fromstring(text)\n motion = member_doc.xpath(\"//div[@id='main_content']/h4/text()\")\n chamber_date_line = ''.join(member_doc.xpath(\"//div[@id='main_content']/h3[1]//text()\"))\n chamber_date_line_words = chamber_date_line.split()\n vote_chamber = chamber_date_line_words[0]\n vote_date = datetime.datetime.strptime(chamber_date_line_words[-1], '%m/%d/%Y')\n vote_status = \" \".join(chamber_date_line_words[2:-2])\n\n opinions = member_doc.xpath(\"//div[@id='main_content']/h3[position() > 1]/text()\")\n if len(opinions) > 0:\n vote_status = vote_status if vote_status.strip() else motion[0]\n vote_chamber = 'upper' if vote_chamber == 'Senate' else 'lower'\n\n for i in opinions:\n try:\n count = int(i[i.find(\"(\") + 1:i.find(\")\")])\n except ValueError:\n # This is likely not a vote-count text chunk\n # It's probably '`On roll call the vote was:`\n pass\n else:\n if \"yea\" in i.lower():\n yes_count = count\n elif \"nay\" in i.lower():\n no_count = count\n elif \"present\" in i.lower():\n p_count = count\n elif \"absent\" in i.lower():\n a_count = count\n\n vote = VoteEvent(\n bill=bill,\n start_date=vote_date.strftime('%Y-%m-%d'),\n chamber=vote_chamber,\n motion_text=vote_status,\n result='pass' if yes_count > no_count else 'fail',\n classification='passage',\n )\n vote.pupa_id = link\n\n vote.set_count('yes', yes_count)\n vote.set_count('no', no_count)\n vote.set_count('abstain', p_count)\n vote.set_count('absent', a_count)\n\n vote.add_source(link)\n\n a_links = member_doc.xpath(\"//div[@id='main_content']/a/text()\")\n for i in range(1, len(a_links)):\n if i <= yes_count:\n vote.vote('yes', re.sub(',', '', a_links[i]).split()[0])\n elif no_count != 0 and i > yes_count and i <= yes_count + no_count:\n vote.vote('no', re.sub(',', '', a_links[i]).split()[0])\n else:\n 
vote.vote('other', re.sub(',', '', a_links[i]).split()[0])\n yield vote\n else:\n self.warning(\"No Votes for: %s\", link)\n\n\ndef get_doc_link(elem):\n # try ODT then PDF\n link = elem.xpath('.//a[contains(@href, \".odt\")]/@href')\n if link:\n return link[0]\n link = elem.xpath('.//a[contains(@href, \".pdf\")]/@href')\n if link:\n return link[0]\n", "path": "openstates/ks/bills.py"}], "after_files": [{"content": "import re\nimport json\nimport datetime\nimport requests\n\nimport lxml.html\nfrom pupa.scrape import Scraper, Bill, VoteEvent\n\nfrom . import ksapi\n\n\ndef _clean_spaces(title):\n return re.sub(r'\\s+', ' ', title)\n\n\nclass KSBillScraper(Scraper):\n def scrape(self, chamber=None, session=None):\n if session is None:\n session = self.latest_session()\n self.info('no session specified, using %s', session)\n\n chambers = [chamber] if chamber is not None else ['upper', 'lower']\n\n for chamber in chambers:\n yield from self.scrape_chamber(chamber, session)\n\n def scrape_chamber(self, chamber, session):\n chamber_name = 'Senate' if chamber == 'upper' else 'House'\n chamber_letter = chamber_name[0]\n # perhaps we should save this data so we can make one request for both?\n bill_request = self.get(ksapi.url + 'bill_status/').text\n bill_request_json = json.loads(bill_request)\n bills = bill_request_json['content']\n for bill_data in bills:\n\n bill_id = bill_data['BILLNO']\n\n # filter other chambers\n if not bill_id.startswith(chamber_letter):\n continue\n\n if 'CR' in bill_id:\n btype = 'concurrent resolution'\n elif 'R' in bill_id:\n btype = 'resolution'\n elif 'B' in bill_id:\n btype = 'bill'\n\n title = bill_data['SHORTTITLE'] or bill_data['LONGTITLE']\n\n # main\n bill = Bill(\n bill_id,\n session,\n title,\n chamber=chamber,\n classification=btype,\n )\n bill.extras = {'status': bill_data['STATUS']}\n\n bill.add_source(ksapi.url + 'bill_status/' + bill_id.lower())\n\n if (bill_data['LONGTITLE'] and\n bill_data['LONGTITLE'] != bill.title):\n bill.add_title(bill_data['LONGTITLE'])\n\n # An \"original sponsor\" is the API's expression of \"primary sponsor\"\n for primary_sponsor in bill_data['ORIGINAL_SPONSOR']:\n bill.add_sponsorship(\n name=primary_sponsor,\n entity_type='organization' if \"committee\" in primary_sponsor.lower()\n else 'person',\n primary=True,\n classification=\"original sponsor\"\n )\n for sponsor in bill_data['SPONSOR_NAMES']:\n if sponsor in bill_data['ORIGINAL_SPONSOR']:\n continue\n bill.add_sponsorship(\n name=sponsor,\n entity_type='organization' if \"committee\" in sponsor.lower() else 'person',\n primary=False,\n classification='cosponsor',\n )\n\n # history is backwards\n for event in reversed(bill_data['HISTORY']):\n actor = ('upper' if event['chamber'] == 'Senate'\n else 'lower')\n\n date = event['session_date']\n # append committee names if present\n if 'committee_names' in event:\n action = (event['status'] + ' ' +\n ' and '.join(event['committee_names']))\n else:\n action = event['status']\n\n if event['action_code'] not in ksapi.action_codes:\n self.warning('unknown action code on %s: %s %s' %\n (bill_id, event['action_code'],\n event['status']))\n atype = None\n else:\n atype = ksapi.action_codes[event['action_code']]\n bill.add_action(\n action, date, chamber=actor, classification=atype)\n\n # Versions are exposed in `bill_data['versions'],\n # but lack any descriptive text or identifiers;\n # continue to scrape these from the HTML\n yield from self.scrape_html(bill, session)\n\n yield bill\n\n def scrape_html(self, bill, session):\n 
meta = next(\n each for each in self.jurisdiction.legislative_sessions\n if each['identifier'] == session\n )\n slug = meta['_scraped_name']\n # we have to go to the HTML for the versions & votes\n base_url = 'http://www.kslegislature.org/li/%s/measures/' % slug\n if 'resolution' in bill.classification:\n base_url = 'http://www.kslegislature.org/li/%s/year1/measures/' % slug\n\n url = base_url + bill.identifier.lower() + '/'\n doc = lxml.html.fromstring(self.get(url).text)\n doc.make_links_absolute(url)\n\n bill.add_source(url)\n\n # versions & notes\n version_rows = doc.xpath('//tbody[starts-with(@id, \"version-tab\")]/tr')\n for row in version_rows:\n # version, docs, sn, fn\n tds = row.getchildren()\n title = _clean_spaces(tds[0].text_content().strip())\n doc_url = get_doc_link(tds[1])\n if doc_url:\n bill.add_version_link(title, doc_url, media_type='application/pdf')\n if len(tds) > 2:\n sn_url = get_doc_link(tds[2])\n if sn_url:\n bill.add_document_link(\n title + ' - Supplementary Note', sn_url,\n on_duplicate='ignore'\n )\n if len(tds) > 3:\n if sn_url:\n bill.add_document_link(title + ' - Fiscal Note', sn_url,\n on_duplicate='ignore'\n )\n\n all_links = doc.xpath(\n \"//table[@class='bottom']/tbody[@class='tab-content-sub']/tr/td/a/@href\"\n )\n vote_members_urls = []\n for i in all_links:\n if \"vote_view\" in i:\n vote_members_urls.append(str(i))\n if len(vote_members_urls) > 0:\n for link in vote_members_urls:\n yield from self.parse_vote(bill, link)\n\n history_rows = doc.xpath('//tbody[starts-with(@id, \"history-tab\")]/tr')\n for row in history_rows:\n row_text = row.xpath('.//td[3]')[0].text_content()\n # amendments & reports\n amendment = get_doc_link(row.xpath('.//td[4]')[0])\n if amendment:\n if 'Motion to Amend' in row_text:\n _, offered_by = row_text.split('Motion to Amend -')\n amendment_name = 'Amendment ' + offered_by.strip()\n elif 'Conference committee report now available' in row_text:\n amendment_name = 'Conference Committee Report'\n else:\n amendment_name = row_text.strip()\n bill.add_document_link(_clean_spaces(amendment_name), amendment,\n on_duplicate='ignore')\n\n def parse_vote(self, bill, link):\n # Server sometimes sends proper error headers,\n # sometimes not\n try:\n self.info(\"Get {}\".format(link))\n text = requests.get(link).text\n except requests.exceptions.HTTPError as err:\n self.warning(\"{} fetching vote {}, skipping\".format(err, link))\n return\n\n if 'Page Not Found' in text or 'Page Unavailable' in text:\n self.warning(\"missing vote, skipping\")\n return\n member_doc = lxml.html.fromstring(text)\n motion = member_doc.xpath(\"//div[@id='main_content']/h4/text()\")\n chamber_date_line = ''.join(member_doc.xpath(\"//div[@id='main_content']/h3[1]//text()\"))\n chamber_date_line_words = chamber_date_line.split()\n vote_chamber = chamber_date_line_words[0]\n vote_date = datetime.datetime.strptime(chamber_date_line_words[-1], '%m/%d/%Y')\n vote_status = \" \".join(chamber_date_line_words[2:-2])\n\n opinions = member_doc.xpath(\"//div[@id='main_content']/h3[position() > 1]/text()\")\n if len(opinions) > 0:\n vote_status = vote_status if vote_status.strip() else motion[0]\n vote_chamber = 'upper' if vote_chamber == 'Senate' else 'lower'\n\n for i in opinions:\n try:\n count = int(i[i.find(\"(\") + 1:i.find(\")\")])\n except ValueError:\n # This is likely not a vote-count text chunk\n # It's probably '`On roll call the vote was:`\n pass\n else:\n if \"yea\" in i.lower():\n yes_count = count\n elif \"nay\" in i.lower():\n no_count = count\n elif 
\"present\" in i.lower():\n p_count = count\n elif \"absent\" in i.lower():\n a_count = count\n\n vote = VoteEvent(\n bill=bill,\n start_date=vote_date.strftime('%Y-%m-%d'),\n chamber=vote_chamber,\n motion_text=vote_status,\n result='pass' if yes_count > no_count else 'fail',\n classification='passage',\n )\n vote.pupa_id = link\n\n vote.set_count('yes', yes_count)\n vote.set_count('no', no_count)\n vote.set_count('abstain', p_count)\n vote.set_count('absent', a_count)\n\n vote.add_source(link)\n\n a_links = member_doc.xpath(\"//div[@id='main_content']/a/text()\")\n for i in range(1, len(a_links)):\n if i <= yes_count:\n vote.vote('yes', re.sub(',', '', a_links[i]).split()[0])\n elif no_count != 0 and i > yes_count and i <= yes_count + no_count:\n vote.vote('no', re.sub(',', '', a_links[i]).split()[0])\n else:\n vote.vote('other', re.sub(',', '', a_links[i]).split()[0])\n yield vote\n else:\n self.warning(\"No Votes for: %s\", link)\n\n\ndef get_doc_link(elem):\n # try ODT then PDF\n link = elem.xpath('.//a[contains(@href, \".odt\")]/@href')\n if link:\n return link[0]\n link = elem.xpath('.//a[contains(@href, \".pdf\")]/@href')\n if link:\n return link[0]\n", "path": "openstates/ks/bills.py"}]} | 3,651 | 245 |
gh_patches_debug_5990 | rasdani/github-patches | git_diff | googleapis__python-bigquery-672 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError in `resource_name_to_date()` samples fixture
See this samples [test run](https://source.cloud.google.com/results/invocations/e5c424d5-84a6-4505-ae44-3bc70fa94e44/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigquery%2Fsamples%2Fpython3.6%2Fpresubmit/log). A bug in [this fixture](https://github.com/googleapis/python-bigquery/blob/bd7dbdae5c972b16bafc53c67911eeaa3255a880/samples/snippets/conftest.py#L33-L36) made all of them fail.
> AttributeError: module 'datetime' has no attribute 'strptime'
I wonder how this got in in the first place?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `samples/snippets/conftest.py`
Content:
```
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import datetime
16 import random
17
18 from google.cloud import bigquery
19 import pytest
20
21
22 RESOURCE_PREFIX = "python_bigquery_samples_snippets"
23 RESOURCE_DATE_FORMAT = "%Y%m%d_%H%M%S"
24 RESOURCE_DATE_LENGTH = 4 + 2 + 2 + 1 + 2 + 2 + 2
25
26
27 def resource_prefix() -> str:
28 timestamp = datetime.datetime.utcnow().strftime(RESOURCE_DATE_FORMAT)
29 random_string = hex(random.randrange(1000000))[2:]
30 return f"{RESOURCE_PREFIX}_{timestamp}_{random_string}"
31
32
33 def resource_name_to_date(resource_name: str):
34 start_date = len(RESOURCE_PREFIX) + 1
35 date_string = resource_name[start_date : start_date + RESOURCE_DATE_LENGTH]
36 return datetime.strptime(date_string, RESOURCE_DATE_FORMAT)
37
38
39 @pytest.fixture(scope="session", autouse=True)
40 def cleanup_datasets(bigquery_client: bigquery.Client):
41 yesterday = datetime.datetime.utcnow() - datetime.timedelta(days=1)
42 for dataset in bigquery_client.list_datasets():
43 if (
44 dataset.dataset_id.startswith(RESOURCE_PREFIX)
45 and resource_name_to_date(dataset.dataset_id) < yesterday
46 ):
47 bigquery_client.delete_dataset(
48 dataset, delete_contents=True, not_found_ok=True
49 )
50
51
52 @pytest.fixture(scope="session")
53 def bigquery_client():
54 bigquery_client = bigquery.Client()
55 return bigquery_client
56
57
58 @pytest.fixture(scope="session")
59 def project_id(bigquery_client):
60 return bigquery_client.project
61
62
63 @pytest.fixture(scope="session")
64 def dataset_id(bigquery_client: bigquery.Client, project_id: str):
65 dataset_id = resource_prefix()
66 full_dataset_id = f"{project_id}.{dataset_id}"
67 dataset = bigquery.Dataset(full_dataset_id)
68 bigquery_client.create_dataset(dataset)
69 yield dataset_id
70 bigquery_client.delete_dataset(dataset, delete_contents=True, not_found_ok=True)
71
72
73 @pytest.fixture
74 def bigquery_client_patch(monkeypatch, bigquery_client):
75 monkeypatch.setattr(bigquery, "Client", lambda: bigquery_client)
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/samples/snippets/conftest.py b/samples/snippets/conftest.py
--- a/samples/snippets/conftest.py
+++ b/samples/snippets/conftest.py
@@ -33,7 +33,7 @@
def resource_name_to_date(resource_name: str):
start_date = len(RESOURCE_PREFIX) + 1
date_string = resource_name[start_date : start_date + RESOURCE_DATE_LENGTH]
- return datetime.strptime(date_string, RESOURCE_DATE_FORMAT)
+ return datetime.datetime.strptime(date_string, RESOURCE_DATE_FORMAT)
@pytest.fixture(scope="session", autouse=True)
| {"golden_diff": "diff --git a/samples/snippets/conftest.py b/samples/snippets/conftest.py\n--- a/samples/snippets/conftest.py\n+++ b/samples/snippets/conftest.py\n@@ -33,7 +33,7 @@\n def resource_name_to_date(resource_name: str):\n start_date = len(RESOURCE_PREFIX) + 1\n date_string = resource_name[start_date : start_date + RESOURCE_DATE_LENGTH]\n- return datetime.strptime(date_string, RESOURCE_DATE_FORMAT)\n+ return datetime.datetime.strptime(date_string, RESOURCE_DATE_FORMAT)\n \n \n @pytest.fixture(scope=\"session\", autouse=True)\n", "issue": "AttributeError in `resource_name_to_date()` samples fixture\nSee this samples [test run](https://source.cloud.google.com/results/invocations/e5c424d5-84a6-4505-ae44-3bc70fa94e44/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigquery%2Fsamples%2Fpython3.6%2Fpresubmit/log). A bug in [this fixture](https://github.com/googleapis/python-bigquery/blob/bd7dbdae5c972b16bafc53c67911eeaa3255a880/samples/snippets/conftest.py#L33-L36) made all of them fail.\r\n\r\n> AttributeError: module 'datetime' has no attribute 'strptime'\r\n\r\nI wonder how this got in in the first place?\r\n\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport datetime\nimport random\n\nfrom google.cloud import bigquery\nimport pytest\n\n\nRESOURCE_PREFIX = \"python_bigquery_samples_snippets\"\nRESOURCE_DATE_FORMAT = \"%Y%m%d_%H%M%S\"\nRESOURCE_DATE_LENGTH = 4 + 2 + 2 + 1 + 2 + 2 + 2\n\n\ndef resource_prefix() -> str:\n timestamp = datetime.datetime.utcnow().strftime(RESOURCE_DATE_FORMAT)\n random_string = hex(random.randrange(1000000))[2:]\n return f\"{RESOURCE_PREFIX}_{timestamp}_{random_string}\"\n\n\ndef resource_name_to_date(resource_name: str):\n start_date = len(RESOURCE_PREFIX) + 1\n date_string = resource_name[start_date : start_date + RESOURCE_DATE_LENGTH]\n return datetime.strptime(date_string, RESOURCE_DATE_FORMAT)\n\n\[email protected](scope=\"session\", autouse=True)\ndef cleanup_datasets(bigquery_client: bigquery.Client):\n yesterday = datetime.datetime.utcnow() - datetime.timedelta(days=1)\n for dataset in bigquery_client.list_datasets():\n if (\n dataset.dataset_id.startswith(RESOURCE_PREFIX)\n and resource_name_to_date(dataset.dataset_id) < yesterday\n ):\n bigquery_client.delete_dataset(\n dataset, delete_contents=True, not_found_ok=True\n )\n\n\[email protected](scope=\"session\")\ndef bigquery_client():\n bigquery_client = bigquery.Client()\n return bigquery_client\n\n\[email protected](scope=\"session\")\ndef project_id(bigquery_client):\n return bigquery_client.project\n\n\[email protected](scope=\"session\")\ndef dataset_id(bigquery_client: bigquery.Client, project_id: str):\n dataset_id = resource_prefix()\n full_dataset_id = f\"{project_id}.{dataset_id}\"\n dataset = bigquery.Dataset(full_dataset_id)\n bigquery_client.create_dataset(dataset)\n yield dataset_id\n bigquery_client.delete_dataset(dataset, delete_contents=True, not_found_ok=True)\n\n\[email 
protected]\ndef bigquery_client_patch(monkeypatch, bigquery_client):\n monkeypatch.setattr(bigquery, \"Client\", lambda: bigquery_client)\n", "path": "samples/snippets/conftest.py"}], "after_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport datetime\nimport random\n\nfrom google.cloud import bigquery\nimport pytest\n\n\nRESOURCE_PREFIX = \"python_bigquery_samples_snippets\"\nRESOURCE_DATE_FORMAT = \"%Y%m%d_%H%M%S\"\nRESOURCE_DATE_LENGTH = 4 + 2 + 2 + 1 + 2 + 2 + 2\n\n\ndef resource_prefix() -> str:\n timestamp = datetime.datetime.utcnow().strftime(RESOURCE_DATE_FORMAT)\n random_string = hex(random.randrange(1000000))[2:]\n return f\"{RESOURCE_PREFIX}_{timestamp}_{random_string}\"\n\n\ndef resource_name_to_date(resource_name: str):\n start_date = len(RESOURCE_PREFIX) + 1\n date_string = resource_name[start_date : start_date + RESOURCE_DATE_LENGTH]\n return datetime.datetime.strptime(date_string, RESOURCE_DATE_FORMAT)\n\n\[email protected](scope=\"session\", autouse=True)\ndef cleanup_datasets(bigquery_client: bigquery.Client):\n yesterday = datetime.datetime.utcnow() - datetime.timedelta(days=1)\n for dataset in bigquery_client.list_datasets():\n if (\n dataset.dataset_id.startswith(RESOURCE_PREFIX)\n and resource_name_to_date(dataset.dataset_id) < yesterday\n ):\n bigquery_client.delete_dataset(\n dataset, delete_contents=True, not_found_ok=True\n )\n\n\[email protected](scope=\"session\")\ndef bigquery_client():\n bigquery_client = bigquery.Client()\n return bigquery_client\n\n\[email protected](scope=\"session\")\ndef project_id(bigquery_client):\n return bigquery_client.project\n\n\[email protected](scope=\"session\")\ndef dataset_id(bigquery_client: bigquery.Client, project_id: str):\n dataset_id = resource_prefix()\n full_dataset_id = f\"{project_id}.{dataset_id}\"\n dataset = bigquery.Dataset(full_dataset_id)\n bigquery_client.create_dataset(dataset)\n yield dataset_id\n bigquery_client.delete_dataset(dataset, delete_contents=True, not_found_ok=True)\n\n\[email protected]\ndef bigquery_client_patch(monkeypatch, bigquery_client):\n monkeypatch.setattr(bigquery, \"Client\", lambda: bigquery_client)\n", "path": "samples/snippets/conftest.py"}]} | 1,182 | 132 |
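A minimal reproduction of the `AttributeError` fixed in the record above: `strptime` lives on the `datetime.datetime` class, not on the `datetime` module. The timestamp string below is an arbitrary sample value; the format string is the fixture's `RESOURCE_DATE_FORMAT`.

```python
import datetime

date_string = "20210101_120000"  # arbitrary sample value
fmt = "%Y%m%d_%H%M%S"            # RESOURCE_DATE_FORMAT from the fixture

try:
    datetime.strptime(date_string, fmt)  # module-level call, as in the buggy fixture
except AttributeError as exc:
    print(exc)  # module 'datetime' has no attribute 'strptime'

print(datetime.datetime.strptime(date_string, fmt))  # class-level call, as in the patch
```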
gh_patches_debug_60681 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1830 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Organization view pages result in 500 error
Only on stag. I tested several different orgs.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_search/ckanext/hdx_search/plugin.py`
Content:
```
1 import logging, re
2 import ckan.plugins as plugins
3 import ckan.plugins.toolkit as tk
4 import ckan.lib.plugins as lib_plugins
5
6 def convert_country(q):
7 for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):
8 if re.findall(c['display_name'].lower(),q.lower()):
9 q += ' '+c['name']
10 return q
11
12 class HDXSearchPlugin(plugins.SingletonPlugin):
13 plugins.implements(plugins.IConfigurer, inherit=False)
14 plugins.implements(plugins.IRoutes, inherit=True)
15 plugins.implements(plugins.ITemplateHelpers, inherit=False)
16 plugins.implements(plugins.IPackageController, inherit=True)
17
18 def update_config(self, config):
19 tk.add_template_directory(config, 'templates')
20
21 def get_helpers(self):
22 return {}
23
24 def before_map(self, map):
25 map.connect('search', '/search',
26 controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
27 map.connect('simple_search',
28 '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')
29 return map
30
31 def after_map(self, map):
32 map.connect('search', '/search',
33 controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
34 map.connect('simple_search',
35 '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')
36 return map
37
38 def before_search(self, search_params):
39 search_params['q'] = convert_country(search_params['q'])
40 if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:
41 search_params['facet.field'].append('vocab_Topics')
42
43 # If indicator flag is set, search only that type
44 if 'ext_indicator' in search_params['extras']:
45 if int(search_params['extras']['ext_indicator']) == 1:
46 search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'
47 elif int(search_params['extras']['ext_indicator']) == 0:
48 search_params['fq'] = search_params[
49 'fq'] + ' -extras_indicator:1'
50 return search_params
51
52 def after_search(self, search_results, search_params):
53 return search_results
54
55 def before_view(self, pkg_dict):
56 return pkg_dict
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py
+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
@@ -36,7 +36,7 @@
return map
def before_search(self, search_params):
- search_params['q'] = convert_country(search_params['q'])
+ #search_params['q'] = convert_country(search_params['q'])
if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:
search_params['facet.field'].append('vocab_Topics')
| {"golden_diff": "diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n@@ -36,7 +36,7 @@\n return map\n \n def before_search(self, search_params):\n- search_params['q'] = convert_country(search_params['q'])\n+ #search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n", "issue": "Organization view pages result in 500 error\nOnly on stag. I tested several different orgs. \n\n\n\n", "before_files": [{"content": "import logging, re\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\ndef convert_country(q):\n for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):\n if re.findall(c['display_name'].lower(),q.lower()):\n q += ' '+c['name']\n return q\n\nclass HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers, inherit=False)\n plugins.implements(plugins.IPackageController, inherit=True)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def before_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def after_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def before_search(self, search_params):\n search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n\n # If indicator flag is set, search only that type\n if 'ext_indicator' in search_params['extras']:\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n search_params['fq'] = search_params[\n 'fq'] + ' -extras_indicator:1'\n return search_params\n\n def after_search(self, search_results, search_params):\n return search_results\n\n def before_view(self, pkg_dict):\n return pkg_dict\n", "path": "ckanext-hdx_search/ckanext/hdx_search/plugin.py"}], "after_files": [{"content": "import logging, re\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\ndef convert_country(q):\n for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):\n if re.findall(c['display_name'].lower(),q.lower()):\n q += ' '+c['name']\n return q\n\nclass HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers, 
inherit=False)\n plugins.implements(plugins.IPackageController, inherit=True)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def before_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def after_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def before_search(self, search_params):\n #search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n\n # If indicator flag is set, search only that type\n if 'ext_indicator' in search_params['extras']:\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n search_params['fq'] = search_params[\n 'fq'] + ' -extras_indicator:1'\n return search_params\n\n def after_search(self, search_results, search_params):\n return search_results\n\n def before_view(self, pkg_dict):\n return pkg_dict\n", "path": "ckanext-hdx_search/ckanext/hdx_search/plugin.py"}]} | 994 | 168 |
gh_patches_debug_11541 | rasdani/github-patches | git_diff | Kinto__kinto-555 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
500 error when json validation fails with additional properties
`<ValidationError: "Additional properties are not allowed ('blockID', 'id' were unexpected)">`
```
File "/home/mathieu/Code/Mozilla/cliquet/cliquet/resource/__init__.py", line 395, in put
new_record = self.process_record(post_record, old=existing)
File "/home/mathieu/Code/Mozilla/kinto/kinto/views/records.py", line 71, in process_record
field = e.path.pop() if e.path else e.validator_value.pop()
AttributeError: 'bool' object has no attribute 'pop' lang=None uid=67f1f8fc8d3dd228096950c20d6515c4786a675264741bec2995a560519aad87
```
Possible fix:
``` diff
except jsonschema_exceptions.ValidationError as e:
- field = e.path.pop() if e.path else e.validator_value.pop()
+ try:
+ field = e.path.pop() if e.path else e.validator_value.pop()
+ except AttributeError:
+ field = None
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/records.py`
Content:
```
1 import copy
2
3 import jsonschema
4 from cliquet import resource
5 from cliquet.errors import raise_invalid
6 from jsonschema import exceptions as jsonschema_exceptions
7 from pyramid.security import Authenticated
8 from pyramid.settings import asbool
9
10 from kinto.views import object_exists_or_404
11
12
13 class RecordSchema(resource.ResourceSchema):
14 class Options:
15 preserve_unknown = True
16
17
18 _parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'
19
20
21 @resource.register(name='record',
22 collection_path=_parent_path + '/records',
23 record_path=_parent_path + '/records/{{id}}')
24 class Record(resource.ShareableResource):
25
26 mapping = RecordSchema()
27 schema_field = 'schema'
28
29 def __init__(self, *args, **kwargs):
30 super(Record, self).__init__(*args, **kwargs)
31
32 # Check if already fetched before (in batch).
33 collections = self.request.bound_data.setdefault('collections', {})
34 collection_uri = self.get_parent_id(self.request)
35 if collection_uri not in collections:
36 # Unknown yet, fetch from storage.
37 collection_parent_id = '/buckets/%s' % self.bucket_id
38 collection = object_exists_or_404(self.request,
39 collection_id='collection',
40 parent_id=collection_parent_id,
41 object_id=self.collection_id)
42 collections[collection_uri] = collection
43
44 self._collection = collections[collection_uri]
45
46 def get_parent_id(self, request):
47 self.bucket_id = request.matchdict['bucket_id']
48 self.collection_id = request.matchdict['collection_id']
49 return '/buckets/%s/collections/%s' % (self.bucket_id,
50 self.collection_id)
51
52 def is_known_field(self, field_name):
53 """Without schema, any field is considered as known."""
54 return True
55
56 def process_record(self, new, old=None):
57 """Validate records against collection schema, if any."""
58 new = super(Record, self).process_record(new, old)
59
60 schema = self._collection.get('schema')
61 settings = self.request.registry.settings
62 schema_validation = 'experimental_collection_schema_validation'
63 if not schema or not asbool(settings.get(schema_validation)):
64 return new
65
66 collection_timestamp = self._collection[self.model.modified_field]
67
68 try:
69 stripped = copy.deepcopy(new)
70 stripped.pop(self.model.id_field, None)
71 stripped.pop(self.model.modified_field, None)
72 stripped.pop(self.model.permissions_field, None)
73 stripped.pop(self.schema_field, None)
74 jsonschema.validate(stripped, schema)
75 except jsonschema_exceptions.ValidationError as e:
76 field = e.path.pop() if e.path else e.validator_value.pop()
77 raise_invalid(self.request, name=field, description=e.message)
78
79 new[self.schema_field] = collection_timestamp
80 return new
81
82 def collection_get(self):
83 result = super(Record, self).collection_get()
84 self._handle_cache_expires(self.request.response)
85 return result
86
87 def get(self):
88 result = super(Record, self).get()
89 self._handle_cache_expires(self.request.response)
90 return result
91
92 def _handle_cache_expires(self, response):
93 """If the parent collection defines a ``cache_expires`` attribute,
94 then cache-control response headers are sent.
95
96 .. note::
97
98 Those headers are also sent if the
99 ``kinto.record_cache_expires_seconds`` setting is defined.
100 """
101 is_anonymous = Authenticated not in self.request.effective_principals
102 if not is_anonymous:
103 return
104
105 cache_expires = self._collection.get('cache_expires')
106 if cache_expires is None:
107 by_bucket = 'kinto.%s_record_cache_expires_seconds' % (
108 self.bucket_id)
109 by_collection = '%s_%s_record_cache_expires_seconds' % (
110 self.bucket_id, self.collection_id)
111 settings = self.request.registry.settings
112 cache_expires = settings.get(by_collection,
113 settings.get(by_bucket))
114
115 if cache_expires is not None:
116 response.cache_expires(seconds=cache_expires)
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/views/records.py b/kinto/views/records.py
--- a/kinto/views/records.py
+++ b/kinto/views/records.py
@@ -73,7 +73,10 @@
stripped.pop(self.schema_field, None)
jsonschema.validate(stripped, schema)
except jsonschema_exceptions.ValidationError as e:
- field = e.path.pop() if e.path else e.validator_value.pop()
+ try:
+ field = e.path.pop() if e.path else e.validator_value.pop()
+ except AttributeError:
+ field = None
raise_invalid(self.request, name=field, description=e.message)
new[self.schema_field] = collection_timestamp
| {"golden_diff": "diff --git a/kinto/views/records.py b/kinto/views/records.py\n--- a/kinto/views/records.py\n+++ b/kinto/views/records.py\n@@ -73,7 +73,10 @@\n stripped.pop(self.schema_field, None)\n jsonschema.validate(stripped, schema)\n except jsonschema_exceptions.ValidationError as e:\n- field = e.path.pop() if e.path else e.validator_value.pop()\n+ try:\n+ field = e.path.pop() if e.path else e.validator_value.pop()\n+ except AttributeError:\n+ field = None\n raise_invalid(self.request, name=field, description=e.message)\n \n new[self.schema_field] = collection_timestamp\n", "issue": "500 error when json validation fails with additional properties\n`<ValidationError: \"Additional properties are not allowed ('blockID', 'id' were unexpected)\">`\n\n```\n File \"/home/mathieu/Code/Mozilla/cliquet/cliquet/resource/__init__.py\", line 395, in put\n new_record = self.process_record(post_record, old=existing)\n File \"/home/mathieu/Code/Mozilla/kinto/kinto/views/records.py\", line 71, in process_record\n field = e.path.pop() if e.path else e.validator_value.pop()\nAttributeError: 'bool' object has no attribute 'pop' lang=None uid=67f1f8fc8d3dd228096950c20d6515c4786a675264741bec2995a560519aad87\n\n```\n\nPossible fix:\n\n``` diff\n except jsonschema_exceptions.ValidationError as e:\n- field = e.path.pop() if e.path else e.validator_value.pop()\n+ try:\n+ field = e.path.pop() if e.path else e.validator_value.pop()\n+ except AttributeError:\n+ field = None\n```\n\n500 error when json validation fails with additional properties\n`<ValidationError: \"Additional properties are not allowed ('blockID', 'id' were unexpected)\">`\n\n```\n File \"/home/mathieu/Code/Mozilla/cliquet/cliquet/resource/__init__.py\", line 395, in put\n new_record = self.process_record(post_record, old=existing)\n File \"/home/mathieu/Code/Mozilla/kinto/kinto/views/records.py\", line 71, in process_record\n field = e.path.pop() if e.path else e.validator_value.pop()\nAttributeError: 'bool' object has no attribute 'pop' lang=None uid=67f1f8fc8d3dd228096950c20d6515c4786a675264741bec2995a560519aad87\n\n```\n\nPossible fix:\n\n``` diff\n except jsonschema_exceptions.ValidationError as e:\n- field = e.path.pop() if e.path else e.validator_value.pop()\n+ try:\n+ field = e.path.pop() if e.path else e.validator_value.pop()\n+ except AttributeError:\n+ field = None\n```\n\n", "before_files": [{"content": "import copy\n\nimport jsonschema\nfrom cliquet import resource\nfrom cliquet.errors import raise_invalid\nfrom jsonschema import exceptions as jsonschema_exceptions\nfrom pyramid.security import Authenticated\nfrom pyramid.settings import asbool\n\nfrom kinto.views import object_exists_or_404\n\n\nclass RecordSchema(resource.ResourceSchema):\n class Options:\n preserve_unknown = True\n\n\n_parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'\n\n\[email protected](name='record',\n collection_path=_parent_path + '/records',\n record_path=_parent_path + '/records/{{id}}')\nclass Record(resource.ShareableResource):\n\n mapping = RecordSchema()\n schema_field = 'schema'\n\n def __init__(self, *args, **kwargs):\n super(Record, self).__init__(*args, **kwargs)\n\n # Check if already fetched before (in batch).\n collections = self.request.bound_data.setdefault('collections', {})\n collection_uri = self.get_parent_id(self.request)\n if collection_uri not in collections:\n # Unknown yet, fetch from storage.\n collection_parent_id = '/buckets/%s' % self.bucket_id\n collection = object_exists_or_404(self.request,\n 
collection_id='collection',\n parent_id=collection_parent_id,\n object_id=self.collection_id)\n collections[collection_uri] = collection\n\n self._collection = collections[collection_uri]\n\n def get_parent_id(self, request):\n self.bucket_id = request.matchdict['bucket_id']\n self.collection_id = request.matchdict['collection_id']\n return '/buckets/%s/collections/%s' % (self.bucket_id,\n self.collection_id)\n\n def is_known_field(self, field_name):\n \"\"\"Without schema, any field is considered as known.\"\"\"\n return True\n\n def process_record(self, new, old=None):\n \"\"\"Validate records against collection schema, if any.\"\"\"\n new = super(Record, self).process_record(new, old)\n\n schema = self._collection.get('schema')\n settings = self.request.registry.settings\n schema_validation = 'experimental_collection_schema_validation'\n if not schema or not asbool(settings.get(schema_validation)):\n return new\n\n collection_timestamp = self._collection[self.model.modified_field]\n\n try:\n stripped = copy.deepcopy(new)\n stripped.pop(self.model.id_field, None)\n stripped.pop(self.model.modified_field, None)\n stripped.pop(self.model.permissions_field, None)\n stripped.pop(self.schema_field, None)\n jsonschema.validate(stripped, schema)\n except jsonschema_exceptions.ValidationError as e:\n field = e.path.pop() if e.path else e.validator_value.pop()\n raise_invalid(self.request, name=field, description=e.message)\n\n new[self.schema_field] = collection_timestamp\n return new\n\n def collection_get(self):\n result = super(Record, self).collection_get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def get(self):\n result = super(Record, self).get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def _handle_cache_expires(self, response):\n \"\"\"If the parent collection defines a ``cache_expires`` attribute,\n then cache-control response headers are sent.\n\n .. 
note::\n\n Those headers are also sent if the\n ``kinto.record_cache_expires_seconds`` setting is defined.\n \"\"\"\n is_anonymous = Authenticated not in self.request.effective_principals\n if not is_anonymous:\n return\n\n cache_expires = self._collection.get('cache_expires')\n if cache_expires is None:\n by_bucket = 'kinto.%s_record_cache_expires_seconds' % (\n self.bucket_id)\n by_collection = '%s_%s_record_cache_expires_seconds' % (\n self.bucket_id, self.collection_id)\n settings = self.request.registry.settings\n cache_expires = settings.get(by_collection,\n settings.get(by_bucket))\n\n if cache_expires is not None:\n response.cache_expires(seconds=cache_expires)\n", "path": "kinto/views/records.py"}], "after_files": [{"content": "import copy\n\nimport jsonschema\nfrom cliquet import resource\nfrom cliquet.errors import raise_invalid\nfrom jsonschema import exceptions as jsonschema_exceptions\nfrom pyramid.security import Authenticated\nfrom pyramid.settings import asbool\n\nfrom kinto.views import object_exists_or_404\n\n\nclass RecordSchema(resource.ResourceSchema):\n class Options:\n preserve_unknown = True\n\n\n_parent_path = '/buckets/{{bucket_id}}/collections/{{collection_id}}'\n\n\[email protected](name='record',\n collection_path=_parent_path + '/records',\n record_path=_parent_path + '/records/{{id}}')\nclass Record(resource.ShareableResource):\n\n mapping = RecordSchema()\n schema_field = 'schema'\n\n def __init__(self, *args, **kwargs):\n super(Record, self).__init__(*args, **kwargs)\n\n # Check if already fetched before (in batch).\n collections = self.request.bound_data.setdefault('collections', {})\n collection_uri = self.get_parent_id(self.request)\n if collection_uri not in collections:\n # Unknown yet, fetch from storage.\n collection_parent_id = '/buckets/%s' % self.bucket_id\n collection = object_exists_or_404(self.request,\n collection_id='collection',\n parent_id=collection_parent_id,\n object_id=self.collection_id)\n collections[collection_uri] = collection\n\n self._collection = collections[collection_uri]\n\n def get_parent_id(self, request):\n self.bucket_id = request.matchdict['bucket_id']\n self.collection_id = request.matchdict['collection_id']\n return '/buckets/%s/collections/%s' % (self.bucket_id,\n self.collection_id)\n\n def is_known_field(self, field_name):\n \"\"\"Without schema, any field is considered as known.\"\"\"\n return True\n\n def process_record(self, new, old=None):\n \"\"\"Validate records against collection schema, if any.\"\"\"\n new = super(Record, self).process_record(new, old)\n\n schema = self._collection.get('schema')\n settings = self.request.registry.settings\n schema_validation = 'experimental_collection_schema_validation'\n if not schema or not asbool(settings.get(schema_validation)):\n return new\n\n collection_timestamp = self._collection[self.model.modified_field]\n\n try:\n stripped = copy.deepcopy(new)\n stripped.pop(self.model.id_field, None)\n stripped.pop(self.model.modified_field, None)\n stripped.pop(self.model.permissions_field, None)\n stripped.pop(self.schema_field, None)\n jsonschema.validate(stripped, schema)\n except jsonschema_exceptions.ValidationError as e:\n try:\n field = e.path.pop() if e.path else e.validator_value.pop()\n except AttributeError:\n field = None\n raise_invalid(self.request, name=field, description=e.message)\n\n new[self.schema_field] = collection_timestamp\n return new\n\n def collection_get(self):\n result = super(Record, self).collection_get()\n 
self._handle_cache_expires(self.request.response)\n return result\n\n def get(self):\n result = super(Record, self).get()\n self._handle_cache_expires(self.request.response)\n return result\n\n def _handle_cache_expires(self, response):\n \"\"\"If the parent collection defines a ``cache_expires`` attribute,\n then cache-control response headers are sent.\n\n .. note::\n\n Those headers are also sent if the\n ``kinto.record_cache_expires_seconds`` setting is defined.\n \"\"\"\n is_anonymous = Authenticated not in self.request.effective_principals\n if not is_anonymous:\n return\n\n cache_expires = self._collection.get('cache_expires')\n if cache_expires is None:\n by_bucket = 'kinto.%s_record_cache_expires_seconds' % (\n self.bucket_id)\n by_collection = '%s_%s_record_cache_expires_seconds' % (\n self.bucket_id, self.collection_id)\n settings = self.request.registry.settings\n cache_expires = settings.get(by_collection,\n settings.get(by_bucket))\n\n if cache_expires is not None:\n response.cache_expires(seconds=cache_expires)\n", "path": "kinto/views/records.py"}]} | 1,910 | 151 |
gh_patches_debug_5673 | rasdani/github-patches | git_diff | scikit-hep__pyhf-1546 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Different ordering of channels between model.config.channels and mode.config.channel_nbins
# Description
We've recently observed that the ordering of channels outputted from `model.config.channels` differs from the order obtained with `model.config.channel_nbins`. This isn't really a bug, but more a quirk which we thought would be worth bringing to the attention of the developers. We ran into some issues when breaking up the `expected_data` list into individual channels by looping through the `model.config.channel_nbins` ordering, rather than the `model.config.channels` ordering (the `model.config.channels` order matches that of the model). Hopefully this issue helps save another user some time in the future, at very least.
# Expected Behavior
We expected that the order of the channels in the two dictionaries from `model.config.channels` and `model.config.channel_nbins` would be the same.
# Actual Behavior
The ordering of the channels is different. As an example, we are generating workspaces using two categories, and we're separating our data by year (2016, 2017, and 2018). This gives us six channels total. The outputs are:
```
(Pdb) model.config.channels
['vbf_channel_16_high_cat', 'vbf_channel_16_low_cat', 'vbf_channel_17_high_cat', 'vbf_channel_17_low_cat', 'vbf_channel_18_high_cat', 'vbf_channel_18_low_cat']
(Pdb) model.config.channel_nbins
{'vbf_channel_16_low_cat': 12, 'vbf_channel_16_high_cat': 18, 'vbf_channel_17_low_cat': 12, 'vbf_channel_17_high_cat': 18, 'vbf_channel_18_low_cat': 12, 'vbf_channel_18_high_cat': 18}
```
I believe that `model.config.channels` is possibly re-ordering the channels so that the names are in alphabetical order. I have not confirmed this, though. The workspace .json file is filled with the ordering produced by `model.config.channel_nbins`.
# Steps to Reproduce
I'm using pyhf version 0.6.2 along with python 3.8.8.
I can make a dummy workspace for this issue, but I thought that since this is a pretty small issue, it might be overkill. Please let me know if this would be helpful, though.
# Checklist
- [ ] Run `git fetch` to get the most up to date version of `master`
- [X] Searched through existing Issues to confirm this is not a duplicate issue
- [X] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue
Thanks! -Rachel
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyhf/mixins.py`
Content:
```
1 import logging
2
3 log = logging.getLogger(__name__)
4
5
6 class _ChannelSummaryMixin:
7 """
8 A mixin that provides summary data of the provided channels.
9
10 This mixin will forward all other information to other classes defined in the Child class.
11
12 Args:
13 **channels: A list of channels to provide summary information about. Follows the `defs.json#/definitions/channel` schema.
14 """
15
16 def __init__(self, *args, **kwargs):
17 channels = kwargs.pop('channels')
18 super().__init__(*args, **kwargs)
19 self.channels = []
20 self.samples = []
21 self.parameters = []
22 self.modifiers = []
23 # keep track of the width of each channel (how many bins)
24 self.channel_nbins = {}
25 # need to keep track in which order we added the constraints
26 # so that we can generate correctly-ordered data
27 for channel in channels:
28 self.channels.append(channel['name'])
29 self.channel_nbins[channel['name']] = len(channel['samples'][0]['data'])
30 for sample in channel['samples']:
31 self.samples.append(sample['name'])
32 for modifier_def in sample['modifiers']:
33 self.parameters.append(modifier_def['name'])
34 self.modifiers.append(
35 (
36 modifier_def['name'], # mod name
37 modifier_def['type'], # mod type
38 )
39 )
40
41 self.channels = sorted(list(set(self.channels)))
42 self.samples = sorted(list(set(self.samples)))
43 self.parameters = sorted(list(set(self.parameters)))
44 self.modifiers = sorted(list(set(self.modifiers)))
45
46 self.channel_slices = {}
47 begin = 0
48 for c in self.channels:
49 end = begin + self.channel_nbins[c]
50 self.channel_slices[c] = slice(begin, end)
51 begin = end
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pyhf/mixins.py b/src/pyhf/mixins.py
--- a/src/pyhf/mixins.py
+++ b/src/pyhf/mixins.py
@@ -42,6 +42,9 @@
self.samples = sorted(list(set(self.samples)))
self.parameters = sorted(list(set(self.parameters)))
self.modifiers = sorted(list(set(self.modifiers)))
+ self.channel_nbins = {
+ channel: self.channel_nbins[channel] for channel in self.channels
+ }
self.channel_slices = {}
begin = 0
| {"golden_diff": "diff --git a/src/pyhf/mixins.py b/src/pyhf/mixins.py\n--- a/src/pyhf/mixins.py\n+++ b/src/pyhf/mixins.py\n@@ -42,6 +42,9 @@\n self.samples = sorted(list(set(self.samples)))\n self.parameters = sorted(list(set(self.parameters)))\n self.modifiers = sorted(list(set(self.modifiers)))\n+ self.channel_nbins = {\n+ channel: self.channel_nbins[channel] for channel in self.channels\n+ }\n \n self.channel_slices = {}\n begin = 0\n", "issue": "Different ordering of channels between model.config.channels and mode.config.channel_nbins\n# Description\r\n\r\nWe've recently observed that the ordering of channels outputted from `model.config.channels` differs from the order obtained with `model.config.channel_nbins`. This isn't really a bug, but more a quirk which we thought would be worth bringing to the attention of the developers. We ran into some issues when breaking up the `expected_data` list into individual channels by looping through the `model.config.channel_nbins` ordering, rather than the `model.config.channels` ordering (the `model.config.channels` order matches that of the model). Hopefully this issue helps save another user some time in the future, at very least. \r\n\r\n# Expected Behavior\r\n\r\nWe expected that the order of the channels in the two dictionaries from `model.config.channels` and `model.config.channel_nbins` would be the same. \r\n\r\n# Actual Behavior\r\n\r\nThe ordering of the channels is different. As an example, we are generating workspaces using two categories, and we're separating our data by year (2016, 2017, and 2018). This gives us six channels total. The outputs are: \r\n```\r\n(Pdb) model.config.channels\r\n['vbf_channel_16_high_cat', 'vbf_channel_16_low_cat', 'vbf_channel_17_high_cat', 'vbf_channel_17_low_cat', 'vbf_channel_18_high_cat', 'vbf_channel_18_low_cat']\r\n\r\n(Pdb) model.config.channel_nbins\r\n{'vbf_channel_16_low_cat': 12, 'vbf_channel_16_high_cat': 18, 'vbf_channel_17_low_cat': 12, 'vbf_channel_17_high_cat': 18, 'vbf_channel_18_low_cat': 12, 'vbf_channel_18_high_cat': 18}\r\n```\r\nI believe that `model.config.channels` is possibly re-ordering the channels so that the names are in alphabetical order. I have not confirmed this, though. The workspace .json file is filled with the ordering produced by `model.config.channel_nbins`. \r\n\r\n# Steps to Reproduce\r\n\r\nI'm using pyhf version 0.6.2 along with python 3.8.8. \r\nI can make a dummy workspace for this issue, but I thought that since this is a pretty small issue, it might be overkill. Please let me know if this would be helpful, though. \r\n\r\n# Checklist\r\n\r\n- [ ] Run `git fetch` to get the most up to date version of `master`\r\n- [X] Searched through existing Issues to confirm this is not a duplicate issue\r\n- [X] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue\r\n\r\n\r\nThanks! -Rachel \r\n\n", "before_files": [{"content": "import logging\n\nlog = logging.getLogger(__name__)\n\n\nclass _ChannelSummaryMixin:\n \"\"\"\n A mixin that provides summary data of the provided channels.\n\n This mixin will forward all other information to other classes defined in the Child class.\n\n Args:\n **channels: A list of channels to provide summary information about. 
Follows the `defs.json#/definitions/channel` schema.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n channels = kwargs.pop('channels')\n super().__init__(*args, **kwargs)\n self.channels = []\n self.samples = []\n self.parameters = []\n self.modifiers = []\n # keep track of the width of each channel (how many bins)\n self.channel_nbins = {}\n # need to keep track in which order we added the constraints\n # so that we can generate correctly-ordered data\n for channel in channels:\n self.channels.append(channel['name'])\n self.channel_nbins[channel['name']] = len(channel['samples'][0]['data'])\n for sample in channel['samples']:\n self.samples.append(sample['name'])\n for modifier_def in sample['modifiers']:\n self.parameters.append(modifier_def['name'])\n self.modifiers.append(\n (\n modifier_def['name'], # mod name\n modifier_def['type'], # mod type\n )\n )\n\n self.channels = sorted(list(set(self.channels)))\n self.samples = sorted(list(set(self.samples)))\n self.parameters = sorted(list(set(self.parameters)))\n self.modifiers = sorted(list(set(self.modifiers)))\n\n self.channel_slices = {}\n begin = 0\n for c in self.channels:\n end = begin + self.channel_nbins[c]\n self.channel_slices[c] = slice(begin, end)\n begin = end\n", "path": "src/pyhf/mixins.py"}], "after_files": [{"content": "import logging\n\nlog = logging.getLogger(__name__)\n\n\nclass _ChannelSummaryMixin:\n \"\"\"\n A mixin that provides summary data of the provided channels.\n\n This mixin will forward all other information to other classes defined in the Child class.\n\n Args:\n **channels: A list of channels to provide summary information about. Follows the `defs.json#/definitions/channel` schema.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n channels = kwargs.pop('channels')\n super().__init__(*args, **kwargs)\n self.channels = []\n self.samples = []\n self.parameters = []\n self.modifiers = []\n # keep track of the width of each channel (how many bins)\n self.channel_nbins = {}\n # need to keep track in which order we added the constraints\n # so that we can generate correctly-ordered data\n for channel in channels:\n self.channels.append(channel['name'])\n self.channel_nbins[channel['name']] = len(channel['samples'][0]['data'])\n for sample in channel['samples']:\n self.samples.append(sample['name'])\n for modifier_def in sample['modifiers']:\n self.parameters.append(modifier_def['name'])\n self.modifiers.append(\n (\n modifier_def['name'], # mod name\n modifier_def['type'], # mod type\n )\n )\n\n self.channels = sorted(list(set(self.channels)))\n self.samples = sorted(list(set(self.samples)))\n self.parameters = sorted(list(set(self.parameters)))\n self.modifiers = sorted(list(set(self.modifiers)))\n self.channel_nbins = {\n channel: self.channel_nbins[channel] for channel in self.channels\n }\n\n self.channel_slices = {}\n begin = 0\n for c in self.channels:\n end = begin + self.channel_nbins[c]\n self.channel_slices[c] = slice(begin, end)\n begin = end\n", "path": "src/pyhf/mixins.py"}]} | 1,351 | 124 |
gh_patches_debug_12861 | rasdani/github-patches | git_diff | python-discord__bot-352 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bot changing off-topic channel names more than once per daily cycle
The bot should change the channel names of the three off-topic channels only once after hitting UTC midnight. However, we've noticed that it may attempt to set new channel names far more often than that. The root of the cause is still unknown, but after looking at the recent audit log screenshotted by Scragly, I've come up with a conjecture to what's happening.
If you take a look at the logs below, then you'll notice that most of the channel name changes actually happen just before the whole hour (9:59 in Scragly's local time). My guess is that the sleep duration, which is set based on "the start of the current day" + "1 day time delta", is off by one second. The odd thing is that it's not obvious to me from the code why this happens.
However, if this is true, then it would explain the current behavior: The background task is triggered at 23:59:59 (UTC), it will cycle the channel names, and, **as we're still on the same day**, calculate a sleep second delta based on the previous midnight (which will be 0, since it's trying to find the seconds to 23:59:59, which it already is), so it will async.sleep for 0 seconds and run itself again.
The fact that it doesn't trigger more often is then caused because it needs to make API calls and has other `await` points in the execution.
Since this behavior very much looks like what Discord would call "API Abuse", I think it's important to put in a bug fix as quickly as possible. Since the only thing I can think of is the off-by-one error described above, my proposal is aim for 1 minute past midnight, so we never run into "we're still on the same day" issues again:
```py
today_at_midnight = datetime.utcnow().replace(microsecond=0, second=0, minute=0, hour=0)
next_midnight = today_at_midnight + timedelta(days=1, minutes=1)
seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds
await asyncio.sleep(seconds_to_sleep)
```
If that doesn't fix it, we need to investigate further, but I think this will work.
---
**Edit: some additional confirmation of the off-by-one-second error:**
```
Apr 15 08:08:35 pd.beardfist.com Bot: | bot.cogs.off_topic_names | DEBUG | update_names: seconds to sleep 24
```
I'd set the name change for `08:09:00`, but sleeping for 24 seconds at `08:08:35` will cause it to "wake up" at `08:08:59`.
---

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/cogs/off_topic_names.py`
Content:
```
1 import asyncio
2 import logging
3 from datetime import datetime, timedelta
4
5 from discord import Colour, Embed
6 from discord.ext.commands import BadArgument, Bot, Context, Converter, group
7
8 from bot.constants import Channels, Keys, MODERATION_ROLES, URLs
9 from bot.decorators import with_role
10 from bot.pagination import LinePaginator
11
12
13 CHANNELS = (Channels.off_topic_0, Channels.off_topic_1, Channels.off_topic_2)
14 log = logging.getLogger(__name__)
15
16
17 class OffTopicName(Converter):
18 """A converter that ensures an added off-topic name is valid."""
19
20 @staticmethod
21 async def convert(ctx: Context, argument: str):
22 allowed_characters = ("-", "’", "'", "`")
23
24 if not (2 <= len(argument) <= 96):
25 raise BadArgument("Channel name must be between 2 and 96 chars long")
26
27 elif not all(c.isalnum() or c in allowed_characters for c in argument):
28 raise BadArgument(
29 "Channel name must only consist of "
30 "alphanumeric characters, minus signs or apostrophes."
31 )
32
33 elif not argument.islower():
34 raise BadArgument("Channel name must be lowercase")
35
36 # Replace some unusable apostrophe-like characters with "’".
37 return argument.replace("'", "’").replace("`", "’")
38
39
40 async def update_names(bot: Bot, headers: dict):
41 """
42 The background updater task that performs a channel name update daily.
43
44 Args:
45 bot (Bot):
46 The running bot instance, used for fetching data from the
47 website via the bot's `http_session`.
48 """
49
50 while True:
51 today_at_midnight = datetime.utcnow().replace(microsecond=0, second=0, minute=0, hour=0)
52 next_midnight = today_at_midnight + timedelta(days=1)
53 seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds
54 await asyncio.sleep(seconds_to_sleep)
55
56 response = await bot.http_session.get(
57 f'{URLs.site_off_topic_names_api}?random_items=3',
58 headers=headers
59 )
60 channel_0_name, channel_1_name, channel_2_name = await response.json()
61 channel_0, channel_1, channel_2 = (bot.get_channel(channel_id) for channel_id in CHANNELS)
62
63 await channel_0.edit(name=f'ot0-{channel_0_name}')
64 await channel_1.edit(name=f'ot1-{channel_1_name}')
65 await channel_2.edit(name=f'ot2-{channel_2_name}')
66 log.debug(
67 "Updated off-topic channel names to"
68 f" {channel_0_name}, {channel_1_name} and {channel_2_name}"
69 )
70
71
72 class OffTopicNames:
73 """Commands related to managing the off-topic category channel names."""
74
75 def __init__(self, bot: Bot):
76 self.bot = bot
77 self.headers = {"X-API-KEY": Keys.site_api}
78 self.updater_task = None
79
80 def __cleanup(self):
81 if self.updater_task is not None:
82 self.updater_task.cancel()
83
84 async def on_ready(self):
85 if self.updater_task is None:
86 coro = update_names(self.bot, self.headers)
87 self.updater_task = await self.bot.loop.create_task(coro)
88
89 @group(name='otname', aliases=('otnames', 'otn'), invoke_without_command=True)
90 @with_role(*MODERATION_ROLES)
91 async def otname_group(self, ctx):
92 """Add or list items from the off-topic channel name rotation."""
93
94 await ctx.invoke(self.bot.get_command("help"), "otname")
95
96 @otname_group.command(name='add', aliases=('a',))
97 @with_role(*MODERATION_ROLES)
98 async def add_command(self, ctx, name: OffTopicName):
99 """Adds a new off-topic name to the rotation."""
100
101 result = await self.bot.http_session.post(
102 URLs.site_off_topic_names_api,
103 headers=self.headers,
104 params={'name': name}
105 )
106
107 response = await result.json()
108
109 if result.status == 200:
110 log.info(
111 f"{ctx.author.name}#{ctx.author.discriminator}"
112 f" added the off-topic channel name '{name}"
113 )
114 await ctx.send(":ok_hand:")
115 else:
116 error_reason = response.get('message', "No reason provided.")
117 await ctx.send(f":warning: got non-200 from the API: {error_reason}")
118
119 @otname_group.command(name='delete', aliases=('remove', 'rm', 'del', 'd'))
120 @with_role(*MODERATION_ROLES)
121 async def delete_command(self, ctx, name: OffTopicName):
122 """Removes a off-topic name from the rotation."""
123
124 result = await self.bot.http_session.delete(
125 URLs.site_off_topic_names_api,
126 headers=self.headers,
127 params={'name': name}
128 )
129
130 response = await result.json()
131
132 if result.status == 200:
133 if response['deleted'] == 0:
134 await ctx.send(f":warning: No name matching `{name}` was found in the database.")
135 else:
136 log.info(
137 f"{ctx.author.name}#{ctx.author.discriminator}"
138 f" deleted the off-topic channel name '{name}"
139 )
140 await ctx.send(":ok_hand:")
141 else:
142 error_reason = response.get('message', "No reason provided.")
143 await ctx.send(f":warning: got non-200 from the API: {error_reason}")
144
145 @otname_group.command(name='list', aliases=('l',))
146 @with_role(*MODERATION_ROLES)
147 async def list_command(self, ctx):
148 """
149 Lists all currently known off-topic channel names in a paginator.
150 Restricted to Moderator and above to not spoil the surprise.
151 """
152
153 result = await self.bot.http_session.get(
154 URLs.site_off_topic_names_api,
155 headers=self.headers
156 )
157 response = await result.json()
158 lines = sorted(f"• {name}" for name in response)
159
160 embed = Embed(
161 title=f"Known off-topic names (`{len(response)}` total)",
162 colour=Colour.blue()
163 )
164 await LinePaginator.paginate(lines, ctx, embed, max_size=400, empty=False)
165
166
167 def setup(bot: Bot):
168 bot.add_cog(OffTopicNames(bot))
169 log.info("Cog loaded: OffTopicNames")
170
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bot/cogs/off_topic_names.py b/bot/cogs/off_topic_names.py
--- a/bot/cogs/off_topic_names.py
+++ b/bot/cogs/off_topic_names.py
@@ -48,9 +48,11 @@
"""
while True:
+ # Since we truncate the compute timedelta to seconds, we add one second to ensure
+ # we go past midnight in the `seconds_to_sleep` set below.
today_at_midnight = datetime.utcnow().replace(microsecond=0, second=0, minute=0, hour=0)
next_midnight = today_at_midnight + timedelta(days=1)
- seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds
+ seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds + 1
await asyncio.sleep(seconds_to_sleep)
response = await bot.http_session.get(
| {"golden_diff": "diff --git a/bot/cogs/off_topic_names.py b/bot/cogs/off_topic_names.py\n--- a/bot/cogs/off_topic_names.py\n+++ b/bot/cogs/off_topic_names.py\n@@ -48,9 +48,11 @@\n \"\"\"\n \n while True:\n+ # Since we truncate the compute timedelta to seconds, we add one second to ensure\n+ # we go past midnight in the `seconds_to_sleep` set below.\n today_at_midnight = datetime.utcnow().replace(microsecond=0, second=0, minute=0, hour=0)\n next_midnight = today_at_midnight + timedelta(days=1)\n- seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds\n+ seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds + 1\n await asyncio.sleep(seconds_to_sleep)\n \n response = await bot.http_session.get(\n", "issue": "Bot changing off-topic channel names more than once per daily cycle\nThe bot should change the channel names of the three off-topic channels only once after hitting UTC midnight. However, we've noticed that it may attempt to set new channel names far more often than that. The root of the cause is still unknown, but after looking at the recent audit log screenshotted by Scragly, I've come up with a conjecture to what's happening.\r\n\r\nIf you take a look at the logs below, then you'll notice that most of the channel name changes actually happen just before the whole hour (9:59 in Scragly's local time). My guess is that the sleep duration, which is set based on \"the start of the current day\" + \"1 day time delta\", is off by one second. The odd thing is that it's not obvious to me from the code why this happens.\r\n\r\nHowever, if this is true, then it would explain the current behavior: The background task is triggered at 23:59:59 (UTC), it will cycle the channel names, and, **as we're still on the same day**, calculate a sleep second delta based on the previous midnight (which will be 0, since it's trying to find the seconds to 23:59:59, which it already is), so it will async.sleep for 0 seconds and run itself again.\r\n\r\nThe fact that it doesn't trigger more often is then caused because it needs to make API calls and has other `await` points in the execution. \r\n\r\nSince this behavior very much looks like what Discord would call \"API Abuse\", I think it's important to put in a bug fix as quickly as possible. 
Since the only thing I can think of is the off-by-one error described above, my proposal is aim for 1 minute past midnight, so we never run into \"we're still on the same day\" issues again:\r\n\r\n```py\r\n today_at_midnight = datetime.utcnow().replace(microsecond=0, second=0, minute=0, hour=0)\r\n next_midnight = today_at_midnight + timedelta(days=1, minutes=1)\r\n seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds\r\n await asyncio.sleep(seconds_to_sleep)\r\n```\r\n\r\nIf that doesn't fix it, we need to investigate further, but I think this will work.\r\n\r\n---\r\n\r\n**Edit: some additional confirmation of the off-by-one-second error:**\r\n```\r\nApr 15 08:08:35 pd.beardfist.com Bot: | bot.cogs.off_topic_names | DEBUG | update_names: seconds to sleep 24\r\n```\r\n\r\nI'd set the name change for `08:09:00`, but sleeping for 24 seconds at `08:08:35` will cause it to \"wake up\" at `08:08:59`.\r\n\r\n---\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import asyncio\nimport logging\nfrom datetime import datetime, timedelta\n\nfrom discord import Colour, Embed\nfrom discord.ext.commands import BadArgument, Bot, Context, Converter, group\n\nfrom bot.constants import Channels, Keys, MODERATION_ROLES, URLs\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\n\n\nCHANNELS = (Channels.off_topic_0, Channels.off_topic_1, Channels.off_topic_2)\nlog = logging.getLogger(__name__)\n\n\nclass OffTopicName(Converter):\n \"\"\"A converter that ensures an added off-topic name is valid.\"\"\"\n\n @staticmethod\n async def convert(ctx: Context, argument: str):\n allowed_characters = (\"-\", \"\u2019\", \"'\", \"`\")\n\n if not (2 <= len(argument) <= 96):\n raise BadArgument(\"Channel name must be between 2 and 96 chars long\")\n\n elif not all(c.isalnum() or c in allowed_characters for c in argument):\n raise BadArgument(\n \"Channel name must only consist of \"\n \"alphanumeric characters, minus signs or apostrophes.\"\n )\n\n elif not argument.islower():\n raise BadArgument(\"Channel name must be lowercase\")\n\n # Replace some unusable apostrophe-like characters with \"\u2019\".\n return argument.replace(\"'\", \"\u2019\").replace(\"`\", \"\u2019\")\n\n\nasync def update_names(bot: Bot, headers: dict):\n \"\"\"\n The background updater task that performs a channel name update daily.\n\n Args:\n bot (Bot):\n The running bot instance, used for fetching data from the\n website via the bot's `http_session`.\n \"\"\"\n\n while True:\n today_at_midnight = datetime.utcnow().replace(microsecond=0, second=0, minute=0, hour=0)\n next_midnight = today_at_midnight + timedelta(days=1)\n seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds\n await asyncio.sleep(seconds_to_sleep)\n\n response = await bot.http_session.get(\n f'{URLs.site_off_topic_names_api}?random_items=3',\n headers=headers\n )\n channel_0_name, channel_1_name, channel_2_name = await response.json()\n channel_0, channel_1, channel_2 = (bot.get_channel(channel_id) for channel_id in CHANNELS)\n\n await channel_0.edit(name=f'ot0-{channel_0_name}')\n await channel_1.edit(name=f'ot1-{channel_1_name}')\n await channel_2.edit(name=f'ot2-{channel_2_name}')\n log.debug(\n \"Updated off-topic channel names to\"\n f\" {channel_0_name}, {channel_1_name} and {channel_2_name}\"\n )\n\n\nclass OffTopicNames:\n \"\"\"Commands related to managing the off-topic category channel names.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.headers = {\"X-API-KEY\": Keys.site_api}\n self.updater_task 
= None\n\n def __cleanup(self):\n if self.updater_task is not None:\n self.updater_task.cancel()\n\n async def on_ready(self):\n if self.updater_task is None:\n coro = update_names(self.bot, self.headers)\n self.updater_task = await self.bot.loop.create_task(coro)\n\n @group(name='otname', aliases=('otnames', 'otn'), invoke_without_command=True)\n @with_role(*MODERATION_ROLES)\n async def otname_group(self, ctx):\n \"\"\"Add or list items from the off-topic channel name rotation.\"\"\"\n\n await ctx.invoke(self.bot.get_command(\"help\"), \"otname\")\n\n @otname_group.command(name='add', aliases=('a',))\n @with_role(*MODERATION_ROLES)\n async def add_command(self, ctx, name: OffTopicName):\n \"\"\"Adds a new off-topic name to the rotation.\"\"\"\n\n result = await self.bot.http_session.post(\n URLs.site_off_topic_names_api,\n headers=self.headers,\n params={'name': name}\n )\n\n response = await result.json()\n\n if result.status == 200:\n log.info(\n f\"{ctx.author.name}#{ctx.author.discriminator}\"\n f\" added the off-topic channel name '{name}\"\n )\n await ctx.send(\":ok_hand:\")\n else:\n error_reason = response.get('message', \"No reason provided.\")\n await ctx.send(f\":warning: got non-200 from the API: {error_reason}\")\n\n @otname_group.command(name='delete', aliases=('remove', 'rm', 'del', 'd'))\n @with_role(*MODERATION_ROLES)\n async def delete_command(self, ctx, name: OffTopicName):\n \"\"\"Removes a off-topic name from the rotation.\"\"\"\n\n result = await self.bot.http_session.delete(\n URLs.site_off_topic_names_api,\n headers=self.headers,\n params={'name': name}\n )\n\n response = await result.json()\n\n if result.status == 200:\n if response['deleted'] == 0:\n await ctx.send(f\":warning: No name matching `{name}` was found in the database.\")\n else:\n log.info(\n f\"{ctx.author.name}#{ctx.author.discriminator}\"\n f\" deleted the off-topic channel name '{name}\"\n )\n await ctx.send(\":ok_hand:\")\n else:\n error_reason = response.get('message', \"No reason provided.\")\n await ctx.send(f\":warning: got non-200 from the API: {error_reason}\")\n\n @otname_group.command(name='list', aliases=('l',))\n @with_role(*MODERATION_ROLES)\n async def list_command(self, ctx):\n \"\"\"\n Lists all currently known off-topic channel names in a paginator.\n Restricted to Moderator and above to not spoil the surprise.\n \"\"\"\n\n result = await self.bot.http_session.get(\n URLs.site_off_topic_names_api,\n headers=self.headers\n )\n response = await result.json()\n lines = sorted(f\"\u2022 {name}\" for name in response)\n\n embed = Embed(\n title=f\"Known off-topic names (`{len(response)}` total)\",\n colour=Colour.blue()\n )\n await LinePaginator.paginate(lines, ctx, embed, max_size=400, empty=False)\n\n\ndef setup(bot: Bot):\n bot.add_cog(OffTopicNames(bot))\n log.info(\"Cog loaded: OffTopicNames\")\n", "path": "bot/cogs/off_topic_names.py"}], "after_files": [{"content": "import asyncio\nimport logging\nfrom datetime import datetime, timedelta\n\nfrom discord import Colour, Embed\nfrom discord.ext.commands import BadArgument, Bot, Context, Converter, group\n\nfrom bot.constants import Channels, Keys, MODERATION_ROLES, URLs\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\n\n\nCHANNELS = (Channels.off_topic_0, Channels.off_topic_1, Channels.off_topic_2)\nlog = logging.getLogger(__name__)\n\n\nclass OffTopicName(Converter):\n \"\"\"A converter that ensures an added off-topic name is valid.\"\"\"\n\n @staticmethod\n async def convert(ctx: Context, argument: 
str):\n allowed_characters = (\"-\", \"\u2019\", \"'\", \"`\")\n\n if not (2 <= len(argument) <= 96):\n raise BadArgument(\"Channel name must be between 2 and 96 chars long\")\n\n elif not all(c.isalnum() or c in allowed_characters for c in argument):\n raise BadArgument(\n \"Channel name must only consist of \"\n \"alphanumeric characters, minus signs or apostrophes.\"\n )\n\n elif not argument.islower():\n raise BadArgument(\"Channel name must be lowercase\")\n\n # Replace some unusable apostrophe-like characters with \"\u2019\".\n return argument.replace(\"'\", \"\u2019\").replace(\"`\", \"\u2019\")\n\n\nasync def update_names(bot: Bot, headers: dict):\n \"\"\"\n The background updater task that performs a channel name update daily.\n\n Args:\n bot (Bot):\n The running bot instance, used for fetching data from the\n website via the bot's `http_session`.\n \"\"\"\n\n while True:\n # Since we truncate the compute timedelta to seconds, we add one second to ensure\n # we go past midnight in the `seconds_to_sleep` set below.\n today_at_midnight = datetime.utcnow().replace(microsecond=0, second=0, minute=0, hour=0)\n next_midnight = today_at_midnight + timedelta(days=1)\n seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds + 1\n await asyncio.sleep(seconds_to_sleep)\n\n response = await bot.http_session.get(\n f'{URLs.site_off_topic_names_api}?random_items=3',\n headers=headers\n )\n channel_0_name, channel_1_name, channel_2_name = await response.json()\n channel_0, channel_1, channel_2 = (bot.get_channel(channel_id) for channel_id in CHANNELS)\n\n await channel_0.edit(name=f'ot0-{channel_0_name}')\n await channel_1.edit(name=f'ot1-{channel_1_name}')\n await channel_2.edit(name=f'ot2-{channel_2_name}')\n log.debug(\n \"Updated off-topic channel names to\"\n f\" {channel_0_name}, {channel_1_name} and {channel_2_name}\"\n )\n\n\nclass OffTopicNames:\n \"\"\"Commands related to managing the off-topic category channel names.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.headers = {\"X-API-KEY\": Keys.site_api}\n self.updater_task = None\n\n def __cleanup(self):\n if self.updater_task is not None:\n self.updater_task.cancel()\n\n async def on_ready(self):\n if self.updater_task is None:\n coro = update_names(self.bot, self.headers)\n self.updater_task = await self.bot.loop.create_task(coro)\n\n @group(name='otname', aliases=('otnames', 'otn'), invoke_without_command=True)\n @with_role(*MODERATION_ROLES)\n async def otname_group(self, ctx):\n \"\"\"Add or list items from the off-topic channel name rotation.\"\"\"\n\n await ctx.invoke(self.bot.get_command(\"help\"), \"otname\")\n\n @otname_group.command(name='add', aliases=('a',))\n @with_role(*MODERATION_ROLES)\n async def add_command(self, ctx, name: OffTopicName):\n \"\"\"Adds a new off-topic name to the rotation.\"\"\"\n\n result = await self.bot.http_session.post(\n URLs.site_off_topic_names_api,\n headers=self.headers,\n params={'name': name}\n )\n\n response = await result.json()\n\n if result.status == 200:\n log.info(\n f\"{ctx.author.name}#{ctx.author.discriminator}\"\n f\" added the off-topic channel name '{name}\"\n )\n await ctx.send(\":ok_hand:\")\n else:\n error_reason = response.get('message', \"No reason provided.\")\n await ctx.send(f\":warning: got non-200 from the API: {error_reason}\")\n\n @otname_group.command(name='delete', aliases=('remove', 'rm', 'del', 'd'))\n @with_role(*MODERATION_ROLES)\n async def delete_command(self, ctx, name: OffTopicName):\n \"\"\"Removes a off-topic name from the 
rotation.\"\"\"\n\n result = await self.bot.http_session.delete(\n URLs.site_off_topic_names_api,\n headers=self.headers,\n params={'name': name}\n )\n\n response = await result.json()\n\n if result.status == 200:\n if response['deleted'] == 0:\n await ctx.send(f\":warning: No name matching `{name}` was found in the database.\")\n else:\n log.info(\n f\"{ctx.author.name}#{ctx.author.discriminator}\"\n f\" deleted the off-topic channel name '{name}\"\n )\n await ctx.send(\":ok_hand:\")\n else:\n error_reason = response.get('message', \"No reason provided.\")\n await ctx.send(f\":warning: got non-200 from the API: {error_reason}\")\n\n @otname_group.command(name='list', aliases=('l',))\n @with_role(*MODERATION_ROLES)\n async def list_command(self, ctx):\n \"\"\"\n Lists all currently known off-topic channel names in a paginator.\n Restricted to Moderator and above to not spoil the surprise.\n \"\"\"\n\n result = await self.bot.http_session.get(\n URLs.site_off_topic_names_api,\n headers=self.headers\n )\n response = await result.json()\n lines = sorted(f\"\u2022 {name}\" for name in response)\n\n embed = Embed(\n title=f\"Known off-topic names (`{len(response)}` total)\",\n colour=Colour.blue()\n )\n await LinePaginator.paginate(lines, ctx, embed, max_size=400, empty=False)\n\n\ndef setup(bot: Bot):\n bot.add_cog(OffTopicNames(bot))\n log.info(\"Cog loaded: OffTopicNames\")\n", "path": "bot/cogs/off_topic_names.py"}]} | 2,731 | 196 |
gh_patches_debug_43949 | rasdani/github-patches | git_diff | ephios-dev__ephios-87 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Nutzer löschen
Als Manager möchte ich einen Nutzer löschen können. Dafür soll es in der Liste der Nutzer sowie auf der Bearbeitungsseite eines Nutzers eine entsprechende Schaltfläche geben.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ephios/user_management/urls.py`
Content:
```
1 from django.urls import path
2
3 from ephios.user_management import views
4
5 app_name = "user_management"
6 urlpatterns = [
7 path("profile/", views.ProfileView.as_view(), name="profile"),
8 path("groups/", views.GroupListView.as_view(), name="group_list"),
9 path("groups/<int:pk>/edit", views.GroupUpdateView.as_view(), name="group_edit"),
10 path("groups/<int:pk>/delete", views.GroupDeleteView.as_view(), name="group_delete"),
11 path("groups/create", views.GroupCreateView.as_view(), name="group_add"),
12 path(
13 "users/",
14 views.UserProfileListView.as_view(),
15 name="userprofile_list",
16 ),
17 path(
18 "users/<int:pk>/edit",
19 views.UserProfileUpdateView.as_view(),
20 name="userprofile_edit",
21 ),
22 path(
23 "users/create/",
24 views.UserProfileCreateView.as_view(),
25 name="userprofile_create",
26 ),
27 ]
28
```
Path: `ephios/user_management/views.py`
Content:
```
1 from django.contrib import messages
2 from django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin
3 from django.contrib.auth.models import Group
4 from django.shortcuts import redirect
5 from django.urls import reverse
6 from django.utils.translation import gettext as _
7 from django.views.generic import (
8 CreateView,
9 DeleteView,
10 DetailView,
11 ListView,
12 TemplateView,
13 UpdateView,
14 )
15 from django.views.generic.detail import SingleObjectMixin
16 from guardian.shortcuts import get_objects_for_group
17
18 from ephios.user_management import mail
19 from ephios.user_management.forms import GroupForm, QualificationGrantFormset, UserProfileForm
20 from ephios.user_management.models import UserProfile
21
22
23 class ProfileView(LoginRequiredMixin, DetailView):
24 def get_object(self, queryset=None):
25 return self.request.user
26
27
28 class UserProfileListView(PermissionRequiredMixin, ListView):
29 model = UserProfile
30 permission_required = "user_management.view_userprofile"
31
32
33 class UserProfileCreateView(PermissionRequiredMixin, TemplateView):
34 template_name = "user_management/userprofile_form.html"
35 permission_required = "user_management.add_userprofile"
36 model = UserProfile
37
38 def get_context_data(self, **kwargs):
39 kwargs.setdefault("userprofile_form", UserProfileForm(self.request.POST or None))
40 kwargs.setdefault(
41 "qualification_formset", QualificationGrantFormset(self.request.POST or None)
42 )
43 return super().get_context_data(**kwargs)
44
45 def post(self, request, *args, **kwargs):
46 userprofile_form = UserProfileForm(self.request.POST)
47 qualification_formset = QualificationGrantFormset(self.request.POST)
48 if all((userprofile_form.is_valid(), qualification_formset.is_valid())):
49 userprofile = userprofile_form.save()
50 qualification_formset.instance = userprofile
51 qualification_formset.save()
52 messages.success(self.request, _("User added successfully."))
53 if userprofile.is_active:
54 mail.send_account_creation_info(userprofile)
55 return redirect(reverse("user_management:userprofile_list"))
56 else:
57 return self.render_to_response(
58 self.get_context_data(
59 userprofile_form=userprofile_form, qualification_formset=qualification_formset
60 )
61 )
62
63
64 class UserProfileUpdateView(PermissionRequiredMixin, SingleObjectMixin, TemplateView):
65 model = UserProfile
66 permission_required = "user_management.change_userprofile"
67 template_name = "user_management/userprofile_form.html"
68
69 def get_userprofile_form(self):
70 return UserProfileForm(
71 self.request.POST or None,
72 initial={
73 "groups": self.get_object().groups.all(),
74 },
75 instance=self.object,
76 )
77
78 def get_qualification_formset(self):
79 return QualificationGrantFormset(self.request.POST or None, instance=self.object)
80
81 def get_context_data(self, **kwargs):
82 self.object = self.get_object()
83 kwargs.setdefault("userprofile_form", self.get_userprofile_form())
84 kwargs.setdefault("qualification_formset", self.get_qualification_formset())
85 return super().get_context_data(**kwargs)
86
87 def post(self, request, *args, **kwargs):
88 self.object = self.get_object()
89 userprofile_form = self.get_userprofile_form()
90 qualification_formset = self.get_qualification_formset()
91 if all((userprofile_form.is_valid(), qualification_formset.is_valid())):
92 userprofile = userprofile_form.save()
93 qualification_formset.save()
94 messages.success(self.request, _("User updated successfully."))
95 if userprofile.is_active:
96 mail.send_account_update_info(userprofile)
97 return redirect(reverse("user_management:userprofile_list"))
98 else:
99 return self.render_to_response(
100 self.get_context_data(
101 userprofile_form=userprofile_form, qualification_formset=qualification_formset
102 )
103 )
104
105
106 class GroupListView(PermissionRequiredMixin, ListView):
107 model = Group
108 permission_required = "auth.view_group"
109 template_name = "user_management/group_list.html"
110
111
112 class GroupCreateView(PermissionRequiredMixin, CreateView):
113 model = Group
114 permission_required = "auth.add_group"
115 template_name = "user_management/group_form.html"
116 form_class = GroupForm
117
118 def get_form_kwargs(self):
119 kwargs = super().get_form_kwargs()
120 kwargs["initial"] = {
121 "users": UserProfile.objects.none(),
122 "can_add_event": False,
123 "publish_event_for_group": Group.objects.none(),
124 }
125 return kwargs
126
127 def get_success_url(self):
128 messages.success(self.request, _("Group created successfully."))
129 return reverse("user_management:group_list")
130
131
132 class GroupUpdateView(PermissionRequiredMixin, UpdateView):
133 model = Group
134 permission_required = "auth.change_group"
135 template_name = "user_management/group_form.html"
136 form_class = GroupForm
137
138 def get_form_kwargs(self):
139 kwargs = super().get_form_kwargs()
140 kwargs["initial"] = {
141 "users": self.object.user_set.all(),
142 "can_view_past_event": self.object.permissions.filter(
143 codename="view_past_event"
144 ).exists(),
145 "can_add_event": self.object.permissions.filter(codename="add_event").exists(),
146 "publish_event_for_group": get_objects_for_group(
147 self.object, "publish_event_for_group", klass=Group
148 ),
149 }
150 return kwargs
151
152 def get_success_url(self):
153 messages.success(self.request, _("Group updated successfully."))
154 return reverse("user_management:group_list")
155
156
157 class GroupDeleteView(PermissionRequiredMixin, DeleteView):
158 model = Group
159 permission_required = "auth.delete_group"
160 template_name = "user_management/group_confirm_delete.html"
161
162 def get_success_url(self):
163 return reverse("user_management:group_list")
164
```
Path: `ephios/user_management/forms.py`
Content:
```
1 from django import forms
2 from django.contrib.auth.forms import ReadOnlyPasswordHashField
3 from django.contrib.auth.models import Group
4 from django.contrib.auth.password_validation import validate_password
5 from django.forms import (
6 BooleanField,
7 ModelForm,
8 ModelMultipleChoiceField,
9 TextInput,
10 inlineformset_factory,
11 )
12 from django.utils.translation import gettext as _
13 from django_select2.forms import Select2MultipleWidget, Select2Widget
14 from guardian.shortcuts import assign_perm, remove_perm
15
16 from ephios.extra.widgets import CustomDateInput
17 from ephios.user_management.models import QualificationGrant, UserProfile
18 from ephios.user_management.widgets import MultiUserProfileWidget
19
20
21 class UserCreationForm(forms.ModelForm):
22 """A form for creating new users. Includes all the required
23 fields, plus a repeated password."""
24
25 password = forms.CharField(label=_("Password"), widget=forms.PasswordInput)
26 password_validation = forms.CharField(
27 label=_("Password confirmation"), widget=forms.PasswordInput
28 )
29 field_order = ["email", "password", "password_validation"]
30
31 class Meta:
32 model = UserProfile
33 fields = (
34 "email",
35 "first_name",
36 "last_name",
37 "date_of_birth",
38 "phone",
39 )
40
41 def clean_password_validation(self):
42 # Check that the two password entries match
43 password = self.cleaned_data.get("password")
44 password_validation = self.cleaned_data.get("password_validation")
45 if password and password_validation and password != password_validation:
46 raise forms.ValidationError(_("Passwords don't match"))
47 return password_validation
48
49 def _post_clean(self):
50 super()._post_clean()
51 # Validate the password after self.instance is updated with form data
52 # by super().
53 password = self.cleaned_data.get("password_validation")
54 if password:
55 try:
56 validate_password(password, self.instance)
57 except forms.ValidationError as error:
58 self.add_error("password", error)
59
60 def save(self, commit=True):
61 # Save the provided password in hashed format
62 user = super().save(commit=False)
63 user.set_password(self.cleaned_data["password"])
64 if commit:
65 user.save()
66 return user
67
68
69 class UserChangeForm(forms.ModelForm):
70 """A form for updating users. Includes all the fields on
71 the user, but replaces the password field with admin's
72 password hash display field.
73 """
74
75 password = ReadOnlyPasswordHashField()
76
77 class Meta:
78 model = UserProfile
79 fields = (
80 "email",
81 "password",
82 "first_name",
83 "last_name",
84 "date_of_birth",
85 "phone",
86 "is_active",
87 "is_staff",
88 )
89
90 def clean_password(self):
91 # Regardless of what the user provides, return the initial value.
92 # This is done here, rather than on the field, because the
93 # field does not have access to the initial value
94 return self.initial["password"]
95
96
97 class GroupForm(ModelForm):
98 publish_event_for_group = ModelMultipleChoiceField(
99 label=_("Can publish event for group"),
100 queryset=Group.objects.all(),
101 required=False,
102 help_text=_("Choose groups that this group can make events visible for."),
103 widget=Select2MultipleWidget,
104 )
105 can_view_past_event = BooleanField(label=_("Can view past events"), required=False)
106 can_add_event = BooleanField(label=_("Can add event"), required=False)
107 users = ModelMultipleChoiceField(
108 label=_("Users"), queryset=UserProfile.objects.all(), widget=MultiUserProfileWidget
109 )
110
111 field_order = [
112 "name",
113 "users",
114 "can_view_past_event",
115 "can_add_event",
116 "publish_event_for_group",
117 ]
118
119 class Meta:
120 model = Group
121 fields = ["name"]
122
123 def save(self, commit=True):
124 group = super().save(commit)
125
126 group.user_set.set(self.cleaned_data["users"])
127
128 if self.cleaned_data["can_view_past_event"]:
129 assign_perm("event_management.view_past_event", group)
130 else:
131 remove_perm("event_management.view_past_event", group)
132
133 if self.cleaned_data["can_add_event"]:
134 assign_perm("event_management.add_event", group)
135 assign_perm("event_management.delete_event", group)
136
137 if "publish_event_for_group" in self.changed_data:
138 for target_group in self.cleaned_data["publish_event_for_group"].exlude(
139 id__in=self.initial["publish_event_for_group"]
140 ):
141 assign_perm("publish_event_for_group", group, target_group)
142 for target_group in self.initial["publish_event_for_group"].exclude(
143 id__in=self.cleaned_data["publish_event_for_group"]
144 ):
145 remove_perm("publish_event_for_group", group, target_group)
146 else:
147 remove_perm("event_management.add_event", group)
148 remove_perm("event_management.delete_event", group)
149 for target_group in Group.objects.all():
150 remove_perm("publish_event_for_group", group, target_group)
151
152
153 class UserProfileForm(ModelForm):
154 groups = ModelMultipleChoiceField(
155 label=_("Groups"), queryset=Group.objects.all(), widget=Select2MultipleWidget
156 )
157
158 field_order = [
159 "email",
160 "first_name",
161 "last_name",
162 "date_of_birth",
163 "phone",
164 "groups",
165 "is_active",
166 ]
167
168 class Meta:
169 model = UserProfile
170 fields = ["email", "first_name", "last_name", "date_of_birth", "phone", "is_active"]
171 widgets = {"date_of_birth": CustomDateInput(format="%Y-%m-%d")}
172 help_texts = {
173 "is_active": _("Inactive users cannot log in and do not get any notifications.")
174 }
175 labels = {"is_active": _("Active user")}
176
177 def save(self, commit=True):
178 userprofile = super().save(commit)
179 userprofile.groups.set(self.cleaned_data["groups"])
180 userprofile.save()
181 return userprofile
182
183
184 class QualificationGrantForm(ModelForm):
185 model = QualificationGrant
186
187 class Meta:
188 fields = ["qualification", "expires"]
189 widgets = {"qualification": Select2Widget, "expires": CustomDateInput(format="%Y-%m-%d")}
190
191 def __init__(self, *args, **kwargs):
192 super().__init__(*args, **kwargs)
193 instance = getattr(self, "instance", None)
194 if instance and instance.pk:
195 self.fields["qualification"].disabled = True
196 self.fields["qualification"].widget = TextInput(
197 attrs={"class": "form-control-plaintext"}
198 )
199 self.initial["qualification"] = instance.qualification.title
200
201
202 QualificationGrantFormset = inlineformset_factory(
203 UserProfile,
204 QualificationGrant,
205 form=QualificationGrantForm,
206 extra=0,
207 )
208
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ephios/user_management/forms.py b/ephios/user_management/forms.py
--- a/ephios/user_management/forms.py
+++ b/ephios/user_management/forms.py
@@ -149,6 +149,8 @@
for target_group in Group.objects.all():
remove_perm("publish_event_for_group", group, target_group)
+ return group
+
class UserProfileForm(ModelForm):
groups = ModelMultipleChoiceField(
diff --git a/ephios/user_management/urls.py b/ephios/user_management/urls.py
--- a/ephios/user_management/urls.py
+++ b/ephios/user_management/urls.py
@@ -19,6 +19,11 @@
views.UserProfileUpdateView.as_view(),
name="userprofile_edit",
),
+ path(
+ "users/<int:pk>/delete",
+ views.UserProfileDeleteView.as_view(),
+ name="userprofile_delete",
+ ),
path(
"users/create/",
views.UserProfileCreateView.as_view(),
diff --git a/ephios/user_management/views.py b/ephios/user_management/views.py
--- a/ephios/user_management/views.py
+++ b/ephios/user_management/views.py
@@ -49,7 +49,12 @@
userprofile = userprofile_form.save()
qualification_formset.instance = userprofile
qualification_formset.save()
- messages.success(self.request, _("User added successfully."))
+ messages.success(
+ self.request,
+ _("User {name} ({user}) added successfully.").format(
+ name=userprofile.get_full_name(), user=userprofile
+ ),
+ )
if userprofile.is_active:
mail.send_account_creation_info(userprofile)
return redirect(reverse("user_management:userprofile_list"))
@@ -91,7 +96,12 @@
if all((userprofile_form.is_valid(), qualification_formset.is_valid())):
userprofile = userprofile_form.save()
qualification_formset.save()
- messages.success(self.request, _("User updated successfully."))
+ messages.success(
+ self.request,
+ _("User {name} ({user}) updated successfully.").format(
+ name=self.object.get_full_name(), user=self.object
+ ),
+ )
if userprofile.is_active:
mail.send_account_update_info(userprofile)
return redirect(reverse("user_management:userprofile_list"))
@@ -103,6 +113,21 @@
)
+class UserProfileDeleteView(PermissionRequiredMixin, DeleteView):
+ model = UserProfile
+ permission_required = "user_management.delete_userprofile"
+ template_name = "user_management/userprofile_confirm_delete.html"
+
+ def get_success_url(self):
+ messages.info(
+ self.request,
+ _("The user {name} ({user}) was deleted.").format(
+ name=self.object.get_full_name(), user=self.object
+ ),
+ )
+ return reverse("user_management:userprofile_list")
+
+
class GroupListView(PermissionRequiredMixin, ListView):
model = Group
permission_required = "auth.view_group"
@@ -125,7 +150,9 @@
return kwargs
def get_success_url(self):
- messages.success(self.request, _("Group created successfully."))
+ messages.success(
+ self.request, _('Group "{group}" created successfully.').format(group=self.object)
+ )
return reverse("user_management:group_list")
@@ -150,7 +177,9 @@
return kwargs
def get_success_url(self):
- messages.success(self.request, _("Group updated successfully."))
+ messages.success(
+ self.request, _('Group "{group}" updated successfully.').format(group=self.object)
+ )
return reverse("user_management:group_list")
@@ -160,4 +189,5 @@
template_name = "user_management/group_confirm_delete.html"
def get_success_url(self):
+ messages.info(self.request, _('The group "{group}" was deleted.').format(group=self.object))
return reverse("user_management:group_list")
| {"golden_diff": "diff --git a/ephios/user_management/forms.py b/ephios/user_management/forms.py\n--- a/ephios/user_management/forms.py\n+++ b/ephios/user_management/forms.py\n@@ -149,6 +149,8 @@\n for target_group in Group.objects.all():\n remove_perm(\"publish_event_for_group\", group, target_group)\n \n+ return group\n+\n \n class UserProfileForm(ModelForm):\n groups = ModelMultipleChoiceField(\ndiff --git a/ephios/user_management/urls.py b/ephios/user_management/urls.py\n--- a/ephios/user_management/urls.py\n+++ b/ephios/user_management/urls.py\n@@ -19,6 +19,11 @@\n views.UserProfileUpdateView.as_view(),\n name=\"userprofile_edit\",\n ),\n+ path(\n+ \"users/<int:pk>/delete\",\n+ views.UserProfileDeleteView.as_view(),\n+ name=\"userprofile_delete\",\n+ ),\n path(\n \"users/create/\",\n views.UserProfileCreateView.as_view(),\ndiff --git a/ephios/user_management/views.py b/ephios/user_management/views.py\n--- a/ephios/user_management/views.py\n+++ b/ephios/user_management/views.py\n@@ -49,7 +49,12 @@\n userprofile = userprofile_form.save()\n qualification_formset.instance = userprofile\n qualification_formset.save()\n- messages.success(self.request, _(\"User added successfully.\"))\n+ messages.success(\n+ self.request,\n+ _(\"User {name} ({user}) added successfully.\").format(\n+ name=userprofile.get_full_name(), user=userprofile\n+ ),\n+ )\n if userprofile.is_active:\n mail.send_account_creation_info(userprofile)\n return redirect(reverse(\"user_management:userprofile_list\"))\n@@ -91,7 +96,12 @@\n if all((userprofile_form.is_valid(), qualification_formset.is_valid())):\n userprofile = userprofile_form.save()\n qualification_formset.save()\n- messages.success(self.request, _(\"User updated successfully.\"))\n+ messages.success(\n+ self.request,\n+ _(\"User {name} ({user}) updated successfully.\").format(\n+ name=self.object.get_full_name(), user=self.object\n+ ),\n+ )\n if userprofile.is_active:\n mail.send_account_update_info(userprofile)\n return redirect(reverse(\"user_management:userprofile_list\"))\n@@ -103,6 +113,21 @@\n )\n \n \n+class UserProfileDeleteView(PermissionRequiredMixin, DeleteView):\n+ model = UserProfile\n+ permission_required = \"user_management.delete_userprofile\"\n+ template_name = \"user_management/userprofile_confirm_delete.html\"\n+\n+ def get_success_url(self):\n+ messages.info(\n+ self.request,\n+ _(\"The user {name} ({user}) was deleted.\").format(\n+ name=self.object.get_full_name(), user=self.object\n+ ),\n+ )\n+ return reverse(\"user_management:userprofile_list\")\n+\n+\n class GroupListView(PermissionRequiredMixin, ListView):\n model = Group\n permission_required = \"auth.view_group\"\n@@ -125,7 +150,9 @@\n return kwargs\n \n def get_success_url(self):\n- messages.success(self.request, _(\"Group created successfully.\"))\n+ messages.success(\n+ self.request, _('Group \"{group}\" created successfully.').format(group=self.object)\n+ )\n return reverse(\"user_management:group_list\")\n \n \n@@ -150,7 +177,9 @@\n return kwargs\n \n def get_success_url(self):\n- messages.success(self.request, _(\"Group updated successfully.\"))\n+ messages.success(\n+ self.request, _('Group \"{group}\" updated successfully.').format(group=self.object)\n+ )\n return reverse(\"user_management:group_list\")\n \n \n@@ -160,4 +189,5 @@\n template_name = \"user_management/group_confirm_delete.html\"\n \n def get_success_url(self):\n+ messages.info(self.request, _('The group \"{group}\" was deleted.').format(group=self.object))\n return reverse(\"user_management:group_list\")\n", 
"issue": "Nutzer l\u00f6schen\nAls Manager m\u00f6chte ich einen Nutzer l\u00f6schen k\u00f6nnen. Daf\u00fcr soll es in der Liste der Nutzer sowie auf der Bearbeitungsseite eines Nutzers eine entsprechende Schaltfl\u00e4che geben.\n", "before_files": [{"content": "from django.urls import path\n\nfrom ephios.user_management import views\n\napp_name = \"user_management\"\nurlpatterns = [\n path(\"profile/\", views.ProfileView.as_view(), name=\"profile\"),\n path(\"groups/\", views.GroupListView.as_view(), name=\"group_list\"),\n path(\"groups/<int:pk>/edit\", views.GroupUpdateView.as_view(), name=\"group_edit\"),\n path(\"groups/<int:pk>/delete\", views.GroupDeleteView.as_view(), name=\"group_delete\"),\n path(\"groups/create\", views.GroupCreateView.as_view(), name=\"group_add\"),\n path(\n \"users/\",\n views.UserProfileListView.as_view(),\n name=\"userprofile_list\",\n ),\n path(\n \"users/<int:pk>/edit\",\n views.UserProfileUpdateView.as_view(),\n name=\"userprofile_edit\",\n ),\n path(\n \"users/create/\",\n views.UserProfileCreateView.as_view(),\n name=\"userprofile_create\",\n ),\n]\n", "path": "ephios/user_management/urls.py"}, {"content": "from django.contrib import messages\nfrom django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin\nfrom django.contrib.auth.models import Group\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\nfrom django.views.generic import (\n CreateView,\n DeleteView,\n DetailView,\n ListView,\n TemplateView,\n UpdateView,\n)\nfrom django.views.generic.detail import SingleObjectMixin\nfrom guardian.shortcuts import get_objects_for_group\n\nfrom ephios.user_management import mail\nfrom ephios.user_management.forms import GroupForm, QualificationGrantFormset, UserProfileForm\nfrom ephios.user_management.models import UserProfile\n\n\nclass ProfileView(LoginRequiredMixin, DetailView):\n def get_object(self, queryset=None):\n return self.request.user\n\n\nclass UserProfileListView(PermissionRequiredMixin, ListView):\n model = UserProfile\n permission_required = \"user_management.view_userprofile\"\n\n\nclass UserProfileCreateView(PermissionRequiredMixin, TemplateView):\n template_name = \"user_management/userprofile_form.html\"\n permission_required = \"user_management.add_userprofile\"\n model = UserProfile\n\n def get_context_data(self, **kwargs):\n kwargs.setdefault(\"userprofile_form\", UserProfileForm(self.request.POST or None))\n kwargs.setdefault(\n \"qualification_formset\", QualificationGrantFormset(self.request.POST or None)\n )\n return super().get_context_data(**kwargs)\n\n def post(self, request, *args, **kwargs):\n userprofile_form = UserProfileForm(self.request.POST)\n qualification_formset = QualificationGrantFormset(self.request.POST)\n if all((userprofile_form.is_valid(), qualification_formset.is_valid())):\n userprofile = userprofile_form.save()\n qualification_formset.instance = userprofile\n qualification_formset.save()\n messages.success(self.request, _(\"User added successfully.\"))\n if userprofile.is_active:\n mail.send_account_creation_info(userprofile)\n return redirect(reverse(\"user_management:userprofile_list\"))\n else:\n return self.render_to_response(\n self.get_context_data(\n userprofile_form=userprofile_form, qualification_formset=qualification_formset\n )\n )\n\n\nclass UserProfileUpdateView(PermissionRequiredMixin, SingleObjectMixin, TemplateView):\n model = UserProfile\n permission_required = 
\"user_management.change_userprofile\"\n template_name = \"user_management/userprofile_form.html\"\n\n def get_userprofile_form(self):\n return UserProfileForm(\n self.request.POST or None,\n initial={\n \"groups\": self.get_object().groups.all(),\n },\n instance=self.object,\n )\n\n def get_qualification_formset(self):\n return QualificationGrantFormset(self.request.POST or None, instance=self.object)\n\n def get_context_data(self, **kwargs):\n self.object = self.get_object()\n kwargs.setdefault(\"userprofile_form\", self.get_userprofile_form())\n kwargs.setdefault(\"qualification_formset\", self.get_qualification_formset())\n return super().get_context_data(**kwargs)\n\n def post(self, request, *args, **kwargs):\n self.object = self.get_object()\n userprofile_form = self.get_userprofile_form()\n qualification_formset = self.get_qualification_formset()\n if all((userprofile_form.is_valid(), qualification_formset.is_valid())):\n userprofile = userprofile_form.save()\n qualification_formset.save()\n messages.success(self.request, _(\"User updated successfully.\"))\n if userprofile.is_active:\n mail.send_account_update_info(userprofile)\n return redirect(reverse(\"user_management:userprofile_list\"))\n else:\n return self.render_to_response(\n self.get_context_data(\n userprofile_form=userprofile_form, qualification_formset=qualification_formset\n )\n )\n\n\nclass GroupListView(PermissionRequiredMixin, ListView):\n model = Group\n permission_required = \"auth.view_group\"\n template_name = \"user_management/group_list.html\"\n\n\nclass GroupCreateView(PermissionRequiredMixin, CreateView):\n model = Group\n permission_required = \"auth.add_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": UserProfile.objects.none(),\n \"can_add_event\": False,\n \"publish_event_for_group\": Group.objects.none(),\n }\n return kwargs\n\n def get_success_url(self):\n messages.success(self.request, _(\"Group created successfully.\"))\n return reverse(\"user_management:group_list\")\n\n\nclass GroupUpdateView(PermissionRequiredMixin, UpdateView):\n model = Group\n permission_required = \"auth.change_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": self.object.user_set.all(),\n \"can_view_past_event\": self.object.permissions.filter(\n codename=\"view_past_event\"\n ).exists(),\n \"can_add_event\": self.object.permissions.filter(codename=\"add_event\").exists(),\n \"publish_event_for_group\": get_objects_for_group(\n self.object, \"publish_event_for_group\", klass=Group\n ),\n }\n return kwargs\n\n def get_success_url(self):\n messages.success(self.request, _(\"Group updated successfully.\"))\n return reverse(\"user_management:group_list\")\n\n\nclass GroupDeleteView(PermissionRequiredMixin, DeleteView):\n model = Group\n permission_required = \"auth.delete_group\"\n template_name = \"user_management/group_confirm_delete.html\"\n\n def get_success_url(self):\n return reverse(\"user_management:group_list\")\n", "path": "ephios/user_management/views.py"}, {"content": "from django import forms\nfrom django.contrib.auth.forms import ReadOnlyPasswordHashField\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.password_validation import validate_password\nfrom django.forms import (\n BooleanField,\n ModelForm,\n 
ModelMultipleChoiceField,\n TextInput,\n inlineformset_factory,\n)\nfrom django.utils.translation import gettext as _\nfrom django_select2.forms import Select2MultipleWidget, Select2Widget\nfrom guardian.shortcuts import assign_perm, remove_perm\n\nfrom ephios.extra.widgets import CustomDateInput\nfrom ephios.user_management.models import QualificationGrant, UserProfile\nfrom ephios.user_management.widgets import MultiUserProfileWidget\n\n\nclass UserCreationForm(forms.ModelForm):\n \"\"\"A form for creating new users. Includes all the required\n fields, plus a repeated password.\"\"\"\n\n password = forms.CharField(label=_(\"Password\"), widget=forms.PasswordInput)\n password_validation = forms.CharField(\n label=_(\"Password confirmation\"), widget=forms.PasswordInput\n )\n field_order = [\"email\", \"password\", \"password_validation\"]\n\n class Meta:\n model = UserProfile\n fields = (\n \"email\",\n \"first_name\",\n \"last_name\",\n \"date_of_birth\",\n \"phone\",\n )\n\n def clean_password_validation(self):\n # Check that the two password entries match\n password = self.cleaned_data.get(\"password\")\n password_validation = self.cleaned_data.get(\"password_validation\")\n if password and password_validation and password != password_validation:\n raise forms.ValidationError(_(\"Passwords don't match\"))\n return password_validation\n\n def _post_clean(self):\n super()._post_clean()\n # Validate the password after self.instance is updated with form data\n # by super().\n password = self.cleaned_data.get(\"password_validation\")\n if password:\n try:\n validate_password(password, self.instance)\n except forms.ValidationError as error:\n self.add_error(\"password\", error)\n\n def save(self, commit=True):\n # Save the provided password in hashed format\n user = super().save(commit=False)\n user.set_password(self.cleaned_data[\"password\"])\n if commit:\n user.save()\n return user\n\n\nclass UserChangeForm(forms.ModelForm):\n \"\"\"A form for updating users. 
Includes all the fields on\n the user, but replaces the password field with admin's\n password hash display field.\n \"\"\"\n\n password = ReadOnlyPasswordHashField()\n\n class Meta:\n model = UserProfile\n fields = (\n \"email\",\n \"password\",\n \"first_name\",\n \"last_name\",\n \"date_of_birth\",\n \"phone\",\n \"is_active\",\n \"is_staff\",\n )\n\n def clean_password(self):\n # Regardless of what the user provides, return the initial value.\n # This is done here, rather than on the field, because the\n # field does not have access to the initial value\n return self.initial[\"password\"]\n\n\nclass GroupForm(ModelForm):\n publish_event_for_group = ModelMultipleChoiceField(\n label=_(\"Can publish event for group\"),\n queryset=Group.objects.all(),\n required=False,\n help_text=_(\"Choose groups that this group can make events visible for.\"),\n widget=Select2MultipleWidget,\n )\n can_view_past_event = BooleanField(label=_(\"Can view past events\"), required=False)\n can_add_event = BooleanField(label=_(\"Can add event\"), required=False)\n users = ModelMultipleChoiceField(\n label=_(\"Users\"), queryset=UserProfile.objects.all(), widget=MultiUserProfileWidget\n )\n\n field_order = [\n \"name\",\n \"users\",\n \"can_view_past_event\",\n \"can_add_event\",\n \"publish_event_for_group\",\n ]\n\n class Meta:\n model = Group\n fields = [\"name\"]\n\n def save(self, commit=True):\n group = super().save(commit)\n\n group.user_set.set(self.cleaned_data[\"users\"])\n\n if self.cleaned_data[\"can_view_past_event\"]:\n assign_perm(\"event_management.view_past_event\", group)\n else:\n remove_perm(\"event_management.view_past_event\", group)\n\n if self.cleaned_data[\"can_add_event\"]:\n assign_perm(\"event_management.add_event\", group)\n assign_perm(\"event_management.delete_event\", group)\n\n if \"publish_event_for_group\" in self.changed_data:\n for target_group in self.cleaned_data[\"publish_event_for_group\"].exlude(\n id__in=self.initial[\"publish_event_for_group\"]\n ):\n assign_perm(\"publish_event_for_group\", group, target_group)\n for target_group in self.initial[\"publish_event_for_group\"].exclude(\n id__in=self.cleaned_data[\"publish_event_for_group\"]\n ):\n remove_perm(\"publish_event_for_group\", group, target_group)\n else:\n remove_perm(\"event_management.add_event\", group)\n remove_perm(\"event_management.delete_event\", group)\n for target_group in Group.objects.all():\n remove_perm(\"publish_event_for_group\", group, target_group)\n\n\nclass UserProfileForm(ModelForm):\n groups = ModelMultipleChoiceField(\n label=_(\"Groups\"), queryset=Group.objects.all(), widget=Select2MultipleWidget\n )\n\n field_order = [\n \"email\",\n \"first_name\",\n \"last_name\",\n \"date_of_birth\",\n \"phone\",\n \"groups\",\n \"is_active\",\n ]\n\n class Meta:\n model = UserProfile\n fields = [\"email\", \"first_name\", \"last_name\", \"date_of_birth\", \"phone\", \"is_active\"]\n widgets = {\"date_of_birth\": CustomDateInput(format=\"%Y-%m-%d\")}\n help_texts = {\n \"is_active\": _(\"Inactive users cannot log in and do not get any notifications.\")\n }\n labels = {\"is_active\": _(\"Active user\")}\n\n def save(self, commit=True):\n userprofile = super().save(commit)\n userprofile.groups.set(self.cleaned_data[\"groups\"])\n userprofile.save()\n return userprofile\n\n\nclass QualificationGrantForm(ModelForm):\n model = QualificationGrant\n\n class Meta:\n fields = [\"qualification\", \"expires\"]\n widgets = {\"qualification\": Select2Widget, \"expires\": 
CustomDateInput(format=\"%Y-%m-%d\")}\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n instance = getattr(self, \"instance\", None)\n if instance and instance.pk:\n self.fields[\"qualification\"].disabled = True\n self.fields[\"qualification\"].widget = TextInput(\n attrs={\"class\": \"form-control-plaintext\"}\n )\n self.initial[\"qualification\"] = instance.qualification.title\n\n\nQualificationGrantFormset = inlineformset_factory(\n UserProfile,\n QualificationGrant,\n form=QualificationGrantForm,\n extra=0,\n)\n", "path": "ephios/user_management/forms.py"}], "after_files": [{"content": "from django.urls import path\n\nfrom ephios.user_management import views\n\napp_name = \"user_management\"\nurlpatterns = [\n path(\"profile/\", views.ProfileView.as_view(), name=\"profile\"),\n path(\"groups/\", views.GroupListView.as_view(), name=\"group_list\"),\n path(\"groups/<int:pk>/edit\", views.GroupUpdateView.as_view(), name=\"group_edit\"),\n path(\"groups/<int:pk>/delete\", views.GroupDeleteView.as_view(), name=\"group_delete\"),\n path(\"groups/create\", views.GroupCreateView.as_view(), name=\"group_add\"),\n path(\n \"users/\",\n views.UserProfileListView.as_view(),\n name=\"userprofile_list\",\n ),\n path(\n \"users/<int:pk>/edit\",\n views.UserProfileUpdateView.as_view(),\n name=\"userprofile_edit\",\n ),\n path(\n \"users/<int:pk>/delete\",\n views.UserProfileDeleteView.as_view(),\n name=\"userprofile_delete\",\n ),\n path(\n \"users/create/\",\n views.UserProfileCreateView.as_view(),\n name=\"userprofile_create\",\n ),\n]\n", "path": "ephios/user_management/urls.py"}, {"content": "from django.contrib import messages\nfrom django.contrib.auth.mixins import LoginRequiredMixin, PermissionRequiredMixin\nfrom django.contrib.auth.models import Group\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\nfrom django.views.generic import (\n CreateView,\n DeleteView,\n DetailView,\n ListView,\n TemplateView,\n UpdateView,\n)\nfrom django.views.generic.detail import SingleObjectMixin\nfrom guardian.shortcuts import get_objects_for_group\n\nfrom ephios.user_management import mail\nfrom ephios.user_management.forms import GroupForm, QualificationGrantFormset, UserProfileForm\nfrom ephios.user_management.models import UserProfile\n\n\nclass ProfileView(LoginRequiredMixin, DetailView):\n def get_object(self, queryset=None):\n return self.request.user\n\n\nclass UserProfileListView(PermissionRequiredMixin, ListView):\n model = UserProfile\n permission_required = \"user_management.view_userprofile\"\n\n\nclass UserProfileCreateView(PermissionRequiredMixin, TemplateView):\n template_name = \"user_management/userprofile_form.html\"\n permission_required = \"user_management.add_userprofile\"\n model = UserProfile\n\n def get_context_data(self, **kwargs):\n kwargs.setdefault(\"userprofile_form\", UserProfileForm(self.request.POST or None))\n kwargs.setdefault(\n \"qualification_formset\", QualificationGrantFormset(self.request.POST or None)\n )\n return super().get_context_data(**kwargs)\n\n def post(self, request, *args, **kwargs):\n userprofile_form = UserProfileForm(self.request.POST)\n qualification_formset = QualificationGrantFormset(self.request.POST)\n if all((userprofile_form.is_valid(), qualification_formset.is_valid())):\n userprofile = userprofile_form.save()\n qualification_formset.instance = userprofile\n qualification_formset.save()\n messages.success(\n self.request,\n _(\"User {name} 
({user}) added successfully.\").format(\n name=userprofile.get_full_name(), user=userprofile\n ),\n )\n if userprofile.is_active:\n mail.send_account_creation_info(userprofile)\n return redirect(reverse(\"user_management:userprofile_list\"))\n else:\n return self.render_to_response(\n self.get_context_data(\n userprofile_form=userprofile_form, qualification_formset=qualification_formset\n )\n )\n\n\nclass UserProfileUpdateView(PermissionRequiredMixin, SingleObjectMixin, TemplateView):\n model = UserProfile\n permission_required = \"user_management.change_userprofile\"\n template_name = \"user_management/userprofile_form.html\"\n\n def get_userprofile_form(self):\n return UserProfileForm(\n self.request.POST or None,\n initial={\n \"groups\": self.get_object().groups.all(),\n },\n instance=self.object,\n )\n\n def get_qualification_formset(self):\n return QualificationGrantFormset(self.request.POST or None, instance=self.object)\n\n def get_context_data(self, **kwargs):\n self.object = self.get_object()\n kwargs.setdefault(\"userprofile_form\", self.get_userprofile_form())\n kwargs.setdefault(\"qualification_formset\", self.get_qualification_formset())\n return super().get_context_data(**kwargs)\n\n def post(self, request, *args, **kwargs):\n self.object = self.get_object()\n userprofile_form = self.get_userprofile_form()\n qualification_formset = self.get_qualification_formset()\n if all((userprofile_form.is_valid(), qualification_formset.is_valid())):\n userprofile = userprofile_form.save()\n qualification_formset.save()\n messages.success(\n self.request,\n _(\"User {name} ({user}) updated successfully.\").format(\n name=self.object.get_full_name(), user=self.object\n ),\n )\n if userprofile.is_active:\n mail.send_account_update_info(userprofile)\n return redirect(reverse(\"user_management:userprofile_list\"))\n else:\n return self.render_to_response(\n self.get_context_data(\n userprofile_form=userprofile_form, qualification_formset=qualification_formset\n )\n )\n\n\nclass UserProfileDeleteView(PermissionRequiredMixin, DeleteView):\n model = UserProfile\n permission_required = \"user_management.delete_userprofile\"\n template_name = \"user_management/userprofile_confirm_delete.html\"\n\n def get_success_url(self):\n messages.info(\n self.request,\n _(\"The user {name} ({user}) was deleted.\").format(\n name=self.object.get_full_name(), user=self.object\n ),\n )\n return reverse(\"user_management:userprofile_list\")\n\n\nclass GroupListView(PermissionRequiredMixin, ListView):\n model = Group\n permission_required = \"auth.view_group\"\n template_name = \"user_management/group_list.html\"\n\n\nclass GroupCreateView(PermissionRequiredMixin, CreateView):\n model = Group\n permission_required = \"auth.add_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"initial\"] = {\n \"users\": UserProfile.objects.none(),\n \"can_add_event\": False,\n \"publish_event_for_group\": Group.objects.none(),\n }\n return kwargs\n\n def get_success_url(self):\n messages.success(\n self.request, _('Group \"{group}\" created successfully.').format(group=self.object)\n )\n return reverse(\"user_management:group_list\")\n\n\nclass GroupUpdateView(PermissionRequiredMixin, UpdateView):\n model = Group\n permission_required = \"auth.change_group\"\n template_name = \"user_management/group_form.html\"\n form_class = GroupForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n 
kwargs[\"initial\"] = {\n \"users\": self.object.user_set.all(),\n \"can_view_past_event\": self.object.permissions.filter(\n codename=\"view_past_event\"\n ).exists(),\n \"can_add_event\": self.object.permissions.filter(codename=\"add_event\").exists(),\n \"publish_event_for_group\": get_objects_for_group(\n self.object, \"publish_event_for_group\", klass=Group\n ),\n }\n return kwargs\n\n def get_success_url(self):\n messages.success(\n self.request, _('Group \"{group}\" updated successfully.').format(group=self.object)\n )\n return reverse(\"user_management:group_list\")\n\n\nclass GroupDeleteView(PermissionRequiredMixin, DeleteView):\n model = Group\n permission_required = \"auth.delete_group\"\n template_name = \"user_management/group_confirm_delete.html\"\n\n def get_success_url(self):\n messages.info(self.request, _('The group \"{group}\" was deleted.').format(group=self.object))\n return reverse(\"user_management:group_list\")\n", "path": "ephios/user_management/views.py"}, {"content": "from django import forms\nfrom django.contrib.auth.forms import ReadOnlyPasswordHashField\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.password_validation import validate_password\nfrom django.forms import (\n BooleanField,\n ModelForm,\n ModelMultipleChoiceField,\n TextInput,\n inlineformset_factory,\n)\nfrom django.utils.translation import gettext as _\nfrom django_select2.forms import Select2MultipleWidget, Select2Widget\nfrom guardian.shortcuts import assign_perm, remove_perm\n\nfrom ephios.extra.widgets import CustomDateInput\nfrom ephios.user_management.models import QualificationGrant, UserProfile\nfrom ephios.user_management.widgets import MultiUserProfileWidget\n\n\nclass UserCreationForm(forms.ModelForm):\n \"\"\"A form for creating new users. Includes all the required\n fields, plus a repeated password.\"\"\"\n\n password = forms.CharField(label=_(\"Password\"), widget=forms.PasswordInput)\n password_validation = forms.CharField(\n label=_(\"Password confirmation\"), widget=forms.PasswordInput\n )\n field_order = [\"email\", \"password\", \"password_validation\"]\n\n class Meta:\n model = UserProfile\n fields = (\n \"email\",\n \"first_name\",\n \"last_name\",\n \"date_of_birth\",\n \"phone\",\n )\n\n def clean_password_validation(self):\n # Check that the two password entries match\n password = self.cleaned_data.get(\"password\")\n password_validation = self.cleaned_data.get(\"password_validation\")\n if password and password_validation and password != password_validation:\n raise forms.ValidationError(_(\"Passwords don't match\"))\n return password_validation\n\n def _post_clean(self):\n super()._post_clean()\n # Validate the password after self.instance is updated with form data\n # by super().\n password = self.cleaned_data.get(\"password_validation\")\n if password:\n try:\n validate_password(password, self.instance)\n except forms.ValidationError as error:\n self.add_error(\"password\", error)\n\n def save(self, commit=True):\n # Save the provided password in hashed format\n user = super().save(commit=False)\n user.set_password(self.cleaned_data[\"password\"])\n if commit:\n user.save()\n return user\n\n\nclass UserChangeForm(forms.ModelForm):\n \"\"\"A form for updating users. 
Includes all the fields on\n the user, but replaces the password field with admin's\n password hash display field.\n \"\"\"\n\n password = ReadOnlyPasswordHashField()\n\n class Meta:\n model = UserProfile\n fields = (\n \"email\",\n \"password\",\n \"first_name\",\n \"last_name\",\n \"date_of_birth\",\n \"phone\",\n \"is_active\",\n \"is_staff\",\n )\n\n def clean_password(self):\n # Regardless of what the user provides, return the initial value.\n # This is done here, rather than on the field, because the\n # field does not have access to the initial value\n return self.initial[\"password\"]\n\n\nclass GroupForm(ModelForm):\n publish_event_for_group = ModelMultipleChoiceField(\n label=_(\"Can publish event for group\"),\n queryset=Group.objects.all(),\n required=False,\n help_text=_(\"Choose groups that this group can make events visible for.\"),\n widget=Select2MultipleWidget,\n )\n can_view_past_event = BooleanField(label=_(\"Can view past events\"), required=False)\n can_add_event = BooleanField(label=_(\"Can add event\"), required=False)\n users = ModelMultipleChoiceField(\n label=_(\"Users\"), queryset=UserProfile.objects.all(), widget=MultiUserProfileWidget\n )\n\n field_order = [\n \"name\",\n \"users\",\n \"can_view_past_event\",\n \"can_add_event\",\n \"publish_event_for_group\",\n ]\n\n class Meta:\n model = Group\n fields = [\"name\"]\n\n def save(self, commit=True):\n group = super().save(commit)\n\n group.user_set.set(self.cleaned_data[\"users\"])\n\n if self.cleaned_data[\"can_view_past_event\"]:\n assign_perm(\"event_management.view_past_event\", group)\n else:\n remove_perm(\"event_management.view_past_event\", group)\n\n if self.cleaned_data[\"can_add_event\"]:\n assign_perm(\"event_management.add_event\", group)\n assign_perm(\"event_management.delete_event\", group)\n\n if \"publish_event_for_group\" in self.changed_data:\n for target_group in self.cleaned_data[\"publish_event_for_group\"].exlude(\n id__in=self.initial[\"publish_event_for_group\"]\n ):\n assign_perm(\"publish_event_for_group\", group, target_group)\n for target_group in self.initial[\"publish_event_for_group\"].exclude(\n id__in=self.cleaned_data[\"publish_event_for_group\"]\n ):\n remove_perm(\"publish_event_for_group\", group, target_group)\n else:\n remove_perm(\"event_management.add_event\", group)\n remove_perm(\"event_management.delete_event\", group)\n for target_group in Group.objects.all():\n remove_perm(\"publish_event_for_group\", group, target_group)\n\n return group\n\n\nclass UserProfileForm(ModelForm):\n groups = ModelMultipleChoiceField(\n label=_(\"Groups\"), queryset=Group.objects.all(), widget=Select2MultipleWidget\n )\n\n field_order = [\n \"email\",\n \"first_name\",\n \"last_name\",\n \"date_of_birth\",\n \"phone\",\n \"groups\",\n \"is_active\",\n ]\n\n class Meta:\n model = UserProfile\n fields = [\"email\", \"first_name\", \"last_name\", \"date_of_birth\", \"phone\", \"is_active\"]\n widgets = {\"date_of_birth\": CustomDateInput(format=\"%Y-%m-%d\")}\n help_texts = {\n \"is_active\": _(\"Inactive users cannot log in and do not get any notifications.\")\n }\n labels = {\"is_active\": _(\"Active user\")}\n\n def save(self, commit=True):\n userprofile = super().save(commit)\n userprofile.groups.set(self.cleaned_data[\"groups\"])\n userprofile.save()\n return userprofile\n\n\nclass QualificationGrantForm(ModelForm):\n model = QualificationGrant\n\n class Meta:\n fields = [\"qualification\", \"expires\"]\n widgets = {\"qualification\": Select2Widget, \"expires\": 
CustomDateInput(format=\"%Y-%m-%d\")}\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n instance = getattr(self, \"instance\", None)\n if instance and instance.pk:\n self.fields[\"qualification\"].disabled = True\n self.fields[\"qualification\"].widget = TextInput(\n attrs={\"class\": \"form-control-plaintext\"}\n )\n self.initial[\"qualification\"] = instance.qualification.title\n\n\nQualificationGrantFormset = inlineformset_factory(\n UserProfile,\n QualificationGrant,\n form=QualificationGrantForm,\n extra=0,\n)\n", "path": "ephios/user_management/forms.py"}]} | 4,093 | 886 |
gh_patches_debug_12825 | rasdani/github-patches | git_diff | fossasia__open-event-server-4403 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Verify Email endpoint doesn’t work.
**I'm submitting a ...** (check one with "x")
- [X] bug report
- [ ] feature request
- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-orga-server
There are two specific errors in the function.
- [x] Firstly the email verification state change isn’t saved in db
- [x] Response for the view function is missing.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/auth.py`
Content:
```
1 import base64
2 from flask import request, jsonify, abort, make_response, Blueprint
3 from flask_jwt import current_identity as current_user, jwt_required
4 from sqlalchemy.orm.exc import NoResultFound
5
6 from app import get_settings
7 from app.api.helpers.db import save_to_db
8 from app.api.helpers.files import make_frontend_url
9 from app.api.helpers.mail import send_email_with_action
10
11 from app.api.helpers.utilities import get_serializer
12 from app.models.mail import PASSWORD_RESET
13 from app.models.user import User
14
15 auth_routes = Blueprint('auth', __name__, url_prefix='/v1/auth')
16
17
18 @auth_routes.route('/verify-email', methods=['POST'])
19 def verify_email():
20 token = base64.b64decode(request.json['data']['token'])
21 s = get_serializer()
22
23 try:
24 data = s.loads(token)
25 except Exception:
26 return abort(
27 make_response(jsonify(error="Invalid Token"), 400)
28 )
29
30 try:
31 user = User.query.filter_by(email=data[0]).one()
32 except Exception:
33 return abort(
34 make_response(jsonify(error="Invalid Token"), 400)
35 )
36 else:
37 user.is_verified = True
38
39
40 @auth_routes.route('/reset-password', methods=['POST'])
41 def reset_password_post():
42 email = request.json['data']['email']
43
44 try:
45 user = User.query.filter_by(email=email).one()
46 except NoResultFound:
47 return abort(
48 make_response(jsonify(error="User not found"), 404)
49 )
50 else:
51 link = make_frontend_url('/reset-password', {'token': user.reset_password})
52 send_email_with_action(user, PASSWORD_RESET, app_name=get_settings()['app_name'], link=link)
53
54 return make_response(jsonify(message="Email Sent"), 200)
55
56
57 @auth_routes.route('/reset-password', methods=['PATCH'])
58 def reset_password_patch():
59 token = request.json['data']['token']
60 password = request.json['data']['password']
61
62 try:
63 user = User.query.filter_by(reset_password=token).one()
64 except NoResultFound:
65 return abort(
66 make_response(jsonify(error="User not found"), 404)
67 )
68 else:
69 user.password = password
70 save_to_db(user)
71
72 return jsonify({
73 "id": user.id,
74 "email": user.email,
75 "name": user.name if user.get('name') else None
76 })
77
78
79 @auth_routes.route('/change-password', methods=['POST'])
80 @jwt_required()
81 def change_password():
82 old_password = request.json['data']['old-password']
83 new_password = request.json['data']['new-password']
84
85 try:
86 user = User.query.filter_by(id=current_user.id).one()
87 except NoResultFound:
88 return abort(
89 make_response(jsonify(error="User not found"), 404)
90 )
91 else:
92 if user.is_correct_password(old_password):
93
94 user.password = new_password
95 save_to_db(user)
96 else:
97 return make_response(jsonify(error="Wrong Password"), 400)
98
99 return jsonify({
100 "id": user.id,
101 "email": user.email,
102 "name": user.fullname if user.fullname else None,
103 "password_changed": True
104 })
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/auth.py b/app/api/auth.py
--- a/app/api/auth.py
+++ b/app/api/auth.py
@@ -35,6 +35,8 @@
)
else:
user.is_verified = True
+ save_to_db(user)
+ return make_response(jsonify(message="Email Verified"), 200)
@auth_routes.route('/reset-password', methods=['POST'])
@@ -94,7 +96,9 @@
user.password = new_password
save_to_db(user)
else:
- return make_response(jsonify(error="Wrong Password"), 400)
+ return abort(
+ make_response(jsonify(error="Wrong Password"), 400)
+ )
return jsonify({
"id": user.id,
| {"golden_diff": "diff --git a/app/api/auth.py b/app/api/auth.py\n--- a/app/api/auth.py\n+++ b/app/api/auth.py\n@@ -35,6 +35,8 @@\n )\n else:\n user.is_verified = True\n+ save_to_db(user)\n+ return make_response(jsonify(message=\"Email Verified\"), 200)\n \n \n @auth_routes.route('/reset-password', methods=['POST'])\n@@ -94,7 +96,9 @@\n user.password = new_password\n save_to_db(user)\n else:\n- return make_response(jsonify(error=\"Wrong Password\"), 400)\n+ return abort(\n+ make_response(jsonify(error=\"Wrong Password\"), 400)\n+ )\n \n return jsonify({\n \"id\": user.id,\n", "issue": " Verify Email endpoint doesn\u2019t work.\n**I'm submitting a ...** (check one with \"x\")\r\n- [X] bug report\r\n- [ ] feature request\r\n- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-orga-server\r\n\r\nThere are two specific errors in the function.\r\n- [x] Firstly the email verification state change isn\u2019t saved in db\r\n- [x] Response for the view function is missing.\r\n\n", "before_files": [{"content": "import base64\nfrom flask import request, jsonify, abort, make_response, Blueprint\nfrom flask_jwt import current_identity as current_user, jwt_required\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom app import get_settings\nfrom app.api.helpers.db import save_to_db\nfrom app.api.helpers.files import make_frontend_url\nfrom app.api.helpers.mail import send_email_with_action\n\nfrom app.api.helpers.utilities import get_serializer\nfrom app.models.mail import PASSWORD_RESET\nfrom app.models.user import User\n\nauth_routes = Blueprint('auth', __name__, url_prefix='/v1/auth')\n\n\n@auth_routes.route('/verify-email', methods=['POST'])\ndef verify_email():\n token = base64.b64decode(request.json['data']['token'])\n s = get_serializer()\n\n try:\n data = s.loads(token)\n except Exception:\n return abort(\n make_response(jsonify(error=\"Invalid Token\"), 400)\n )\n\n try:\n user = User.query.filter_by(email=data[0]).one()\n except Exception:\n return abort(\n make_response(jsonify(error=\"Invalid Token\"), 400)\n )\n else:\n user.is_verified = True\n\n\n@auth_routes.route('/reset-password', methods=['POST'])\ndef reset_password_post():\n email = request.json['data']['email']\n\n try:\n user = User.query.filter_by(email=email).one()\n except NoResultFound:\n return abort(\n make_response(jsonify(error=\"User not found\"), 404)\n )\n else:\n link = make_frontend_url('/reset-password', {'token': user.reset_password})\n send_email_with_action(user, PASSWORD_RESET, app_name=get_settings()['app_name'], link=link)\n\n return make_response(jsonify(message=\"Email Sent\"), 200)\n\n\n@auth_routes.route('/reset-password', methods=['PATCH'])\ndef reset_password_patch():\n token = request.json['data']['token']\n password = request.json['data']['password']\n\n try:\n user = User.query.filter_by(reset_password=token).one()\n except NoResultFound:\n return abort(\n make_response(jsonify(error=\"User not found\"), 404)\n )\n else:\n user.password = password\n save_to_db(user)\n\n return jsonify({\n \"id\": user.id,\n \"email\": user.email,\n \"name\": user.name if user.get('name') else None\n })\n\n\n@auth_routes.route('/change-password', methods=['POST'])\n@jwt_required()\ndef change_password():\n old_password = request.json['data']['old-password']\n new_password = request.json['data']['new-password']\n\n try:\n user = User.query.filter_by(id=current_user.id).one()\n except NoResultFound:\n return abort(\n 
make_response(jsonify(error=\"User not found\"), 404)\n )\n else:\n if user.is_correct_password(old_password):\n\n user.password = new_password\n save_to_db(user)\n else:\n return make_response(jsonify(error=\"Wrong Password\"), 400)\n\n return jsonify({\n \"id\": user.id,\n \"email\": user.email,\n \"name\": user.fullname if user.fullname else None,\n \"password_changed\": True\n })\n", "path": "app/api/auth.py"}], "after_files": [{"content": "import base64\nfrom flask import request, jsonify, abort, make_response, Blueprint\nfrom flask_jwt import current_identity as current_user, jwt_required\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom app import get_settings\nfrom app.api.helpers.db import save_to_db\nfrom app.api.helpers.files import make_frontend_url\nfrom app.api.helpers.mail import send_email_with_action\n\nfrom app.api.helpers.utilities import get_serializer\nfrom app.models.mail import PASSWORD_RESET\nfrom app.models.user import User\n\nauth_routes = Blueprint('auth', __name__, url_prefix='/v1/auth')\n\n\n@auth_routes.route('/verify-email', methods=['POST'])\ndef verify_email():\n token = base64.b64decode(request.json['data']['token'])\n s = get_serializer()\n\n try:\n data = s.loads(token)\n except Exception:\n return abort(\n make_response(jsonify(error=\"Invalid Token\"), 400)\n )\n\n try:\n user = User.query.filter_by(email=data[0]).one()\n except Exception:\n return abort(\n make_response(jsonify(error=\"Invalid Token\"), 400)\n )\n else:\n user.is_verified = True\n save_to_db(user)\n return make_response(jsonify(message=\"Email Verified\"), 200)\n\n\n@auth_routes.route('/reset-password', methods=['POST'])\ndef reset_password_post():\n email = request.json['data']['email']\n\n try:\n user = User.query.filter_by(email=email).one()\n except NoResultFound:\n return abort(\n make_response(jsonify(error=\"User not found\"), 404)\n )\n else:\n link = make_frontend_url('/reset-password', {'token': user.reset_password})\n send_email_with_action(user, PASSWORD_RESET, app_name=get_settings()['app_name'], link=link)\n\n return make_response(jsonify(message=\"Email Sent\"), 200)\n\n\n@auth_routes.route('/reset-password', methods=['PATCH'])\ndef reset_password_patch():\n token = request.json['data']['token']\n password = request.json['data']['password']\n\n try:\n user = User.query.filter_by(reset_password=token).one()\n except NoResultFound:\n return abort(\n make_response(jsonify(error=\"User not found\"), 404)\n )\n else:\n user.password = password\n save_to_db(user)\n\n return jsonify({\n \"id\": user.id,\n \"email\": user.email,\n \"name\": user.name if user.get('name') else None\n })\n\n\n@auth_routes.route('/change-password', methods=['POST'])\n@jwt_required()\ndef change_password():\n old_password = request.json['data']['old-password']\n new_password = request.json['data']['new-password']\n\n try:\n user = User.query.filter_by(id=current_user.id).one()\n except NoResultFound:\n return abort(\n make_response(jsonify(error=\"User not found\"), 404)\n )\n else:\n if user.is_correct_password(old_password):\n\n user.password = new_password\n save_to_db(user)\n else:\n return abort(\n make_response(jsonify(error=\"Wrong Password\"), 400)\n )\n\n return jsonify({\n \"id\": user.id,\n \"email\": user.email,\n \"name\": user.fullname if user.fullname else None,\n \"password_changed\": True\n })\n", "path": "app/api/auth.py"}]} | 1,277 | 170 |
gh_patches_debug_37390 | rasdani/github-patches | git_diff | vllm-project__vllm-1239 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Installation Error
When I install vLLM, I have pull the docker as the tutorial and install from source. But there are an error encountered, how can I fix this:
`ValueError: Unsupported CUDA arch (5.2). Valid CUDA arch strings are: ['7.0', '7.5', '8.0', '8.6', '8.9', '9.0', '7.0+PTX', '7.5+PTX', '8.0+PTX', '8.6+PTX', '8.9+PTX', '9.0+PTX'].`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import io
2 import os
3 import re
4 import subprocess
5 from typing import List, Set
6 import warnings
7
8 from packaging.version import parse, Version
9 import setuptools
10 import torch
11 from torch.utils.cpp_extension import BuildExtension, CUDAExtension, CUDA_HOME
12
13 ROOT_DIR = os.path.dirname(__file__)
14
15 # Supported NVIDIA GPU architectures.
16 SUPPORTED_ARCHS = ["7.0", "7.5", "8.0", "8.6", "8.9", "9.0"]
17
18 # Compiler flags.
19 CXX_FLAGS = ["-g", "-O2", "-std=c++17"]
20 # TODO(woosuk): Should we use -O3?
21 NVCC_FLAGS = ["-O2", "-std=c++17"]
22
23 ABI = 1 if torch._C._GLIBCXX_USE_CXX11_ABI else 0
24 CXX_FLAGS += [f"-D_GLIBCXX_USE_CXX11_ABI={ABI}"]
25 NVCC_FLAGS += [f"-D_GLIBCXX_USE_CXX11_ABI={ABI}"]
26
27 if CUDA_HOME is None:
28 raise RuntimeError(
29 "Cannot find CUDA_HOME. CUDA must be available to build the package.")
30
31
32 def get_nvcc_cuda_version(cuda_dir: str) -> Version:
33 """Get the CUDA version from nvcc.
34
35 Adapted from https://github.com/NVIDIA/apex/blob/8b7a1ff183741dd8f9b87e7bafd04cfde99cea28/setup.py
36 """
37 nvcc_output = subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"],
38 universal_newlines=True)
39 output = nvcc_output.split()
40 release_idx = output.index("release") + 1
41 nvcc_cuda_version = parse(output[release_idx].split(",")[0])
42 return nvcc_cuda_version
43
44
45 def get_torch_arch_list() -> Set[str]:
46 # TORCH_CUDA_ARCH_LIST can have one or more architectures,
47 # e.g. "8.0" or "7.5,8.0,8.6+PTX". Here, the "8.6+PTX" option asks the
48 # compiler to additionally include PTX code that can be runtime-compiled
49 # and executed on the 8.6 or newer architectures. While the PTX code will
50 # not give the best performance on the newer architectures, it provides
51 # forward compatibility.
52 valid_arch_strs = SUPPORTED_ARCHS + [s + "+PTX" for s in SUPPORTED_ARCHS]
53 arch_list = os.environ.get("TORCH_CUDA_ARCH_LIST", None)
54 if arch_list is None:
55 return set()
56
57 # List are separated by ; or space.
58 arch_list = arch_list.replace(" ", ";").split(";")
59 for arch in arch_list:
60 if arch not in valid_arch_strs:
61 raise ValueError(
62 f"Unsupported CUDA arch ({arch}). "
63 f"Valid CUDA arch strings are: {valid_arch_strs}.")
64 return set(arch_list)
65
66
67 # First, check the TORCH_CUDA_ARCH_LIST environment variable.
68 compute_capabilities = get_torch_arch_list()
69 if not compute_capabilities:
70 # If TORCH_CUDA_ARCH_LIST is not defined or empty, target all available
71 # GPUs on the current machine.
72 device_count = torch.cuda.device_count()
73 for i in range(device_count):
74 major, minor = torch.cuda.get_device_capability(i)
75 if major < 7:
76 raise RuntimeError(
77 "GPUs with compute capability below 7.0 are not supported.")
78 compute_capabilities.add(f"{major}.{minor}")
79
80 nvcc_cuda_version = get_nvcc_cuda_version(CUDA_HOME)
81 if not compute_capabilities:
82 # If no GPU is specified nor available, add all supported architectures
83 # based on the NVCC CUDA version.
84 compute_capabilities = set(SUPPORTED_ARCHS)
85 if nvcc_cuda_version < Version("11.1"):
86 compute_capabilities.remove("8.6")
87 if nvcc_cuda_version < Version("11.8"):
88 compute_capabilities.remove("8.9")
89 compute_capabilities.remove("9.0")
90
91 # Validate the NVCC CUDA version.
92 if nvcc_cuda_version < Version("11.0"):
93 raise RuntimeError("CUDA 11.0 or higher is required to build the package.")
94 if nvcc_cuda_version < Version("11.1"):
95 if any(cc.startswith("8.6") for cc in compute_capabilities):
96 raise RuntimeError(
97 "CUDA 11.1 or higher is required for compute capability 8.6.")
98 if nvcc_cuda_version < Version("11.8"):
99 if any(cc.startswith("8.9") for cc in compute_capabilities):
100 # CUDA 11.8 is required to generate the code targeting compute capability 8.9.
101 # However, GPUs with compute capability 8.9 can also run the code generated by
102 # the previous versions of CUDA 11 and targeting compute capability 8.0.
103 # Therefore, if CUDA 11.8 is not available, we target compute capability 8.0
104 # instead of 8.9.
105 warnings.warn(
106 "CUDA 11.8 or higher is required for compute capability 8.9. "
107 "Targeting compute capability 8.0 instead.")
108 compute_capabilities = set(cc for cc in compute_capabilities
109 if not cc.startswith("8.9"))
110 compute_capabilities.add("8.0+PTX")
111 if any(cc.startswith("9.0") for cc in compute_capabilities):
112 raise RuntimeError(
113 "CUDA 11.8 or higher is required for compute capability 9.0.")
114
115 # Add target compute capabilities to NVCC flags.
116 for capability in compute_capabilities:
117 num = capability[0] + capability[2]
118 NVCC_FLAGS += ["-gencode", f"arch=compute_{num},code=sm_{num}"]
119 if capability.endswith("+PTX"):
120 NVCC_FLAGS += ["-gencode", f"arch=compute_{num},code=compute_{num}"]
121
122 # Use NVCC threads to parallelize the build.
123 if nvcc_cuda_version >= Version("11.2"):
124 num_threads = min(os.cpu_count(), 8)
125 NVCC_FLAGS += ["--threads", str(num_threads)]
126
127 ext_modules = []
128
129 # Cache operations.
130 cache_extension = CUDAExtension(
131 name="vllm.cache_ops",
132 sources=["csrc/cache.cpp", "csrc/cache_kernels.cu"],
133 extra_compile_args={
134 "cxx": CXX_FLAGS,
135 "nvcc": NVCC_FLAGS,
136 },
137 )
138 ext_modules.append(cache_extension)
139
140 # Attention kernels.
141 attention_extension = CUDAExtension(
142 name="vllm.attention_ops",
143 sources=["csrc/attention.cpp", "csrc/attention/attention_kernels.cu"],
144 extra_compile_args={
145 "cxx": CXX_FLAGS,
146 "nvcc": NVCC_FLAGS,
147 },
148 )
149 ext_modules.append(attention_extension)
150
151 # Positional encoding kernels.
152 positional_encoding_extension = CUDAExtension(
153 name="vllm.pos_encoding_ops",
154 sources=["csrc/pos_encoding.cpp", "csrc/pos_encoding_kernels.cu"],
155 extra_compile_args={
156 "cxx": CXX_FLAGS,
157 "nvcc": NVCC_FLAGS,
158 },
159 )
160 ext_modules.append(positional_encoding_extension)
161
162 # Layer normalization kernels.
163 layernorm_extension = CUDAExtension(
164 name="vllm.layernorm_ops",
165 sources=["csrc/layernorm.cpp", "csrc/layernorm_kernels.cu"],
166 extra_compile_args={
167 "cxx": CXX_FLAGS,
168 "nvcc": NVCC_FLAGS,
169 },
170 )
171 ext_modules.append(layernorm_extension)
172
173 # Activation kernels.
174 activation_extension = CUDAExtension(
175 name="vllm.activation_ops",
176 sources=["csrc/activation.cpp", "csrc/activation_kernels.cu"],
177 extra_compile_args={
178 "cxx": CXX_FLAGS,
179 "nvcc": NVCC_FLAGS,
180 },
181 )
182 ext_modules.append(activation_extension)
183
184 # Quantization kernels.
185 quantization_extension = CUDAExtension(
186 name="vllm.quantization_ops",
187 sources=[
188 "csrc/quantization.cpp",
189 "csrc/quantization/awq/gemm_kernels.cu",
190 ],
191 extra_compile_args={
192 "cxx": CXX_FLAGS,
193 "nvcc": NVCC_FLAGS,
194 },
195 )
196 ext_modules.append(quantization_extension)
197
198 # Misc. CUDA utils.
199 cuda_utils_extension = CUDAExtension(
200 name="vllm.cuda_utils",
201 sources=["csrc/cuda_utils.cpp", "csrc/cuda_utils_kernels.cu"],
202 extra_compile_args={
203 "cxx": CXX_FLAGS,
204 "nvcc": NVCC_FLAGS,
205 },
206 )
207 ext_modules.append(cuda_utils_extension)
208
209
210 def get_path(*filepath) -> str:
211 return os.path.join(ROOT_DIR, *filepath)
212
213
214 def find_version(filepath: str):
215 """Extract version information from the given filepath.
216
217 Adapted from https://github.com/ray-project/ray/blob/0b190ee1160eeca9796bc091e07eaebf4c85b511/python/setup.py
218 """
219 with open(filepath) as fp:
220 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
221 fp.read(), re.M)
222 if version_match:
223 return version_match.group(1)
224 raise RuntimeError("Unable to find version string.")
225
226
227 def read_readme() -> str:
228 """Read the README file."""
229 return io.open(get_path("README.md"), "r", encoding="utf-8").read()
230
231
232 def get_requirements() -> List[str]:
233 """Get Python package dependencies from requirements.txt."""
234 with open(get_path("requirements.txt")) as f:
235 requirements = f.read().strip().split("\n")
236 return requirements
237
238
239 setuptools.setup(
240 name="vllm",
241 version=find_version(get_path("vllm", "__init__.py")),
242 author="vLLM Team",
243 license="Apache 2.0",
244 description=("A high-throughput and memory-efficient inference and "
245 "serving engine for LLMs"),
246 long_description=read_readme(),
247 long_description_content_type="text/markdown",
248 url="https://github.com/vllm-project/vllm",
249 project_urls={
250 "Homepage": "https://github.com/vllm-project/vllm",
251 "Documentation": "https://vllm.readthedocs.io/en/latest/",
252 },
253 classifiers=[
254 "Programming Language :: Python :: 3.8",
255 "Programming Language :: Python :: 3.9",
256 "Programming Language :: Python :: 3.10",
257 "Programming Language :: Python :: 3.11",
258 "License :: OSI Approved :: Apache Software License",
259 "Topic :: Scientific/Engineering :: Artificial Intelligence",
260 ],
261 packages=setuptools.find_packages(exclude=("benchmarks", "csrc", "docs",
262 "examples", "tests")),
263 python_requires=">=3.8",
264 install_requires=get_requirements(),
265 ext_modules=ext_modules,
266 cmdclass={"build_ext": BuildExtension},
267 )
268
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -13,7 +13,7 @@
ROOT_DIR = os.path.dirname(__file__)
# Supported NVIDIA GPU architectures.
-SUPPORTED_ARCHS = ["7.0", "7.5", "8.0", "8.6", "8.9", "9.0"]
+SUPPORTED_ARCHS = {"7.0", "7.5", "8.0", "8.6", "8.9", "9.0"}
# Compiler flags.
CXX_FLAGS = ["-g", "-O2", "-std=c++17"]
@@ -49,19 +49,32 @@
# and executed on the 8.6 or newer architectures. While the PTX code will
# not give the best performance on the newer architectures, it provides
# forward compatibility.
- valid_arch_strs = SUPPORTED_ARCHS + [s + "+PTX" for s in SUPPORTED_ARCHS]
- arch_list = os.environ.get("TORCH_CUDA_ARCH_LIST", None)
- if arch_list is None:
+ env_arch_list = os.environ.get("TORCH_CUDA_ARCH_LIST", None)
+ if env_arch_list is None:
return set()
# List are separated by ; or space.
- arch_list = arch_list.replace(" ", ";").split(";")
- for arch in arch_list:
- if arch not in valid_arch_strs:
- raise ValueError(
- f"Unsupported CUDA arch ({arch}). "
- f"Valid CUDA arch strings are: {valid_arch_strs}.")
- return set(arch_list)
+ torch_arch_list = set(env_arch_list.replace(" ", ";").split(";"))
+ if not torch_arch_list:
+ return set()
+
+ # Filter out the invalid architectures and print a warning.
+ valid_archs = SUPPORTED_ARCHS.union({s + "+PTX" for s in SUPPORTED_ARCHS})
+ arch_list = torch_arch_list.intersection(valid_archs)
+ # If none of the specified architectures are valid, raise an error.
+ if not arch_list:
+ raise RuntimeError(
+ "None of the CUDA architectures in `TORCH_CUDA_ARCH_LIST` env "
+ f"variable ({env_arch_list}) is supported. "
+ f"Supported CUDA architectures are: {valid_archs}.")
+ invalid_arch_list = torch_arch_list - valid_archs
+ if invalid_arch_list:
+ warnings.warn(
+ f"Unsupported CUDA architectures ({invalid_arch_list}) are "
+ "excluded from the `TORCH_CUDA_ARCH_LIST` env variable "
+ f"({env_arch_list}). Supported CUDA architectures are: "
+ f"{valid_archs}.")
+ return arch_list
# First, check the TORCH_CUDA_ARCH_LIST environment variable.
@@ -81,7 +94,7 @@
if not compute_capabilities:
# If no GPU is specified nor available, add all supported architectures
# based on the NVCC CUDA version.
- compute_capabilities = set(SUPPORTED_ARCHS)
+ compute_capabilities = SUPPORTED_ARCHS.copy()
if nvcc_cuda_version < Version("11.1"):
compute_capabilities.remove("8.6")
if nvcc_cuda_version < Version("11.8"):
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -13,7 +13,7 @@\n ROOT_DIR = os.path.dirname(__file__)\n \n # Supported NVIDIA GPU architectures.\n-SUPPORTED_ARCHS = [\"7.0\", \"7.5\", \"8.0\", \"8.6\", \"8.9\", \"9.0\"]\n+SUPPORTED_ARCHS = {\"7.0\", \"7.5\", \"8.0\", \"8.6\", \"8.9\", \"9.0\"}\n \n # Compiler flags.\n CXX_FLAGS = [\"-g\", \"-O2\", \"-std=c++17\"]\n@@ -49,19 +49,32 @@\n # and executed on the 8.6 or newer architectures. While the PTX code will\n # not give the best performance on the newer architectures, it provides\n # forward compatibility.\n- valid_arch_strs = SUPPORTED_ARCHS + [s + \"+PTX\" for s in SUPPORTED_ARCHS]\n- arch_list = os.environ.get(\"TORCH_CUDA_ARCH_LIST\", None)\n- if arch_list is None:\n+ env_arch_list = os.environ.get(\"TORCH_CUDA_ARCH_LIST\", None)\n+ if env_arch_list is None:\n return set()\n \n # List are separated by ; or space.\n- arch_list = arch_list.replace(\" \", \";\").split(\";\")\n- for arch in arch_list:\n- if arch not in valid_arch_strs:\n- raise ValueError(\n- f\"Unsupported CUDA arch ({arch}). \"\n- f\"Valid CUDA arch strings are: {valid_arch_strs}.\")\n- return set(arch_list)\n+ torch_arch_list = set(env_arch_list.replace(\" \", \";\").split(\";\"))\n+ if not torch_arch_list:\n+ return set()\n+\n+ # Filter out the invalid architectures and print a warning.\n+ valid_archs = SUPPORTED_ARCHS.union({s + \"+PTX\" for s in SUPPORTED_ARCHS})\n+ arch_list = torch_arch_list.intersection(valid_archs)\n+ # If none of the specified architectures are valid, raise an error.\n+ if not arch_list:\n+ raise RuntimeError(\n+ \"None of the CUDA architectures in `TORCH_CUDA_ARCH_LIST` env \"\n+ f\"variable ({env_arch_list}) is supported. \"\n+ f\"Supported CUDA architectures are: {valid_archs}.\")\n+ invalid_arch_list = torch_arch_list - valid_archs\n+ if invalid_arch_list:\n+ warnings.warn(\n+ f\"Unsupported CUDA architectures ({invalid_arch_list}) are \"\n+ \"excluded from the `TORCH_CUDA_ARCH_LIST` env variable \"\n+ f\"({env_arch_list}). Supported CUDA architectures are: \"\n+ f\"{valid_archs}.\")\n+ return arch_list\n \n \n # First, check the TORCH_CUDA_ARCH_LIST environment variable.\n@@ -81,7 +94,7 @@\n if not compute_capabilities:\n # If no GPU is specified nor available, add all supported architectures\n # based on the NVCC CUDA version.\n- compute_capabilities = set(SUPPORTED_ARCHS)\n+ compute_capabilities = SUPPORTED_ARCHS.copy()\n if nvcc_cuda_version < Version(\"11.1\"):\n compute_capabilities.remove(\"8.6\")\n if nvcc_cuda_version < Version(\"11.8\"):\n", "issue": "Installation Error\nWhen I install vLLM, I have pull the docker as the tutorial and install from source. But there are an error encountered, how can I fix this:\r\n`ValueError: Unsupported CUDA arch (5.2). 
Valid CUDA arch strings are: ['7.0', '7.5', '8.0', '8.6', '8.9', '9.0', '7.0+PTX', '7.5+PTX', '8.0+PTX', '8.6+PTX', '8.9+PTX', '9.0+PTX'].`\n", "before_files": [{"content": "import io\nimport os\nimport re\nimport subprocess\nfrom typing import List, Set\nimport warnings\n\nfrom packaging.version import parse, Version\nimport setuptools\nimport torch\nfrom torch.utils.cpp_extension import BuildExtension, CUDAExtension, CUDA_HOME\n\nROOT_DIR = os.path.dirname(__file__)\n\n# Supported NVIDIA GPU architectures.\nSUPPORTED_ARCHS = [\"7.0\", \"7.5\", \"8.0\", \"8.6\", \"8.9\", \"9.0\"]\n\n# Compiler flags.\nCXX_FLAGS = [\"-g\", \"-O2\", \"-std=c++17\"]\n# TODO(woosuk): Should we use -O3?\nNVCC_FLAGS = [\"-O2\", \"-std=c++17\"]\n\nABI = 1 if torch._C._GLIBCXX_USE_CXX11_ABI else 0\nCXX_FLAGS += [f\"-D_GLIBCXX_USE_CXX11_ABI={ABI}\"]\nNVCC_FLAGS += [f\"-D_GLIBCXX_USE_CXX11_ABI={ABI}\"]\n\nif CUDA_HOME is None:\n raise RuntimeError(\n \"Cannot find CUDA_HOME. CUDA must be available to build the package.\")\n\n\ndef get_nvcc_cuda_version(cuda_dir: str) -> Version:\n \"\"\"Get the CUDA version from nvcc.\n\n Adapted from https://github.com/NVIDIA/apex/blob/8b7a1ff183741dd8f9b87e7bafd04cfde99cea28/setup.py\n \"\"\"\n nvcc_output = subprocess.check_output([cuda_dir + \"/bin/nvcc\", \"-V\"],\n universal_newlines=True)\n output = nvcc_output.split()\n release_idx = output.index(\"release\") + 1\n nvcc_cuda_version = parse(output[release_idx].split(\",\")[0])\n return nvcc_cuda_version\n\n\ndef get_torch_arch_list() -> Set[str]:\n # TORCH_CUDA_ARCH_LIST can have one or more architectures,\n # e.g. \"8.0\" or \"7.5,8.0,8.6+PTX\". Here, the \"8.6+PTX\" option asks the\n # compiler to additionally include PTX code that can be runtime-compiled\n # and executed on the 8.6 or newer architectures. While the PTX code will\n # not give the best performance on the newer architectures, it provides\n # forward compatibility.\n valid_arch_strs = SUPPORTED_ARCHS + [s + \"+PTX\" for s in SUPPORTED_ARCHS]\n arch_list = os.environ.get(\"TORCH_CUDA_ARCH_LIST\", None)\n if arch_list is None:\n return set()\n\n # List are separated by ; or space.\n arch_list = arch_list.replace(\" \", \";\").split(\";\")\n for arch in arch_list:\n if arch not in valid_arch_strs:\n raise ValueError(\n f\"Unsupported CUDA arch ({arch}). 
\"\n f\"Valid CUDA arch strings are: {valid_arch_strs}.\")\n return set(arch_list)\n\n\n# First, check the TORCH_CUDA_ARCH_LIST environment variable.\ncompute_capabilities = get_torch_arch_list()\nif not compute_capabilities:\n # If TORCH_CUDA_ARCH_LIST is not defined or empty, target all available\n # GPUs on the current machine.\n device_count = torch.cuda.device_count()\n for i in range(device_count):\n major, minor = torch.cuda.get_device_capability(i)\n if major < 7:\n raise RuntimeError(\n \"GPUs with compute capability below 7.0 are not supported.\")\n compute_capabilities.add(f\"{major}.{minor}\")\n\nnvcc_cuda_version = get_nvcc_cuda_version(CUDA_HOME)\nif not compute_capabilities:\n # If no GPU is specified nor available, add all supported architectures\n # based on the NVCC CUDA version.\n compute_capabilities = set(SUPPORTED_ARCHS)\n if nvcc_cuda_version < Version(\"11.1\"):\n compute_capabilities.remove(\"8.6\")\n if nvcc_cuda_version < Version(\"11.8\"):\n compute_capabilities.remove(\"8.9\")\n compute_capabilities.remove(\"9.0\")\n\n# Validate the NVCC CUDA version.\nif nvcc_cuda_version < Version(\"11.0\"):\n raise RuntimeError(\"CUDA 11.0 or higher is required to build the package.\")\nif nvcc_cuda_version < Version(\"11.1\"):\n if any(cc.startswith(\"8.6\") for cc in compute_capabilities):\n raise RuntimeError(\n \"CUDA 11.1 or higher is required for compute capability 8.6.\")\nif nvcc_cuda_version < Version(\"11.8\"):\n if any(cc.startswith(\"8.9\") for cc in compute_capabilities):\n # CUDA 11.8 is required to generate the code targeting compute capability 8.9.\n # However, GPUs with compute capability 8.9 can also run the code generated by\n # the previous versions of CUDA 11 and targeting compute capability 8.0.\n # Therefore, if CUDA 11.8 is not available, we target compute capability 8.0\n # instead of 8.9.\n warnings.warn(\n \"CUDA 11.8 or higher is required for compute capability 8.9. 
\"\n \"Targeting compute capability 8.0 instead.\")\n compute_capabilities = set(cc for cc in compute_capabilities\n if not cc.startswith(\"8.9\"))\n compute_capabilities.add(\"8.0+PTX\")\n if any(cc.startswith(\"9.0\") for cc in compute_capabilities):\n raise RuntimeError(\n \"CUDA 11.8 or higher is required for compute capability 9.0.\")\n\n# Add target compute capabilities to NVCC flags.\nfor capability in compute_capabilities:\n num = capability[0] + capability[2]\n NVCC_FLAGS += [\"-gencode\", f\"arch=compute_{num},code=sm_{num}\"]\n if capability.endswith(\"+PTX\"):\n NVCC_FLAGS += [\"-gencode\", f\"arch=compute_{num},code=compute_{num}\"]\n\n# Use NVCC threads to parallelize the build.\nif nvcc_cuda_version >= Version(\"11.2\"):\n num_threads = min(os.cpu_count(), 8)\n NVCC_FLAGS += [\"--threads\", str(num_threads)]\n\next_modules = []\n\n# Cache operations.\ncache_extension = CUDAExtension(\n name=\"vllm.cache_ops\",\n sources=[\"csrc/cache.cpp\", \"csrc/cache_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(cache_extension)\n\n# Attention kernels.\nattention_extension = CUDAExtension(\n name=\"vllm.attention_ops\",\n sources=[\"csrc/attention.cpp\", \"csrc/attention/attention_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(attention_extension)\n\n# Positional encoding kernels.\npositional_encoding_extension = CUDAExtension(\n name=\"vllm.pos_encoding_ops\",\n sources=[\"csrc/pos_encoding.cpp\", \"csrc/pos_encoding_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(positional_encoding_extension)\n\n# Layer normalization kernels.\nlayernorm_extension = CUDAExtension(\n name=\"vllm.layernorm_ops\",\n sources=[\"csrc/layernorm.cpp\", \"csrc/layernorm_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(layernorm_extension)\n\n# Activation kernels.\nactivation_extension = CUDAExtension(\n name=\"vllm.activation_ops\",\n sources=[\"csrc/activation.cpp\", \"csrc/activation_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(activation_extension)\n\n# Quantization kernels.\nquantization_extension = CUDAExtension(\n name=\"vllm.quantization_ops\",\n sources=[\n \"csrc/quantization.cpp\",\n \"csrc/quantization/awq/gemm_kernels.cu\",\n ],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(quantization_extension)\n\n# Misc. 
CUDA utils.\ncuda_utils_extension = CUDAExtension(\n name=\"vllm.cuda_utils\",\n sources=[\"csrc/cuda_utils.cpp\", \"csrc/cuda_utils_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(cuda_utils_extension)\n\n\ndef get_path(*filepath) -> str:\n return os.path.join(ROOT_DIR, *filepath)\n\n\ndef find_version(filepath: str):\n \"\"\"Extract version information from the given filepath.\n\n Adapted from https://github.com/ray-project/ray/blob/0b190ee1160eeca9796bc091e07eaebf4c85b511/python/setup.py\n \"\"\"\n with open(filepath) as fp:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n fp.read(), re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\ndef read_readme() -> str:\n \"\"\"Read the README file.\"\"\"\n return io.open(get_path(\"README.md\"), \"r\", encoding=\"utf-8\").read()\n\n\ndef get_requirements() -> List[str]:\n \"\"\"Get Python package dependencies from requirements.txt.\"\"\"\n with open(get_path(\"requirements.txt\")) as f:\n requirements = f.read().strip().split(\"\\n\")\n return requirements\n\n\nsetuptools.setup(\n name=\"vllm\",\n version=find_version(get_path(\"vllm\", \"__init__.py\")),\n author=\"vLLM Team\",\n license=\"Apache 2.0\",\n description=(\"A high-throughput and memory-efficient inference and \"\n \"serving engine for LLMs\"),\n long_description=read_readme(),\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/vllm-project/vllm\",\n project_urls={\n \"Homepage\": \"https://github.com/vllm-project/vllm\",\n \"Documentation\": \"https://vllm.readthedocs.io/en/latest/\",\n },\n classifiers=[\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n packages=setuptools.find_packages(exclude=(\"benchmarks\", \"csrc\", \"docs\",\n \"examples\", \"tests\")),\n python_requires=\">=3.8\",\n install_requires=get_requirements(),\n ext_modules=ext_modules,\n cmdclass={\"build_ext\": BuildExtension},\n)\n", "path": "setup.py"}], "after_files": [{"content": "import io\nimport os\nimport re\nimport subprocess\nfrom typing import List, Set\nimport warnings\n\nfrom packaging.version import parse, Version\nimport setuptools\nimport torch\nfrom torch.utils.cpp_extension import BuildExtension, CUDAExtension, CUDA_HOME\n\nROOT_DIR = os.path.dirname(__file__)\n\n# Supported NVIDIA GPU architectures.\nSUPPORTED_ARCHS = {\"7.0\", \"7.5\", \"8.0\", \"8.6\", \"8.9\", \"9.0\"}\n\n# Compiler flags.\nCXX_FLAGS = [\"-g\", \"-O2\", \"-std=c++17\"]\n# TODO(woosuk): Should we use -O3?\nNVCC_FLAGS = [\"-O2\", \"-std=c++17\"]\n\nABI = 1 if torch._C._GLIBCXX_USE_CXX11_ABI else 0\nCXX_FLAGS += [f\"-D_GLIBCXX_USE_CXX11_ABI={ABI}\"]\nNVCC_FLAGS += [f\"-D_GLIBCXX_USE_CXX11_ABI={ABI}\"]\n\nif CUDA_HOME is None:\n raise RuntimeError(\n \"Cannot find CUDA_HOME. 
CUDA must be available to build the package.\")\n\n\ndef get_nvcc_cuda_version(cuda_dir: str) -> Version:\n \"\"\"Get the CUDA version from nvcc.\n\n Adapted from https://github.com/NVIDIA/apex/blob/8b7a1ff183741dd8f9b87e7bafd04cfde99cea28/setup.py\n \"\"\"\n nvcc_output = subprocess.check_output([cuda_dir + \"/bin/nvcc\", \"-V\"],\n universal_newlines=True)\n output = nvcc_output.split()\n release_idx = output.index(\"release\") + 1\n nvcc_cuda_version = parse(output[release_idx].split(\",\")[0])\n return nvcc_cuda_version\n\n\ndef get_torch_arch_list() -> Set[str]:\n # TORCH_CUDA_ARCH_LIST can have one or more architectures,\n # e.g. \"8.0\" or \"7.5,8.0,8.6+PTX\". Here, the \"8.6+PTX\" option asks the\n # compiler to additionally include PTX code that can be runtime-compiled\n # and executed on the 8.6 or newer architectures. While the PTX code will\n # not give the best performance on the newer architectures, it provides\n # forward compatibility.\n env_arch_list = os.environ.get(\"TORCH_CUDA_ARCH_LIST\", None)\n if env_arch_list is None:\n return set()\n\n # List are separated by ; or space.\n torch_arch_list = set(env_arch_list.replace(\" \", \";\").split(\";\"))\n if not torch_arch_list:\n return set()\n\n # Filter out the invalid architectures and print a warning.\n valid_archs = SUPPORTED_ARCHS.union({s + \"+PTX\" for s in SUPPORTED_ARCHS})\n arch_list = torch_arch_list.intersection(valid_archs)\n # If none of the specified architectures are valid, raise an error.\n if not arch_list:\n raise RuntimeError(\n \"None of the CUDA architectures in `TORCH_CUDA_ARCH_LIST` env \"\n f\"variable ({env_arch_list}) is supported. \"\n f\"Supported CUDA architectures are: {valid_archs}.\")\n invalid_arch_list = torch_arch_list - valid_archs\n if invalid_arch_list:\n warnings.warn(\n f\"Unsupported CUDA architectures ({invalid_arch_list}) are \"\n \"excluded from the `TORCH_CUDA_ARCH_LIST` env variable \"\n f\"({env_arch_list}). 
Supported CUDA architectures are: \"\n f\"{valid_archs}.\")\n return arch_list\n\n\n# First, check the TORCH_CUDA_ARCH_LIST environment variable.\ncompute_capabilities = get_torch_arch_list()\nif not compute_capabilities:\n # If TORCH_CUDA_ARCH_LIST is not defined or empty, target all available\n # GPUs on the current machine.\n device_count = torch.cuda.device_count()\n for i in range(device_count):\n major, minor = torch.cuda.get_device_capability(i)\n if major < 7:\n raise RuntimeError(\n \"GPUs with compute capability below 7.0 are not supported.\")\n compute_capabilities.add(f\"{major}.{minor}\")\n\nnvcc_cuda_version = get_nvcc_cuda_version(CUDA_HOME)\nif not compute_capabilities:\n # If no GPU is specified nor available, add all supported architectures\n # based on the NVCC CUDA version.\n compute_capabilities = SUPPORTED_ARCHS.copy()\n if nvcc_cuda_version < Version(\"11.1\"):\n compute_capabilities.remove(\"8.6\")\n if nvcc_cuda_version < Version(\"11.8\"):\n compute_capabilities.remove(\"8.9\")\n compute_capabilities.remove(\"9.0\")\n\n# Validate the NVCC CUDA version.\nif nvcc_cuda_version < Version(\"11.0\"):\n raise RuntimeError(\"CUDA 11.0 or higher is required to build the package.\")\nif nvcc_cuda_version < Version(\"11.1\"):\n if any(cc.startswith(\"8.6\") for cc in compute_capabilities):\n raise RuntimeError(\n \"CUDA 11.1 or higher is required for compute capability 8.6.\")\nif nvcc_cuda_version < Version(\"11.8\"):\n if any(cc.startswith(\"8.9\") for cc in compute_capabilities):\n # CUDA 11.8 is required to generate the code targeting compute capability 8.9.\n # However, GPUs with compute capability 8.9 can also run the code generated by\n # the previous versions of CUDA 11 and targeting compute capability 8.0.\n # Therefore, if CUDA 11.8 is not available, we target compute capability 8.0\n # instead of 8.9.\n warnings.warn(\n \"CUDA 11.8 or higher is required for compute capability 8.9. 
\"\n \"Targeting compute capability 8.0 instead.\")\n compute_capabilities = set(cc for cc in compute_capabilities\n if not cc.startswith(\"8.9\"))\n compute_capabilities.add(\"8.0+PTX\")\n if any(cc.startswith(\"9.0\") for cc in compute_capabilities):\n raise RuntimeError(\n \"CUDA 11.8 or higher is required for compute capability 9.0.\")\n\n# Add target compute capabilities to NVCC flags.\nfor capability in compute_capabilities:\n num = capability[0] + capability[2]\n NVCC_FLAGS += [\"-gencode\", f\"arch=compute_{num},code=sm_{num}\"]\n if capability.endswith(\"+PTX\"):\n NVCC_FLAGS += [\"-gencode\", f\"arch=compute_{num},code=compute_{num}\"]\n\n# Use NVCC threads to parallelize the build.\nif nvcc_cuda_version >= Version(\"11.2\"):\n num_threads = min(os.cpu_count(), 8)\n NVCC_FLAGS += [\"--threads\", str(num_threads)]\n\next_modules = []\n\n# Cache operations.\ncache_extension = CUDAExtension(\n name=\"vllm.cache_ops\",\n sources=[\"csrc/cache.cpp\", \"csrc/cache_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(cache_extension)\n\n# Attention kernels.\nattention_extension = CUDAExtension(\n name=\"vllm.attention_ops\",\n sources=[\"csrc/attention.cpp\", \"csrc/attention/attention_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(attention_extension)\n\n# Positional encoding kernels.\npositional_encoding_extension = CUDAExtension(\n name=\"vllm.pos_encoding_ops\",\n sources=[\"csrc/pos_encoding.cpp\", \"csrc/pos_encoding_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(positional_encoding_extension)\n\n# Layer normalization kernels.\nlayernorm_extension = CUDAExtension(\n name=\"vllm.layernorm_ops\",\n sources=[\"csrc/layernorm.cpp\", \"csrc/layernorm_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(layernorm_extension)\n\n# Activation kernels.\nactivation_extension = CUDAExtension(\n name=\"vllm.activation_ops\",\n sources=[\"csrc/activation.cpp\", \"csrc/activation_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(activation_extension)\n\n# Quantization kernels.\nquantization_extension = CUDAExtension(\n name=\"vllm.quantization_ops\",\n sources=[\n \"csrc/quantization.cpp\",\n \"csrc/quantization/awq/gemm_kernels.cu\",\n ],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(quantization_extension)\n\n# Misc. 
CUDA utils.\ncuda_utils_extension = CUDAExtension(\n name=\"vllm.cuda_utils\",\n sources=[\"csrc/cuda_utils.cpp\", \"csrc/cuda_utils_kernels.cu\"],\n extra_compile_args={\n \"cxx\": CXX_FLAGS,\n \"nvcc\": NVCC_FLAGS,\n },\n)\next_modules.append(cuda_utils_extension)\n\n\ndef get_path(*filepath) -> str:\n return os.path.join(ROOT_DIR, *filepath)\n\n\ndef find_version(filepath: str):\n \"\"\"Extract version information from the given filepath.\n\n Adapted from https://github.com/ray-project/ray/blob/0b190ee1160eeca9796bc091e07eaebf4c85b511/python/setup.py\n \"\"\"\n with open(filepath) as fp:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n fp.read(), re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\ndef read_readme() -> str:\n \"\"\"Read the README file.\"\"\"\n return io.open(get_path(\"README.md\"), \"r\", encoding=\"utf-8\").read()\n\n\ndef get_requirements() -> List[str]:\n \"\"\"Get Python package dependencies from requirements.txt.\"\"\"\n with open(get_path(\"requirements.txt\")) as f:\n requirements = f.read().strip().split(\"\\n\")\n return requirements\n\n\nsetuptools.setup(\n name=\"vllm\",\n version=find_version(get_path(\"vllm\", \"__init__.py\")),\n author=\"vLLM Team\",\n license=\"Apache 2.0\",\n description=(\"A high-throughput and memory-efficient inference and \"\n \"serving engine for LLMs\"),\n long_description=read_readme(),\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/vllm-project/vllm\",\n project_urls={\n \"Homepage\": \"https://github.com/vllm-project/vllm\",\n \"Documentation\": \"https://vllm.readthedocs.io/en/latest/\",\n },\n classifiers=[\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n packages=setuptools.find_packages(exclude=(\"benchmarks\", \"csrc\", \"docs\",\n \"examples\", \"tests\")),\n python_requires=\">=3.8\",\n install_requires=get_requirements(),\n ext_modules=ext_modules,\n cmdclass={\"build_ext\": BuildExtension},\n)\n", "path": "setup.py"}]} | 3,501 | 727 |
gh_patches_debug_7577 | rasdani/github-patches | git_diff | webkom__lego-24 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add shell plus and django extensions
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lego/settings/base.py`
Content:
```
1 import sys
2 import os
3 BASE_DIR = os.path.dirname(os.path.dirname(__file__))
4
5 SECRET_KEY = 'This is supersecret'
6
7 TESTING = 'test' in sys.argv # Check if manage.py test has been run
8
9 DEBUG = True
10 TEMPLATE_DEBUG = True
11 ALLOWED_HOSTS = []
12
13 AUTH_USER_MODEL = 'users.User'
14
15 INSTALLED_APPS = (
16 'django.contrib.admin',
17 'django.contrib.auth',
18 'django.contrib.contenttypes',
19 'django.contrib.sessions',
20 'django.contrib.messages',
21 'django.contrib.staticfiles',
22
23 'oauth2_provider',
24 'rest_framework',
25
26 'lego.apps.LegoConfig',
27 'lego.users',
28
29 'lego.app.oauth',
30 )
31
32 AUTHENTICATION_BACKEND = (
33 'oauth2_provider.backends.OAuth2Backend'
34 )
35
36 MIDDLEWARE_CLASSES = (
37 'django.contrib.sessions.middleware.SessionMiddleware',
38 'django.middleware.common.CommonMiddleware',
39 'django.middleware.csrf.CsrfViewMiddleware',
40 'oauth2_provider.middleware.OAuth2TokenMiddleware',
41 'django.contrib.auth.middleware.AuthenticationMiddleware',
42 'django.contrib.messages.middleware.MessageMiddleware',
43 'django.middleware.clickjacking.XFrameOptionsMiddleware',
44 )
45
46 OAUTH2_PROVIDER_APPLICATION_MODEL = 'oauth.APIApplication'
47
48 ROOT_URLCONF = 'lego.urls'
49
50 WSGI_APPLICATION = 'lego.wsgi.application'
51
52 LANGUAGE_CODE = 'en-us'
53 TIME_ZONE = 'UTC'
54 USE_I18N = True
55 USE_L10N = True
56 USE_TZ = True
57 DEBUG = True
58
59 STATIC_URL = '/static/'
60 STATIC_ROOT = os.path.join(BASE_DIR, '../static')
61 MEDIA_URL = '/uploads/'
62 MEDIA_ROOT = os.path.join(BASE_DIR, '../uploads')
63
64 TEMPLATE_DIRS = (
65 os.path.join(BASE_DIR, 'templates'),
66 )
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lego/settings/base.py b/lego/settings/base.py
--- a/lego/settings/base.py
+++ b/lego/settings/base.py
@@ -6,6 +6,8 @@
TESTING = 'test' in sys.argv # Check if manage.py test has been run
+SHELL_PLUS = "ipython"
+
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
@@ -20,6 +22,8 @@
'django.contrib.messages',
'django.contrib.staticfiles',
+ 'django_extensions',
+
'oauth2_provider',
'rest_framework',
| {"golden_diff": "diff --git a/lego/settings/base.py b/lego/settings/base.py\n--- a/lego/settings/base.py\n+++ b/lego/settings/base.py\n@@ -6,6 +6,8 @@\n \n TESTING = 'test' in sys.argv # Check if manage.py test has been run\n \n+SHELL_PLUS = \"ipython\"\n+\n DEBUG = True\n TEMPLATE_DEBUG = True\n ALLOWED_HOSTS = []\n@@ -20,6 +22,8 @@\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n \n+ 'django_extensions',\n+\n 'oauth2_provider',\n 'rest_framework',\n", "issue": "Add shell pluss and django extensions\n\n", "before_files": [{"content": "import sys\nimport os\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\n\nSECRET_KEY = 'This is supersecret'\n\nTESTING = 'test' in sys.argv # Check if manage.py test has been run\n\nDEBUG = True\nTEMPLATE_DEBUG = True\nALLOWED_HOSTS = []\n\nAUTH_USER_MODEL = 'users.User'\n\nINSTALLED_APPS = (\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n\n 'oauth2_provider',\n 'rest_framework',\n\n 'lego.apps.LegoConfig',\n 'lego.users',\n\n 'lego.app.oauth',\n)\n\nAUTHENTICATION_BACKEND = (\n 'oauth2_provider.backends.OAuth2Backend'\n)\n\nMIDDLEWARE_CLASSES = (\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'oauth2_provider.middleware.OAuth2TokenMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n)\n\nOAUTH2_PROVIDER_APPLICATION_MODEL = 'oauth.APIApplication'\n\nROOT_URLCONF = 'lego.urls'\n\nWSGI_APPLICATION = 'lego.wsgi.application'\n\nLANGUAGE_CODE = 'en-us'\nTIME_ZONE = 'UTC'\nUSE_I18N = True\nUSE_L10N = True\nUSE_TZ = True\nDEBUG = True\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, '../static')\nMEDIA_URL = '/uploads/'\nMEDIA_ROOT = os.path.join(BASE_DIR, '../uploads')\n\nTEMPLATE_DIRS = (\n os.path.join(BASE_DIR, 'templates'),\n)\n", "path": "lego/settings/base.py"}], "after_files": [{"content": "import sys\nimport os\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\n\nSECRET_KEY = 'This is supersecret'\n\nTESTING = 'test' in sys.argv # Check if manage.py test has been run\n\nSHELL_PLUS = \"ipython\"\n\nDEBUG = True\nTEMPLATE_DEBUG = True\nALLOWED_HOSTS = []\n\nAUTH_USER_MODEL = 'users.User'\n\nINSTALLED_APPS = (\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n\n 'django_extensions',\n\n 'oauth2_provider',\n 'rest_framework',\n\n 'lego.apps.LegoConfig',\n 'lego.users',\n\n 'lego.app.oauth',\n)\n\nAUTHENTICATION_BACKEND = (\n 'oauth2_provider.backends.OAuth2Backend'\n)\n\nMIDDLEWARE_CLASSES = (\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'oauth2_provider.middleware.OAuth2TokenMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n)\n\nOAUTH2_PROVIDER_APPLICATION_MODEL = 'oauth.APIApplication'\n\nROOT_URLCONF = 'lego.urls'\n\nWSGI_APPLICATION = 'lego.wsgi.application'\n\nLANGUAGE_CODE = 'en-us'\nTIME_ZONE = 'UTC'\nUSE_I18N = True\nUSE_L10N = True\nUSE_TZ = True\nDEBUG = 
True\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, '../static')\nMEDIA_URL = '/uploads/'\nMEDIA_ROOT = os.path.join(BASE_DIR, '../uploads')\n\nTEMPLATE_DIRS = (\n os.path.join(BASE_DIR, 'templates'),\n)\n", "path": "lego/settings/base.py"}]} | 771 | 134 |
gh_patches_debug_19933 | rasdani/github-patches | git_diff | pyro-ppl__pyro-1760 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip install pyro-ppl failing in Python 3
```
$ conda create -n test3 python=3
...
$ source activate test3
$ pip install pyro-ppl
Collecting pyro-ppl
Downloading https://files.pythonhosted.org/packages/75/0e/e6013ad748349e447c6da7ba0b3d66950e834a5b45a0ff426c131b00cde5/pyro-ppl-0.3.1.tar.gz (207kB)
100% |████████████████████████████████| 215kB 4.9MB/s
Complete output from command python setup.py egg_info:
fatal: Not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/c5/cqn89jt900v5tmm2y7p0tf4r0000gn/T/pip-install-6feboagj/pyro-ppl/setup.py", line 25, in <module>
cwd=PROJECT_PATH).decode('ascii').strip()
File "/Users/fritzobermeyer/miniconda2/envs/test3/lib/python3.7/subprocess.py", line 395, in check_output
**kwargs).stdout
File "/Users/fritzobermeyer/miniconda2/envs/test3/lib/python3.7/subprocess.py", line 487, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'tag', '--points-at', 'HEAD']' returned non-zero exit status 128.
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/c5/cqn89jt900v5tmm2y7p0tf4r0000gn/T/pip-install-6feboagj/pyro-ppl/
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from __future__ import absolute_import, division, print_function
2
3 import os
4 import subprocess
5 import sys
6
7 from setuptools import find_packages, setup
8
9 PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))
10 VERSION = """
11 # This file is auto-generated with the version information during setup.py installation.
12
13 __version__ = '{}'
14 """
15
16 # Find pyro version.
17 for line in open(os.path.join(PROJECT_PATH, 'pyro', '__init__.py')):
18 if line.startswith('version_prefix = '):
19 version = line.strip().split()[2][1:-1]
20
21 # Append current commit sha to version
22 commit_sha = ''
23 try:
24 current_tag = subprocess.check_output(['git', 'tag', '--points-at', 'HEAD'],
25 cwd=PROJECT_PATH).decode('ascii').strip()
26 # only add sha if HEAD does not point to the release tag
27 if not current_tag == version:
28 commit_sha = subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD'],
29 cwd=PROJECT_PATH).decode('ascii').strip()
30 except OSError:
31 pass
32
33 # Write version to _version.py
34 if commit_sha:
35 version += '+{}'.format(commit_sha)
36 with open(os.path.join(PROJECT_PATH, 'pyro', '_version.py'), 'w') as f:
37 f.write(VERSION.format(version))
38
39 # Convert README.md to rst for display at https://pypi.python.org/pypi/pyro-ppl
40 # When releasing on pypi, make sure pandoc is on your system:
41 # $ brew install pandoc # OS X
42 # $ sudo apt-get install pandoc # Ubuntu Linux
43 try:
44 import pypandoc
45 long_description = pypandoc.convert('README.md', 'rst')
46 print(long_description)
47 except (IOError, ImportError, OSError) as e:
48 sys.stderr.write('Failed to convert README.md to rst:\n {}\n'.format(e))
49 sys.stderr.flush()
50 long_description = open('README.md').read()
51
52 # Remove badges since they will always be obsolete.
53 # This assumes the first 10 lines contain badge info.
54 long_description = '\n'.join([str(line) for line in long_description.split('\n')[10:]])
55
56 # examples/tutorials
57 EXTRAS_REQUIRE = [
58 'jupyter>=1.0.0',
59 'matplotlib>=1.3',
60 'observations>=0.1.4',
61 'pillow',
62 'torchvision',
63 'visdom>=0.1.4',
64 'pandas',
65 'seaborn',
66 'wget',
67 ]
68
69 if sys.version_info[0] == 2:
70 EXTRAS_REQUIRE.append('functools32')
71
72 setup(
73 name='pyro-ppl',
74 version=version,
75 description='A Python library for probabilistic modeling and inference',
76 long_description=long_description,
77 packages=find_packages(include=['pyro', 'pyro.*']),
78 url='http://pyro.ai',
79 author='Uber AI Labs',
80 author_email='[email protected]',
81 install_requires=[
82 # if you add any additional libraries, please also
83 # add them to `docs/requirements.txt`
84 'contextlib2',
85 'graphviz>=0.8',
86 'numpy>=1.7',
87 'opt_einsum>=2.3.2',
88 'six>=1.10.0',
89 'torch>=1.0.0',
90 'tqdm>=4.28',
91 ],
92 extras_require={
93 'extras': EXTRAS_REQUIRE,
94 'test': EXTRAS_REQUIRE + [
95 'nbval',
96 'pytest>=4.1',
97 'pytest-cov',
98 'scipy>=1.1',
99 ],
100 'profile': ['prettytable', 'pytest-benchmark', 'snakeviz'],
101 'dev': EXTRAS_REQUIRE + [
102 'flake8',
103 'isort',
104 'nbformat',
105 'nbsphinx>=0.3.2',
106 'nbstripout',
107 'nbval',
108 'pypandoc',
109 'pytest>=4.1',
110 'pytest-xdist',
111 'scipy>=1.1',
112 'sphinx',
113 'sphinx_rtd_theme',
114 'yapf',
115 ],
116 },
117 tests_require=['flake8', 'pytest>=4.1'],
118 keywords='machine learning statistics probabilistic programming bayesian modeling pytorch',
119 license='MIT License',
120 classifiers=[
121 'Intended Audience :: Developers',
122 'Intended Audience :: Education',
123 'Intended Audience :: Science/Research',
124 'Operating System :: POSIX :: Linux',
125 'Operating System :: MacOS :: MacOS X',
126 'Programming Language :: Python :: 2.7',
127 'Programming Language :: Python :: 3.6',
128 ],
129 # yapf
130 )
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -27,8 +27,9 @@
if not current_tag == version:
commit_sha = subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD'],
cwd=PROJECT_PATH).decode('ascii').strip()
-except OSError:
- pass
+# catch all exception to be safe
+except Exception:
+ pass # probably not a git repo
# Write version to _version.py
if commit_sha:
@@ -44,7 +45,7 @@
import pypandoc
long_description = pypandoc.convert('README.md', 'rst')
print(long_description)
-except (IOError, ImportError, OSError) as e:
+except Exception as e:
sys.stderr.write('Failed to convert README.md to rst:\n {}\n'.format(e))
sys.stderr.flush()
long_description = open('README.md').read()
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -27,8 +27,9 @@\n if not current_tag == version:\n commit_sha = subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD'],\n cwd=PROJECT_PATH).decode('ascii').strip()\n-except OSError:\n- pass\n+# catch all exception to be safe\n+except Exception:\n+ pass # probably not a git repo\n \n # Write version to _version.py\n if commit_sha:\n@@ -44,7 +45,7 @@\n import pypandoc\n long_description = pypandoc.convert('README.md', 'rst')\n print(long_description)\n-except (IOError, ImportError, OSError) as e:\n+except Exception as e:\n sys.stderr.write('Failed to convert README.md to rst:\\n {}\\n'.format(e))\n sys.stderr.flush()\n long_description = open('README.md').read()\n", "issue": "pip install pyro-ppl failing in Python 3\n```\r\n$ conda create -n test3 python=3\r\n...\r\n$ source activate test3\r\n$ pip install pyro-ppl\r\nCollecting pyro-ppl\r\n Downloading https://files.pythonhosted.org/packages/75/0e/e6013ad748349e447c6da7ba0b3d66950e834a5b45a0ff426c131b00cde5/pyro-ppl-0.3.1.tar.gz (207kB)\r\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 215kB 4.9MB/s\r\n Complete output from command python setup.py egg_info:\r\n fatal: Not a git repository (or any of the parent directories): .git\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/private/var/folders/c5/cqn89jt900v5tmm2y7p0tf4r0000gn/T/pip-install-6feboagj/pyro-ppl/setup.py\", line 25, in <module>\r\n cwd=PROJECT_PATH).decode('ascii').strip()\r\n File \"/Users/fritzobermeyer/miniconda2/envs/test3/lib/python3.7/subprocess.py\", line 395, in check_output\r\n **kwargs).stdout\r\n File \"/Users/fritzobermeyer/miniconda2/envs/test3/lib/python3.7/subprocess.py\", line 487, in run\r\n output=stdout, stderr=stderr)\r\n subprocess.CalledProcessError: Command '['git', 'tag', '--points-at', 'HEAD']' returned non-zero exit status 128.\r\n\r\n ----------------------------------------\r\nCommand \"python setup.py egg_info\" failed with error code 1 in /private/var/folders/c5/cqn89jt900v5tmm2y7p0tf4r0000gn/T/pip-install-6feboagj/pyro-ppl/\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport os\nimport subprocess\nimport sys\n\nfrom setuptools import find_packages, setup\n\nPROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\nVERSION = \"\"\"\n# This file is auto-generated with the version information during setup.py installation.\n\n__version__ = '{}'\n\"\"\"\n\n# Find pyro version.\nfor line in open(os.path.join(PROJECT_PATH, 'pyro', '__init__.py')):\n if line.startswith('version_prefix = '):\n version = line.strip().split()[2][1:-1]\n\n# Append current commit sha to version\ncommit_sha = ''\ntry:\n current_tag = subprocess.check_output(['git', 'tag', '--points-at', 'HEAD'],\n cwd=PROJECT_PATH).decode('ascii').strip()\n # only add sha if HEAD does not point to the release tag\n if not current_tag == version:\n commit_sha = subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD'],\n cwd=PROJECT_PATH).decode('ascii').strip()\nexcept OSError:\n pass\n\n# Write version to _version.py\nif commit_sha:\n version += '+{}'.format(commit_sha)\nwith open(os.path.join(PROJECT_PATH, 'pyro', '_version.py'), 'w') as f:\n f.write(VERSION.format(version))\n\n# Convert README.md to rst for display at 
https://pypi.python.org/pypi/pyro-ppl\n# When releasing on pypi, make sure pandoc is on your system:\n# $ brew install pandoc # OS X\n# $ sudo apt-get install pandoc # Ubuntu Linux\ntry:\n import pypandoc\n long_description = pypandoc.convert('README.md', 'rst')\n print(long_description)\nexcept (IOError, ImportError, OSError) as e:\n sys.stderr.write('Failed to convert README.md to rst:\\n {}\\n'.format(e))\n sys.stderr.flush()\n long_description = open('README.md').read()\n\n# Remove badges since they will always be obsolete.\n# This assumes the first 10 lines contain badge info.\nlong_description = '\\n'.join([str(line) for line in long_description.split('\\n')[10:]])\n\n# examples/tutorials\nEXTRAS_REQUIRE = [\n 'jupyter>=1.0.0',\n 'matplotlib>=1.3',\n 'observations>=0.1.4',\n 'pillow',\n 'torchvision',\n 'visdom>=0.1.4',\n 'pandas',\n 'seaborn',\n 'wget',\n]\n\nif sys.version_info[0] == 2:\n EXTRAS_REQUIRE.append('functools32')\n\nsetup(\n name='pyro-ppl',\n version=version,\n description='A Python library for probabilistic modeling and inference',\n long_description=long_description,\n packages=find_packages(include=['pyro', 'pyro.*']),\n url='http://pyro.ai',\n author='Uber AI Labs',\n author_email='[email protected]',\n install_requires=[\n # if you add any additional libraries, please also\n # add them to `docs/requirements.txt`\n 'contextlib2',\n 'graphviz>=0.8',\n 'numpy>=1.7',\n 'opt_einsum>=2.3.2',\n 'six>=1.10.0',\n 'torch>=1.0.0',\n 'tqdm>=4.28',\n ],\n extras_require={\n 'extras': EXTRAS_REQUIRE,\n 'test': EXTRAS_REQUIRE + [\n 'nbval',\n 'pytest>=4.1',\n 'pytest-cov',\n 'scipy>=1.1',\n ],\n 'profile': ['prettytable', 'pytest-benchmark', 'snakeviz'],\n 'dev': EXTRAS_REQUIRE + [\n 'flake8',\n 'isort',\n 'nbformat',\n 'nbsphinx>=0.3.2',\n 'nbstripout',\n 'nbval',\n 'pypandoc',\n 'pytest>=4.1',\n 'pytest-xdist',\n 'scipy>=1.1',\n 'sphinx',\n 'sphinx_rtd_theme',\n 'yapf',\n ],\n },\n tests_require=['flake8', 'pytest>=4.1'],\n keywords='machine learning statistics probabilistic programming bayesian modeling pytorch',\n license='MIT License',\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.6',\n ],\n # yapf\n)\n", "path": "setup.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport os\nimport subprocess\nimport sys\n\nfrom setuptools import find_packages, setup\n\nPROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\nVERSION = \"\"\"\n# This file is auto-generated with the version information during setup.py installation.\n\n__version__ = '{}'\n\"\"\"\n\n# Find pyro version.\nfor line in open(os.path.join(PROJECT_PATH, 'pyro', '__init__.py')):\n if line.startswith('version_prefix = '):\n version = line.strip().split()[2][1:-1]\n\n# Append current commit sha to version\ncommit_sha = ''\ntry:\n current_tag = subprocess.check_output(['git', 'tag', '--points-at', 'HEAD'],\n cwd=PROJECT_PATH).decode('ascii').strip()\n # only add sha if HEAD does not point to the release tag\n if not current_tag == version:\n commit_sha = subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD'],\n cwd=PROJECT_PATH).decode('ascii').strip()\n# catch all exception to be safe\nexcept Exception:\n pass # probably not a git repo\n\n# Write version to _version.py\nif commit_sha:\n version += 
'+{}'.format(commit_sha)\nwith open(os.path.join(PROJECT_PATH, 'pyro', '_version.py'), 'w') as f:\n f.write(VERSION.format(version))\n\n# Convert README.md to rst for display at https://pypi.python.org/pypi/pyro-ppl\n# When releasing on pypi, make sure pandoc is on your system:\n# $ brew install pandoc # OS X\n# $ sudo apt-get install pandoc # Ubuntu Linux\ntry:\n import pypandoc\n long_description = pypandoc.convert('README.md', 'rst')\n print(long_description)\nexcept Exception as e:\n sys.stderr.write('Failed to convert README.md to rst:\\n {}\\n'.format(e))\n sys.stderr.flush()\n long_description = open('README.md').read()\n\n# Remove badges since they will always be obsolete.\n# This assumes the first 10 lines contain badge info.\nlong_description = '\\n'.join([str(line) for line in long_description.split('\\n')[10:]])\n\n# examples/tutorials\nEXTRAS_REQUIRE = [\n 'jupyter>=1.0.0',\n 'matplotlib>=1.3',\n 'observations>=0.1.4',\n 'pillow',\n 'torchvision',\n 'visdom>=0.1.4',\n 'pandas',\n 'seaborn',\n 'wget',\n]\n\nif sys.version_info[0] == 2:\n EXTRAS_REQUIRE.append('functools32')\n\nsetup(\n name='pyro-ppl',\n version=version,\n description='A Python library for probabilistic modeling and inference',\n long_description=long_description,\n packages=find_packages(include=['pyro', 'pyro.*']),\n url='http://pyro.ai',\n author='Uber AI Labs',\n author_email='[email protected]',\n install_requires=[\n # if you add any additional libraries, please also\n # add them to `docs/requirements.txt`\n 'contextlib2',\n 'graphviz>=0.8',\n 'numpy>=1.7',\n 'opt_einsum>=2.3.2',\n 'six>=1.10.0',\n 'torch>=1.0.0',\n 'tqdm>=4.28',\n ],\n extras_require={\n 'extras': EXTRAS_REQUIRE,\n 'test': EXTRAS_REQUIRE + [\n 'nbval',\n 'pytest>=4.1',\n 'pytest-cov',\n 'scipy>=1.1',\n ],\n 'profile': ['prettytable', 'pytest-benchmark', 'snakeviz'],\n 'dev': EXTRAS_REQUIRE + [\n 'flake8',\n 'isort',\n 'nbformat',\n 'nbsphinx>=0.3.2',\n 'nbstripout',\n 'nbval',\n 'pypandoc',\n 'pytest>=4.1',\n 'pytest-xdist',\n 'scipy>=1.1',\n 'sphinx',\n 'sphinx_rtd_theme',\n 'yapf',\n ],\n },\n tests_require=['flake8', 'pytest>=4.1'],\n keywords='machine learning statistics probabilistic programming bayesian modeling pytorch',\n license='MIT License',\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.6',\n ],\n # yapf\n)\n", "path": "setup.py"}]} | 2,051 | 211 |
gh_patches_debug_25227 | rasdani/github-patches | git_diff | Textualize__textual-584 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lazy load Widgets
We have a `widgets` module that imports all the widgets. This allows for easy imports like:
```python
from textual.widgets import DataTable, Button
```
The downside is that all widgets are imported even if you only need one, increasing startup time.
I think we should be able to do this lazily somehow, possibly by implementing a module level `__getattr__`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/textual/widgets/__init__.py`
Content:
```
1 from ._footer import Footer
2 from ._header import Header
3 from ._button import Button
4 from ._placeholder import Placeholder
5 from ._static import Static
6 from ._tree_control import TreeControl, TreeClick, TreeNode, NodeID
7 from ._directory_tree import DirectoryTree, FileClick
8
9 __all__ = [
10 "Button",
11 "DirectoryTree",
12 "FileClick",
13 "Footer",
14 "Header",
15 "Placeholder",
16 "Static",
17 "TreeClick",
18 "TreeControl",
19 "TreeNode",
20 "NodeID",
21 ]
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/textual/widgets/__init__.py b/src/textual/widgets/__init__.py
--- a/src/textual/widgets/__init__.py
+++ b/src/textual/widgets/__init__.py
@@ -1,21 +1,41 @@
-from ._footer import Footer
-from ._header import Header
-from ._button import Button
-from ._placeholder import Placeholder
-from ._static import Static
-from ._tree_control import TreeControl, TreeClick, TreeNode, NodeID
-from ._directory_tree import DirectoryTree, FileClick
+from __future__ import annotations
+from importlib import import_module
+import typing
+from ..case import camel_to_snake
+
+if typing.TYPE_CHECKING:
+ from ..widget import Widget
+
+
+# ⚠️For any new built-in Widget we create, not only we have to add them to the following list, but also to the
+# `__init__.pyi` file in this same folder - otherwise text editors and type checkers won't be able to "see" them.
__all__ = [
"Button",
"DirectoryTree",
- "FileClick",
"Footer",
"Header",
"Placeholder",
"Static",
- "TreeClick",
"TreeControl",
- "TreeNode",
- "NodeID",
]
+
+
+_WIDGETS_LAZY_LOADING_CACHE: dict[str, type[Widget]] = {}
+
+# Let's decrease startup time by lazy loading our Widgets:
+def __getattr__(widget_class: str) -> type[Widget]:
+ try:
+ return _WIDGETS_LAZY_LOADING_CACHE[widget_class]
+ except KeyError:
+ pass
+
+ if widget_class not in __all__:
+ raise ImportError(f"Package 'textual.widgets' has no class '{widget_class}'")
+
+ widget_module_path = f"._{camel_to_snake(widget_class)}"
+ module = import_module(widget_module_path, package="textual.widgets")
+ class_ = getattr(module, widget_class)
+
+ _WIDGETS_LAZY_LOADING_CACHE[widget_class] = class_
+ return class_
| {"golden_diff": "diff --git a/src/textual/widgets/__init__.py b/src/textual/widgets/__init__.py\n--- a/src/textual/widgets/__init__.py\n+++ b/src/textual/widgets/__init__.py\n@@ -1,21 +1,41 @@\n-from ._footer import Footer\n-from ._header import Header\n-from ._button import Button\n-from ._placeholder import Placeholder\n-from ._static import Static\n-from ._tree_control import TreeControl, TreeClick, TreeNode, NodeID\n-from ._directory_tree import DirectoryTree, FileClick\n+from __future__ import annotations\n+from importlib import import_module\n+import typing\n \n+from ..case import camel_to_snake\n+\n+if typing.TYPE_CHECKING:\n+ from ..widget import Widget\n+\n+\n+# \u26a0\ufe0fFor any new built-in Widget we create, not only we have to add them to the following list, but also to the\n+# `__init__.pyi` file in this same folder - otherwise text editors and type checkers won't be able to \"see\" them.\n __all__ = [\n \"Button\",\n \"DirectoryTree\",\n- \"FileClick\",\n \"Footer\",\n \"Header\",\n \"Placeholder\",\n \"Static\",\n- \"TreeClick\",\n \"TreeControl\",\n- \"TreeNode\",\n- \"NodeID\",\n ]\n+\n+\n+_WIDGETS_LAZY_LOADING_CACHE: dict[str, type[Widget]] = {}\n+\n+# Let's decrease startup time by lazy loading our Widgets:\n+def __getattr__(widget_class: str) -> type[Widget]:\n+ try:\n+ return _WIDGETS_LAZY_LOADING_CACHE[widget_class]\n+ except KeyError:\n+ pass\n+\n+ if widget_class not in __all__:\n+ raise ImportError(f\"Package 'textual.widgets' has no class '{widget_class}'\")\n+\n+ widget_module_path = f\"._{camel_to_snake(widget_class)}\"\n+ module = import_module(widget_module_path, package=\"textual.widgets\")\n+ class_ = getattr(module, widget_class)\n+\n+ _WIDGETS_LAZY_LOADING_CACHE[widget_class] = class_\n+ return class_\n", "issue": "Lazy load Widgets\nWe have a `widgets` module that imports all the widgets. 
THis allows for easy imports like:\r\n\r\n```python\r\nfrom textual.widgets import DataTable, Button\r\n```\r\n\r\nThe downside is that all widgets are imported even if you only need one, increasing startup time.\r\n\r\nI think we should be able to do this lazily some how, possibly by implementing a module level `__getattr__`.\n", "before_files": [{"content": "from ._footer import Footer\nfrom ._header import Header\nfrom ._button import Button\nfrom ._placeholder import Placeholder\nfrom ._static import Static\nfrom ._tree_control import TreeControl, TreeClick, TreeNode, NodeID\nfrom ._directory_tree import DirectoryTree, FileClick\n\n__all__ = [\n \"Button\",\n \"DirectoryTree\",\n \"FileClick\",\n \"Footer\",\n \"Header\",\n \"Placeholder\",\n \"Static\",\n \"TreeClick\",\n \"TreeControl\",\n \"TreeNode\",\n \"NodeID\",\n]\n", "path": "src/textual/widgets/__init__.py"}], "after_files": [{"content": "from __future__ import annotations\nfrom importlib import import_module\nimport typing\n\nfrom ..case import camel_to_snake\n\nif typing.TYPE_CHECKING:\n from ..widget import Widget\n\n\n# \u26a0\ufe0fFor any new built-in Widget we create, not only we have to add them to the following list, but also to the\n# `__init__.pyi` file in this same folder - otherwise text editors and type checkers won't be able to \"see\" them.\n__all__ = [\n \"Button\",\n \"DirectoryTree\",\n \"Footer\",\n \"Header\",\n \"Placeholder\",\n \"Static\",\n \"TreeControl\",\n]\n\n\n_WIDGETS_LAZY_LOADING_CACHE: dict[str, type[Widget]] = {}\n\n# Let's decrease startup time by lazy loading our Widgets:\ndef __getattr__(widget_class: str) -> type[Widget]:\n try:\n return _WIDGETS_LAZY_LOADING_CACHE[widget_class]\n except KeyError:\n pass\n\n if widget_class not in __all__:\n raise ImportError(f\"Package 'textual.widgets' has no class '{widget_class}'\")\n\n widget_module_path = f\"._{camel_to_snake(widget_class)}\"\n module = import_module(widget_module_path, package=\"textual.widgets\")\n class_ = getattr(module, widget_class)\n\n _WIDGETS_LAZY_LOADING_CACHE[widget_class] = class_\n return class_\n", "path": "src/textual/widgets/__init__.py"}]} | 492 | 458 |
gh_patches_debug_22944 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-3963 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CMFPlone must not depend on plone.api
Dependency in setup.py, used in
https://github.com/plone/Products.CMFPlone/blob/5f866f56d979ca0879181823a1a802035f727079/Products/CMFPlone/controlpanel/browser/error_log_form.py#L6
plone.api depends on CMFPlone, not vice versa!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import find_packages
2 from setuptools import setup
3
4
5 version = "6.1.0a4.dev0"
6
7
8 setup(
9 name="Products.CMFPlone",
10 version=version,
11 description="The Plone Content Management System (core)",
12 long_description=open("README.md").read() + "\n" + open("CHANGES.md").read(),
13 long_description_content_type="text/markdown",
14 classifiers=[
15 "Development Status :: 5 - Production/Stable",
16 "Environment :: Web Environment",
17 "Framework :: Plone",
18 "Framework :: Plone :: 6.0",
19 "Framework :: Plone :: Core",
20 "Framework :: Zope :: 5",
21 "License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
22 "Operating System :: OS Independent",
23 "Programming Language :: Python",
24 "Programming Language :: Python :: 3.8",
25 "Programming Language :: Python :: 3.9",
26 "Programming Language :: Python :: 3.10",
27 "Programming Language :: Python :: 3.11",
28 "Programming Language :: Python :: 3.12",
29 ],
30 python_requires=">=3.8",
31 keywords="Plone CMF Python Zope CMS Webapplication",
32 author="Plone Foundation",
33 author_email="[email protected]",
34 url="https://plone.org",
35 license="GPL version 2",
36 project_urls={
37 "Homepage": "https://plone.org",
38 "Documentation": "https://6.docs.plone.org",
39 "Source": "https://github.com/plone/Products.CMFPlone",
40 "Issues": "https://github.com/plone/plone.org/Products.CMFPlone",
41 "Forum": "https://community.plone.org/",
42 "Chat": "https://discord.gg/zFY3EBbjaj",
43 "Mastodon": "https://plone.social/@plone",
44 "Twitter": "https://twitter.com/plone",
45 "Videos": "https://youtube.com/@plonecms",
46 "Sponsor": "https://github.com/sponsors/plone",
47 },
48 packages=find_packages(),
49 namespace_packages=["Products"],
50 include_package_data=True,
51 zip_safe=False,
52 install_requires=[
53 "borg.localrole",
54 "five.customerize",
55 "lxml",
56 "plone.api >= 1.4.4",
57 "plone.app.content",
58 "plone.app.contentlisting",
59 "plone.app.contentmenu >= 2.0.1",
60 "plone.app.contentrules",
61 "plone.app.contenttypes",
62 "plone.app.customerize",
63 "plone.app.dexterity",
64 "plone.app.discussion",
65 "plone.app.i18n",
66 "plone.app.layout >= 2.5.15",
67 "plone.app.linkintegrity >=1.0.3",
68 "plone.app.locales",
69 "plone.app.portlets",
70 "plone.app.redirector",
71 "plone.app.registry",
72 "plone.app.theming",
73 "plone.app.users",
74 "plone.app.uuid",
75 "plone.app.viewletmanager",
76 "plone.app.vocabularies",
77 "plone.app.workflow",
78 "plone.app.z3cform >= 4.1.0",
79 "plone.base",
80 "plone.browserlayer >= 2.1.5",
81 "plone.contentrules",
82 "plone.folder",
83 "plone.i18n >= 4.0.5",
84 "plone.indexer",
85 "plone.intelligenttext",
86 "plone.locking",
87 "plone.memoize",
88 "plone.outputfilters",
89 "plone.portlet.collection",
90 "plone.portlet.static",
91 "plone.portlets",
92 "plone.protect >= 3.0.0",
93 "plone.resource",
94 "plone.schema",
95 "plone.session",
96 "plone.staticresources",
97 "plone.theme",
98 "plonetheme.barceloneta",
99 "Products.CMFEditions",
100 "Products.DCWorkflow",
101 "Products.ExtendedPathIndex",
102 "Products.isurlinportal",
103 "Products.MimetypesRegistry",
104 "Products.PlonePAS",
105 "Products.PortalTransforms",
106 "Products.SiteErrorLog",
107 "Products.statusmessages",
108 "setuptools>=36.2",
109 "plone.autoinclude",
110 "webresource>=1.2",
111 "Zope[wsgi] >= 5.0",
112 "zope.app.locales >= 3.6.0",
113 "zope.cachedescriptors",
114 "zope.deferredimport",
115 "zope.deprecation",
116 "zope.dottedname",
117 "zope.i18n",
118 "zope.i18nmessageid",
119 "zope.structuredtext",
120 ],
121 extras_require={
122 "test": [
123 "lxml",
124 "plone.app.robotframework>=1.0",
125 "robotframework-debuglibrary",
126 "plone.app.testing",
127 "zope.globalrequest",
128 "zope.testing",
129 "gunicorn",
130 ]
131 },
132 )
133
```
Path: `Products/CMFPlone/controlpanel/browser/error_log_form.py`
Content:
```
1 from DateTime import DateTime
2 from plone.base import PloneMessageFactory as _
3 from Products.CMFPlone.utils import safe_nativestring
4 from Products.Five import BrowserView
5
6 import plone.api as api
7
8
9 class ErrorLogUpdate(BrowserView):
10 def __call__(self):
11 member = api.user.get_current()
12
13 if getattr(self.request, "form.button.search", None) is not None:
14 search = self.request.form.get("search_entry")
15 if search == "":
16 member.setProperties(error_log_update=0.0)
17 self.context.plone_utils.addPortalMessage(_("Showing all entries"))
18 return self.request.RESPONSE.redirect(
19 self.context.absolute_url() + "/@@error-log-form"
20 )
21 return self.request.RESPONSE.redirect(
22 self.context.absolute_url() + "/@@error-log-show-entry?id=%s" % search
23 )
24
25 elif getattr(self.request, "form.button.showall", None) is not None:
26 member.setProperties(error_log_update=0.0)
27 self.context.plone_utils.addPortalMessage(_("Showing all entries"))
28 return self.request.RESPONSE.redirect(
29 self.context.absolute_url() + "/@@error-log-form"
30 )
31
32 elif getattr(self.request, "form.button.clear", None) is not None:
33 member.setProperties(error_log_update=DateTime().timeTime())
34 self.context.plone_utils.addPortalMessage(_("Entries cleared"))
35 return self.request.RESPONSE.redirect(
36 self.context.absolute_url() + "/@@error-log-form"
37 )
38
39 else:
40 return self.request.RESPONSE.redirect(
41 self.context.absolute_url() + "/@@error-log-form"
42 )
43
44
45 class ErrorLogSetProperties(BrowserView):
46 def __call__(self):
47 keep_entries = self.request.form.get("keep_entries")
48 ignored_exceptions = self.request.form.get("ignored_exceptions")
49 copy_to_zlog = self.request.form.get("copy_to_zlog", 0)
50
51 ignored_exceptions = map(safe_nativestring, ignored_exceptions)
52 self.context.error_log.setProperties(
53 keep_entries, copy_to_zlog, ignored_exceptions
54 )
55 self.context.plone_utils.addPortalMessage(_("Changes made."))
56
57 return self.request.RESPONSE.redirect(
58 self.context.absolute_url() + "/@@error-log-form"
59 )
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Products/CMFPlone/controlpanel/browser/error_log_form.py b/Products/CMFPlone/controlpanel/browser/error_log_form.py
--- a/Products/CMFPlone/controlpanel/browser/error_log_form.py
+++ b/Products/CMFPlone/controlpanel/browser/error_log_form.py
@@ -1,14 +1,14 @@
from DateTime import DateTime
from plone.base import PloneMessageFactory as _
+from Products.CMFCore.utils import getToolByName
from Products.CMFPlone.utils import safe_nativestring
from Products.Five import BrowserView
-import plone.api as api
-
class ErrorLogUpdate(BrowserView):
def __call__(self):
- member = api.user.get_current()
+ portal_membership = getToolByName(self.context, "portal_membership")
+ member = portal_membership.getAuthenticatedMember()
if getattr(self.request, "form.button.search", None) is not None:
search = self.request.form.get("search_entry")
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -53,7 +53,6 @@
"borg.localrole",
"five.customerize",
"lxml",
- "plone.api >= 1.4.4",
"plone.app.content",
"plone.app.contentlisting",
"plone.app.contentmenu >= 2.0.1",
| {"golden_diff": "diff --git a/Products/CMFPlone/controlpanel/browser/error_log_form.py b/Products/CMFPlone/controlpanel/browser/error_log_form.py\n--- a/Products/CMFPlone/controlpanel/browser/error_log_form.py\n+++ b/Products/CMFPlone/controlpanel/browser/error_log_form.py\n@@ -1,14 +1,14 @@\n from DateTime import DateTime\n from plone.base import PloneMessageFactory as _\n+from Products.CMFCore.utils import getToolByName\n from Products.CMFPlone.utils import safe_nativestring\n from Products.Five import BrowserView\n \n-import plone.api as api\n-\n \n class ErrorLogUpdate(BrowserView):\n def __call__(self):\n- member = api.user.get_current()\n+ portal_membership = getToolByName(self.context, \"portal_membership\")\n+ member = portal_membership.getAuthenticatedMember()\n \n if getattr(self.request, \"form.button.search\", None) is not None:\n search = self.request.form.get(\"search_entry\")\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -53,7 +53,6 @@\n \"borg.localrole\",\n \"five.customerize\",\n \"lxml\",\n- \"plone.api >= 1.4.4\",\n \"plone.app.content\",\n \"plone.app.contentlisting\",\n \"plone.app.contentmenu >= 2.0.1\",\n", "issue": "CMFPlone must not depend on plone.api\nDependency in setup.py, used in \r\n\r\nhttps://github.com/plone/Products.CMFPlone/blob/5f866f56d979ca0879181823a1a802035f727079/Products/CMFPlone/controlpanel/browser/error_log_form.py#L6\r\n\r\nplone.api depends on CMFPlone, not vice versa!\n", "before_files": [{"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nversion = \"6.1.0a4.dev0\"\n\n\nsetup(\n name=\"Products.CMFPlone\",\n version=version,\n description=\"The Plone Content Management System (core)\",\n long_description=open(\"README.md\").read() + \"\\n\" + open(\"CHANGES.md\").read(),\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Plone\",\n \"Framework :: Plone :: 6.0\",\n \"Framework :: Plone :: Core\",\n \"Framework :: Zope :: 5\",\n \"License :: OSI Approved :: GNU General Public License v2 (GPLv2)\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: 3.12\",\n ],\n python_requires=\">=3.8\",\n keywords=\"Plone CMF Python Zope CMS Webapplication\",\n author=\"Plone Foundation\",\n author_email=\"[email protected]\",\n url=\"https://plone.org\",\n license=\"GPL version 2\",\n project_urls={\n \"Homepage\": \"https://plone.org\",\n \"Documentation\": \"https://6.docs.plone.org\",\n \"Source\": \"https://github.com/plone/Products.CMFPlone\",\n \"Issues\": \"https://github.com/plone/plone.org/Products.CMFPlone\",\n \"Forum\": \"https://community.plone.org/\",\n \"Chat\": \"https://discord.gg/zFY3EBbjaj\",\n \"Mastodon\": \"https://plone.social/@plone\",\n \"Twitter\": \"https://twitter.com/plone\",\n \"Videos\": \"https://youtube.com/@plonecms\",\n \"Sponsor\": \"https://github.com/sponsors/plone\",\n },\n packages=find_packages(),\n namespace_packages=[\"Products\"],\n include_package_data=True,\n zip_safe=False,\n install_requires=[\n \"borg.localrole\",\n \"five.customerize\",\n \"lxml\",\n \"plone.api >= 1.4.4\",\n \"plone.app.content\",\n \"plone.app.contentlisting\",\n \"plone.app.contentmenu >= 2.0.1\",\n 
\"plone.app.contentrules\",\n \"plone.app.contenttypes\",\n \"plone.app.customerize\",\n \"plone.app.dexterity\",\n \"plone.app.discussion\",\n \"plone.app.i18n\",\n \"plone.app.layout >= 2.5.15\",\n \"plone.app.linkintegrity >=1.0.3\",\n \"plone.app.locales\",\n \"plone.app.portlets\",\n \"plone.app.redirector\",\n \"plone.app.registry\",\n \"plone.app.theming\",\n \"plone.app.users\",\n \"plone.app.uuid\",\n \"plone.app.viewletmanager\",\n \"plone.app.vocabularies\",\n \"plone.app.workflow\",\n \"plone.app.z3cform >= 4.1.0\",\n \"plone.base\",\n \"plone.browserlayer >= 2.1.5\",\n \"plone.contentrules\",\n \"plone.folder\",\n \"plone.i18n >= 4.0.5\",\n \"plone.indexer\",\n \"plone.intelligenttext\",\n \"plone.locking\",\n \"plone.memoize\",\n \"plone.outputfilters\",\n \"plone.portlet.collection\",\n \"plone.portlet.static\",\n \"plone.portlets\",\n \"plone.protect >= 3.0.0\",\n \"plone.resource\",\n \"plone.schema\",\n \"plone.session\",\n \"plone.staticresources\",\n \"plone.theme\",\n \"plonetheme.barceloneta\",\n \"Products.CMFEditions\",\n \"Products.DCWorkflow\",\n \"Products.ExtendedPathIndex\",\n \"Products.isurlinportal\",\n \"Products.MimetypesRegistry\",\n \"Products.PlonePAS\",\n \"Products.PortalTransforms\",\n \"Products.SiteErrorLog\",\n \"Products.statusmessages\",\n \"setuptools>=36.2\",\n \"plone.autoinclude\",\n \"webresource>=1.2\",\n \"Zope[wsgi] >= 5.0\",\n \"zope.app.locales >= 3.6.0\",\n \"zope.cachedescriptors\",\n \"zope.deferredimport\",\n \"zope.deprecation\",\n \"zope.dottedname\",\n \"zope.i18n\",\n \"zope.i18nmessageid\",\n \"zope.structuredtext\",\n ],\n extras_require={\n \"test\": [\n \"lxml\",\n \"plone.app.robotframework>=1.0\",\n \"robotframework-debuglibrary\",\n \"plone.app.testing\",\n \"zope.globalrequest\",\n \"zope.testing\",\n \"gunicorn\",\n ]\n },\n)\n", "path": "setup.py"}, {"content": "from DateTime import DateTime\nfrom plone.base import PloneMessageFactory as _\nfrom Products.CMFPlone.utils import safe_nativestring\nfrom Products.Five import BrowserView\n\nimport plone.api as api\n\n\nclass ErrorLogUpdate(BrowserView):\n def __call__(self):\n member = api.user.get_current()\n\n if getattr(self.request, \"form.button.search\", None) is not None:\n search = self.request.form.get(\"search_entry\")\n if search == \"\":\n member.setProperties(error_log_update=0.0)\n self.context.plone_utils.addPortalMessage(_(\"Showing all entries\"))\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-form\"\n )\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-show-entry?id=%s\" % search\n )\n\n elif getattr(self.request, \"form.button.showall\", None) is not None:\n member.setProperties(error_log_update=0.0)\n self.context.plone_utils.addPortalMessage(_(\"Showing all entries\"))\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-form\"\n )\n\n elif getattr(self.request, \"form.button.clear\", None) is not None:\n member.setProperties(error_log_update=DateTime().timeTime())\n self.context.plone_utils.addPortalMessage(_(\"Entries cleared\"))\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-form\"\n )\n\n else:\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-form\"\n )\n\n\nclass ErrorLogSetProperties(BrowserView):\n def __call__(self):\n keep_entries = self.request.form.get(\"keep_entries\")\n ignored_exceptions = self.request.form.get(\"ignored_exceptions\")\n copy_to_zlog 
= self.request.form.get(\"copy_to_zlog\", 0)\n\n ignored_exceptions = map(safe_nativestring, ignored_exceptions)\n self.context.error_log.setProperties(\n keep_entries, copy_to_zlog, ignored_exceptions\n )\n self.context.plone_utils.addPortalMessage(_(\"Changes made.\"))\n\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-form\"\n )\n", "path": "Products/CMFPlone/controlpanel/browser/error_log_form.py"}], "after_files": [{"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nversion = \"6.1.0a4.dev0\"\n\n\nsetup(\n name=\"Products.CMFPlone\",\n version=version,\n description=\"The Plone Content Management System (core)\",\n long_description=open(\"README.md\").read() + \"\\n\" + open(\"CHANGES.md\").read(),\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Plone\",\n \"Framework :: Plone :: 6.0\",\n \"Framework :: Plone :: Core\",\n \"Framework :: Zope :: 5\",\n \"License :: OSI Approved :: GNU General Public License v2 (GPLv2)\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: 3.12\",\n ],\n python_requires=\">=3.8\",\n keywords=\"Plone CMF Python Zope CMS Webapplication\",\n author=\"Plone Foundation\",\n author_email=\"[email protected]\",\n url=\"https://plone.org\",\n license=\"GPL version 2\",\n project_urls={\n \"Homepage\": \"https://plone.org\",\n \"Documentation\": \"https://6.docs.plone.org\",\n \"Source\": \"https://github.com/plone/Products.CMFPlone\",\n \"Issues\": \"https://github.com/plone/plone.org/Products.CMFPlone\",\n \"Forum\": \"https://community.plone.org/\",\n \"Chat\": \"https://discord.gg/zFY3EBbjaj\",\n \"Mastodon\": \"https://plone.social/@plone\",\n \"Twitter\": \"https://twitter.com/plone\",\n \"Videos\": \"https://youtube.com/@plonecms\",\n \"Sponsor\": \"https://github.com/sponsors/plone\",\n },\n packages=find_packages(),\n namespace_packages=[\"Products\"],\n include_package_data=True,\n zip_safe=False,\n install_requires=[\n \"borg.localrole\",\n \"five.customerize\",\n \"lxml\",\n \"plone.app.content\",\n \"plone.app.contentlisting\",\n \"plone.app.contentmenu >= 2.0.1\",\n \"plone.app.contentrules\",\n \"plone.app.contenttypes\",\n \"plone.app.customerize\",\n \"plone.app.dexterity\",\n \"plone.app.discussion\",\n \"plone.app.i18n\",\n \"plone.app.layout >= 2.5.15\",\n \"plone.app.linkintegrity >=1.0.3\",\n \"plone.app.locales\",\n \"plone.app.portlets\",\n \"plone.app.redirector\",\n \"plone.app.registry\",\n \"plone.app.theming\",\n \"plone.app.users\",\n \"plone.app.uuid\",\n \"plone.app.viewletmanager\",\n \"plone.app.vocabularies\",\n \"plone.app.workflow\",\n \"plone.app.z3cform >= 4.1.0\",\n \"plone.base\",\n \"plone.browserlayer >= 2.1.5\",\n \"plone.contentrules\",\n \"plone.folder\",\n \"plone.i18n >= 4.0.5\",\n \"plone.indexer\",\n \"plone.intelligenttext\",\n \"plone.locking\",\n \"plone.memoize\",\n \"plone.outputfilters\",\n \"plone.portlet.collection\",\n \"plone.portlet.static\",\n \"plone.portlets\",\n \"plone.protect >= 3.0.0\",\n \"plone.resource\",\n \"plone.schema\",\n \"plone.session\",\n \"plone.staticresources\",\n \"plone.theme\",\n \"plonetheme.barceloneta\",\n \"Products.CMFEditions\",\n 
\"Products.DCWorkflow\",\n \"Products.ExtendedPathIndex\",\n \"Products.isurlinportal\",\n \"Products.MimetypesRegistry\",\n \"Products.PlonePAS\",\n \"Products.PortalTransforms\",\n \"Products.SiteErrorLog\",\n \"Products.statusmessages\",\n \"setuptools>=36.2\",\n \"plone.autoinclude\",\n \"webresource>=1.2\",\n \"Zope[wsgi] >= 5.0\",\n \"zope.app.locales >= 3.6.0\",\n \"zope.cachedescriptors\",\n \"zope.deferredimport\",\n \"zope.deprecation\",\n \"zope.dottedname\",\n \"zope.i18n\",\n \"zope.i18nmessageid\",\n \"zope.structuredtext\",\n ],\n extras_require={\n \"test\": [\n \"lxml\",\n \"plone.app.robotframework>=1.0\",\n \"robotframework-debuglibrary\",\n \"plone.app.testing\",\n \"zope.globalrequest\",\n \"zope.testing\",\n \"gunicorn\",\n ]\n },\n)\n", "path": "setup.py"}, {"content": "from DateTime import DateTime\nfrom plone.base import PloneMessageFactory as _\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFPlone.utils import safe_nativestring\nfrom Products.Five import BrowserView\n\n\nclass ErrorLogUpdate(BrowserView):\n def __call__(self):\n portal_membership = getToolByName(self.context, \"portal_membership\")\n member = portal_membership.getAuthenticatedMember()\n\n if getattr(self.request, \"form.button.search\", None) is not None:\n search = self.request.form.get(\"search_entry\")\n if search == \"\":\n member.setProperties(error_log_update=0.0)\n self.context.plone_utils.addPortalMessage(_(\"Showing all entries\"))\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-form\"\n )\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-show-entry?id=%s\" % search\n )\n\n elif getattr(self.request, \"form.button.showall\", None) is not None:\n member.setProperties(error_log_update=0.0)\n self.context.plone_utils.addPortalMessage(_(\"Showing all entries\"))\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-form\"\n )\n\n elif getattr(self.request, \"form.button.clear\", None) is not None:\n member.setProperties(error_log_update=DateTime().timeTime())\n self.context.plone_utils.addPortalMessage(_(\"Entries cleared\"))\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-form\"\n )\n\n else:\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-form\"\n )\n\n\nclass ErrorLogSetProperties(BrowserView):\n def __call__(self):\n keep_entries = self.request.form.get(\"keep_entries\")\n ignored_exceptions = self.request.form.get(\"ignored_exceptions\")\n copy_to_zlog = self.request.form.get(\"copy_to_zlog\", 0)\n\n ignored_exceptions = map(safe_nativestring, ignored_exceptions)\n self.context.error_log.setProperties(\n keep_entries, copy_to_zlog, ignored_exceptions\n )\n self.context.plone_utils.addPortalMessage(_(\"Changes made.\"))\n\n return self.request.RESPONSE.redirect(\n self.context.absolute_url() + \"/@@error-log-form\"\n )\n", "path": "Products/CMFPlone/controlpanel/browser/error_log_form.py"}]} | 2,409 | 309 |
gh_patches_debug_14294 | rasdani/github-patches | git_diff | scikit-image__scikit-image-4172 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IO: unhandled exception, in case the URL cannot be opened
## Description

## Way to reproduce
```python
import cv2
from skimage import io


def url_to_image(url):
"""download image from url and return it"""
im_mat = io.imread(url)
image = cv2.imdecode(im_mat, cv2.IMREAD_COLOR)
# return the image
return image
```
## Version information
```python
>>> from __future__ import print_function
>>> import sys; print(sys.version)
3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)]
>>> import platform; print(platform.platform())
Windows-10-10.0.17134-SP0
>>> import skimage; print("scikit-image version: {}".format(skimage.__version__))
scikit-image version: 0.14.2
>>> import numpy; print("numpy version: {}".format(numpy.__version__))
```
```python
Traceback (most recent call last):
File "C:\Python37\lib\site-packages\skimage\io\util.py", line 28, in file_or_url_context
with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as f:
File "C:\Python37\lib\tempfile.py", line 547, in NamedTemporaryFile
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "C:\Python37\lib\tempfile.py", line 258, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
OSError: [Errno 22] Invalid argument: 'C:\\Users\\kelvin\\AppData\\Local\\Temp\\tmpegjt2y8r.png?mi_guid=CFCDF62B5115D31389F2893D3CCFFA5CB42BDE7F8CD696BB57B97B8BB567940F&mi_language=EN'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "imagecheck.py", line 28, in <module>
print(url_to_image("http://www.movable-ink-6437.com/p/rp/0fd853751b1edbb0.png?mi_guid=CFCDF62B5115D31389F2893D3CCFFA5CB42BDE7F8CD696BB57B97B8BB567940F&mi_language=EN"))
File "imagecheck.py", line 9, in url_to_image
im_mat = io.imread(url)
File "C:\Python37\lib\site-packages\skimage\io\_io.py", line 61, in imread
with file_or_url_context(fname) as fname:
File "C:\Python37\lib\contextlib.py", line 112, in __enter__
return next(self.gen)
File "C:\Python37\lib\site-packages\skimage\io\util.py", line 34, in file_or_url_context
os.remove(f.name)
UnboundLocalError: local variable 'f' referenced before assignment
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/io/util.py`
Content:
```
1 import urllib.parse
2 import urllib.request
3
4 import os
5 import re
6 import tempfile
7 from contextlib import contextmanager
8
9
10 URL_REGEX = re.compile(r'http://|https://|ftp://|file://|file:\\')
11
12
13 def is_url(filename):
14 """Return True if string is an http or ftp path."""
15 return (isinstance(filename, str) and
16 URL_REGEX.match(filename) is not None)
17
18
19 @contextmanager
20 def file_or_url_context(resource_name):
21 """Yield name of file from the given resource (i.e. file or url)."""
22 if is_url(resource_name):
23 url_components = urllib.parse.urlparse(resource_name)
24 _, ext = os.path.splitext(url_components.path)
25 try:
26 with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as f:
27 u = urllib.request.urlopen(resource_name)
28 f.write(u.read())
29 # f must be closed before yielding
30 yield f.name
31 finally:
32 os.remove(f.name)
33 else:
34 yield resource_name
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/io/util.py b/skimage/io/util.py
--- a/skimage/io/util.py
+++ b/skimage/io/util.py
@@ -1,5 +1,6 @@
import urllib.parse
import urllib.request
+from urllib.error import URLError, HTTPError
import os
import re
@@ -28,7 +29,15 @@
f.write(u.read())
# f must be closed before yielding
yield f.name
- finally:
+ except (URLError, HTTPError):
+ # could not open URL
+ os.remove(f.name)
+ raise
+ except (FileNotFoundError, FileExistsError,
+ PermissionError, BaseException):
+ # could not create temporary file
+ raise
+ else:
os.remove(f.name)
else:
yield resource_name
| {"golden_diff": "diff --git a/skimage/io/util.py b/skimage/io/util.py\n--- a/skimage/io/util.py\n+++ b/skimage/io/util.py\n@@ -1,5 +1,6 @@\n import urllib.parse\n import urllib.request\n+from urllib.error import URLError, HTTPError\n \n import os\n import re\n@@ -28,7 +29,15 @@\n f.write(u.read())\n # f must be closed before yielding\n yield f.name\n- finally:\n+ except (URLError, HTTPError):\n+ # could not open URL\n+ os.remove(f.name)\n+ raise\n+ except (FileNotFoundError, FileExistsError,\n+ PermissionError, BaseException):\n+ # could not create temporary file\n+ raise\n+ else:\n os.remove(f.name)\n else:\n yield resource_name\n", "issue": "IO: unhandled exception, in case the URL cannot be opened\n## Description\r\n\r\n\r\n\r\n## Way to reproduce\r\n```python\r\ndef url_to_image(url):\r\n \"\"\"download image from url and return it\"\"\"\r\n im_mat = io.imread(url)\r\n image = cv2.imdecode(im_mat, cv2.IMREAD_COLOR)\r\n \r\n # return the image\r\n return image\r\n```\r\n\r\n\r\n## Version information\r\n```python\r\n>>> from __future__ import print_function\r\n>>> import sys; print(sys.version)\r\n3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)]\r\n>>> import platform; print(platform.platform())\r\nWindows-10-10.0.17134-SP0\r\n>>> import skimage; print(\"scikit-image version: {}\".format(skimage.__version__))\r\nscikit-image version: 0.14.2\r\n>>> import numpy; print(\"numpy version: {}\".format(numpy.__version__))\r\n```\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"C:\\Python37\\lib\\site-packages\\skimage\\io\\util.py\", line 28, in file_or_url_context\r\n with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as f:\r\n File \"C:\\Python37\\lib\\tempfile.py\", line 547, in NamedTemporaryFile\r\n (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)\r\n File \"C:\\Python37\\lib\\tempfile.py\", line 258, in _mkstemp_inner\r\n fd = _os.open(file, flags, 0o600)\r\nOSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\kelvin\\\\AppData\\\\Local\\\\Temp\\\\tmpegjt2y8r.png?mi_guid=CFCDF62B5115D31389F2893D3CCFFA5CB42BDE7F8CD696BB57B97B8BB567940F&mi_language=EN'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"imagecheck.py\", line 28, in <module>\r\n print(url_to_image(\"http://www.movable-ink-6437.com/p/rp/0fd853751b1edbb0.png?mi_guid=CFCDF62B5115D31389F2893D3CCFFA5CB42BDE7F8CD696BB57B97B8BB567940F&mi_language=EN\"))\r\n File \"imagecheck.py\", line 9, in url_to_image\r\n im_mat = io.imread(url)\r\n File \"C:\\Python37\\lib\\site-packages\\skimage\\io\\_io.py\", line 61, in imread\r\n with file_or_url_context(fname) as fname:\r\n File \"C:\\Python37\\lib\\contextlib.py\", line 112, in __enter__\r\n return next(self.gen)\r\n File \"C:\\Python37\\lib\\site-packages\\skimage\\io\\util.py\", line 34, in file_or_url_context\r\n os.remove(f.name)\r\nUnboundLocalError: local variable 'f' referenced before assignment\r\n```\r\n\r\n\n", "before_files": [{"content": "import urllib.parse\nimport urllib.request\n\nimport os\nimport re\nimport tempfile\nfrom contextlib import contextmanager\n\n\nURL_REGEX = re.compile(r'http://|https://|ftp://|file://|file:\\\\')\n\n\ndef is_url(filename):\n \"\"\"Return True if string is an http or ftp path.\"\"\"\n return (isinstance(filename, str) and\n URL_REGEX.match(filename) is not None)\n\n\n@contextmanager\ndef file_or_url_context(resource_name):\n \"\"\"Yield name of file from the given resource (i.e. 
file or url).\"\"\"\n if is_url(resource_name):\n url_components = urllib.parse.urlparse(resource_name)\n _, ext = os.path.splitext(url_components.path)\n try:\n with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as f:\n u = urllib.request.urlopen(resource_name)\n f.write(u.read())\n # f must be closed before yielding\n yield f.name\n finally:\n os.remove(f.name)\n else:\n yield resource_name\n", "path": "skimage/io/util.py"}], "after_files": [{"content": "import urllib.parse\nimport urllib.request\nfrom urllib.error import URLError, HTTPError\n\nimport os\nimport re\nimport tempfile\nfrom contextlib import contextmanager\n\n\nURL_REGEX = re.compile(r'http://|https://|ftp://|file://|file:\\\\')\n\n\ndef is_url(filename):\n \"\"\"Return True if string is an http or ftp path.\"\"\"\n return (isinstance(filename, str) and\n URL_REGEX.match(filename) is not None)\n\n\n@contextmanager\ndef file_or_url_context(resource_name):\n \"\"\"Yield name of file from the given resource (i.e. file or url).\"\"\"\n if is_url(resource_name):\n url_components = urllib.parse.urlparse(resource_name)\n _, ext = os.path.splitext(url_components.path)\n try:\n with tempfile.NamedTemporaryFile(delete=False, suffix=ext) as f:\n u = urllib.request.urlopen(resource_name)\n f.write(u.read())\n # f must be closed before yielding\n yield f.name\n except (URLError, HTTPError):\n # could not open URL\n os.remove(f.name)\n raise\n except (FileNotFoundError, FileExistsError,\n PermissionError, BaseException):\n # could not create temporary file\n raise\n else:\n os.remove(f.name)\n else:\n yield resource_name\n", "path": "skimage/io/util.py"}]} | 1,381 | 187 |
gh_patches_debug_12519 | rasdani/github-patches | git_diff | litestar-org__litestar-771 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Properly indented docstrings result in incorrectly rendered Markdown in openapi docs
**Describe the bug**
A properly formatted docstring is indented to match the indentation of the function or method. The markdown rendering that is passed into the API docs seems to be interpreting these indentations in such a way that the markdown is not properly formatted unless the docstring is out-dented to an unconventional flush left.
**To Reproduce**
I have the following function definition with docstring:
```
async def content_classifier_model_details(model_name:str) -> ModelClassDetailsResponse:
"""Retrieve the variants and versions of a group of content classifier models for a
given named classification.
Models can be specified at the `classify` endpoint as follows:
- **Class name only** (`model_name`). E.g. `climate_action`. Will utilize the
default model as indicated by the default flag in the model details. _This format
is provided primarily for development and exploratory purposes._ In production
code, it is recommended that the variant be specified.
- **Class with variant** (`name-variant`). E.g. `climate_action-nx`. Will utilize
the current version of the specified variant as indicated by the `current` field
in the model details.
- **Full versioned identifier** (`name-variant-version`). E.g. `climate_action-nx-1` is
useful for pinning your application to a specific historical version.
"""
```
The result is shown in the screenshot titled incorrect-markdown-render.png. The expected rendering is shown in correct-markdown-render.png which can be achieved by the following unconventional docstring format:
```
async def content_classifier_model_details(model_name:str) -> ModelClassDetailsResponse:
"""Retrieve the variants and versions of a group of content classifier models for a
given named classification.
Models can be specified at the `classify` endpoint as follows:
- **Class name only** (`model_name`). E.g. `climate_action`. Will utilize the
default model as indicated by the default flag in the model details. _This format
is provided primarily for development and exploratory purposes._ In production
code, it is recommended that the variant be specified.
- **Class with variant** (`name-variant`). E.g. `climate_action-nx`. Will utilize
the current version of the specified variant as indicated by the `current` field
in the model details.
- **Full versioned identifier** (`name-variant-version`). E.g. `climate_action-nx-1` is
useful for pinning your application to a specific historical version.
"""
```
## incorrect-markdown-render
<img width="579" alt="incorrect-markdown-render" src="https://user-images.githubusercontent.com/307713/200467063-af74141c-f289-49f0-afe6-1aa28e9bea73.png">
## correct-markdown-render
(achieved via unconventional docstring out-denting)
<img width="571" alt="correct-markdown-render" src="https://user-images.githubusercontent.com/307713/200467064-349dfbda-0a21-4d99-8929-ab3a9f51323f.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/openapi/path_item.py`
Content:
```
1 from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, cast
2
3 from pydantic_openapi_schema.v3_1_0.operation import Operation
4 from pydantic_openapi_schema.v3_1_0.path_item import PathItem
5
6 from starlite.openapi.parameters import create_parameter_for_handler
7 from starlite.openapi.request_body import create_request_body
8 from starlite.openapi.responses import create_responses
9 from starlite.utils import get_name
10
11 if TYPE_CHECKING:
12 from pydantic import BaseModel
13 from pydantic_openapi_schema.v3_1_0 import SecurityRequirement
14
15 from starlite.handlers import HTTPRouteHandler
16 from starlite.plugins.base import PluginProtocol
17 from starlite.routes import HTTPRoute
18 from starlite.types import AnyCallable
19
20
21 def get_description_for_handler(route_handler: "HTTPRouteHandler", use_handler_docstrings: bool) -> Optional[str]:
22 """Produces the operation description for a route handler, either by using the description value if provided, or the docstring - if config is enabled.
23
24 Args:
25 route_handler: A route handler instance.
26 use_handler_docstrings: If `True` and `route_handler.description` is `None` returns docstring of wrapped
27 handler function.
28
29 Returns:
30 An optional description string
31 """
32 handler_description = route_handler.description
33 if handler_description is None and use_handler_docstrings:
34 return route_handler.fn.__doc__
35 return handler_description
36
37
38 def extract_layered_values(
39 route_handler: "HTTPRouteHandler",
40 ) -> Tuple[Optional[List[str]], Optional[List[Dict[str, List[str]]]]]:
41 """Extracts the tags and security values from the route handler layers.
42
43 Args:
44 route_handler: A Route Handler instance.
45
46 Returns:
47 A tuple of optional lists.
48 """
49 tags: List[str] = []
50 security: List["SecurityRequirement"] = []
51 for layer in route_handler.ownership_layers:
52 if layer.tags:
53 tags.extend(layer.tags)
54 if layer.security:
55 security.extend(layer.security)
56 return list(set(tags)) if tags else None, security or None
57
58
59 def create_path_item(
60 route: "HTTPRoute", create_examples: bool, plugins: List["PluginProtocol"], use_handler_docstrings: bool
61 ) -> PathItem:
62 """Create a PathItem model for the given route parsing all http_methods
63 into Operation Models."""
64 path_item = PathItem()
65 for http_method, handler_tuple in route.route_handler_map.items():
66 route_handler, _ = handler_tuple
67 if route_handler.include_in_schema:
68 handler_fields = cast("BaseModel", route_handler.signature_model).__fields__
69 parameters = (
70 create_parameter_for_handler(
71 route_handler=route_handler,
72 handler_fields=handler_fields,
73 path_parameters=route.path_parameters,
74 generate_examples=create_examples,
75 )
76 or None
77 )
78 raises_validation_error = bool("data" in handler_fields or path_item.parameters or parameters)
79 handler_name = get_name(cast("AnyCallable", route_handler.fn)).replace("_", " ").title()
80 request_body = None
81 if "data" in handler_fields:
82 request_body = create_request_body(
83 field=handler_fields["data"], generate_examples=create_examples, plugins=plugins
84 )
85
86 tags, security = extract_layered_values(route_handler)
87 operation = Operation(
88 operationId=route_handler.operation_id or handler_name,
89 tags=tags,
90 summary=route_handler.summary,
91 description=get_description_for_handler(route_handler, use_handler_docstrings),
92 deprecated=route_handler.deprecated,
93 responses=create_responses(
94 route_handler=route_handler,
95 raises_validation_error=raises_validation_error,
96 generate_examples=create_examples,
97 plugins=plugins,
98 ),
99 requestBody=request_body,
100 parameters=parameters, # type: ignore[arg-type]
101 security=security,
102 )
103 setattr(path_item, http_method.lower(), operation)
104 return path_item
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlite/openapi/path_item.py b/starlite/openapi/path_item.py
--- a/starlite/openapi/path_item.py
+++ b/starlite/openapi/path_item.py
@@ -1,3 +1,4 @@
+from inspect import cleandoc
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, cast
from pydantic_openapi_schema.v3_1_0.operation import Operation
@@ -31,7 +32,7 @@
"""
handler_description = route_handler.description
if handler_description is None and use_handler_docstrings:
- return route_handler.fn.__doc__
+ return cleandoc(route_handler.fn.__doc__) if route_handler.fn.__doc__ else None
return handler_description
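Editor's note: a small, runnable demonstration of what `inspect.cleandoc` (adopted in the diff above) does to a conventionally indented docstring — the uniform leading indentation that was tripping up the Markdown renderer is stripped, while relative indentation is preserved. The handler below is a made-up example, not code from the project.

```python
from inspect import cleandoc


def handler() -> None:
    """Summary line.

    - **item one** with some detail
    - **item two** with some detail
    """


raw = handler.__doc__ or ""
print(repr(raw.splitlines()[2]))            # '    - **item one** with some detail'
print(repr(cleandoc(raw).splitlines()[2]))  # '- **item one** with some detail'
```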
| {"golden_diff": "diff --git a/starlite/openapi/path_item.py b/starlite/openapi/path_item.py\n--- a/starlite/openapi/path_item.py\n+++ b/starlite/openapi/path_item.py\n@@ -1,3 +1,4 @@\n+from inspect import cleandoc\n from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, cast\n \n from pydantic_openapi_schema.v3_1_0.operation import Operation\n@@ -31,7 +32,7 @@\n \"\"\"\n handler_description = route_handler.description\n if handler_description is None and use_handler_docstrings:\n- return route_handler.fn.__doc__\n+ return cleandoc(route_handler.fn.__doc__) if route_handler.fn.__doc__ else None\n return handler_description\n", "issue": "Properly indented docstrings result in incorrectly rendered Markdown in openapi docs\n**Describe the bug**\r\n\r\nA properly formatted docstring is indented to match the indentation of the function or method. The markdown rendering that is passed into the API docs seems to be interpreting these indentations in such a way that the markdown is not properly formatted unless the docstring is out-dented to an unconventional flush left.\r\n\r\n**To Reproduce**\r\n\r\nI have the following function definition with docstring:\r\n\r\n```\r\nasync def content_classifier_model_details(model_name:str) -> ModelClassDetailsResponse:\r\n \"\"\"Retrieve the variants and versions of a group of content classifier models for a\r\n given named classification.\r\n\r\n Models can be specified at the `classify` endpoint as follows:\r\n\r\n - **Class name only** (`model_name`). E.g. `climate_action`. Will utilize the\r\n default model as indicated by the default flag in the model details. _This format\r\n is provided primarily for development and exploratory purposes._ In production\r\n code, it is recommended that the variant be specified.\r\n - **Class with variant** (`name-variant`). E.g. `climate_action-nx`. Will utilize\r\n the current version of the specified variant as indicated by the `current` field\r\n in the model details.\r\n - **Full versioned identifier** (`name-variant-version`). E.g. `climate_action-nx-1` is\r\n useful for pinning your application to a specific historical version.\r\n \"\"\"\r\n```\r\nThe result is shown in the screenshot titled incorrect-markdown-render.png. The expected rendering is shown in correct-markdown-render.png which can be achieved by the following unconventional docstring format:\r\n\r\n```\r\nasync def content_classifier_model_details(model_name:str) -> ModelClassDetailsResponse:\r\n \"\"\"Retrieve the variants and versions of a group of content classifier models for a\r\ngiven named classification.\r\n\r\nModels can be specified at the `classify` endpoint as follows:\r\n\r\n- **Class name only** (`model_name`). E.g. `climate_action`. Will utilize the\r\ndefault model as indicated by the default flag in the model details. _This format\r\nis provided primarily for development and exploratory purposes._ In production\r\ncode, it is recommended that the variant be specified.\r\n- **Class with variant** (`name-variant`). E.g. `climate_action-nx`. Will utilize\r\n the current version of the specified variant as indicated by the `current` field\r\n in the model details.\r\n- **Full versioned identifier** (`name-variant-version`). E.g. 
`climate_action-nx-1` is\r\n useful for pinning your application to a specific historical version.\r\n\"\"\"\r\n```\r\n\r\n## incorrect-markdown-render\r\n\r\n<img width=\"579\" alt=\"incorrect-markdown-render\" src=\"https://user-images.githubusercontent.com/307713/200467063-af74141c-f289-49f0-afe6-1aa28e9bea73.png\">\r\n\r\n## correct-markdown-render\r\n\r\n(achieved via unconventional docstring out-denting)\r\n\r\n<img width=\"571\" alt=\"correct-markdown-render\" src=\"https://user-images.githubusercontent.com/307713/200467064-349dfbda-0a21-4d99-8929-ab3a9f51323f.png\">\r\n\r\n\n", "before_files": [{"content": "from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, cast\n\nfrom pydantic_openapi_schema.v3_1_0.operation import Operation\nfrom pydantic_openapi_schema.v3_1_0.path_item import PathItem\n\nfrom starlite.openapi.parameters import create_parameter_for_handler\nfrom starlite.openapi.request_body import create_request_body\nfrom starlite.openapi.responses import create_responses\nfrom starlite.utils import get_name\n\nif TYPE_CHECKING:\n from pydantic import BaseModel\n from pydantic_openapi_schema.v3_1_0 import SecurityRequirement\n\n from starlite.handlers import HTTPRouteHandler\n from starlite.plugins.base import PluginProtocol\n from starlite.routes import HTTPRoute\n from starlite.types import AnyCallable\n\n\ndef get_description_for_handler(route_handler: \"HTTPRouteHandler\", use_handler_docstrings: bool) -> Optional[str]:\n \"\"\"Produces the operation description for a route handler, either by using the description value if provided, or the docstring - if config is enabled.\n\n Args:\n route_handler: A route handler instance.\n use_handler_docstrings: If `True` and `route_handler.description` is `None` returns docstring of wrapped\n handler function.\n\n Returns:\n An optional description string\n \"\"\"\n handler_description = route_handler.description\n if handler_description is None and use_handler_docstrings:\n return route_handler.fn.__doc__\n return handler_description\n\n\ndef extract_layered_values(\n route_handler: \"HTTPRouteHandler\",\n) -> Tuple[Optional[List[str]], Optional[List[Dict[str, List[str]]]]]:\n \"\"\"Extracts the tags and security values from the route handler layers.\n\n Args:\n route_handler: A Route Handler instance.\n\n Returns:\n A tuple of optional lists.\n \"\"\"\n tags: List[str] = []\n security: List[\"SecurityRequirement\"] = []\n for layer in route_handler.ownership_layers:\n if layer.tags:\n tags.extend(layer.tags)\n if layer.security:\n security.extend(layer.security)\n return list(set(tags)) if tags else None, security or None\n\n\ndef create_path_item(\n route: \"HTTPRoute\", create_examples: bool, plugins: List[\"PluginProtocol\"], use_handler_docstrings: bool\n) -> PathItem:\n \"\"\"Create a PathItem model for the given route parsing all http_methods\n into Operation Models.\"\"\"\n path_item = PathItem()\n for http_method, handler_tuple in route.route_handler_map.items():\n route_handler, _ = handler_tuple\n if route_handler.include_in_schema:\n handler_fields = cast(\"BaseModel\", route_handler.signature_model).__fields__\n parameters = (\n create_parameter_for_handler(\n route_handler=route_handler,\n handler_fields=handler_fields,\n path_parameters=route.path_parameters,\n generate_examples=create_examples,\n )\n or None\n )\n raises_validation_error = bool(\"data\" in handler_fields or path_item.parameters or parameters)\n handler_name = get_name(cast(\"AnyCallable\", route_handler.fn)).replace(\"_\", \" 
\").title()\n request_body = None\n if \"data\" in handler_fields:\n request_body = create_request_body(\n field=handler_fields[\"data\"], generate_examples=create_examples, plugins=plugins\n )\n\n tags, security = extract_layered_values(route_handler)\n operation = Operation(\n operationId=route_handler.operation_id or handler_name,\n tags=tags,\n summary=route_handler.summary,\n description=get_description_for_handler(route_handler, use_handler_docstrings),\n deprecated=route_handler.deprecated,\n responses=create_responses(\n route_handler=route_handler,\n raises_validation_error=raises_validation_error,\n generate_examples=create_examples,\n plugins=plugins,\n ),\n requestBody=request_body,\n parameters=parameters, # type: ignore[arg-type]\n security=security,\n )\n setattr(path_item, http_method.lower(), operation)\n return path_item\n", "path": "starlite/openapi/path_item.py"}], "after_files": [{"content": "from inspect import cleandoc\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Tuple, cast\n\nfrom pydantic_openapi_schema.v3_1_0.operation import Operation\nfrom pydantic_openapi_schema.v3_1_0.path_item import PathItem\n\nfrom starlite.openapi.parameters import create_parameter_for_handler\nfrom starlite.openapi.request_body import create_request_body\nfrom starlite.openapi.responses import create_responses\nfrom starlite.utils import get_name\n\nif TYPE_CHECKING:\n from pydantic import BaseModel\n from pydantic_openapi_schema.v3_1_0 import SecurityRequirement\n\n from starlite.handlers import HTTPRouteHandler\n from starlite.plugins.base import PluginProtocol\n from starlite.routes import HTTPRoute\n from starlite.types import AnyCallable\n\n\ndef get_description_for_handler(route_handler: \"HTTPRouteHandler\", use_handler_docstrings: bool) -> Optional[str]:\n \"\"\"Produces the operation description for a route handler, either by using the description value if provided, or the docstring - if config is enabled.\n\n Args:\n route_handler: A route handler instance.\n use_handler_docstrings: If `True` and `route_handler.description` is `None` returns docstring of wrapped\n handler function.\n\n Returns:\n An optional description string\n \"\"\"\n handler_description = route_handler.description\n if handler_description is None and use_handler_docstrings:\n return cleandoc(route_handler.fn.__doc__) if route_handler.fn.__doc__ else None\n return handler_description\n\n\ndef extract_layered_values(\n route_handler: \"HTTPRouteHandler\",\n) -> Tuple[Optional[List[str]], Optional[List[Dict[str, List[str]]]]]:\n \"\"\"Extracts the tags and security values from the route handler layers.\n\n Args:\n route_handler: A Route Handler instance.\n\n Returns:\n A tuple of optional lists.\n \"\"\"\n tags: List[str] = []\n security: List[\"SecurityRequirement\"] = []\n for layer in route_handler.ownership_layers:\n if layer.tags:\n tags.extend(layer.tags)\n if layer.security:\n security.extend(layer.security)\n return list(set(tags)) if tags else None, security or None\n\n\ndef create_path_item(\n route: \"HTTPRoute\", create_examples: bool, plugins: List[\"PluginProtocol\"], use_handler_docstrings: bool\n) -> PathItem:\n \"\"\"Create a PathItem model for the given route parsing all http_methods\n into Operation Models.\"\"\"\n path_item = PathItem()\n for http_method, handler_tuple in route.route_handler_map.items():\n route_handler, _ = handler_tuple\n if route_handler.include_in_schema:\n handler_fields = cast(\"BaseModel\", route_handler.signature_model).__fields__\n parameters = (\n 
create_parameter_for_handler(\n route_handler=route_handler,\n handler_fields=handler_fields,\n path_parameters=route.path_parameters,\n generate_examples=create_examples,\n )\n or None\n )\n raises_validation_error = bool(\"data\" in handler_fields or path_item.parameters or parameters)\n handler_name = get_name(cast(\"AnyCallable\", route_handler.fn)).replace(\"_\", \" \").title()\n request_body = None\n if \"data\" in handler_fields:\n request_body = create_request_body(\n field=handler_fields[\"data\"], generate_examples=create_examples, plugins=plugins\n )\n\n tags, security = extract_layered_values(route_handler)\n operation = Operation(\n operationId=route_handler.operation_id or handler_name,\n tags=tags,\n summary=route_handler.summary,\n description=get_description_for_handler(route_handler, use_handler_docstrings),\n deprecated=route_handler.deprecated,\n responses=create_responses(\n route_handler=route_handler,\n raises_validation_error=raises_validation_error,\n generate_examples=create_examples,\n plugins=plugins,\n ),\n requestBody=request_body,\n parameters=parameters, # type: ignore[arg-type]\n security=security,\n )\n setattr(path_item, http_method.lower(), operation)\n return path_item\n", "path": "starlite/openapi/path_item.py"}]} | 2,021 | 162 |
gh_patches_debug_48738 | rasdani/github-patches | git_diff | encode__uvicorn-962 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WSGI middleware error in non-latin1 path
### Checklist
<!-- Please make sure you check all these items before submitting your bug report. -->
- [x] The bug is reproducible against the latest release and/or `master`.
- [x] There are no similar issues or pull requests to fix it yet.
### Describe the bug
<!-- A clear and concise description of what the bug is. -->
The same bug has been addressed in https://github.com/encode/starlette/issues/997.
https://github.com/encode/uvicorn/blob/9d51e1cfa68ee6e4ee3fb8a23b8599bb44c3985a/uvicorn/middleware/wsgi.py#L14
In uvicorn's wsgi middleware, `scope["path"]` in asgi is utf8-encoded but the corresponding `PATH_INFO` required by wsgi should be encoded in latin1 according to the standards.
### To reproduce
<!-- Provide a *minimal* example with steps to reproduce the bug locally.
NOTE: try to keep any external dependencies *at an absolute minimum* .
In other words, remove anything that doesn't make the bug go away.
-->
```python
import flask
import uvicorn
app = flask.Flask(__name__)
@app.route("/<keyword>")
def home(keyword):
return keyword
if __name__ == "__main__":
uvicorn.run(app, interface="wsgi")
```
1. run the script.
2. send a request to "http://127.0.0.1:8000/文"
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Get a 200 response
### Actual behavior
<!-- A clear and concise description of what actually happens. -->
Get 500 Internal Server Error
### Debugging material
<!-- Any tracebacks, screenshots, etc. that can help understanding the problem.
NOTE:
- Please list tracebacks in full (don't truncate them).
- If relevant, consider turning on DEBUG or TRACE logs for additional details (see the Logging section on https://www.uvicorn.org/settings/ specifically the `log-level` flag).
- Consider using `<details>` to make tracebacks/logs collapsible if they're very large (see https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d).
-->
<details>
```python
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 394, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\uvicorn\middleware\wsgi.py", line 64, in __call__
await instance(receive, send)
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\uvicorn\middleware\wsgi.py", line 95, in __call__
await asyncio.wait_for(wsgi, None)
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\asyncio\tasks.py", line 455, in wait_for
return await fut
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\uvicorn\middleware\wsgi.py", line 134, in wsgi
for chunk in self.app(environ, start_response):
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\flask\app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\flask\app.py", line 2442, in wsgi_app
ctx = self.request_context(environ)
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\flask\app.py", line 2359, in request_context
return RequestContext(self, environ)
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\flask\ctx.py", line 292, in __init__
self.url_adapter = app.create_url_adapter(self.request)
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\flask\app.py", line 2171, in create_url_adapter
return self.url_map.bind_to_environ(
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\werkzeug\routing.py", line 1634, in bind_to_environ
path_info = _get_wsgi_string("PATH_INFO")
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\werkzeug\routing.py", line 1631, in _get_wsgi_string
return wsgi_decoding_dance(val, self.charset)
File "C:\Users\cools\AppData\Local\Programs\Python\Python38\lib\site-packages\werkzeug\_compat.py", line 188, in wsgi_decoding_dance
return s.encode("latin1").decode(charset, errors)
UnicodeEncodeError: 'latin-1' codec can't encode character '\u6587' in position 1: ordinal not in range(256)
INFO: 127.0.0.1:52574 - "GET /%E6%96%87 HTTP/1.1" 500 Internal Server Error
```
</details>
### Environment
- OS / Python / Uvicorn version: Running uvicorn 0.13.3 with CPython 3.8.5 on Windows
- The exact command you're running uvicorn with, all flags you passed included. If you run it with gunicorn please do the same. If there is a reverse-proxy involved and you cannot reproduce without it please give the minimal config of it to reproduce.
### Additional context
<!-- Any additional information that can help understanding the problem.
Eg. linked issues, or a description of what you were trying to achieve. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `uvicorn/middleware/wsgi.py`
Content:
```
1 import asyncio
2 import concurrent.futures
3 import io
4 import sys
5
6
7 def build_environ(scope, message, body):
8 """
9 Builds a scope and request message into a WSGI environ object.
10 """
11 environ = {
12 "REQUEST_METHOD": scope["method"],
13 "SCRIPT_NAME": "",
14 "PATH_INFO": scope["path"],
15 "QUERY_STRING": scope["query_string"].decode("ascii"),
16 "SERVER_PROTOCOL": "HTTP/%s" % scope["http_version"],
17 "wsgi.version": (1, 0),
18 "wsgi.url_scheme": scope.get("scheme", "http"),
19 "wsgi.input": io.BytesIO(body),
20 "wsgi.errors": sys.stdout,
21 "wsgi.multithread": True,
22 "wsgi.multiprocess": True,
23 "wsgi.run_once": False,
24 }
25
26 # Get server name and port - required in WSGI, not in ASGI
27 server = scope.get("server")
28 if server is None:
29 server = ("localhost", 80)
30 environ["SERVER_NAME"] = server[0]
31 environ["SERVER_PORT"] = server[1]
32
33 # Get client IP address
34 client = scope.get("client")
35 if client is not None:
36 environ["REMOTE_ADDR"] = client[0]
37
38 # Go through headers and make them into environ entries
39 for name, value in scope.get("headers", []):
40 name = name.decode("latin1")
41 if name == "content-length":
42 corrected_name = "CONTENT_LENGTH"
43 elif name == "content-type":
44 corrected_name = "CONTENT_TYPE"
45 else:
46 corrected_name = "HTTP_%s" % name.upper().replace("-", "_")
47 # HTTPbis say only ASCII chars are allowed in headers, but we latin1
48 # just in case
49 value = value.decode("latin1")
50 if corrected_name in environ:
51 value = environ[corrected_name] + "," + value
52 environ[corrected_name] = value
53 return environ
54
55
56 class WSGIMiddleware:
57 def __init__(self, app, workers=10):
58 self.app = app
59 self.executor = concurrent.futures.ThreadPoolExecutor(max_workers=workers)
60
61 async def __call__(self, scope, receive, send):
62 assert scope["type"] == "http"
63 instance = WSGIResponder(self.app, self.executor, scope)
64 await instance(receive, send)
65
66
67 class WSGIResponder:
68 def __init__(self, app, executor, scope):
69 self.app = app
70 self.executor = executor
71 self.scope = scope
72 self.status = None
73 self.response_headers = None
74 self.send_event = asyncio.Event()
75 self.send_queue = []
76 self.loop = None
77 self.response_started = False
78 self.exc_info = None
79
80 async def __call__(self, receive, send):
81 message = await receive()
82 body = message.get("body", b"")
83 more_body = message.get("more_body", False)
84 while more_body:
85 body_message = await receive()
86 body += body_message.get("body", b"")
87 more_body = body_message.get("more_body", False)
88 environ = build_environ(self.scope, message, body)
89 self.loop = asyncio.get_event_loop()
90 wsgi = self.loop.run_in_executor(
91 self.executor, self.wsgi, environ, self.start_response
92 )
93 sender = self.loop.create_task(self.sender(send))
94 try:
95 await asyncio.wait_for(wsgi, None)
96 finally:
97 self.send_queue.append(None)
98 self.send_event.set()
99 await asyncio.wait_for(sender, None)
100 if self.exc_info is not None:
101 raise self.exc_info[0].with_traceback(self.exc_info[1], self.exc_info[2])
102
103 async def sender(self, send):
104 while True:
105 if self.send_queue:
106 message = self.send_queue.pop(0)
107 if message is None:
108 return
109 await send(message)
110 else:
111 await self.send_event.wait()
112 self.send_event.clear()
113
114 def start_response(self, status, response_headers, exc_info=None):
115 self.exc_info = exc_info
116 if not self.response_started:
117 self.response_started = True
118 status_code, _ = status.split(" ", 1)
119 status_code = int(status_code)
120 headers = [
121 (name.encode("ascii"), value.encode("ascii"))
122 for name, value in response_headers
123 ]
124 self.send_queue.append(
125 {
126 "type": "http.response.start",
127 "status": status_code,
128 "headers": headers,
129 }
130 )
131 self.loop.call_soon_threadsafe(self.send_event.set)
132
133 def wsgi(self, environ, start_response):
134 for chunk in self.app(environ, start_response):
135 self.send_queue.append(
136 {"type": "http.response.body", "body": chunk, "more_body": True}
137 )
138 self.loop.call_soon_threadsafe(self.send_event.set)
139
140 self.send_queue.append({"type": "http.response.body", "body": b""})
141 self.loop.call_soon_threadsafe(self.send_event.set)
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/uvicorn/middleware/wsgi.py b/uvicorn/middleware/wsgi.py
--- a/uvicorn/middleware/wsgi.py
+++ b/uvicorn/middleware/wsgi.py
@@ -11,7 +11,7 @@
environ = {
"REQUEST_METHOD": scope["method"],
"SCRIPT_NAME": "",
- "PATH_INFO": scope["path"],
+ "PATH_INFO": scope["path"].encode("utf8").decode("latin1"),
"QUERY_STRING": scope["query_string"].decode("ascii"),
"SERVER_PROTOCOL": "HTTP/%s" % scope["http_version"],
"wsgi.version": (1, 0),
| {"golden_diff": "diff --git a/uvicorn/middleware/wsgi.py b/uvicorn/middleware/wsgi.py\n--- a/uvicorn/middleware/wsgi.py\n+++ b/uvicorn/middleware/wsgi.py\n@@ -11,7 +11,7 @@\n environ = {\n \"REQUEST_METHOD\": scope[\"method\"],\n \"SCRIPT_NAME\": \"\",\n- \"PATH_INFO\": scope[\"path\"],\n+ \"PATH_INFO\": scope[\"path\"].encode(\"utf8\").decode(\"latin1\"),\n \"QUERY_STRING\": scope[\"query_string\"].decode(\"ascii\"),\n \"SERVER_PROTOCOL\": \"HTTP/%s\" % scope[\"http_version\"],\n \"wsgi.version\": (1, 0),\n", "issue": "WSGI middleware error in non-latin1 path\n### Checklist\r\n\r\n<!-- Please make sure you check all these items before submitting your bug report. -->\r\n\r\n- [x] The bug is reproducible against the latest release and/or `master`.\r\n- [x] There are no similar issues or pull requests to fix it yet.\r\n\r\n### Describe the bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nThe same bug has been addressed in https://github.com/encode/starlette/issues/997.\r\n\r\nhttps://github.com/encode/uvicorn/blob/9d51e1cfa68ee6e4ee3fb8a23b8599bb44c3985a/uvicorn/middleware/wsgi.py#L14\r\n\r\nIn uvicorn's wsgi middleware, `scope[\"path\"]` in asgi is utf8-encoded but the corresponding `PATH_INFO` required by wsgi should be encoded in latin1 according to the standards.\r\n\r\n### To reproduce\r\n\r\n<!-- Provide a *minimal* example with steps to reproduce the bug locally.\r\n\r\nNOTE: try to keep any external dependencies *at an absolute minimum* .\r\nIn other words, remove anything that doesn't make the bug go away.\r\n\r\n-->\r\n\r\n```python\r\nimport flask\r\nimport uvicorn\r\n\r\napp = flask.Flask(__name__)\r\n\r\[email protected](\"/<keyword>\")\r\ndef home(keyword):\r\n return keyword\r\n\r\nif __name__ == \"__main__\":\r\n uvicorn.run(app, interface=\"wsgi\")\r\n```\r\n\r\n1. run the script.\r\n2. send a request to \"http://127.0.0.1:8000/\u6587\"\r\n\r\n### Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\nGet a 200 response\r\n\r\n### Actual behavior\r\n\r\n<!-- A clear and concise description of what actually happens. -->\r\n\r\nGet 500 Internal Server Error\r\n\r\n### Debugging material\r\n\r\n<!-- Any tracebacks, screenshots, etc. 
that can help understanding the problem.\r\n\r\nNOTE:\r\n- Please list tracebacks in full (don't truncate them).\r\n- If relevant, consider turning on DEBUG or TRACE logs for additional details (see the Logging section on https://www.uvicorn.org/settings/ specifically the `log-level` flag).\r\n- Consider using `<details>` to make tracebacks/logs collapsible if they're very large (see https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d).\r\n-->\r\n\r\n<details>\r\n\r\n```python\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 394, in run_asgi\r\n result = await app(self.scope, self.receive, self.send)\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\uvicorn\\middleware\\proxy_headers.py\", line 45, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\uvicorn\\middleware\\wsgi.py\", line 64, in __call__\r\n await instance(receive, send)\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\uvicorn\\middleware\\wsgi.py\", line 95, in __call__\r\n await asyncio.wait_for(wsgi, None)\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\asyncio\\tasks.py\", line 455, in wait_for\r\n return await fut\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\concurrent\\futures\\thread.py\", line 57, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\uvicorn\\middleware\\wsgi.py\", line 134, in wsgi\r\n for chunk in self.app(environ, start_response):\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\flask\\app.py\", line 2464, in __call__\r\n return self.wsgi_app(environ, start_response)\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\flask\\app.py\", line 2442, in wsgi_app\r\n ctx = self.request_context(environ)\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\flask\\app.py\", line 2359, in request_context\r\n return RequestContext(self, environ)\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\flask\\ctx.py\", line 292, in __init__\r\n self.url_adapter = app.create_url_adapter(self.request)\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\flask\\app.py\", line 2171, in create_url_adapter\r\n return self.url_map.bind_to_environ(\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\werkzeug\\routing.py\", line 1634, in bind_to_environ\r\n path_info = _get_wsgi_string(\"PATH_INFO\")\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\werkzeug\\routing.py\", line 1631, in _get_wsgi_string\r\n return wsgi_decoding_dance(val, self.charset)\r\n File \"C:\\Users\\cools\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\werkzeug\\_compat.py\", line 188, in wsgi_decoding_dance\r\n return s.encode(\"latin1\").decode(charset, errors)\r\nUnicodeEncodeError: 'latin-1' codec can't encode character '\\u6587' in position 1: ordinal not in range(256)\r\nINFO: 127.0.0.1:52574 - \"GET /%E6%96%87 HTTP/1.1\" 
500 Internal Server Error\r\n```\r\n\r\n</details>\r\n\r\n### Environment\r\n\r\n- OS / Python / Uvicorn version: Running uvicorn 0.13.3 with CPython 3.8.5 on Windows\r\n- The exact command you're running uvicorn with, all flags you passed included. If you run it with gunicorn please do the same. If there is a reverse-proxy involved and you cannot reproduce without it please give the minimal config of it to reproduce.\r\n\r\n### Additional context\r\n\r\n<!-- Any additional information that can help understanding the problem.\r\n\r\nEg. linked issues, or a description of what you were trying to achieve. -->\n", "before_files": [{"content": "import asyncio\nimport concurrent.futures\nimport io\nimport sys\n\n\ndef build_environ(scope, message, body):\n \"\"\"\n Builds a scope and request message into a WSGI environ object.\n \"\"\"\n environ = {\n \"REQUEST_METHOD\": scope[\"method\"],\n \"SCRIPT_NAME\": \"\",\n \"PATH_INFO\": scope[\"path\"],\n \"QUERY_STRING\": scope[\"query_string\"].decode(\"ascii\"),\n \"SERVER_PROTOCOL\": \"HTTP/%s\" % scope[\"http_version\"],\n \"wsgi.version\": (1, 0),\n \"wsgi.url_scheme\": scope.get(\"scheme\", \"http\"),\n \"wsgi.input\": io.BytesIO(body),\n \"wsgi.errors\": sys.stdout,\n \"wsgi.multithread\": True,\n \"wsgi.multiprocess\": True,\n \"wsgi.run_once\": False,\n }\n\n # Get server name and port - required in WSGI, not in ASGI\n server = scope.get(\"server\")\n if server is None:\n server = (\"localhost\", 80)\n environ[\"SERVER_NAME\"] = server[0]\n environ[\"SERVER_PORT\"] = server[1]\n\n # Get client IP address\n client = scope.get(\"client\")\n if client is not None:\n environ[\"REMOTE_ADDR\"] = client[0]\n\n # Go through headers and make them into environ entries\n for name, value in scope.get(\"headers\", []):\n name = name.decode(\"latin1\")\n if name == \"content-length\":\n corrected_name = \"CONTENT_LENGTH\"\n elif name == \"content-type\":\n corrected_name = \"CONTENT_TYPE\"\n else:\n corrected_name = \"HTTP_%s\" % name.upper().replace(\"-\", \"_\")\n # HTTPbis say only ASCII chars are allowed in headers, but we latin1\n # just in case\n value = value.decode(\"latin1\")\n if corrected_name in environ:\n value = environ[corrected_name] + \",\" + value\n environ[corrected_name] = value\n return environ\n\n\nclass WSGIMiddleware:\n def __init__(self, app, workers=10):\n self.app = app\n self.executor = concurrent.futures.ThreadPoolExecutor(max_workers=workers)\n\n async def __call__(self, scope, receive, send):\n assert scope[\"type\"] == \"http\"\n instance = WSGIResponder(self.app, self.executor, scope)\n await instance(receive, send)\n\n\nclass WSGIResponder:\n def __init__(self, app, executor, scope):\n self.app = app\n self.executor = executor\n self.scope = scope\n self.status = None\n self.response_headers = None\n self.send_event = asyncio.Event()\n self.send_queue = []\n self.loop = None\n self.response_started = False\n self.exc_info = None\n\n async def __call__(self, receive, send):\n message = await receive()\n body = message.get(\"body\", b\"\")\n more_body = message.get(\"more_body\", False)\n while more_body:\n body_message = await receive()\n body += body_message.get(\"body\", b\"\")\n more_body = body_message.get(\"more_body\", False)\n environ = build_environ(self.scope, message, body)\n self.loop = asyncio.get_event_loop()\n wsgi = self.loop.run_in_executor(\n self.executor, self.wsgi, environ, self.start_response\n )\n sender = self.loop.create_task(self.sender(send))\n try:\n await asyncio.wait_for(wsgi, None)\n 
finally:\n self.send_queue.append(None)\n self.send_event.set()\n await asyncio.wait_for(sender, None)\n if self.exc_info is not None:\n raise self.exc_info[0].with_traceback(self.exc_info[1], self.exc_info[2])\n\n async def sender(self, send):\n while True:\n if self.send_queue:\n message = self.send_queue.pop(0)\n if message is None:\n return\n await send(message)\n else:\n await self.send_event.wait()\n self.send_event.clear()\n\n def start_response(self, status, response_headers, exc_info=None):\n self.exc_info = exc_info\n if not self.response_started:\n self.response_started = True\n status_code, _ = status.split(\" \", 1)\n status_code = int(status_code)\n headers = [\n (name.encode(\"ascii\"), value.encode(\"ascii\"))\n for name, value in response_headers\n ]\n self.send_queue.append(\n {\n \"type\": \"http.response.start\",\n \"status\": status_code,\n \"headers\": headers,\n }\n )\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n def wsgi(self, environ, start_response):\n for chunk in self.app(environ, start_response):\n self.send_queue.append(\n {\"type\": \"http.response.body\", \"body\": chunk, \"more_body\": True}\n )\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n self.send_queue.append({\"type\": \"http.response.body\", \"body\": b\"\"})\n self.loop.call_soon_threadsafe(self.send_event.set)\n", "path": "uvicorn/middleware/wsgi.py"}], "after_files": [{"content": "import asyncio\nimport concurrent.futures\nimport io\nimport sys\n\n\ndef build_environ(scope, message, body):\n \"\"\"\n Builds a scope and request message into a WSGI environ object.\n \"\"\"\n environ = {\n \"REQUEST_METHOD\": scope[\"method\"],\n \"SCRIPT_NAME\": \"\",\n \"PATH_INFO\": scope[\"path\"].encode(\"utf8\").decode(\"latin1\"),\n \"QUERY_STRING\": scope[\"query_string\"].decode(\"ascii\"),\n \"SERVER_PROTOCOL\": \"HTTP/%s\" % scope[\"http_version\"],\n \"wsgi.version\": (1, 0),\n \"wsgi.url_scheme\": scope.get(\"scheme\", \"http\"),\n \"wsgi.input\": io.BytesIO(body),\n \"wsgi.errors\": sys.stdout,\n \"wsgi.multithread\": True,\n \"wsgi.multiprocess\": True,\n \"wsgi.run_once\": False,\n }\n\n # Get server name and port - required in WSGI, not in ASGI\n server = scope.get(\"server\")\n if server is None:\n server = (\"localhost\", 80)\n environ[\"SERVER_NAME\"] = server[0]\n environ[\"SERVER_PORT\"] = server[1]\n\n # Get client IP address\n client = scope.get(\"client\")\n if client is not None:\n environ[\"REMOTE_ADDR\"] = client[0]\n\n # Go through headers and make them into environ entries\n for name, value in scope.get(\"headers\", []):\n name = name.decode(\"latin1\")\n if name == \"content-length\":\n corrected_name = \"CONTENT_LENGTH\"\n elif name == \"content-type\":\n corrected_name = \"CONTENT_TYPE\"\n else:\n corrected_name = \"HTTP_%s\" % name.upper().replace(\"-\", \"_\")\n # HTTPbis say only ASCII chars are allowed in headers, but we latin1\n # just in case\n value = value.decode(\"latin1\")\n if corrected_name in environ:\n value = environ[corrected_name] + \",\" + value\n environ[corrected_name] = value\n return environ\n\n\nclass WSGIMiddleware:\n def __init__(self, app, workers=10):\n self.app = app\n self.executor = concurrent.futures.ThreadPoolExecutor(max_workers=workers)\n\n async def __call__(self, scope, receive, send):\n assert scope[\"type\"] == \"http\"\n instance = WSGIResponder(self.app, self.executor, scope)\n await instance(receive, send)\n\n\nclass WSGIResponder:\n def __init__(self, app, executor, scope):\n self.app = app\n self.executor = executor\n 
self.scope = scope\n self.status = None\n self.response_headers = None\n self.send_event = asyncio.Event()\n self.send_queue = []\n self.loop = None\n self.response_started = False\n self.exc_info = None\n\n async def __call__(self, receive, send):\n message = await receive()\n body = message.get(\"body\", b\"\")\n more_body = message.get(\"more_body\", False)\n while more_body:\n body_message = await receive()\n body += body_message.get(\"body\", b\"\")\n more_body = body_message.get(\"more_body\", False)\n environ = build_environ(self.scope, message, body)\n self.loop = asyncio.get_event_loop()\n wsgi = self.loop.run_in_executor(\n self.executor, self.wsgi, environ, self.start_response\n )\n sender = self.loop.create_task(self.sender(send))\n try:\n await asyncio.wait_for(wsgi, None)\n finally:\n self.send_queue.append(None)\n self.send_event.set()\n await asyncio.wait_for(sender, None)\n if self.exc_info is not None:\n raise self.exc_info[0].with_traceback(self.exc_info[1], self.exc_info[2])\n\n async def sender(self, send):\n while True:\n if self.send_queue:\n message = self.send_queue.pop(0)\n if message is None:\n return\n await send(message)\n else:\n await self.send_event.wait()\n self.send_event.clear()\n\n def start_response(self, status, response_headers, exc_info=None):\n self.exc_info = exc_info\n if not self.response_started:\n self.response_started = True\n status_code, _ = status.split(\" \", 1)\n status_code = int(status_code)\n headers = [\n (name.encode(\"ascii\"), value.encode(\"ascii\"))\n for name, value in response_headers\n ]\n self.send_queue.append(\n {\n \"type\": \"http.response.start\",\n \"status\": status_code,\n \"headers\": headers,\n }\n )\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n def wsgi(self, environ, start_response):\n for chunk in self.app(environ, start_response):\n self.send_queue.append(\n {\"type\": \"http.response.body\", \"body\": chunk, \"more_body\": True}\n )\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n self.send_queue.append({\"type\": \"http.response.body\", \"body\": b\"\"})\n self.loop.call_soon_threadsafe(self.send_event.set)\n", "path": "uvicorn/middleware/wsgi.py"}]} | 3,285 | 147 |
gh_patches_debug_40832 | rasdani/github-patches | git_diff | ultrabug__py3status-2155 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
clock.py:269: PytzUsageWarning: The zone attribute is specific to pytz's interface
**Describe the bug**
clock.py:269: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
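A minimal sketch of the migration the warning points to, assuming the module moves from pytz/tzlocal to the standard-library `zoneinfo` (available since Python 3.9, or via the `backports.zoneinfo` package on older interpreters such as the 3.8 reported here):
```python
# Sketch of the pytz -> zoneinfo migration hinted at by the warning.
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+; use backports.zoneinfo before that

zone = ZoneInfo("Europe/Paris")
print(zone.key)  # -> Europe/Paris; replaces the pytz-specific `zone.zone` attribute
```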
**Your py3status version**
py3status version 3.47 (python 3.8.10) on i3
Ubuntu 20.04
**To Reproduce**
Steps to reproduce the behavior along with your py3status configuration (filter out any private values):
1. Run the clock.py module inside a terminal emulator.
**Expected behavior**
No warning shown.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `py3status/modules/clock.py`
Content:
```
1 """
2 Display date and time.
3
4 This module allows one or more datetimes to be displayed.
5 All datetimes share the same format_time but can set their own timezones.
6 Timezones are defined in the `format` using the TZ name in squiggly brackets eg
7 `{GMT}`, `{Portugal}`, `{Europe/Paris}`, `{America/Argentina/Buenos_Aires}`.
8
9 ISO-3166 two letter country codes eg `{de}` can also be used but if more than
10 one timezone exists for the country eg `{us}` the first one will be selected.
11
12 `{Local}` can be used for the local settings of your computer.
13
14 Note: Timezones are case sensitive
15
16 A full list of timezones can be found at
17 https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
18
19 Configuration parameters:
20 block_hours: length of time period for all blocks in hours (default 12)
21 blocks: a string, where each character represents time period
22 from the start of a time period.
23 (default '🕛🕧🕐🕜🕑🕝🕒🕞🕓🕟🕔🕠🕕🕡🕖🕢🕗🕣🕘🕤🕙🕥🕚🕦')
24 button_change_format: button that switches format used setting to None
25 disables (default 1)
26 button_change_time_format: button that switches format_time used. Setting
27 to None disables (default 2)
28 button_reset: button that switches display to the first timezone. Setting
29 to None disables (default 3)
30 cycle: If more than one display then how many seconds between changing the
31 display (default 0)
32 format: defines the timezones displayed. This can be a single string or a
33 list. If a list is supplied then the formats can be cycled through
34 using `cycle` or by button click. (default '{Local}')
35 format_time: format to use for the time, strftime directives such as `%H`
36 can be used this can be either a string or to allow multiple formats as
37 a list. The one used can be changed by button click.
38 *(default ['[{name_unclear} ]%c', '[{name_unclear} ]%x %X',
39 '[{name_unclear} ]%a %H:%M', '[{name_unclear} ]{icon}'])*
40 locale: Override the system locale. Examples:
41 when set to 'fr_FR' %a on Tuesday is 'mar.'.
42 (default None)
43 round_to_nearest_block: defines how a block icon is chosen. Examples:
44 when set to True, '13:14' is '🕐', '13:16' is '🕜' and '13:31' is '🕜';
45 when set to False, '13:14' is '🕐', '13:16' is '🕐' and '13:31' is '🕜'.
46 (default True)
47
48 Format placeholders:
49 {icon} a character representing the time from `blocks`
50 {name} friendly timezone name eg `Buenos Aires`
51 {name_unclear} friendly timezone name eg `Buenos Aires` but is empty if
52 only one timezone is provided
53 {timezone} full timezone name eg `America/Argentina/Buenos_Aires`
54 {timezone_unclear} full timezone name eg `America/Argentina/Buenos_Aires`
55 but is empty if only one timezone is provided
56
57 Requires:
58 pytz: cross platform time zone library for python
59 tzlocal: tzinfo object for the local timezone
60
61 Examples:
62 ```
63 # cycling through London, Warsaw, Tokyo
64 clock {
65 cycle = 30
66 format = ["{Europe/London}", "{Europe/Warsaw}", "{Asia/Tokyo}"]
67 format_time = "{name} %H:%M"
68 }
69
70 # Show the time and date in New York
71 clock {
72 format = "Big Apple {America/New_York}"
73 format_time = "%Y-%m-%d %H:%M:%S"
74 }
75
76 # wall clocks
77 clock {
78 format = "{Asia/Calcutta} {Africa/Nairobi} {Asia/Bangkok}"
79 format_time = "{name} {icon}"
80 }
81 ```
82
83 @author tobes
84 @license BSD
85
86 SAMPLE OUTPUT
87 {'full_text': 'Sun 15 Jan 2017 23:27:17 GMT'}
88
89 london
90 {'full_text': 'Thursday Feb 23 1:42 AM London'}
91 """
92
93 import locale
94 import re
95 import time
96 from datetime import datetime
97
98 import pytz
99 import tzlocal
100
101 CLOCK_BLOCKS = "🕛🕧🕐🕜🕑🕝🕒🕞🕓🕟🕔🕠🕕🕡🕖🕢🕗🕣🕘🕤🕙🕥🕚🕦"
102
103
104 class Py3status:
105 """ """
106
107 # available configuration parameters
108 block_hours = 12
109 blocks = CLOCK_BLOCKS
110 button_change_format = 1
111 button_change_time_format = 2
112 button_reset = 3
113 cycle = 0
114 format = "{Local}"
115 format_time = [
116 "[{name_unclear} ]%c",
117 "[{name_unclear} ]%x %X",
118 "[{name_unclear} ]%a %H:%M",
119 "[{name_unclear} ]{icon}",
120 ]
121 locale = None
122 round_to_nearest_block = True
123
124 def post_config_hook(self):
125 if self.locale is not None:
126 locale.setlocale(locale.LC_TIME, self.locale)
127
128 # Multiple clocks are possible that can be cycled through
129 if not isinstance(self.format, list):
130 self.format = [self.format]
131 # if only one item we don't need to cycle
132 if len(self.format) == 1:
133 self.cycle = 0
134 # find any declared timezones eg {Europe/London}
135 self._fmts = set()
136 for fmt in self.format:
137 self._fmts.update(self.py3.get_placeholders_list(fmt))
138
139 self.multiple_tz = len(self._fmts) > 1
140
141 if not isinstance(self.format_time, list):
142 self.format_time = [self.format_time]
143
144 # workout how often in seconds we will need to do an update to keep the
145 # display fresh
146 self.time_deltas = []
147 for format in self.format_time:
148 format_time = re.sub(r"{([^}]*)}", "", format)
149 format_time = format_time.replace("%%", "")
150 if "%f" in format_time:
151 # microseconds
152 time_delta = 0
153 elif "%S" in format_time:
154 # seconds
155 time_delta = 1
156 elif "%s" in format_time:
157 # seconds since unix epoch start
158 time_delta = 1
159 elif "%T" in format_time:
160 # seconds included in "%H:%M:%S"
161 time_delta = 1
162 elif "%c" in format_time:
163 # Locale’s appropriate date and time representation
164 time_delta = 1
165 elif "%X" in format_time:
166 # Locale’s appropriate time representation
167 time_delta = 1
168 else:
169 time_delta = 60
170 self.time_deltas.append(time_delta)
171
172 # If we have saved details we use them.
173 saved_format = self.py3.storage_get("time_format")
174 if saved_format in self.format_time:
175 self.active_time_format = self.format_time.index(saved_format)
176 else:
177 self.active_time_format = 0
178
179 saved_timezone = self.py3.storage_get("timezone")
180 if saved_timezone in self.format:
181 self.active = self.format.index(saved_timezone)
182 else:
183 self.active = 0
184
185 # reset the cycle time
186 self._cycle_time = time.time() + self.cycle
187
188 def _get_timezone(self, tz):
189 """
190 Find and return the time zone if possible
191 """
192 # special Local timezone
193 if tz == "Local":
194 try:
195 return tzlocal.get_localzone()
196 except pytz.UnknownTimeZoneError:
197 return "?"
198
199 # we can use a country code to get tz
200 # FIXME this is broken for multi-timezone countries eg US
201 # for now we just grab the first one
202 if len(tz) == 2:
203 try:
204 zones = pytz.country_timezones(tz)
205 except KeyError:
206 return "?"
207 tz = zones[0]
208
209 # get the timezone
210 try:
211 zone = pytz.timezone(tz)
212 except pytz.UnknownTimeZoneError:
213 return "?"
214 return zone
215
216 def _change_active(self, diff):
217 self.active = (self.active + diff) % len(self.format)
218 # reset the cycle time
219 self._cycle_time = time.time() + self.cycle
220 # save the active format
221 timezone = self.format[self.active]
222 self.py3.storage_set("timezone", timezone)
223
224 def on_click(self, event):
225 """
226 Switch the displayed module or pass the event on to the active module
227 """
228 if event["button"] == self.button_reset:
229 self._change_active(0)
230 elif event["button"] == self.button_change_time_format:
231 self.active_time_format += 1
232 if self.active_time_format >= len(self.format_time):
233 self.active_time_format = 0
234 # save the active format_time
235 time_format = self.format_time[self.active_time_format]
236 self.py3.storage_set("time_format", time_format)
237 elif event["button"] == self.button_change_format:
238 self._change_active(1)
239
240 def clock(self):
241
242 # cycling
243 if self.cycle and time.time() >= self._cycle_time:
244 self._change_active(1)
245 self._cycle_time = time.time() + self.cycle
246
247 # update our times
248 times = {}
249 for name in self._fmts:
250 zone = self._get_timezone(name)
251 if zone == "?":
252 times[name] = "?"
253 else:
254 t = datetime.now(zone)
255 format_time = self.format_time[self.active_time_format]
256 icon = None
257 if self.py3.format_contains(format_time, "icon"):
258 # calculate the decimal hour
259 h = t.hour + t.minute / 60
260 if self.round_to_nearest_block:
261 h += self.block_hours / len(self.blocks) / 2
262 # make 12 hourly etc
263 h = h % self.block_hours
264 idx = int(h / self.block_hours * len(self.blocks))
265 icon = self.blocks[idx]
266
267 try:
268 # tzlocal < 3.0
269 timezone = zone.zone
270 except AttributeError:
271 # tzlocal >= 3.0
272 timezone = zone.key
273 tzname = timezone.split("/")[-1].replace("_", " ")
274
275 if self.multiple_tz:
276 name_unclear = tzname
277 timezone_unclear = timezone
278 else:
279 name_unclear = ""
280 timezone_unclear = ""
281
282 format_time = self.py3.safe_format(
283 format_time,
284 dict(
285 icon=icon,
286 name=tzname,
287 name_unclear=name_unclear,
288 timezone=timezone,
289 timezone_unclear=timezone_unclear,
290 ),
291 )
292
293 if self.py3.is_composite(format_time):
294 for item in format_time:
295 item["full_text"] = t.strftime(item["full_text"])
296 else:
297 format_time = t.strftime(format_time)
298 times[name] = format_time
299
300 # work out when we need to update
301 timeout = self.py3.time_in(sync_to=self.time_deltas[self.active_time_format])
302
303 # if cycling we need to make sure we update when they are needed
304 if self.cycle:
305 cycle_timeout = self._cycle_time
306 timeout = min(timeout, cycle_timeout)
307
308 return {
309 "full_text": self.py3.safe_format(self.format[self.active], times),
310 "cached_until": timeout,
311 }
312
313
314 if __name__ == "__main__":
315 """
316 Run module in test mode.
317 """
318 from py3status.module_test import module_test
319
320 module_test(Py3status)
321
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/py3status/modules/clock.py b/py3status/modules/clock.py
--- a/py3status/modules/clock.py
+++ b/py3status/modules/clock.py
@@ -6,12 +6,11 @@
Timezones are defined in the `format` using the TZ name in squiggly brackets eg
`{GMT}`, `{Portugal}`, `{Europe/Paris}`, `{America/Argentina/Buenos_Aires}`.
-ISO-3166 two letter country codes eg `{de}` can also be used but if more than
-one timezone exists for the country eg `{us}` the first one will be selected.
+See https://docs.python.org/3/library/zoneinfo.html for supported formats.
`{Local}` can be used for the local settings of your computer.
-Note: Timezones are case sensitive
+Note: Timezones are case sensitive!
A full list of timezones can be found at
https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
@@ -54,10 +53,6 @@
{timezone_unclear} full timezone name eg `America/Argentina/Buenos_Aires`
but is empty if only one timezone is provided
-Requires:
- pytz: cross platform time zone library for python
- tzlocal: tzinfo object for the local timezone
-
Examples:
```
# cycling through London, Warsaw, Tokyo
@@ -80,7 +75,7 @@
}
```
-@author tobes
+@author tobes ultrabug
@license BSD
SAMPLE OUTPUT
@@ -95,8 +90,11 @@
import time
from datetime import datetime
-import pytz
-import tzlocal
+try:
+ import zoneinfo
+# Fall back for python 3.7 and python 3.8
+except ImportError:
+ from backports import zoneinfo
CLOCK_BLOCKS = "🕛🕧🕐🕜🕑🕝🕒🕞🕓🕟🕔🕠🕕🕡🕖🕢🕗🕣🕘🕤🕙🕥🕚🕦"
@@ -192,24 +190,13 @@
# special Local timezone
if tz == "Local":
try:
- return tzlocal.get_localzone()
- except pytz.UnknownTimeZoneError:
+ return zoneinfo.ZoneInfo("localtime")
+ except zoneinfo.ZoneInfoNotFoundError:
return "?"
-
- # we can use a country code to get tz
- # FIXME this is broken for multi-timezone countries eg US
- # for now we just grab the first one
- if len(tz) == 2:
- try:
- zones = pytz.country_timezones(tz)
- except KeyError:
- return "?"
- tz = zones[0]
-
# get the timezone
try:
- zone = pytz.timezone(tz)
- except pytz.UnknownTimeZoneError:
+ zone = zoneinfo.ZoneInfo(tz)
+ except zoneinfo.ZoneInfoNotFoundError:
return "?"
return zone
@@ -264,12 +251,7 @@
idx = int(h / self.block_hours * len(self.blocks))
icon = self.blocks[idx]
- try:
- # tzlocal < 3.0
- timezone = zone.zone
- except AttributeError:
- # tzlocal >= 3.0
- timezone = zone.key
+ timezone = zone.key
tzname = timezone.split("/")[-1].replace("_", " ")
if self.multiple_tz:
| {"golden_diff": "diff --git a/py3status/modules/clock.py b/py3status/modules/clock.py\n--- a/py3status/modules/clock.py\n+++ b/py3status/modules/clock.py\n@@ -6,12 +6,11 @@\n Timezones are defined in the `format` using the TZ name in squiggly brackets eg\n `{GMT}`, `{Portugal}`, `{Europe/Paris}`, `{America/Argentina/Buenos_Aires}`.\n \n-ISO-3166 two letter country codes eg `{de}` can also be used but if more than\n-one timezone exists for the country eg `{us}` the first one will be selected.\n+See https://docs.python.org/3/library/zoneinfo.html for supported formats.\n \n `{Local}` can be used for the local settings of your computer.\n \n-Note: Timezones are case sensitive\n+Note: Timezones are case sensitive!\n \n A full list of timezones can be found at\n https://en.wikipedia.org/wiki/List_of_tz_database_time_zones\n@@ -54,10 +53,6 @@\n {timezone_unclear} full timezone name eg `America/Argentina/Buenos_Aires`\n but is empty if only one timezone is provided\n \n-Requires:\n- pytz: cross platform time zone library for python\n- tzlocal: tzinfo object for the local timezone\n-\n Examples:\n ```\n # cycling through London, Warsaw, Tokyo\n@@ -80,7 +75,7 @@\n }\n ```\n \n-@author tobes\n+@author tobes ultrabug\n @license BSD\n \n SAMPLE OUTPUT\n@@ -95,8 +90,11 @@\n import time\n from datetime import datetime\n \n-import pytz\n-import tzlocal\n+try:\n+ import zoneinfo\n+# Fall back for python 3.7 and python 3.8\n+except ImportError:\n+ from backports import zoneinfo\n \n CLOCK_BLOCKS = \"\ud83d\udd5b\ud83d\udd67\ud83d\udd50\ud83d\udd5c\ud83d\udd51\ud83d\udd5d\ud83d\udd52\ud83d\udd5e\ud83d\udd53\ud83d\udd5f\ud83d\udd54\ud83d\udd60\ud83d\udd55\ud83d\udd61\ud83d\udd56\ud83d\udd62\ud83d\udd57\ud83d\udd63\ud83d\udd58\ud83d\udd64\ud83d\udd59\ud83d\udd65\ud83d\udd5a\ud83d\udd66\"\n \n@@ -192,24 +190,13 @@\n # special Local timezone\n if tz == \"Local\":\n try:\n- return tzlocal.get_localzone()\n- except pytz.UnknownTimeZoneError:\n+ return zoneinfo.ZoneInfo(\"localtime\")\n+ except zoneinfo.ZoneInfoNotFoundError:\n return \"?\"\n-\n- # we can use a country code to get tz\n- # FIXME this is broken for multi-timezone countries eg US\n- # for now we just grab the first one\n- if len(tz) == 2:\n- try:\n- zones = pytz.country_timezones(tz)\n- except KeyError:\n- return \"?\"\n- tz = zones[0]\n-\n # get the timezone\n try:\n- zone = pytz.timezone(tz)\n- except pytz.UnknownTimeZoneError:\n+ zone = zoneinfo.ZoneInfo(tz)\n+ except zoneinfo.ZoneInfoNotFoundError:\n return \"?\"\n return zone\n \n@@ -264,12 +251,7 @@\n idx = int(h / self.block_hours * len(self.blocks))\n icon = self.blocks[idx]\n \n- try:\n- # tzlocal < 3.0\n- timezone = zone.zone\n- except AttributeError:\n- # tzlocal >= 3.0\n- timezone = zone.key\n+ timezone = zone.key\n tzname = timezone.split(\"/\")[-1].replace(\"_\", \" \")\n \n if self.multiple_tz:\n", "issue": "clock.py:269: PytzUsageWarning: The zone attribute is specific to pytz's interface\n**Describe the bug**\r\nclock.py:269: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html\r\n\r\n**Your py3status version**\r\npy3status version 3.47 (python 3.8.10) on i3\r\nUbuntu 20.04\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior along with your py3status configuration (filter out any private values):\r\n1. 
Run clock.py module inside of terminal emulator.\r\n\r\n**Expected behavior**\r\nNo warning shown.\r\n\n", "before_files": [{"content": "\"\"\"\nDisplay date and time.\n\nThis module allows one or more datetimes to be displayed.\nAll datetimes share the same format_time but can set their own timezones.\nTimezones are defined in the `format` using the TZ name in squiggly brackets eg\n`{GMT}`, `{Portugal}`, `{Europe/Paris}`, `{America/Argentina/Buenos_Aires}`.\n\nISO-3166 two letter country codes eg `{de}` can also be used but if more than\none timezone exists for the country eg `{us}` the first one will be selected.\n\n`{Local}` can be used for the local settings of your computer.\n\nNote: Timezones are case sensitive\n\nA full list of timezones can be found at\nhttps://en.wikipedia.org/wiki/List_of_tz_database_time_zones\n\nConfiguration parameters:\n block_hours: length of time period for all blocks in hours (default 12)\n blocks: a string, where each character represents time period\n from the start of a time period.\n (default '\ud83d\udd5b\ud83d\udd67\ud83d\udd50\ud83d\udd5c\ud83d\udd51\ud83d\udd5d\ud83d\udd52\ud83d\udd5e\ud83d\udd53\ud83d\udd5f\ud83d\udd54\ud83d\udd60\ud83d\udd55\ud83d\udd61\ud83d\udd56\ud83d\udd62\ud83d\udd57\ud83d\udd63\ud83d\udd58\ud83d\udd64\ud83d\udd59\ud83d\udd65\ud83d\udd5a\ud83d\udd66')\n button_change_format: button that switches format used setting to None\n disables (default 1)\n button_change_time_format: button that switches format_time used. Setting\n to None disables (default 2)\n button_reset: button that switches display to the first timezone. Setting\n to None disables (default 3)\n cycle: If more than one display then how many seconds between changing the\n display (default 0)\n format: defines the timezones displayed. This can be a single string or a\n list. If a list is supplied then the formats can be cycled through\n using `cycle` or by button click. (default '{Local}')\n format_time: format to use for the time, strftime directives such as `%H`\n can be used this can be either a string or to allow multiple formats as\n a list. The one used can be changed by button click.\n *(default ['[{name_unclear} ]%c', '[{name_unclear} ]%x %X',\n '[{name_unclear} ]%a %H:%M', '[{name_unclear} ]{icon}'])*\n locale: Override the system locale. Examples:\n when set to 'fr_FR' %a on Tuesday is 'mar.'.\n (default None)\n round_to_nearest_block: defines how a block icon is chosen. 
Examples:\n when set to True, '13:14' is '\ud83d\udd50', '13:16' is '\ud83d\udd5c' and '13:31' is '\ud83d\udd5c';\n when set to False, '13:14' is '\ud83d\udd50', '13:16' is '\ud83d\udd50' and '13:31' is '\ud83d\udd5c'.\n (default True)\n\nFormat placeholders:\n {icon} a character representing the time from `blocks`\n {name} friendly timezone name eg `Buenos Aires`\n {name_unclear} friendly timezone name eg `Buenos Aires` but is empty if\n only one timezone is provided\n {timezone} full timezone name eg `America/Argentina/Buenos_Aires`\n {timezone_unclear} full timezone name eg `America/Argentina/Buenos_Aires`\n but is empty if only one timezone is provided\n\nRequires:\n pytz: cross platform time zone library for python\n tzlocal: tzinfo object for the local timezone\n\nExamples:\n```\n# cycling through London, Warsaw, Tokyo\nclock {\n cycle = 30\n format = [\"{Europe/London}\", \"{Europe/Warsaw}\", \"{Asia/Tokyo}\"]\n format_time = \"{name} %H:%M\"\n}\n\n# Show the time and date in New York\nclock {\n format = \"Big Apple {America/New_York}\"\n format_time = \"%Y-%m-%d %H:%M:%S\"\n}\n\n# wall clocks\nclock {\n format = \"{Asia/Calcutta} {Africa/Nairobi} {Asia/Bangkok}\"\n format_time = \"{name} {icon}\"\n}\n```\n\n@author tobes\n@license BSD\n\nSAMPLE OUTPUT\n{'full_text': 'Sun 15 Jan 2017 23:27:17 GMT'}\n\nlondon\n{'full_text': 'Thursday Feb 23 1:42 AM London'}\n\"\"\"\n\nimport locale\nimport re\nimport time\nfrom datetime import datetime\n\nimport pytz\nimport tzlocal\n\nCLOCK_BLOCKS = \"\ud83d\udd5b\ud83d\udd67\ud83d\udd50\ud83d\udd5c\ud83d\udd51\ud83d\udd5d\ud83d\udd52\ud83d\udd5e\ud83d\udd53\ud83d\udd5f\ud83d\udd54\ud83d\udd60\ud83d\udd55\ud83d\udd61\ud83d\udd56\ud83d\udd62\ud83d\udd57\ud83d\udd63\ud83d\udd58\ud83d\udd64\ud83d\udd59\ud83d\udd65\ud83d\udd5a\ud83d\udd66\"\n\n\nclass Py3status:\n \"\"\" \"\"\"\n\n # available configuration parameters\n block_hours = 12\n blocks = CLOCK_BLOCKS\n button_change_format = 1\n button_change_time_format = 2\n button_reset = 3\n cycle = 0\n format = \"{Local}\"\n format_time = [\n \"[{name_unclear} ]%c\",\n \"[{name_unclear} ]%x %X\",\n \"[{name_unclear} ]%a %H:%M\",\n \"[{name_unclear} ]{icon}\",\n ]\n locale = None\n round_to_nearest_block = True\n\n def post_config_hook(self):\n if self.locale is not None:\n locale.setlocale(locale.LC_TIME, self.locale)\n\n # Multiple clocks are possible that can be cycled through\n if not isinstance(self.format, list):\n self.format = [self.format]\n # if only one item we don't need to cycle\n if len(self.format) == 1:\n self.cycle = 0\n # find any declared timezones eg {Europe/London}\n self._fmts = set()\n for fmt in self.format:\n self._fmts.update(self.py3.get_placeholders_list(fmt))\n\n self.multiple_tz = len(self._fmts) > 1\n\n if not isinstance(self.format_time, list):\n self.format_time = [self.format_time]\n\n # workout how often in seconds we will need to do an update to keep the\n # display fresh\n self.time_deltas = []\n for format in self.format_time:\n format_time = re.sub(r\"{([^}]*)}\", \"\", format)\n format_time = format_time.replace(\"%%\", \"\")\n if \"%f\" in format_time:\n # microseconds\n time_delta = 0\n elif \"%S\" in format_time:\n # seconds\n time_delta = 1\n elif \"%s\" in format_time:\n # seconds since unix epoch start\n time_delta = 1\n elif \"%T\" in format_time:\n # seconds included in \"%H:%M:%S\"\n time_delta = 1\n elif \"%c\" in format_time:\n # Locale\u2019s appropriate date and time representation\n time_delta = 1\n elif \"%X\" in format_time:\n # Locale\u2019s 
appropriate time representation\n time_delta = 1\n else:\n time_delta = 60\n self.time_deltas.append(time_delta)\n\n # If we have saved details we use them.\n saved_format = self.py3.storage_get(\"time_format\")\n if saved_format in self.format_time:\n self.active_time_format = self.format_time.index(saved_format)\n else:\n self.active_time_format = 0\n\n saved_timezone = self.py3.storage_get(\"timezone\")\n if saved_timezone in self.format:\n self.active = self.format.index(saved_timezone)\n else:\n self.active = 0\n\n # reset the cycle time\n self._cycle_time = time.time() + self.cycle\n\n def _get_timezone(self, tz):\n \"\"\"\n Find and return the time zone if possible\n \"\"\"\n # special Local timezone\n if tz == \"Local\":\n try:\n return tzlocal.get_localzone()\n except pytz.UnknownTimeZoneError:\n return \"?\"\n\n # we can use a country code to get tz\n # FIXME this is broken for multi-timezone countries eg US\n # for now we just grab the first one\n if len(tz) == 2:\n try:\n zones = pytz.country_timezones(tz)\n except KeyError:\n return \"?\"\n tz = zones[0]\n\n # get the timezone\n try:\n zone = pytz.timezone(tz)\n except pytz.UnknownTimeZoneError:\n return \"?\"\n return zone\n\n def _change_active(self, diff):\n self.active = (self.active + diff) % len(self.format)\n # reset the cycle time\n self._cycle_time = time.time() + self.cycle\n # save the active format\n timezone = self.format[self.active]\n self.py3.storage_set(\"timezone\", timezone)\n\n def on_click(self, event):\n \"\"\"\n Switch the displayed module or pass the event on to the active module\n \"\"\"\n if event[\"button\"] == self.button_reset:\n self._change_active(0)\n elif event[\"button\"] == self.button_change_time_format:\n self.active_time_format += 1\n if self.active_time_format >= len(self.format_time):\n self.active_time_format = 0\n # save the active format_time\n time_format = self.format_time[self.active_time_format]\n self.py3.storage_set(\"time_format\", time_format)\n elif event[\"button\"] == self.button_change_format:\n self._change_active(1)\n\n def clock(self):\n\n # cycling\n if self.cycle and time.time() >= self._cycle_time:\n self._change_active(1)\n self._cycle_time = time.time() + self.cycle\n\n # update our times\n times = {}\n for name in self._fmts:\n zone = self._get_timezone(name)\n if zone == \"?\":\n times[name] = \"?\"\n else:\n t = datetime.now(zone)\n format_time = self.format_time[self.active_time_format]\n icon = None\n if self.py3.format_contains(format_time, \"icon\"):\n # calculate the decimal hour\n h = t.hour + t.minute / 60\n if self.round_to_nearest_block:\n h += self.block_hours / len(self.blocks) / 2\n # make 12 hourly etc\n h = h % self.block_hours\n idx = int(h / self.block_hours * len(self.blocks))\n icon = self.blocks[idx]\n\n try:\n # tzlocal < 3.0\n timezone = zone.zone\n except AttributeError:\n # tzlocal >= 3.0\n timezone = zone.key\n tzname = timezone.split(\"/\")[-1].replace(\"_\", \" \")\n\n if self.multiple_tz:\n name_unclear = tzname\n timezone_unclear = timezone\n else:\n name_unclear = \"\"\n timezone_unclear = \"\"\n\n format_time = self.py3.safe_format(\n format_time,\n dict(\n icon=icon,\n name=tzname,\n name_unclear=name_unclear,\n timezone=timezone,\n timezone_unclear=timezone_unclear,\n ),\n )\n\n if self.py3.is_composite(format_time):\n for item in format_time:\n item[\"full_text\"] = t.strftime(item[\"full_text\"])\n else:\n format_time = t.strftime(format_time)\n times[name] = format_time\n\n # work out when we need to update\n timeout = 
self.py3.time_in(sync_to=self.time_deltas[self.active_time_format])\n\n # if cycling we need to make sure we update when they are needed\n if self.cycle:\n cycle_timeout = self._cycle_time\n timeout = min(timeout, cycle_timeout)\n\n return {\n \"full_text\": self.py3.safe_format(self.format[self.active], times),\n \"cached_until\": timeout,\n }\n\n\nif __name__ == \"__main__\":\n \"\"\"\n Run module in test mode.\n \"\"\"\n from py3status.module_test import module_test\n\n module_test(Py3status)\n", "path": "py3status/modules/clock.py"}], "after_files": [{"content": "\"\"\"\nDisplay date and time.\n\nThis module allows one or more datetimes to be displayed.\nAll datetimes share the same format_time but can set their own timezones.\nTimezones are defined in the `format` using the TZ name in squiggly brackets eg\n`{GMT}`, `{Portugal}`, `{Europe/Paris}`, `{America/Argentina/Buenos_Aires}`.\n\nSee https://docs.python.org/3/library/zoneinfo.html for supported formats.\n\n`{Local}` can be used for the local settings of your computer.\n\nNote: Timezones are case sensitive!\n\nA full list of timezones can be found at\nhttps://en.wikipedia.org/wiki/List_of_tz_database_time_zones\n\nConfiguration parameters:\n block_hours: length of time period for all blocks in hours (default 12)\n blocks: a string, where each character represents time period\n from the start of a time period.\n (default '\ud83d\udd5b\ud83d\udd67\ud83d\udd50\ud83d\udd5c\ud83d\udd51\ud83d\udd5d\ud83d\udd52\ud83d\udd5e\ud83d\udd53\ud83d\udd5f\ud83d\udd54\ud83d\udd60\ud83d\udd55\ud83d\udd61\ud83d\udd56\ud83d\udd62\ud83d\udd57\ud83d\udd63\ud83d\udd58\ud83d\udd64\ud83d\udd59\ud83d\udd65\ud83d\udd5a\ud83d\udd66')\n button_change_format: button that switches format used setting to None\n disables (default 1)\n button_change_time_format: button that switches format_time used. Setting\n to None disables (default 2)\n button_reset: button that switches display to the first timezone. Setting\n to None disables (default 3)\n cycle: If more than one display then how many seconds between changing the\n display (default 0)\n format: defines the timezones displayed. This can be a single string or a\n list. If a list is supplied then the formats can be cycled through\n using `cycle` or by button click. (default '{Local}')\n format_time: format to use for the time, strftime directives such as `%H`\n can be used this can be either a string or to allow multiple formats as\n a list. The one used can be changed by button click.\n *(default ['[{name_unclear} ]%c', '[{name_unclear} ]%x %X',\n '[{name_unclear} ]%a %H:%M', '[{name_unclear} ]{icon}'])*\n locale: Override the system locale. Examples:\n when set to 'fr_FR' %a on Tuesday is 'mar.'.\n (default None)\n round_to_nearest_block: defines how a block icon is chosen. 
Examples:\n when set to True, '13:14' is '\ud83d\udd50', '13:16' is '\ud83d\udd5c' and '13:31' is '\ud83d\udd5c';\n when set to False, '13:14' is '\ud83d\udd50', '13:16' is '\ud83d\udd50' and '13:31' is '\ud83d\udd5c'.\n (default True)\n\nFormat placeholders:\n {icon} a character representing the time from `blocks`\n {name} friendly timezone name eg `Buenos Aires`\n {name_unclear} friendly timezone name eg `Buenos Aires` but is empty if\n only one timezone is provided\n {timezone} full timezone name eg `America/Argentina/Buenos_Aires`\n {timezone_unclear} full timezone name eg `America/Argentina/Buenos_Aires`\n but is empty if only one timezone is provided\n\nExamples:\n```\n# cycling through London, Warsaw, Tokyo\nclock {\n cycle = 30\n format = [\"{Europe/London}\", \"{Europe/Warsaw}\", \"{Asia/Tokyo}\"]\n format_time = \"{name} %H:%M\"\n}\n\n# Show the time and date in New York\nclock {\n format = \"Big Apple {America/New_York}\"\n format_time = \"%Y-%m-%d %H:%M:%S\"\n}\n\n# wall clocks\nclock {\n format = \"{Asia/Calcutta} {Africa/Nairobi} {Asia/Bangkok}\"\n format_time = \"{name} {icon}\"\n}\n```\n\n@author tobes ultrabug\n@license BSD\n\nSAMPLE OUTPUT\n{'full_text': 'Sun 15 Jan 2017 23:27:17 GMT'}\n\nlondon\n{'full_text': 'Thursday Feb 23 1:42 AM London'}\n\"\"\"\n\nimport locale\nimport re\nimport time\nfrom datetime import datetime\n\ntry:\n import zoneinfo\n# Fall back for python 3.7 and python 3.8\nexcept ImportError:\n from backports import zoneinfo\n\nCLOCK_BLOCKS = \"\ud83d\udd5b\ud83d\udd67\ud83d\udd50\ud83d\udd5c\ud83d\udd51\ud83d\udd5d\ud83d\udd52\ud83d\udd5e\ud83d\udd53\ud83d\udd5f\ud83d\udd54\ud83d\udd60\ud83d\udd55\ud83d\udd61\ud83d\udd56\ud83d\udd62\ud83d\udd57\ud83d\udd63\ud83d\udd58\ud83d\udd64\ud83d\udd59\ud83d\udd65\ud83d\udd5a\ud83d\udd66\"\n\n\nclass Py3status:\n \"\"\" \"\"\"\n\n # available configuration parameters\n block_hours = 12\n blocks = CLOCK_BLOCKS\n button_change_format = 1\n button_change_time_format = 2\n button_reset = 3\n cycle = 0\n format = \"{Local}\"\n format_time = [\n \"[{name_unclear} ]%c\",\n \"[{name_unclear} ]%x %X\",\n \"[{name_unclear} ]%a %H:%M\",\n \"[{name_unclear} ]{icon}\",\n ]\n locale = None\n round_to_nearest_block = True\n\n def post_config_hook(self):\n if self.locale is not None:\n locale.setlocale(locale.LC_TIME, self.locale)\n\n # Multiple clocks are possible that can be cycled through\n if not isinstance(self.format, list):\n self.format = [self.format]\n # if only one item we don't need to cycle\n if len(self.format) == 1:\n self.cycle = 0\n # find any declared timezones eg {Europe/London}\n self._fmts = set()\n for fmt in self.format:\n self._fmts.update(self.py3.get_placeholders_list(fmt))\n\n self.multiple_tz = len(self._fmts) > 1\n\n if not isinstance(self.format_time, list):\n self.format_time = [self.format_time]\n\n # workout how often in seconds we will need to do an update to keep the\n # display fresh\n self.time_deltas = []\n for format in self.format_time:\n format_time = re.sub(r\"{([^}]*)}\", \"\", format)\n format_time = format_time.replace(\"%%\", \"\")\n if \"%f\" in format_time:\n # microseconds\n time_delta = 0\n elif \"%S\" in format_time:\n # seconds\n time_delta = 1\n elif \"%s\" in format_time:\n # seconds since unix epoch start\n time_delta = 1\n elif \"%T\" in format_time:\n # seconds included in \"%H:%M:%S\"\n time_delta = 1\n elif \"%c\" in format_time:\n # Locale\u2019s appropriate date and time representation\n time_delta = 1\n elif \"%X\" in format_time:\n # Locale\u2019s appropriate time 
representation\n time_delta = 1\n else:\n time_delta = 60\n self.time_deltas.append(time_delta)\n\n # If we have saved details we use them.\n saved_format = self.py3.storage_get(\"time_format\")\n if saved_format in self.format_time:\n self.active_time_format = self.format_time.index(saved_format)\n else:\n self.active_time_format = 0\n\n saved_timezone = self.py3.storage_get(\"timezone\")\n if saved_timezone in self.format:\n self.active = self.format.index(saved_timezone)\n else:\n self.active = 0\n\n # reset the cycle time\n self._cycle_time = time.time() + self.cycle\n\n def _get_timezone(self, tz):\n \"\"\"\n Find and return the time zone if possible\n \"\"\"\n # special Local timezone\n if tz == \"Local\":\n try:\n return zoneinfo.ZoneInfo(\"localtime\")\n except zoneinfo.ZoneInfoNotFoundError:\n return \"?\"\n # get the timezone\n try:\n zone = zoneinfo.ZoneInfo(tz)\n except zoneinfo.ZoneInfoNotFoundError:\n return \"?\"\n return zone\n\n def _change_active(self, diff):\n self.active = (self.active + diff) % len(self.format)\n # reset the cycle time\n self._cycle_time = time.time() + self.cycle\n # save the active format\n timezone = self.format[self.active]\n self.py3.storage_set(\"timezone\", timezone)\n\n def on_click(self, event):\n \"\"\"\n Switch the displayed module or pass the event on to the active module\n \"\"\"\n if event[\"button\"] == self.button_reset:\n self._change_active(0)\n elif event[\"button\"] == self.button_change_time_format:\n self.active_time_format += 1\n if self.active_time_format >= len(self.format_time):\n self.active_time_format = 0\n # save the active format_time\n time_format = self.format_time[self.active_time_format]\n self.py3.storage_set(\"time_format\", time_format)\n elif event[\"button\"] == self.button_change_format:\n self._change_active(1)\n\n def clock(self):\n\n # cycling\n if self.cycle and time.time() >= self._cycle_time:\n self._change_active(1)\n self._cycle_time = time.time() + self.cycle\n\n # update our times\n times = {}\n for name in self._fmts:\n zone = self._get_timezone(name)\n if zone == \"?\":\n times[name] = \"?\"\n else:\n t = datetime.now(zone)\n format_time = self.format_time[self.active_time_format]\n icon = None\n if self.py3.format_contains(format_time, \"icon\"):\n # calculate the decimal hour\n h = t.hour + t.minute / 60\n if self.round_to_nearest_block:\n h += self.block_hours / len(self.blocks) / 2\n # make 12 hourly etc\n h = h % self.block_hours\n idx = int(h / self.block_hours * len(self.blocks))\n icon = self.blocks[idx]\n\n timezone = zone.key\n tzname = timezone.split(\"/\")[-1].replace(\"_\", \" \")\n\n if self.multiple_tz:\n name_unclear = tzname\n timezone_unclear = timezone\n else:\n name_unclear = \"\"\n timezone_unclear = \"\"\n\n format_time = self.py3.safe_format(\n format_time,\n dict(\n icon=icon,\n name=tzname,\n name_unclear=name_unclear,\n timezone=timezone,\n timezone_unclear=timezone_unclear,\n ),\n )\n\n if self.py3.is_composite(format_time):\n for item in format_time:\n item[\"full_text\"] = t.strftime(item[\"full_text\"])\n else:\n format_time = t.strftime(format_time)\n times[name] = format_time\n\n # work out when we need to update\n timeout = self.py3.time_in(sync_to=self.time_deltas[self.active_time_format])\n\n # if cycling we need to make sure we update when they are needed\n if self.cycle:\n cycle_timeout = self._cycle_time\n timeout = min(timeout, cycle_timeout)\n\n return {\n \"full_text\": self.py3.safe_format(self.format[self.active], times),\n \"cached_until\": timeout,\n 
}\n\n\nif __name__ == \"__main__\":\n \"\"\"\n Run module in test mode.\n \"\"\"\n from py3status.module_test import module_test\n\n module_test(Py3status)\n", "path": "py3status/modules/clock.py"}]} | 3,909 | 774 |
gh_patches_debug_33372 | rasdani/github-patches | git_diff | rasterio__rasterio-241 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
rio merge assumes nodata
Running `rio merge` over a set of images without a nodata value results in an error:
```
(rio-test)$ rio merge warped.tif merged.tif
ERROR:rio:Failed. Exception caught
Traceback (most recent call last):
File "/Users/amit/Mapbox/rasterio/rasterio/rio/merge.py", line 50, in merge
dest.fill(nodataval)
TypeError: long() argument must be a string or a number, not 'NoneType'
```
Checking for nodata in `rasterio/rio/merge.py` still results in a lower-level error (https://github.com/mapbox/rasterio/commit/6b251f8261fd1b3e785dc73462212741b6bd62b7):
```
(rio-test)$ rio merge warped.tif merged.tif
ERROR:rio:Failed. Exception caught
Traceback (most recent call last):
File "/Users/amit/Mapbox/rasterio/rasterio/rio/merge.py", line 55, in merge
data = src.read()
File "rasterio/_io.pyx", line 654, in rasterio._io.RasterReader.read (rasterio/_io.c:8075)
arr.fill(ndv)
TypeError: long() argument must be a string or a number, not 'NoneType'
```
Patching this case as well pushes the error still further down, until a regular NumPy array is mistaken for a masked array (https://github.com/mapbox/rasterio/commit/c271c4ee23531db7c24208c85f56d04d79acc851):
```
(rio-test)$ rio merge warped.tif merged.tif
/Users/amit/Mapbox/rasterio/rasterio/rio/merge.py:58: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
dest==nodataval, data.mask==False))
ERROR:rio:Failed. Exception caught
Traceback (most recent call last):
File "/Users/amit/Mapbox/rasterio/rasterio/rio/merge.py", line 58, in merge
dest==nodataval, data.mask==False))
AttributeError: 'numpy.ndarray' object has no attribute 'mask'
```
/cc @sgillies
--- END ISSUE ---
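The tracebacks above reduce to two assumptions that fail for datasets without a nodata value: that `nodatavals[0]` can always be passed to `fill()` (it may be `None`), and that `src.read()` always returns a masked array with a `.mask` attribute. A minimal, self-contained sketch of both pitfalls and a defensive guard (plain NumPy only; an illustration of the failure mode, not the project's actual patch):
```python
import numpy as np

dest = np.zeros((3, 4, 4), dtype="uint8")
nodataval = None  # stand-in for a dataset that defines no nodata value

# dest.fill(nodataval) would raise the TypeError from the first traceback,
# so only fill when a nodata value actually exists.
if nodataval is not None:
    dest.fill(nodataval)

data = np.ones_like(dest)           # src.read() may be a plain ndarray, not masked
mask = getattr(data, "mask", None)  # plain ndarrays have no .mask attribute
if mask is None:
    where = dest == 0               # 0 used here as the stand-in fill value
else:
    where = np.logical_and(dest == 0, mask == False)
np.copyto(dest, data, where=where)
print(dest.min(), dest.max())       # -> 1 1
```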
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rasterio/rio/merge.py`
Content:
```
1 # Merge command.
2
3 import logging
4 import os.path
5 import sys
6
7 import click
8 from cligj import files_inout_arg, format_opt
9
10 import rasterio
11
12 from rasterio.rio.cli import cli
13
14
15 @cli.command(short_help="Merge a stack of raster datasets.")
16 @files_inout_arg
17 @format_opt
18 @click.pass_context
19 def merge(ctx, files, driver):
20 """Copy valid pixels from input files to an output file.
21
22 All files must have the same shape, number of bands, and data type.
23
24 Input files are merged in their listed order using a reverse
25 painter's algorithm.
26 """
27 import numpy as np
28
29 verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1
30 logger = logging.getLogger('rio')
31 try:
32 with rasterio.drivers(CPL_DEBUG=verbosity>2):
33 output = files[-1]
34 files = files[:-1]
35
36 with rasterio.open(files[0]) as first:
37 kwargs = first.meta
38 kwargs['transform'] = kwargs.pop('affine')
39 dest = np.empty((first.count,) + first.shape,
40 dtype=first.dtypes[0])
41
42 if os.path.exists(output):
43 dst = rasterio.open(output, 'r+')
44 nodataval = dst.nodatavals[0]
45 else:
46 kwargs['driver'] == driver
47 dst = rasterio.open(output, 'w', **kwargs)
48 nodataval = first.nodatavals[0]
49
50 dest.fill(nodataval)
51
52 for fname in reversed(files):
53 with rasterio.open(fname) as src:
54 data = src.read()
55 np.copyto(dest, data,
56 where=np.logical_and(
57 dest==nodataval, data.mask==False))
58
59 if dst.mode == 'r+':
60 data = dst.read()
61 np.copyto(dest, data,
62 where=np.logical_and(
63 dest==nodataval, data.mask==False))
64
65 dst.write(dest)
66 dst.close()
67
68 sys.exit(0)
69 except Exception:
70 logger.exception("Failed. Exception caught")
71 sys.exit(1)
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rasterio/rio/merge.py b/rasterio/rio/merge.py
--- a/rasterio/rio/merge.py
+++ b/rasterio/rio/merge.py
@@ -36,31 +36,39 @@
with rasterio.open(files[0]) as first:
kwargs = first.meta
kwargs['transform'] = kwargs.pop('affine')
- dest = np.empty((first.count,) + first.shape,
+ dest = np.zeros((first.count,) + first.shape,
dtype=first.dtypes[0])
+ nodataval = next(dest.flat)
if os.path.exists(output):
dst = rasterio.open(output, 'r+')
- nodataval = dst.nodatavals[0]
+ nodataval = dst.nodatavals[0] or nodataval
else:
kwargs['driver'] == driver
dst = rasterio.open(output, 'w', **kwargs)
- nodataval = first.nodatavals[0]
+ nodataval = first.nodatavals[0] or nodataval
- dest.fill(nodataval)
+ if nodataval:
+ dest.fill(nodataval)
for fname in reversed(files):
with rasterio.open(fname) as src:
data = src.read()
- np.copyto(dest, data,
- where=np.logical_and(
- dest==nodataval, data.mask==False))
+ try:
+ where = np.logical_and(
+ dest==nodataval, data.mask==False)
+ except AttributeError:
+ where = dest==nodataval
+ np.copyto(dest, data, where=where)
if dst.mode == 'r+':
data = dst.read()
- np.copyto(dest, data,
- where=np.logical_and(
- dest==nodataval, data.mask==False))
+ try:
+ where = np.logical_and(
+ dest==nodataval, data.mask==False)
+ except AttributeError:
+ where = dest==nodataval
+ np.copyto(dest, data, where=where)
dst.write(dest)
dst.close()
| {"golden_diff": "diff --git a/rasterio/rio/merge.py b/rasterio/rio/merge.py\n--- a/rasterio/rio/merge.py\n+++ b/rasterio/rio/merge.py\n@@ -36,31 +36,39 @@\n with rasterio.open(files[0]) as first:\n kwargs = first.meta\n kwargs['transform'] = kwargs.pop('affine')\n- dest = np.empty((first.count,) + first.shape, \n+ dest = np.zeros((first.count,) + first.shape, \n dtype=first.dtypes[0])\n \n+ nodataval = next(dest.flat)\n if os.path.exists(output):\n dst = rasterio.open(output, 'r+')\n- nodataval = dst.nodatavals[0]\n+ nodataval = dst.nodatavals[0] or nodataval\n else:\n kwargs['driver'] == driver\n dst = rasterio.open(output, 'w', **kwargs)\n- nodataval = first.nodatavals[0]\n+ nodataval = first.nodatavals[0] or nodataval\n \n- dest.fill(nodataval)\n+ if nodataval:\n+ dest.fill(nodataval)\n \n for fname in reversed(files):\n with rasterio.open(fname) as src:\n data = src.read()\n- np.copyto(dest, data,\n- where=np.logical_and(\n- dest==nodataval, data.mask==False))\n+ try:\n+ where = np.logical_and(\n+ dest==nodataval, data.mask==False)\n+ except AttributeError:\n+ where = dest==nodataval\n+ np.copyto(dest, data, where=where)\n \n if dst.mode == 'r+':\n data = dst.read()\n- np.copyto(dest, data,\n- where=np.logical_and(\n- dest==nodataval, data.mask==False))\n+ try:\n+ where = np.logical_and(\n+ dest==nodataval, data.mask==False)\n+ except AttributeError:\n+ where = dest==nodataval\n+ np.copyto(dest, data, where=where)\n \n dst.write(dest)\n dst.close()\n", "issue": "rio merge assumes nodata\nRunning `rio merge` over a set of images without a nodata value results in an error:\n\n```\n(rio-test)$ rio merge warped.tif merged.tif\nERROR:rio:Failed. Exception caught\nTraceback (most recent call last):\n File \"/Users/amit/Mapbox/rasterio/rasterio/rio/merge.py\", line 50, in merge\n dest.fill(nodataval)\nTypeError: long() argument must be a string or a number, not 'NoneType'\n```\n\nChecking for nodata in `rasterio/rio/merge.py` still results in a lower level error (https://github.com/mapbox/rasterio/commit/6b251f8261fd1b3e785dc73462212741b6bd62b7):\n\n```\n(rio-test)$ rio merge warped.tif merged.tif\nERROR:rio:Failed. Exception caught\nTraceback (most recent call last):\n File \"/Users/amit/Mapbox/rasterio/rasterio/rio/merge.py\", line 55, in merge\n data = src.read()\n File \"rasterio/_io.pyx\", line 654, in rasterio._io.RasterReader.read (rasterio/_io.c:8075)\n arr.fill(ndv)\nTypeError: long() argument must be a string or a number, not 'NoneType'\n```\n\nPatching this case throws the error down further until a regular numpy array is confused for a masked array (https://github.com/mapbox/rasterio/commit/c271c4ee23531db7c24208c85f56d04d79acc851):\n\n```\n(rio-test)$ rio merge warped.tif merged.tif\n/Users/amit/Mapbox/rasterio/rasterio/rio/merge.py:58: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.\n dest==nodataval, data.mask==False))\nERROR:rio:Failed. 
Exception caught\nTraceback (most recent call last):\n File \"/Users/amit/Mapbox/rasterio/rasterio/rio/merge.py\", line 58, in merge\n dest==nodataval, data.mask==False))\nAttributeError: 'numpy.ndarray' object has no attribute 'mask'\n```\n\n/cc @sgillies \n\n", "before_files": [{"content": "# Merge command.\n\nimport logging\nimport os.path\nimport sys\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nimport rasterio\n\nfrom rasterio.rio.cli import cli\n\n\[email protected](short_help=\"Merge a stack of raster datasets.\")\n@files_inout_arg\n@format_opt\[email protected]_context\ndef merge(ctx, files, driver):\n \"\"\"Copy valid pixels from input files to an output file.\n\n All files must have the same shape, number of bands, and data type.\n\n Input files are merged in their listed order using a reverse\n painter's algorithm.\n \"\"\"\n import numpy as np\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n logger = logging.getLogger('rio')\n try:\n with rasterio.drivers(CPL_DEBUG=verbosity>2):\n output = files[-1]\n files = files[:-1]\n\n with rasterio.open(files[0]) as first:\n kwargs = first.meta\n kwargs['transform'] = kwargs.pop('affine')\n dest = np.empty((first.count,) + first.shape, \n dtype=first.dtypes[0])\n\n if os.path.exists(output):\n dst = rasterio.open(output, 'r+')\n nodataval = dst.nodatavals[0]\n else:\n kwargs['driver'] == driver\n dst = rasterio.open(output, 'w', **kwargs)\n nodataval = first.nodatavals[0]\n\n dest.fill(nodataval)\n\n for fname in reversed(files):\n with rasterio.open(fname) as src:\n data = src.read()\n np.copyto(dest, data,\n where=np.logical_and(\n dest==nodataval, data.mask==False))\n\n if dst.mode == 'r+':\n data = dst.read()\n np.copyto(dest, data,\n where=np.logical_and(\n dest==nodataval, data.mask==False))\n\n dst.write(dest)\n dst.close()\n\n sys.exit(0)\n except Exception:\n logger.exception(\"Failed. 
Exception caught\")\n sys.exit(1)\n", "path": "rasterio/rio/merge.py"}], "after_files": [{"content": "# Merge command.\n\nimport logging\nimport os.path\nimport sys\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nimport rasterio\n\nfrom rasterio.rio.cli import cli\n\n\[email protected](short_help=\"Merge a stack of raster datasets.\")\n@files_inout_arg\n@format_opt\[email protected]_context\ndef merge(ctx, files, driver):\n \"\"\"Copy valid pixels from input files to an output file.\n\n All files must have the same shape, number of bands, and data type.\n\n Input files are merged in their listed order using a reverse\n painter's algorithm.\n \"\"\"\n import numpy as np\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n logger = logging.getLogger('rio')\n try:\n with rasterio.drivers(CPL_DEBUG=verbosity>2):\n output = files[-1]\n files = files[:-1]\n\n with rasterio.open(files[0]) as first:\n kwargs = first.meta\n kwargs['transform'] = kwargs.pop('affine')\n dest = np.zeros((first.count,) + first.shape, \n dtype=first.dtypes[0])\n\n nodataval = next(dest.flat)\n if os.path.exists(output):\n dst = rasterio.open(output, 'r+')\n nodataval = dst.nodatavals[0] or nodataval\n else:\n kwargs['driver'] == driver\n dst = rasterio.open(output, 'w', **kwargs)\n nodataval = first.nodatavals[0] or nodataval\n\n if nodataval:\n dest.fill(nodataval)\n\n for fname in reversed(files):\n with rasterio.open(fname) as src:\n data = src.read()\n try:\n where = np.logical_and(\n dest==nodataval, data.mask==False)\n except AttributeError:\n where = dest==nodataval\n np.copyto(dest, data, where=where)\n\n if dst.mode == 'r+':\n data = dst.read()\n try:\n where = np.logical_and(\n dest==nodataval, data.mask==False)\n except AttributeError:\n where = dest==nodataval\n np.copyto(dest, data, where=where)\n\n dst.write(dest)\n dst.close()\n\n sys.exit(0)\n except Exception:\n logger.exception(\"Failed. Exception caught\")\n sys.exit(1)\n", "path": "rasterio/rio/merge.py"}]} | 1,392 | 491 |
gh_patches_debug_14851 | rasdani/github-patches | git_diff | python-poetry__poetry-289 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CLI help for install is ambiguous
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
- **OS version and name**: Linux 4.4.0-17134-Microsoft #137-Microsoft Thu Jun 14 18:46:00 PST 2018 x86_64 x86_64 x86_64 GNU/Linux
- **Poetry version**: Poetry 0.11.2
- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: No such file needed
## Issue
Running ``poetry install --help`` produces the following output:
```sh
Usage:
install [options]
Options:
--no-dev Do not install dev dependencies.
--dry-run Outputs the operations but will not execute anything (implicitly enables --verbose).
-E, --extras=EXTRAS Extra sets of dependencies to install. (multiple values allowed)
--develop=DEVELOP Install given packages in development mode. (multiple values allowed)
-h, --help Display this help message
-q, --quiet Do not output any message
-V, --version Display this application version
--ansi Force ANSI output
--no-ansi Disable ANSI output
-n, --no-interaction Do not ask any interactive question
-v|vv|vvv, --verbose[=VERBOSE] Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
Help:
The install command reads the pyproject.toml file from
the current directory, processes it, and downloads and installs all the
libraries and dependencies outlined in that file. If the file does not
exist it will look for pyproject.toml and do the same.
poetry install
```
Note that in the final paragraph, ``pyproject.toml`` is listed two times, and ``pyproject.lock`` is not mentioned. Based on the [online docs](https://poetry.eustace.io/docs/cli/#install), I believe the first occurrence of ``pyproject.toml`` should be ``pyproject.lock``.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/console/commands/install.py`
Content:
```
1 from .venv_command import VenvCommand
2
3
4 class InstallCommand(VenvCommand):
5 """
6 Installs the project dependencies.
7
8 install
9 { --no-dev : Do not install dev dependencies. }
10 { --dry-run : Outputs the operations but will not execute anything
11 (implicitly enables --verbose). }
12 { --E|extras=* : Extra sets of dependencies to install. }
13 { --develop=* : Install given packages in development mode. }
14 """
15
16 help = """The <info>install</info> command reads the <comment>pyproject.toml</> file from
17 the current directory, processes it, and downloads and installs all the
18 libraries and dependencies outlined in that file. If the file does not
19 exist it will look for <comment>pyproject.toml</> and do the same.
20
21 <info>poetry install</info>
22 """
23
24 _loggers = ["poetry.repositories.pypi_repository"]
25
26 def handle(self):
27 from poetry.installation import Installer
28
29 installer = Installer(
30 self.output,
31 self.venv,
32 self.poetry.package,
33 self.poetry.locker,
34 self.poetry.pool,
35 )
36
37 extras = []
38 for extra in self.option("extras"):
39 if " " in extra:
40 extras += [e.strip() for e in extra.split(" ")]
41 else:
42 extras.append(extra)
43
44 installer.extras(extras)
45 installer.dev_mode(not self.option("no-dev"))
46 installer.develop(self.option("develop"))
47 installer.dry_run(self.option("dry-run"))
48 installer.verbose(self.option("verbose"))
49
50 return installer.run()
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/poetry/console/commands/install.py b/poetry/console/commands/install.py
--- a/poetry/console/commands/install.py
+++ b/poetry/console/commands/install.py
@@ -13,12 +13,12 @@
{ --develop=* : Install given packages in development mode. }
"""
- help = """The <info>install</info> command reads the <comment>pyproject.toml</> file from
+ help = """The <info>install</info> command reads the <comment>pyproject.lock</> file from
the current directory, processes it, and downloads and installs all the
libraries and dependencies outlined in that file. If the file does not
exist it will look for <comment>pyproject.toml</> and do the same.
-<info>poetry install</info>
+<info>poetry install</info>
"""
_loggers = ["poetry.repositories.pypi_repository"]
| {"golden_diff": "diff --git a/poetry/console/commands/install.py b/poetry/console/commands/install.py\n--- a/poetry/console/commands/install.py\n+++ b/poetry/console/commands/install.py\n@@ -13,12 +13,12 @@\n { --develop=* : Install given packages in development mode. }\n \"\"\"\n \n- help = \"\"\"The <info>install</info> command reads the <comment>pyproject.toml</> file from\n+ help = \"\"\"The <info>install</info> command reads the <comment>pyproject.lock</> file from\n the current directory, processes it, and downloads and installs all the\n libraries and dependencies outlined in that file. If the file does not\n exist it will look for <comment>pyproject.toml</> and do the same.\n \n-<info>poetry install</info> \n+<info>poetry install</info>\n \"\"\"\n \n _loggers = [\"poetry.repositories.pypi_repository\"]\n", "issue": "CLI help for install is ambiguous\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n- **OS version and name**: Linux 4.4.0-17134-Microsoft #137-Microsoft Thu Jun 14 18:46:00 PST 2018 x86_64 x86_64 x86_64 GNU/Linux\r\n- **Poetry version**: Poetry 0.11.2\r\n- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: No such file needed\r\n\r\n## Issue\r\nRunning `` poetry install --help`` produces the following output:\r\n```sh\r\nUsage:\r\n install [options]\r\n\r\nOptions:\r\n --no-dev Do not install dev dependencies.\r\n --dry-run Outputs the operations but will not execute anything (implicitly enables --verbose).\r\n -E, --extras=EXTRAS Extra sets of dependencies to install. (multiple values allowed)\r\n --develop=DEVELOP Install given packages in development mode. (multiple values allowed)\r\n -h, --help Display this help message\r\n -q, --quiet Do not output any message\r\n -V, --version Display this application version\r\n --ansi Force ANSI output\r\n --no-ansi Disable ANSI output\r\n -n, --no-interaction Do not ask any interactive question\r\n -v|vv|vvv, --verbose[=VERBOSE] Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug\r\n\r\nHelp:\r\n The install command reads the pyproject.toml file from\r\n the current directory, processes it, and downloads and installs all the\r\n libraries and dependencies outlined in that file. If the file does not\r\n exist it will look for pyproject.toml and do the same.\r\n\r\n poetry install\r\n```\r\nNote that in the final paragraph, ``pyproject.toml`` is listed two times, and ``pyproject.lock`` is not mentioned. Based on the [online docs](https://poetry.eustace.io/docs/cli/#install), I believe the first occurrence of ``pyproject.toml`` should be ``pyproject.lock``.\n", "before_files": [{"content": "from .venv_command import VenvCommand\n\n\nclass InstallCommand(VenvCommand):\n \"\"\"\n Installs the project dependencies.\n\n install\n { --no-dev : Do not install dev dependencies. }\n { --dry-run : Outputs the operations but will not execute anything\n (implicitly enables --verbose). }\n { --E|extras=* : Extra sets of dependencies to install. }\n { --develop=* : Install given packages in development mode. 
}\n \"\"\"\n\n help = \"\"\"The <info>install</info> command reads the <comment>pyproject.toml</> file from\nthe current directory, processes it, and downloads and installs all the\nlibraries and dependencies outlined in that file. If the file does not\nexist it will look for <comment>pyproject.toml</> and do the same.\n\n<info>poetry install</info> \n\"\"\"\n\n _loggers = [\"poetry.repositories.pypi_repository\"]\n\n def handle(self):\n from poetry.installation import Installer\n\n installer = Installer(\n self.output,\n self.venv,\n self.poetry.package,\n self.poetry.locker,\n self.poetry.pool,\n )\n\n extras = []\n for extra in self.option(\"extras\"):\n if \" \" in extra:\n extras += [e.strip() for e in extra.split(\" \")]\n else:\n extras.append(extra)\n\n installer.extras(extras)\n installer.dev_mode(not self.option(\"no-dev\"))\n installer.develop(self.option(\"develop\"))\n installer.dry_run(self.option(\"dry-run\"))\n installer.verbose(self.option(\"verbose\"))\n\n return installer.run()\n", "path": "poetry/console/commands/install.py"}], "after_files": [{"content": "from .venv_command import VenvCommand\n\n\nclass InstallCommand(VenvCommand):\n \"\"\"\n Installs the project dependencies.\n\n install\n { --no-dev : Do not install dev dependencies. }\n { --dry-run : Outputs the operations but will not execute anything\n (implicitly enables --verbose). }\n { --E|extras=* : Extra sets of dependencies to install. }\n { --develop=* : Install given packages in development mode. }\n \"\"\"\n\n help = \"\"\"The <info>install</info> command reads the <comment>pyproject.lock</> file from\nthe current directory, processes it, and downloads and installs all the\nlibraries and dependencies outlined in that file. If the file does not\nexist it will look for <comment>pyproject.toml</> and do the same.\n\n<info>poetry install</info>\n\"\"\"\n\n _loggers = [\"poetry.repositories.pypi_repository\"]\n\n def handle(self):\n from poetry.installation import Installer\n\n installer = Installer(\n self.output,\n self.venv,\n self.poetry.package,\n self.poetry.locker,\n self.poetry.pool,\n )\n\n extras = []\n for extra in self.option(\"extras\"):\n if \" \" in extra:\n extras += [e.strip() for e in extra.split(\" \")]\n else:\n extras.append(extra)\n\n installer.extras(extras)\n installer.dev_mode(not self.option(\"no-dev\"))\n installer.develop(self.option(\"develop\"))\n installer.dry_run(self.option(\"dry-run\"))\n installer.verbose(self.option(\"verbose\"))\n\n return installer.run()\n", "path": "poetry/console/commands/install.py"}]} | 1,267 | 214 |
gh_patches_debug_19685 | rasdani/github-patches | git_diff | quantumlib__Cirq-1345 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make ControlledGate work with gates that only provide a decomposition
The following should work:
```
import cirq
class G(cirq.TwoQubitGate):
def _decompose_(self, qubits):
a, b = qubits
yield cirq.X(a)**0.5
yield cirq.H(b)
yield cirq.CZ(a, b)
cg = cirq.ControlledGate(G())
x, y, z = cirq.LineQubit.range(3)
c = cirq.Circuit.from_ops(cg(x, y, z))
print(c.to_unitary_matrix())
```
but currently it raises an exception:
```
Operation without a known matrix or decomposition: cirq.ControlledGate(sub_gate=[...G...].on(cirq.LineQubit(0), cirq.LineQubit(1), cirq.LineQubit(2))
```
because `ControlledGate` doesn't have a `_decompose_` method. This issue is to add that method.
--- END ISSUE ---
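The requested behaviour is a delegation pattern: ask the wrapped sub-gate for its decomposition, wrap each resulting operation with the control qubit, and propagate `NotImplemented` when the sub-gate cannot decompose. A plain-Python sketch of that pattern (toy stand-ins for gates and operations — deliberately not the cirq API):
```python
class Controlled:
    """Wraps a sub-gate; decomposition delegates to the sub-gate."""

    def __init__(self, sub_gate):
        self.sub_gate = sub_gate

    def decompose(self, qubits):
        control, rest = qubits[0], qubits[1:]
        sub_decompose = getattr(self.sub_gate, "decompose", None)
        if sub_decompose is None:
            return NotImplemented                 # mirror the "not supported" signal
        return [("controlled", control, op) for op in sub_decompose(rest)]


class G:
    """Stand-in for the two-qubit gate from the issue."""

    def decompose(self, qubits):
        a, b = qubits
        return [("X**0.5", a), ("H", b), ("CZ", a, b)]


print(Controlled(G()).decompose(["ctrl", "q0", "q1"]))
```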
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/ops/controlled_gate.py`
Content:
```
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Any, Union
16
17 import numpy as np
18
19 from cirq import linalg, protocols, value
20 from cirq.ops import raw_types, controlled_operation as cop
21 from cirq.type_workarounds import NotImplementedType
22
23
24 @value.value_equality
25 class ControlledGate(raw_types.Gate):
26 """Augments existing gates with a control qubit."""
27
28 def __init__(self, sub_gate: raw_types.Gate) -> None:
29 """Initializes the controlled gate.
30
31 Args:
32 sub_gate: The gate to add a control qubit to.
33 """
34 self.sub_gate = sub_gate
35
36 def num_qubits(self) -> int:
37 return self.sub_gate.num_qubits() + 1
38
39 def _decompose_(self, qubits):
40 result = protocols.decompose_once_with_qubits(self.sub_gate,
41 qubits[1:],
42 NotImplemented)
43 if result is NotImplemented:
44 return NotImplemented
45
46 return [cop.ControlledOperation(qubits[0], op) for op in result]
47
48 def validate_args(self, qubits) -> None:
49 if len(qubits) < 1:
50 raise ValueError('No control qubit specified.')
51 self.sub_gate.validate_args(qubits[1:])
52
53 def _value_equality_values_(self):
54 return self.sub_gate
55
56 def _apply_unitary_(self, args: protocols.ApplyUnitaryArgs) -> np.ndarray:
57 control = args.axes[0]
58 rest = args.axes[1:]
59 active = linalg.slice_for_qubits_equal_to([control], 1)
60 sub_axes = [r - int(r > control) for r in rest]
61 target_view = args.target_tensor[active]
62 buffer_view = args.available_buffer[active]
63 result = protocols.apply_unitary(
64 self.sub_gate,
65 protocols.ApplyUnitaryArgs(
66 target_view,
67 buffer_view,
68 sub_axes),
69 default=NotImplemented)
70
71 if result is NotImplemented:
72 return NotImplemented
73
74 if result is target_view:
75 return args.target_tensor
76
77 if result is buffer_view:
78 inactive = linalg.slice_for_qubits_equal_to([control], 0)
79 args.available_buffer[inactive] = args.target_tensor[inactive]
80 return args.available_buffer
81
82 # HACK: assume they didn't somehow escape the slice view and edit the
83 # rest of target_tensor.
84 args.target_tensor[active] = result
85 return args.target_tensor
86
87 def _has_unitary_(self) -> bool:
88 return protocols.has_unitary(self.sub_gate)
89
90 def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:
91 sub_matrix = protocols.unitary(self.sub_gate, None)
92 if sub_matrix is None:
93 return NotImplemented
94 return linalg.block_diag(np.eye(sub_matrix.shape[0]), sub_matrix)
95
96 def __pow__(self, exponent: Any) -> 'ControlledGate':
97 new_sub_gate = protocols.pow(self.sub_gate,
98 exponent,
99 NotImplemented)
100 if new_sub_gate is NotImplemented:
101 return NotImplemented
102 return ControlledGate(new_sub_gate)
103
104 def _is_parameterized_(self):
105 return protocols.is_parameterized(self.sub_gate)
106
107 def _resolve_parameters_(self, param_resolver):
108 new_sub_gate = protocols.resolve_parameters(self.sub_gate,
109 param_resolver)
110 return ControlledGate(new_sub_gate)
111
112 def _trace_distance_bound_(self):
113 return protocols.trace_distance_bound(self.sub_gate)
114
115 def _circuit_diagram_info_(self,
116 args: protocols.CircuitDiagramInfoArgs
117 ) -> protocols.CircuitDiagramInfo:
118 sub_info = protocols.circuit_diagram_info(self.sub_gate, args, None)
119 if sub_info is None:
120 return NotImplemented
121 return protocols.CircuitDiagramInfo(
122 wire_symbols=('@',) + sub_info.wire_symbols,
123 exponent=sub_info.exponent)
124
125 def __str__(self):
126 return 'C' + str(self.sub_gate)
127
128 def __repr__(self):
129 return 'cirq.ControlledGate(sub_gate={!r})'.format(self.sub_gate)
130
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cirq/ops/controlled_gate.py b/cirq/ops/controlled_gate.py
--- a/cirq/ops/controlled_gate.py
+++ b/cirq/ops/controlled_gate.py
@@ -115,7 +115,18 @@
def _circuit_diagram_info_(self,
args: protocols.CircuitDiagramInfoArgs
) -> protocols.CircuitDiagramInfo:
- sub_info = protocols.circuit_diagram_info(self.sub_gate, args, None)
+ sub_args = protocols.CircuitDiagramInfoArgs(
+ known_qubit_count=(args.known_qubit_count - 1
+ if args.known_qubit_count is not None else None),
+ known_qubits=(args.known_qubits[1:]
+ if args.known_qubits is not None else None),
+ use_unicode_characters=args.use_unicode_characters,
+ precision=args.precision,
+ qubit_map=args.qubit_map
+ )
+ sub_info = protocols.circuit_diagram_info(self.sub_gate,
+ sub_args,
+ None)
if sub_info is None:
return NotImplemented
return protocols.CircuitDiagramInfo(
| {"golden_diff": "diff --git a/cirq/ops/controlled_gate.py b/cirq/ops/controlled_gate.py\n--- a/cirq/ops/controlled_gate.py\n+++ b/cirq/ops/controlled_gate.py\n@@ -115,7 +115,18 @@\n def _circuit_diagram_info_(self,\n args: protocols.CircuitDiagramInfoArgs\n ) -> protocols.CircuitDiagramInfo:\n- sub_info = protocols.circuit_diagram_info(self.sub_gate, args, None)\n+ sub_args = protocols.CircuitDiagramInfoArgs(\n+ known_qubit_count=(args.known_qubit_count - 1\n+ if args.known_qubit_count is not None else None),\n+ known_qubits=(args.known_qubits[1:]\n+ if args.known_qubits is not None else None),\n+ use_unicode_characters=args.use_unicode_characters,\n+ precision=args.precision,\n+ qubit_map=args.qubit_map\n+ )\n+ sub_info = protocols.circuit_diagram_info(self.sub_gate,\n+ sub_args,\n+ None)\n if sub_info is None:\n return NotImplemented\n return protocols.CircuitDiagramInfo(\n", "issue": "Make ControlledGate work with gates that only provide a decomposition\nThe following should work:\r\n\r\n```\r\nimport cirq\r\n\r\n\r\nclass G(cirq.TwoQubitGate):\r\n def _decompose_(self, qubits):\r\n a, b = qubits\r\n yield cirq.X(a)**0.5\r\n yield cirq.H(b)\r\n yield cirq.CZ(a, b)\r\n\r\n\r\ncg = cirq.ControlledGate(G())\r\nx, y, z = cirq.LineQubit.range(3)\r\nc = cirq.Circuit.from_ops(cg(x, y, z))\r\nprint(c.to_unitary_matrix())\r\n```\r\n\r\nbut currently it raises an exception:\r\n\r\n```\r\nOperation without a known matrix or decomposition: cirq.ControlledGate(sub_gate=[...G...].on(cirq.LineQubit(0), cirq.LineQubit(1), cirq.LineQubit(2))\r\n```\r\n\r\nbecause `ControlledGate` doesn't have a `_decompose_` method. This issue is to add that method.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Any, Union\n\nimport numpy as np\n\nfrom cirq import linalg, protocols, value\nfrom cirq.ops import raw_types, controlled_operation as cop\nfrom cirq.type_workarounds import NotImplementedType\n\n\[email protected]_equality\nclass ControlledGate(raw_types.Gate):\n \"\"\"Augments existing gates with a control qubit.\"\"\"\n\n def __init__(self, sub_gate: raw_types.Gate) -> None:\n \"\"\"Initializes the controlled gate.\n\n Args:\n sub_gate: The gate to add a control qubit to.\n \"\"\"\n self.sub_gate = sub_gate\n\n def num_qubits(self) -> int:\n return self.sub_gate.num_qubits() + 1\n\n def _decompose_(self, qubits):\n result = protocols.decompose_once_with_qubits(self.sub_gate,\n qubits[1:],\n NotImplemented)\n if result is NotImplemented:\n return NotImplemented\n\n return [cop.ControlledOperation(qubits[0], op) for op in result]\n\n def validate_args(self, qubits) -> None:\n if len(qubits) < 1:\n raise ValueError('No control qubit specified.')\n self.sub_gate.validate_args(qubits[1:])\n\n def _value_equality_values_(self):\n return self.sub_gate\n\n def _apply_unitary_(self, args: protocols.ApplyUnitaryArgs) -> np.ndarray:\n control = args.axes[0]\n rest = args.axes[1:]\n active = 
linalg.slice_for_qubits_equal_to([control], 1)\n sub_axes = [r - int(r > control) for r in rest]\n target_view = args.target_tensor[active]\n buffer_view = args.available_buffer[active]\n result = protocols.apply_unitary(\n self.sub_gate,\n protocols.ApplyUnitaryArgs(\n target_view,\n buffer_view,\n sub_axes),\n default=NotImplemented)\n\n if result is NotImplemented:\n return NotImplemented\n\n if result is target_view:\n return args.target_tensor\n\n if result is buffer_view:\n inactive = linalg.slice_for_qubits_equal_to([control], 0)\n args.available_buffer[inactive] = args.target_tensor[inactive]\n return args.available_buffer\n\n # HACK: assume they didn't somehow escape the slice view and edit the\n # rest of target_tensor.\n args.target_tensor[active] = result\n return args.target_tensor\n\n def _has_unitary_(self) -> bool:\n return protocols.has_unitary(self.sub_gate)\n\n def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:\n sub_matrix = protocols.unitary(self.sub_gate, None)\n if sub_matrix is None:\n return NotImplemented\n return linalg.block_diag(np.eye(sub_matrix.shape[0]), sub_matrix)\n\n def __pow__(self, exponent: Any) -> 'ControlledGate':\n new_sub_gate = protocols.pow(self.sub_gate,\n exponent,\n NotImplemented)\n if new_sub_gate is NotImplemented:\n return NotImplemented\n return ControlledGate(new_sub_gate)\n\n def _is_parameterized_(self):\n return protocols.is_parameterized(self.sub_gate)\n\n def _resolve_parameters_(self, param_resolver):\n new_sub_gate = protocols.resolve_parameters(self.sub_gate,\n param_resolver)\n return ControlledGate(new_sub_gate)\n\n def _trace_distance_bound_(self):\n return protocols.trace_distance_bound(self.sub_gate)\n\n def _circuit_diagram_info_(self,\n args: protocols.CircuitDiagramInfoArgs\n ) -> protocols.CircuitDiagramInfo:\n sub_info = protocols.circuit_diagram_info(self.sub_gate, args, None)\n if sub_info is None:\n return NotImplemented\n return protocols.CircuitDiagramInfo(\n wire_symbols=('@',) + sub_info.wire_symbols,\n exponent=sub_info.exponent)\n\n def __str__(self):\n return 'C' + str(self.sub_gate)\n\n def __repr__(self):\n return 'cirq.ControlledGate(sub_gate={!r})'.format(self.sub_gate)\n\n", "path": "cirq/ops/controlled_gate.py"}], "after_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Any, Union\n\nimport numpy as np\n\nfrom cirq import linalg, protocols, value\nfrom cirq.ops import raw_types, controlled_operation as cop\nfrom cirq.type_workarounds import NotImplementedType\n\n\[email protected]_equality\nclass ControlledGate(raw_types.Gate):\n \"\"\"Augments existing gates with a control qubit.\"\"\"\n\n def __init__(self, sub_gate: raw_types.Gate) -> None:\n \"\"\"Initializes the controlled gate.\n\n Args:\n sub_gate: The gate to add a control qubit to.\n \"\"\"\n self.sub_gate = sub_gate\n\n def num_qubits(self) -> int:\n return self.sub_gate.num_qubits() + 1\n\n def _decompose_(self, qubits):\n result = 
protocols.decompose_once_with_qubits(self.sub_gate,\n qubits[1:],\n NotImplemented)\n if result is NotImplemented:\n return NotImplemented\n\n return [cop.ControlledOperation(qubits[0], op) for op in result]\n\n def validate_args(self, qubits) -> None:\n if len(qubits) < 1:\n raise ValueError('No control qubit specified.')\n self.sub_gate.validate_args(qubits[1:])\n\n def _value_equality_values_(self):\n return self.sub_gate\n\n def _apply_unitary_(self, args: protocols.ApplyUnitaryArgs) -> np.ndarray:\n control = args.axes[0]\n rest = args.axes[1:]\n active = linalg.slice_for_qubits_equal_to([control], 1)\n sub_axes = [r - int(r > control) for r in rest]\n target_view = args.target_tensor[active]\n buffer_view = args.available_buffer[active]\n result = protocols.apply_unitary(\n self.sub_gate,\n protocols.ApplyUnitaryArgs(\n target_view,\n buffer_view,\n sub_axes),\n default=NotImplemented)\n\n if result is NotImplemented:\n return NotImplemented\n\n if result is target_view:\n return args.target_tensor\n\n if result is buffer_view:\n inactive = linalg.slice_for_qubits_equal_to([control], 0)\n args.available_buffer[inactive] = args.target_tensor[inactive]\n return args.available_buffer\n\n # HACK: assume they didn't somehow escape the slice view and edit the\n # rest of target_tensor.\n args.target_tensor[active] = result\n return args.target_tensor\n\n def _has_unitary_(self) -> bool:\n return protocols.has_unitary(self.sub_gate)\n\n def _unitary_(self) -> Union[np.ndarray, NotImplementedType]:\n sub_matrix = protocols.unitary(self.sub_gate, None)\n if sub_matrix is None:\n return NotImplemented\n return linalg.block_diag(np.eye(sub_matrix.shape[0]), sub_matrix)\n\n def __pow__(self, exponent: Any) -> 'ControlledGate':\n new_sub_gate = protocols.pow(self.sub_gate,\n exponent,\n NotImplemented)\n if new_sub_gate is NotImplemented:\n return NotImplemented\n return ControlledGate(new_sub_gate)\n\n def _is_parameterized_(self):\n return protocols.is_parameterized(self.sub_gate)\n\n def _resolve_parameters_(self, param_resolver):\n new_sub_gate = protocols.resolve_parameters(self.sub_gate,\n param_resolver)\n return ControlledGate(new_sub_gate)\n\n def _trace_distance_bound_(self):\n return protocols.trace_distance_bound(self.sub_gate)\n\n def _circuit_diagram_info_(self,\n args: protocols.CircuitDiagramInfoArgs\n ) -> protocols.CircuitDiagramInfo:\n sub_args = protocols.CircuitDiagramInfoArgs(\n known_qubit_count=(args.known_qubit_count - 1\n if args.known_qubit_count is not None else None),\n known_qubits=(args.known_qubits[1:]\n if args.known_qubits is not None else None),\n use_unicode_characters=args.use_unicode_characters,\n precision=args.precision,\n qubit_map=args.qubit_map\n )\n sub_info = protocols.circuit_diagram_info(self.sub_gate,\n sub_args,\n None)\n if sub_info is None:\n return NotImplemented\n return protocols.CircuitDiagramInfo(\n wire_symbols=('@',) + sub_info.wire_symbols,\n exponent=sub_info.exponent)\n\n def __str__(self):\n return 'C' + str(self.sub_gate)\n\n def __repr__(self):\n return 'cirq.ControlledGate(sub_gate={!r})'.format(self.sub_gate)\n\n", "path": "cirq/ops/controlled_gate.py"}]} | 1,762 | 252 |
gh_patches_debug_18024 | rasdani/github-patches | git_diff | pretalx__pretalx-643 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Having only one submission type shows all submission types
## Current Behavior
If an event only uses one submission type, the submission form shows submission types for ALL events
## Expected Behavior
Only submission types for the current conference should populate the dropdown.
Added bonus: if there is only one submission type, the field should be rendered as read-only text rather than a dropdown.
## Steps to Reproduce
1. https://pretalx.com/orga/event/[YOUREVENT]/cfp/types/
2. Enter only one type
3. Navigate to https://pretalx.com/[YOUREVENT]/submit/
4. Click the dropdown for "Submission type"
<img width="801" alt="Screen Shot 2019-03-30 at 5 35 39 pm" src="https://user-images.githubusercontent.com/813732/55272568-53aeeb80-5312-11e9-8de9-e101aa53034f.png">
### Work around
Enter more than one submission type.
## Your Environment
Pretalx hosted environment
--- END ISSUE ---
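The behaviour described above is a control-flow bug rather than a data bug: when exactly one usable submission type is found, the form sets the field's initial value but never narrows the field's choices, so the widget falls back to every submission type it knows about. A plain-Python stand-in for that flow (hypothetical helper names, not the actual Django form code):
```python
def usable_type_choices(event_type_pks, all_type_pks, instance_pk=None):
    """Mimic the form logic: returns the choices the dropdown would show."""
    pks = set(event_type_pks)
    if instance_pk is not None:
        pks.add(instance_pk)
    choices = list(all_type_pks)        # stand-in for the field's default queryset
    if len(pks) == 1:
        initial = pks.pop()             # only the initial value is set here;
        _ = initial                     # the choices are never narrowed -> the bug
    else:
        choices = [pk for pk in all_type_pks if pk in pks]
    return choices


print(usable_type_choices([7], [1, 2, 3, 7]))     # -> [1, 2, 3, 7]  (other events leak in)
print(usable_type_choices([7, 8], [1, 2, 7, 8]))  # -> [7, 8]
```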
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pretalx/submission/forms/submission.py`
Content:
```
1 import os
2
3 from django import forms
4 from django.conf import settings
5 from django.utils.timezone import now
6 from django.utils.translation import ugettext_lazy as _
7
8 from pretalx.common.forms.fields import IMAGE_EXTENSIONS
9 from pretalx.common.forms.widgets import CheckboxMultiDropdown
10 from pretalx.common.mixins.forms import PublicContent, RequestRequire
11 from pretalx.submission.models import Submission, SubmissionStates
12
13
14 class InfoForm(RequestRequire, PublicContent, forms.ModelForm):
15 additional_speaker = forms.EmailField(
16 label=_('Additional Speaker'),
17 help_text=_('If you have a co-speaker, please add their email address here, and we will invite them to create an account. If you have more than one co-speaker, you can add more speakers after finishing the submission process.'),
18 required=False,
19 )
20
21 def __init__(self, event, **kwargs):
22 self.event = event
23 self.readonly = kwargs.pop('readonly', False)
24 instance = kwargs.get('instance')
25 initial = kwargs.pop('initial', {})
26 initial['submission_type'] = getattr(
27 instance, 'submission_type', self.event.cfp.default_type
28 )
29 initial['content_locale'] = getattr(
30 instance, 'content_locale', self.event.locale
31 )
32
33 super().__init__(initial=initial, **kwargs)
34
35 if 'abstract' in self.fields:
36 self.fields['abstract'].widget.attrs['rows'] = 2
37 if 'track' in self.fields:
38 if not instance or instance.state == SubmissionStates.SUBMITTED:
39 self.fields['track'].queryset = event.tracks.all()
40 elif not event.settings.use_tracks or instance and instance.state != SubmissionStates.SUBMITTED:
41 self.fields.pop('track')
42 if instance and instance.pk:
43 self.fields.pop('additional_speaker')
44
45 self._set_submission_types(instance=instance)
46 self._set_locales()
47
48 if not event.settings.present_multiple_times:
49 self.fields.pop('slot_count', None)
50 elif 'slot_count' in self.fields and instance and instance.state in [SubmissionStates.ACCEPTED, SubmissionStates.CONFIRMED]:
51 self.fields['slot_count'].disabled = True
52 self.fields['slot_count'].help_text += ' ' + str(_('Please contact the organisers if you want to change how often you\'re presenting this submission.'))
53
54 if self.readonly:
55 for f in self.fields.values():
56 f.disabled = True
57
58 def _set_submission_types(self, instance=None):
59 _now = now()
60 if (
61 not self.event.cfp.deadline or self.event.cfp.deadline >= _now
62 ): # No global deadline or still open
63 types = self.event.submission_types.exclude(deadline__lt=_now)
64 else:
65 types = self.event.submission_types.filter(deadline__gte=_now)
66 pks = set(types.values_list('pk', flat=True))
67 if instance and instance.pk:
68 pks |= {instance.submission_type.pk}
69 if len(pks) == 1:
70 self.fields['submission_type'].initial = self.event.submission_types.get(pk=pks.pop())
71 self.fields['content_locale'].widget = forms.HiddenInput()
72 self.fields['content_locale'].disabled = True
73 else:
74 self.fields['submission_type'].queryset = self.event.submission_types.filter(
75 pk__in=pks
76 )
77
78 def _set_locales(self):
79 if len(self.event.locales) == 1:
80 self.fields['content_locale'].initial = self.event.locales[0]
81 self.fields['content_locale'].widget = forms.HiddenInput()
82 self.fields['content_locale'].disabled = True
83 else:
84 locale_names = dict(settings.LANGUAGES)
85 self.fields['content_locale'].choices = [
86 (a, locale_names[a]) for a in self.event.locales
87 ]
88
89 def clean_image(self):
90 image = self.cleaned_data.get('image')
91 if image:
92 extension = os.path.splitext(image.name)[1].lower()
93 if extension not in IMAGE_EXTENSIONS:
94 raise forms.ValidationError(
95 _(
96 "This filetype is not allowed, it has to be one of the following: "
97 )
98 + ', '.join(IMAGE_EXTENSIONS)
99 )
100 return image
101
102 class Meta:
103 model = Submission
104 fields = [
105 'title',
106 'submission_type',
107 'track',
108 'content_locale',
109 'abstract',
110 'description',
111 'notes',
112 'slot_count',
113 'do_not_record',
114 'image',
115 ]
116 request_require = [
117 'abstract',
118 'description',
119 'notes',
120 'image',
121 'do_not_record',
122 'track',
123 ]
124 public_fields = ['title', 'abstract', 'description', 'image']
125
126
127 class SubmissionFilterForm(forms.Form):
128 state = forms.MultipleChoiceField(
129 choices=SubmissionStates.get_choices(),
130 required=False,
131 widget=CheckboxMultiDropdown,
132 )
133 submission_type = forms.MultipleChoiceField(
134 required=False, widget=CheckboxMultiDropdown
135 )
136 track = forms.MultipleChoiceField(
137 required=False, widget=CheckboxMultiDropdown
138 )
139
140 def __init__(self, event, *args, **kwargs):
141 self.event = event
142 usable_states = kwargs.pop('usable_states', None)
143 super().__init__(*args, **kwargs)
144 sub_count = (
145 lambda x: event.submissions(manager='all_objects').filter(state=x).count()
146 ) # noqa
147 type_count = (
148 lambda x: event.submissions(manager='all_objects')
149 .filter(submission_type=x) # noqa
150 .count()
151 )
152 track_count = (
153 lambda x: event.submissions(manager='all_objects')
154 .filter(track=x) #noqa
155 .count()
156 )
157 self.fields['submission_type'].choices = [
158 (sub_type.pk, f'{str(sub_type)} ({type_count(sub_type.pk)})')
159 for sub_type in event.submission_types.all()
160 ]
161 self.fields['submission_type'].widget.attrs['title'] = _('Submission types')
162 if usable_states:
163 usable_states = [
164 choice
165 for choice in self.fields['state'].choices
166 if choice[0] in usable_states
167 ]
168 else:
169 usable_states = self.fields['state'].choices
170 self.fields['state'].choices = [
171 (choice[0], f'{choice[1].capitalize()} ({sub_count(choice[0])})')
172 for choice in usable_states
173 ]
174 self.fields['state'].widget.attrs['title'] = _('Submission states')
175 self.fields['track'].choices = [
176 (track.pk, f'{track.name} ({track_count(track.pk)})')
177 for track in event.tracks.all()
178 ]
179 self.fields['track'].widget.attrs['title'] = _('Tracks')
180
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pretalx/submission/forms/submission.py b/src/pretalx/submission/forms/submission.py
--- a/src/pretalx/submission/forms/submission.py
+++ b/src/pretalx/submission/forms/submission.py
@@ -66,14 +66,12 @@
pks = set(types.values_list('pk', flat=True))
if instance and instance.pk:
pks |= {instance.submission_type.pk}
+ self.fields['submission_type'].queryset = self.event.submission_types.filter(
+ pk__in=pks
+ )
if len(pks) == 1:
self.fields['submission_type'].initial = self.event.submission_types.get(pk=pks.pop())
- self.fields['content_locale'].widget = forms.HiddenInput()
- self.fields['content_locale'].disabled = True
- else:
- self.fields['submission_type'].queryset = self.event.submission_types.filter(
- pk__in=pks
- )
+ self.fields['submission_type'].widget = forms.HiddenInput()
def _set_locales(self):
if len(self.event.locales) == 1:
| {"golden_diff": "diff --git a/src/pretalx/submission/forms/submission.py b/src/pretalx/submission/forms/submission.py\n--- a/src/pretalx/submission/forms/submission.py\n+++ b/src/pretalx/submission/forms/submission.py\n@@ -66,14 +66,12 @@\n pks = set(types.values_list('pk', flat=True))\n if instance and instance.pk:\n pks |= {instance.submission_type.pk}\n+ self.fields['submission_type'].queryset = self.event.submission_types.filter(\n+ pk__in=pks\n+ )\n if len(pks) == 1:\n self.fields['submission_type'].initial = self.event.submission_types.get(pk=pks.pop())\n- self.fields['content_locale'].widget = forms.HiddenInput()\n- self.fields['content_locale'].disabled = True\n- else:\n- self.fields['submission_type'].queryset = self.event.submission_types.filter(\n- pk__in=pks\n- )\n+ self.fields['submission_type'].widget = forms.HiddenInput()\n \n def _set_locales(self):\n if len(self.event.locales) == 1:\n", "issue": "Having only one submission type shows all submission types\n## Current Behavior\r\nIf an event only uses one submission type, the submission form shows submission types for ALL events\r\n\r\n\r\n## Expected Behavior\r\nOnly submission types for the current conference should populate the dropdown. \r\nAdded bonus: if only one, the field should be read-only/text. \r\n\r\n## Steps to Reproduce\r\n\r\n1. https://pretalx.com/orga/event/[YOUREVENT]/cfp/types/\r\n2. Enter only one type\r\n3. Navigate to https://pretalx.com/[YOUREVENT]/submit/\r\n4. Click the dropdown for \"Submission type\"\r\n\r\n<img width=\"801\" alt=\"Screen Shot 2019-03-30 at 5 35 39 pm\" src=\"https://user-images.githubusercontent.com/813732/55272568-53aeeb80-5312-11e9-8de9-e101aa53034f.png\">\r\n\r\n\r\n### Work around\r\n\r\nEnter more than one submission type. \r\n\r\n## Your Environment\r\n\r\nPretalx hosted environment\r\n\nHaving only one submission type shows all submission types\n## Current Behavior\r\nIf an event only uses one submission type, the submission form shows submission types for ALL events\r\n\r\n\r\n## Expected Behavior\r\nOnly submission types for the current conference should populate the dropdown. \r\nAdded bonus: if only one, the field should be read-only/text. \r\n\r\n## Steps to Reproduce\r\n\r\n1. https://pretalx.com/orga/event/[YOUREVENT]/cfp/types/\r\n2. Enter only one type\r\n3. Navigate to https://pretalx.com/[YOUREVENT]/submit/\r\n4. Click the dropdown for \"Submission type\"\r\n\r\n<img width=\"801\" alt=\"Screen Shot 2019-03-30 at 5 35 39 pm\" src=\"https://user-images.githubusercontent.com/813732/55272568-53aeeb80-5312-11e9-8de9-e101aa53034f.png\">\r\n\r\n\r\n### Work around\r\n\r\nEnter more than one submission type. \r\n\r\n## Your Environment\r\n\r\nPretalx hosted environment\r\n\n", "before_files": [{"content": "import os\n\nfrom django import forms\nfrom django.conf import settings\nfrom django.utils.timezone import now\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom pretalx.common.forms.fields import IMAGE_EXTENSIONS\nfrom pretalx.common.forms.widgets import CheckboxMultiDropdown\nfrom pretalx.common.mixins.forms import PublicContent, RequestRequire\nfrom pretalx.submission.models import Submission, SubmissionStates\n\n\nclass InfoForm(RequestRequire, PublicContent, forms.ModelForm):\n additional_speaker = forms.EmailField(\n label=_('Additional Speaker'),\n help_text=_('If you have a co-speaker, please add their email address here, and we will invite them to create an account. 
If you have more than one co-speaker, you can add more speakers after finishing the submission process.'),\n required=False,\n )\n\n def __init__(self, event, **kwargs):\n self.event = event\n self.readonly = kwargs.pop('readonly', False)\n instance = kwargs.get('instance')\n initial = kwargs.pop('initial', {})\n initial['submission_type'] = getattr(\n instance, 'submission_type', self.event.cfp.default_type\n )\n initial['content_locale'] = getattr(\n instance, 'content_locale', self.event.locale\n )\n\n super().__init__(initial=initial, **kwargs)\n\n if 'abstract' in self.fields:\n self.fields['abstract'].widget.attrs['rows'] = 2\n if 'track' in self.fields:\n if not instance or instance.state == SubmissionStates.SUBMITTED:\n self.fields['track'].queryset = event.tracks.all()\n elif not event.settings.use_tracks or instance and instance.state != SubmissionStates.SUBMITTED:\n self.fields.pop('track')\n if instance and instance.pk:\n self.fields.pop('additional_speaker')\n\n self._set_submission_types(instance=instance)\n self._set_locales()\n\n if not event.settings.present_multiple_times:\n self.fields.pop('slot_count', None)\n elif 'slot_count' in self.fields and instance and instance.state in [SubmissionStates.ACCEPTED, SubmissionStates.CONFIRMED]:\n self.fields['slot_count'].disabled = True\n self.fields['slot_count'].help_text += ' ' + str(_('Please contact the organisers if you want to change how often you\\'re presenting this submission.'))\n\n if self.readonly:\n for f in self.fields.values():\n f.disabled = True\n\n def _set_submission_types(self, instance=None):\n _now = now()\n if (\n not self.event.cfp.deadline or self.event.cfp.deadline >= _now\n ): # No global deadline or still open\n types = self.event.submission_types.exclude(deadline__lt=_now)\n else:\n types = self.event.submission_types.filter(deadline__gte=_now)\n pks = set(types.values_list('pk', flat=True))\n if instance and instance.pk:\n pks |= {instance.submission_type.pk}\n if len(pks) == 1:\n self.fields['submission_type'].initial = self.event.submission_types.get(pk=pks.pop())\n self.fields['content_locale'].widget = forms.HiddenInput()\n self.fields['content_locale'].disabled = True\n else:\n self.fields['submission_type'].queryset = self.event.submission_types.filter(\n pk__in=pks\n )\n\n def _set_locales(self):\n if len(self.event.locales) == 1:\n self.fields['content_locale'].initial = self.event.locales[0]\n self.fields['content_locale'].widget = forms.HiddenInput()\n self.fields['content_locale'].disabled = True\n else:\n locale_names = dict(settings.LANGUAGES)\n self.fields['content_locale'].choices = [\n (a, locale_names[a]) for a in self.event.locales\n ]\n\n def clean_image(self):\n image = self.cleaned_data.get('image')\n if image:\n extension = os.path.splitext(image.name)[1].lower()\n if extension not in IMAGE_EXTENSIONS:\n raise forms.ValidationError(\n _(\n \"This filetype is not allowed, it has to be one of the following: \"\n )\n + ', '.join(IMAGE_EXTENSIONS)\n )\n return image\n\n class Meta:\n model = Submission\n fields = [\n 'title',\n 'submission_type',\n 'track',\n 'content_locale',\n 'abstract',\n 'description',\n 'notes',\n 'slot_count',\n 'do_not_record',\n 'image',\n ]\n request_require = [\n 'abstract',\n 'description',\n 'notes',\n 'image',\n 'do_not_record',\n 'track',\n ]\n public_fields = ['title', 'abstract', 'description', 'image']\n\n\nclass SubmissionFilterForm(forms.Form):\n state = forms.MultipleChoiceField(\n choices=SubmissionStates.get_choices(),\n required=False,\n 
widget=CheckboxMultiDropdown,\n )\n submission_type = forms.MultipleChoiceField(\n required=False, widget=CheckboxMultiDropdown\n )\n track = forms.MultipleChoiceField(\n required=False, widget=CheckboxMultiDropdown\n )\n\n def __init__(self, event, *args, **kwargs):\n self.event = event\n usable_states = kwargs.pop('usable_states', None)\n super().__init__(*args, **kwargs)\n sub_count = (\n lambda x: event.submissions(manager='all_objects').filter(state=x).count()\n ) # noqa\n type_count = (\n lambda x: event.submissions(manager='all_objects')\n .filter(submission_type=x) # noqa\n .count()\n )\n track_count = (\n lambda x: event.submissions(manager='all_objects')\n .filter(track=x) #noqa\n .count()\n )\n self.fields['submission_type'].choices = [\n (sub_type.pk, f'{str(sub_type)} ({type_count(sub_type.pk)})')\n for sub_type in event.submission_types.all()\n ]\n self.fields['submission_type'].widget.attrs['title'] = _('Submission types')\n if usable_states:\n usable_states = [\n choice\n for choice in self.fields['state'].choices\n if choice[0] in usable_states\n ]\n else:\n usable_states = self.fields['state'].choices\n self.fields['state'].choices = [\n (choice[0], f'{choice[1].capitalize()} ({sub_count(choice[0])})')\n for choice in usable_states\n ]\n self.fields['state'].widget.attrs['title'] = _('Submission states')\n self.fields['track'].choices = [\n (track.pk, f'{track.name} ({track_count(track.pk)})')\n for track in event.tracks.all()\n ]\n self.fields['track'].widget.attrs['title'] = _('Tracks')\n", "path": "src/pretalx/submission/forms/submission.py"}], "after_files": [{"content": "import os\n\nfrom django import forms\nfrom django.conf import settings\nfrom django.utils.timezone import now\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom pretalx.common.forms.fields import IMAGE_EXTENSIONS\nfrom pretalx.common.forms.widgets import CheckboxMultiDropdown\nfrom pretalx.common.mixins.forms import PublicContent, RequestRequire\nfrom pretalx.submission.models import Submission, SubmissionStates\n\n\nclass InfoForm(RequestRequire, PublicContent, forms.ModelForm):\n additional_speaker = forms.EmailField(\n label=_('Additional Speaker'),\n help_text=_('If you have a co-speaker, please add their email address here, and we will invite them to create an account. 
If you have more than one co-speaker, you can add more speakers after finishing the submission process.'),\n required=False,\n )\n\n def __init__(self, event, **kwargs):\n self.event = event\n self.readonly = kwargs.pop('readonly', False)\n instance = kwargs.get('instance')\n initial = kwargs.pop('initial', {})\n initial['submission_type'] = getattr(\n instance, 'submission_type', self.event.cfp.default_type\n )\n initial['content_locale'] = getattr(\n instance, 'content_locale', self.event.locale\n )\n\n super().__init__(initial=initial, **kwargs)\n\n if 'abstract' in self.fields:\n self.fields['abstract'].widget.attrs['rows'] = 2\n if 'track' in self.fields:\n if not instance or instance.state == SubmissionStates.SUBMITTED:\n self.fields['track'].queryset = event.tracks.all()\n elif not event.settings.use_tracks or instance and instance.state != SubmissionStates.SUBMITTED:\n self.fields.pop('track')\n if instance and instance.pk:\n self.fields.pop('additional_speaker')\n\n self._set_submission_types(instance=instance)\n self._set_locales()\n\n if not event.settings.present_multiple_times:\n self.fields.pop('slot_count', None)\n elif 'slot_count' in self.fields and instance and instance.state in [SubmissionStates.ACCEPTED, SubmissionStates.CONFIRMED]:\n self.fields['slot_count'].disabled = True\n self.fields['slot_count'].help_text += ' ' + str(_('Please contact the organisers if you want to change how often you\\'re presenting this submission.'))\n\n if self.readonly:\n for f in self.fields.values():\n f.disabled = True\n\n def _set_submission_types(self, instance=None):\n _now = now()\n if (\n not self.event.cfp.deadline or self.event.cfp.deadline >= _now\n ): # No global deadline or still open\n types = self.event.submission_types.exclude(deadline__lt=_now)\n else:\n types = self.event.submission_types.filter(deadline__gte=_now)\n pks = set(types.values_list('pk', flat=True))\n if instance and instance.pk:\n pks |= {instance.submission_type.pk}\n self.fields['submission_type'].queryset = self.event.submission_types.filter(\n pk__in=pks\n )\n if len(pks) == 1:\n self.fields['submission_type'].initial = self.event.submission_types.get(pk=pks.pop())\n self.fields['submission_type'].widget = forms.HiddenInput()\n\n def _set_locales(self):\n if len(self.event.locales) == 1:\n self.fields['content_locale'].initial = self.event.locales[0]\n self.fields['content_locale'].widget = forms.HiddenInput()\n self.fields['content_locale'].disabled = True\n else:\n locale_names = dict(settings.LANGUAGES)\n self.fields['content_locale'].choices = [\n (a, locale_names[a]) for a in self.event.locales\n ]\n\n def clean_image(self):\n image = self.cleaned_data.get('image')\n if image:\n extension = os.path.splitext(image.name)[1].lower()\n if extension not in IMAGE_EXTENSIONS:\n raise forms.ValidationError(\n _(\n \"This filetype is not allowed, it has to be one of the following: \"\n )\n + ', '.join(IMAGE_EXTENSIONS)\n )\n return image\n\n class Meta:\n model = Submission\n fields = [\n 'title',\n 'submission_type',\n 'track',\n 'content_locale',\n 'abstract',\n 'description',\n 'notes',\n 'slot_count',\n 'do_not_record',\n 'image',\n ]\n request_require = [\n 'abstract',\n 'description',\n 'notes',\n 'image',\n 'do_not_record',\n 'track',\n ]\n public_fields = ['title', 'abstract', 'description', 'image']\n\n\nclass SubmissionFilterForm(forms.Form):\n state = forms.MultipleChoiceField(\n choices=SubmissionStates.get_choices(),\n required=False,\n widget=CheckboxMultiDropdown,\n )\n submission_type = 
forms.MultipleChoiceField(\n required=False, widget=CheckboxMultiDropdown\n )\n track = forms.MultipleChoiceField(\n required=False, widget=CheckboxMultiDropdown\n )\n\n def __init__(self, event, *args, **kwargs):\n self.event = event\n usable_states = kwargs.pop('usable_states', None)\n super().__init__(*args, **kwargs)\n sub_count = (\n lambda x: event.submissions(manager='all_objects').filter(state=x).count()\n ) # noqa\n type_count = (\n lambda x: event.submissions(manager='all_objects')\n .filter(submission_type=x) # noqa\n .count()\n )\n track_count = (\n lambda x: event.submissions(manager='all_objects')\n .filter(track=x) #noqa\n .count()\n )\n self.fields['submission_type'].choices = [\n (sub_type.pk, f'{str(sub_type)} ({type_count(sub_type.pk)})')\n for sub_type in event.submission_types.all()\n ]\n self.fields['submission_type'].widget.attrs['title'] = _('Submission types')\n if usable_states:\n usable_states = [\n choice\n for choice in self.fields['state'].choices\n if choice[0] in usable_states\n ]\n else:\n usable_states = self.fields['state'].choices\n self.fields['state'].choices = [\n (choice[0], f'{choice[1].capitalize()} ({sub_count(choice[0])})')\n for choice in usable_states\n ]\n self.fields['state'].widget.attrs['title'] = _('Submission states')\n self.fields['track'].choices = [\n (track.pk, f'{track.name} ({track_count(track.pk)})')\n for track in event.tracks.all()\n ]\n self.fields['track'].widget.attrs['title'] = _('Tracks')\n", "path": "src/pretalx/submission/forms/submission.py"}]} | 2,633 | 255 |
gh_patches_debug_133 | rasdani/github-patches | git_diff | holoviz__panel-752 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
panel/examples/apps/django2/ example doesn't work
The django2 example doesn't work, at least for Django 2.2: the interactive plot doesn't show up, and there are no clear error messages either. However, the same example provided by ParamBokeh works fine. But I prefer Panel if this problem can be solved.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/apps/django2/sliders/bk_sliders.py`
Content:
```
1 import panel as pn
2
3 from .sinewave import SineWave
4
5 def app(doc):
6 sw = SineWave()
7 row = pn.Row(sw.param, sw.plot)
8 row._get_root(doc)
9
```
--- END FILES ---
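For context on the failure mode: Panel layouts are meant to be attached to a Bokeh `Document` through Panel's public server API, while the private `_get_root(doc)` helper only builds the Bokeh model tree without registering it as a root of the served document, which matches the silent blank-page symptom in the issue. A minimal sketch of the conventional entry point (same module layout as the file above; `server_doc` is assumed to be available in the Panel version in use) is:

```python
import panel as pn

from .sinewave import SineWave  # same relative import as the example app


def app(doc):
    """Bokeh/Django server entry point."""
    sw = SineWave()
    row = pn.Row(sw.param, sw.plot)
    # server_doc() renders the layout and adds it as a root of `doc`,
    # so the embedding Django view actually has something to display.
    row.server_doc(doc)
```

If the page still comes up empty after such a change, the usual remaining suspects are missing Panel/Bokeh JS resources in the Django template or mismatched panel/bokeh versions.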
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/apps/django2/sliders/bk_sliders.py b/examples/apps/django2/sliders/bk_sliders.py
--- a/examples/apps/django2/sliders/bk_sliders.py
+++ b/examples/apps/django2/sliders/bk_sliders.py
@@ -5,4 +5,4 @@
def app(doc):
sw = SineWave()
row = pn.Row(sw.param, sw.plot)
- row._get_root(doc)
+ row.server_doc(doc)
| {"golden_diff": "diff --git a/examples/apps/django2/sliders/bk_sliders.py b/examples/apps/django2/sliders/bk_sliders.py\n--- a/examples/apps/django2/sliders/bk_sliders.py\n+++ b/examples/apps/django2/sliders/bk_sliders.py\n@@ -5,4 +5,4 @@\n def app(doc):\n sw = SineWave()\n row = pn.Row(sw.param, sw.plot)\n- row._get_root(doc)\n+ row.server_doc(doc)\n", "issue": "panel/examples/apps/django2/ example doesn't work\nThe django2 example doesn't work at least for Django 2.2. The interactive plot doesn't show up and there are no clear error messages either. However, the same example provided by ParamBokeh works fine. But I prefer Panel if this problem can be solved. \n", "before_files": [{"content": "import panel as pn\n\nfrom .sinewave import SineWave\n\ndef app(doc):\n sw = SineWave()\n row = pn.Row(sw.param, sw.plot)\n row._get_root(doc)\n", "path": "examples/apps/django2/sliders/bk_sliders.py"}], "after_files": [{"content": "import panel as pn\n\nfrom .sinewave import SineWave\n\ndef app(doc):\n sw = SineWave()\n row = pn.Row(sw.param, sw.plot)\n row.server_doc(doc)\n", "path": "examples/apps/django2/sliders/bk_sliders.py"}]} | 390 | 107 |
gh_patches_debug_23962 | rasdani/github-patches | git_diff | ansible__ansible-42068 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
service_facts returns misleading state on systemd hosts
##### SUMMARY
The system module service_facts returns misleading state information on systemd hosts.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
service_facts
##### ANSIBLE VERSION
```
ansible 2.5.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/USER/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Ansible controller: RHEL 7.5
Remote host: RHEL 7.4
##### STEPS TO REPRODUCE
The service_facts module returns one of two states for each service it discovers: either "running" or "stopped".
```json
{
"ansible_facts": {
"services": {
"rsyncd.service": {
"name": "rsyncd.service",
"source": "systemd",
"state": "stopped"
},
"rsyslog.service": {
"name": "rsyslog.service",
"source": "systemd",
"state": "running"
},
...
}
}
}
```
On systemd hosts, the module obtains state information via `systemctl list-unit-files --type=service` (see [line 181 in module source code](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/system/service_facts.py#L181)). However, this command does not report running/stopped status. Instead, it reports enabled/disabled status, which is not the same thing. Enabled/disabled determines whether the service will be started by systemd _on reboot_, but says nothing about whether it is currently running or stopped.
Here are two examples:
**1. Service disabled & started:**
```bash
# On remote host
systemctl disable <service>
systemctl start <service>
systemctl status <service>
# systemctl reports: loaded, disabled, active/running
```
**2. Service enabled & stopped:**
```bash
# On remote host
systemctl enable <service>
systemctl stop <service>
systemctl status <service>
# systemctl reports: loaded, enabled, inactive/dead
```
**On the Ansible controller,** use the service_facts module to query the <service> just configured on the remote host.
##### EXPECTED RESULTS
**1. Service disabled & started:** service_facts should report "running"
**2. Service enabled & stopped:** service_facts should report "stopped"
##### ACTUAL RESULTS
**1. Service disabled & started:** service_facts reports "stopped"
**2. Service enabled & stopped:** service_facts reports "running"
These misleading reports are caused by the test in [line 186 of the source code](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/system/service_facts.py#L186). The service_facts module explicitly tests whether the service is "enabled" and equates that with running/stopped, which is incorrect for systemd/systemctl.
--- END ISSUE ---
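The enablement/activity split described in the report is easy to confirm programmatically, since systemd exposes the two properties through separate query subcommands. A small sketch (assuming a systemd host with `systemctl` on PATH; the function name is illustrative) is:

```python
import subprocess


def unit_status(unit):
    """Return (enablement, activity) for a systemd unit: two independent properties."""
    def query(subcommand):
        proc = subprocess.run(["systemctl", subcommand, unit],
                              capture_output=True, text=True)
        return proc.stdout.strip()

    enabled = query("is-enabled")   # 'enabled'/'disabled': start at boot?
    active = query("is-active")     # 'active'/'inactive': running right now?
    return enabled, active


# A started-but-disabled service yields ('disabled', 'active'); an
# enabled-but-stopped one yields ('enabled', 'inactive'). These are exactly
# the two cases the module misreports when it keys off `list-unit-files`.
```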
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/modules/system/service_facts.py`
Content:
```
1 #!/usr/bin/python
2 # (c) 2017, Ansible Project
3 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
4
5 # originally copied from AWX's scan_services module to bring this functionality
6 # into Core
7
8 from __future__ import absolute_import, division, print_function
9 __metaclass__ = type
10
11
12 ANSIBLE_METADATA = {'metadata_version': '1.1',
13 'status': ['preview'],
14 'supported_by': 'community'}
15
16 DOCUMENTATION = '''
17 ---
18 module: service_facts
19 short_description: Return service state information as fact data
20 description:
21 - Return service state information as fact data for various service management utilities
22 version_added: "2.5"
23 requirements: ["Any of the following supported init systems: systemd, sysv, upstart"]
24
25 author:
26 - Matthew Jones
27 - Adam Miller (@maxamillion)
28 '''
29
30 EXAMPLES = '''
31 - name: populate service facts
32 service_facts:
33
34 - debug:
35 var: ansible_facts.services
36
37 '''
38
39 RETURN = '''
40 ansible_facts:
41 description: facts to add to ansible_facts about the services on the system
42 returned: always
43 type: complex
44 contains:
45 "services": {
46 "network": {
47 "source": "sysv",
48 "state": "running",
49 "name": "network"
50 },
51 arp-ethers.service: {
52 "source": "systemd",
53 "state": "stopped",
54 "name": "arp-ethers.service"
55 }
56 }
57 '''
58
59
60 import re
61 from ansible.module_utils.basic import AnsibleModule
62
63
64 class BaseService(object):
65
66 def __init__(self, module):
67 self.module = module
68 self.incomplete_warning = False
69
70
71 class ServiceScanService(BaseService):
72
73 def gather_services(self):
74 services = {}
75 service_path = self.module.get_bin_path("service")
76 if service_path is None:
77 return None
78 initctl_path = self.module.get_bin_path("initctl")
79 chkconfig_path = self.module.get_bin_path("chkconfig")
80
81 # sysvinit
82 if service_path is not None and chkconfig_path is None:
83 rc, stdout, stderr = self.module.run_command("%s --status-all 2>&1 | grep -E \"\\[ (\\+|\\-) \\]\"" % service_path, use_unsafe_shell=True)
84 for line in stdout.split("\n"):
85 line_data = line.split()
86 if len(line_data) < 4:
87 continue # Skipping because we expected more data
88 service_name = " ".join(line_data[3:])
89 if line_data[1] == "+":
90 service_state = "running"
91 else:
92 service_state = "stopped"
93 services[service_name] = {"name": service_name, "state": service_state, "source": "sysv"}
94
95 # Upstart
96 if initctl_path is not None and chkconfig_path is None:
97 p = re.compile(r'^\s?(?P<name>.*)\s(?P<goal>\w+)\/(?P<state>\w+)(\,\sprocess\s(?P<pid>[0-9]+))?\s*$')
98 rc, stdout, stderr = self.module.run_command("%s list" % initctl_path)
99 real_stdout = stdout.replace("\r", "")
100 for line in real_stdout.split("\n"):
101 m = p.match(line)
102 if not m:
103 continue
104 service_name = m.group('name')
105 service_goal = m.group('goal')
106 service_state = m.group('state')
107 if m.group('pid'):
108 pid = m.group('pid')
109 else:
110 pid = None # NOQA
111 payload = {"name": service_name, "state": service_state, "goal": service_goal, "source": "upstart"}
112 services[service_name] = payload
113
114 # RH sysvinit
115 elif chkconfig_path is not None:
116 # print '%s --status-all | grep -E "is (running|stopped)"' % service_path
117 p = re.compile(
118 r'(?P<service>.*?)\s+[0-9]:(?P<rl0>on|off)\s+[0-9]:(?P<rl1>on|off)\s+[0-9]:(?P<rl2>on|off)\s+'
119 r'[0-9]:(?P<rl3>on|off)\s+[0-9]:(?P<rl4>on|off)\s+[0-9]:(?P<rl5>on|off)\s+[0-9]:(?P<rl6>on|off)')
120 rc, stdout, stderr = self.module.run_command('%s' % chkconfig_path, use_unsafe_shell=True)
121 # Check for special cases where stdout does not fit pattern
122 match_any = False
123 for line in stdout.split('\n'):
124 if p.match(line):
125 match_any = True
126 if not match_any:
127 p_simple = re.compile(r'(?P<service>.*?)\s+(?P<rl0>on|off)')
128 match_any = False
129 for line in stdout.split('\n'):
130 if p_simple.match(line):
131 match_any = True
132 if match_any:
133 # Try extra flags " -l --allservices" needed for SLES11
134 rc, stdout, stderr = self.module.run_command('%s -l --allservices' % chkconfig_path, use_unsafe_shell=True)
135 elif '--list' in stderr:
136 # Extra flag needed for RHEL5
137 rc, stdout, stderr = self.module.run_command('%s --list' % chkconfig_path, use_unsafe_shell=True)
138 for line in stdout.split('\n'):
139 m = p.match(line)
140 if m:
141 service_name = m.group('service')
142 service_state = 'stopped'
143 if m.group('rl3') == 'on':
144 rc, stdout, stderr = self.module.run_command('%s %s status' % (service_path, service_name), use_unsafe_shell=True)
145 service_state = rc
146 if rc in (0,):
147 service_state = 'running'
148 # elif rc in (1,3):
149 else:
150 if 'root' in stderr or 'permission' in stderr.lower() or 'not in sudoers' in stderr.lower():
151 self.incomplete_warning = True
152 continue
153 else:
154 service_state = 'stopped'
155 service_data = {"name": service_name, "state": service_state, "source": "sysv"}
156 services[service_name] = service_data
157 return services
158
159
160 class SystemctlScanService(BaseService):
161
162 def systemd_enabled(self):
163 # Check if init is the systemd command, using comm as cmdline could be symlink
164 try:
165 f = open('/proc/1/comm', 'r')
166 except IOError:
167 # If comm doesn't exist, old kernel, no systemd
168 return False
169 for line in f:
170 if 'systemd' in line:
171 return True
172 return False
173
174 def gather_services(self):
175 services = {}
176 if not self.systemd_enabled():
177 return None
178 systemctl_path = self.module.get_bin_path("systemctl", opt_dirs=["/usr/bin", "/usr/local/bin"])
179 if systemctl_path is None:
180 return None
181 rc, stdout, stderr = self.module.run_command("%s list-unit-files --type=service | tail -n +2 | head -n -2" % systemctl_path, use_unsafe_shell=True)
182 for line in stdout.split("\n"):
183 line_data = line.split()
184 if len(line_data) != 2:
185 continue
186 if line_data[1] == "enabled":
187 state_val = "running"
188 else:
189 state_val = "stopped"
190 services[line_data[0]] = {"name": line_data[0], "state": state_val, "source": "systemd"}
191 return services
192
193
194 def main():
195 module = AnsibleModule(argument_spec=dict())
196 service_modules = (ServiceScanService, SystemctlScanService)
197 all_services = {}
198 incomplete_warning = False
199 for svc_module in service_modules:
200 svcmod = svc_module(module)
201 svc = svcmod.gather_services()
202 if svc is not None:
203 all_services.update(svc)
204 if svcmod.incomplete_warning:
205 incomplete_warning = True
206 if len(all_services) == 0:
207 results = dict(skipped=True, msg="Failed to find any services. Sometimes this is due to insufficient privileges.")
208 else:
209 results = dict(ansible_facts=dict(services=all_services))
210 if incomplete_warning:
211 results['msg'] = "WARNING: Could not find status for all services. Sometimes this is due to insufficient privileges."
212 module.exit_json(**results)
213
214
215 if __name__ == '__main__':
216 main()
217
```
--- END FILES ---
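Since `list-unit-files` only reports enablement, deriving a true running/stopped state means reading `systemctl list-units`, whose per-unit active/sub columns carry that information. A rough standalone sketch of that parsing (simplified, with an illustrative function name; not the module's exact implementation) is:

```python
import subprocess


def scan_systemd_services():
    """Map service unit names to 'running'/'stopped' via `systemctl list-units`."""
    out = subprocess.run(
        ["systemctl", "list-units", "--no-pager", "--type", "service", "--all"],
        capture_output=True, text=True,
    ).stdout
    services = {}
    for line in out.splitlines():
        if ".service" not in line or "not-found" in line:
            continue
        fields = line.split()
        name = fields[0]
        if "failed" in line and len(fields) > 1:
            # failed units carry a leading marker, shifting the name one column right
            name = fields[1]
        services[name] = "running" if "running" in line else "stopped"
    return services
```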
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/ansible/modules/system/service_facts.py b/lib/ansible/modules/system/service_facts.py
--- a/lib/ansible/modules/system/service_facts.py
+++ b/lib/ansible/modules/system/service_facts.py
@@ -178,16 +178,16 @@
systemctl_path = self.module.get_bin_path("systemctl", opt_dirs=["/usr/bin", "/usr/local/bin"])
if systemctl_path is None:
return None
- rc, stdout, stderr = self.module.run_command("%s list-unit-files --type=service | tail -n +2 | head -n -2" % systemctl_path, use_unsafe_shell=True)
- for line in stdout.split("\n"):
- line_data = line.split()
- if len(line_data) != 2:
- continue
- if line_data[1] == "enabled":
+ rc, stdout, stderr = self.module.run_command("%s list-units --no-pager --type service --all" % systemctl_path, use_unsafe_shell=True)
+ for line in [svc_line for svc_line in stdout.split('\n') if '.service' in svc_line and 'not-found' not in svc_line]:
+ service_name = line.split()[0]
+ if "running" in line:
state_val = "running"
else:
+ if 'failed' in line:
+ service_name = line.split()[1]
state_val = "stopped"
- services[line_data[0]] = {"name": line_data[0], "state": state_val, "source": "systemd"}
+ services[service_name] = {"name": service_name, "state": state_val, "source": "systemd"}
return services
| {"golden_diff": "diff --git a/lib/ansible/modules/system/service_facts.py b/lib/ansible/modules/system/service_facts.py\n--- a/lib/ansible/modules/system/service_facts.py\n+++ b/lib/ansible/modules/system/service_facts.py\n@@ -178,16 +178,16 @@\n systemctl_path = self.module.get_bin_path(\"systemctl\", opt_dirs=[\"/usr/bin\", \"/usr/local/bin\"])\n if systemctl_path is None:\n return None\n- rc, stdout, stderr = self.module.run_command(\"%s list-unit-files --type=service | tail -n +2 | head -n -2\" % systemctl_path, use_unsafe_shell=True)\n- for line in stdout.split(\"\\n\"):\n- line_data = line.split()\n- if len(line_data) != 2:\n- continue\n- if line_data[1] == \"enabled\":\n+ rc, stdout, stderr = self.module.run_command(\"%s list-units --no-pager --type service --all\" % systemctl_path, use_unsafe_shell=True)\n+ for line in [svc_line for svc_line in stdout.split('\\n') if '.service' in svc_line and 'not-found' not in svc_line]:\n+ service_name = line.split()[0]\n+ if \"running\" in line:\n state_val = \"running\"\n else:\n+ if 'failed' in line:\n+ service_name = line.split()[1]\n state_val = \"stopped\"\n- services[line_data[0]] = {\"name\": line_data[0], \"state\": state_val, \"source\": \"systemd\"}\n+ services[service_name] = {\"name\": service_name, \"state\": state_val, \"source\": \"systemd\"}\n return services\n", "issue": "service_facts returns misleading state on systemd hosts\n##### SUMMARY\r\nThe system module service_facts returns misleading state information on systemd hosts.\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nservice_facts\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.5.2\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/home/USER/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]\r\n```\r\n\r\n##### CONFIGURATION\r\nN/A\r\n\r\n##### OS / ENVIRONMENT\r\nAnsible controller: RHEL 7.5\r\nRemote host: RHEL 7.4\r\n\r\n##### STEPS TO REPRODUCE\r\nThe service_facts module returns one of two states for each service it discovers: either \"running\" or \"stopped\". \r\n\r\n```json\r\n{\r\n \"ansible_facts\": { \r\n \"services\": {\r\n \"rsyncd.service\": {\r\n \"name\": \"rsyncd.service\",\r\n \"source\": \"systemd\",\r\n \"state\": \"stopped\"\r\n },\r\n \"rsyslog.service\": {\r\n \"name\": \"rsyslog.service\",\r\n \"source\": \"systemd\",\r\n \"state\": \"running\"\r\n },\r\n ... \r\n } \r\n }\r\n}\r\n```\r\n\r\nOn systemd hosts, the module obtains state information via `systemctl list-unit-files --type=service` (see [line 181 in module source code](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/system/service_facts.py#L181)). However, this command does not report running/stopped status. Instead, it reports enabled/disable status which is not the same thing. Enabled/disabled determines whether the service will be started by systemd _on reboot_, but says nothing about whether it is currently running or stopped.\r\n\r\nHere are two examples:\r\n\r\n**1. Service disabled & started:**\r\n\r\n```bash\r\n# On remote host\r\nsystemctl disable <service>\r\nsystemctl start <service>\r\nsystemctl status <service>\r\n# systemctl reports: loaded, disabled, active/running\r\n```\r\n\r\n**2. 
Service enabled & stopped:**\r\n\r\n```bash\r\n# On remote host\r\nsystemctl enable <service>\r\nsystemctl stop <service>\r\nsystemctl status <service>\r\n# systemctl reports: loaded, enabled, inactive/dead\r\n```\r\n\r\n**On the Ansible controller,** use the service_facts module to query the <service> just configured on the remote host.\r\n\r\n##### EXPECTED RESULTS\r\n**1. Service disabled & started:** service_facts should report \"running\"\r\n\r\n**2. Service enabled & stopped:** service_facts should report \"stopped\"\r\n\r\n##### ACTUAL RESULTS\r\n**1. Service disabled & started:** service_facts reports \"stopped\"\r\n\r\n**2. Service enabled & stopped:** service_facts reports \"running\"\r\n\r\nThese misleading reports are caused by the test in [line 186 of the source code](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/system/service_facts.py#L186). The service_facts module explicitly tests whether the service is \"enabled\" and equates that with running/stopped, which is incorrect for systemd/systemctl.\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n# (c) 2017, Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\n# originally copied from AWX's scan_services module to bring this functionality\n# into Core\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\nDOCUMENTATION = '''\n---\nmodule: service_facts\nshort_description: Return service state information as fact data\ndescription:\n - Return service state information as fact data for various service management utilities\nversion_added: \"2.5\"\nrequirements: [\"Any of the following supported init systems: systemd, sysv, upstart\"]\n\nauthor:\n - Matthew Jones\n - Adam Miller (@maxamillion)\n'''\n\nEXAMPLES = '''\n- name: populate service facts\n service_facts:\n\n- debug:\n var: ansible_facts.services\n\n'''\n\nRETURN = '''\nansible_facts:\n description: facts to add to ansible_facts about the services on the system\n returned: always\n type: complex\n contains:\n \"services\": {\n \"network\": {\n \"source\": \"sysv\",\n \"state\": \"running\",\n \"name\": \"network\"\n },\n arp-ethers.service: {\n \"source\": \"systemd\",\n \"state\": \"stopped\",\n \"name\": \"arp-ethers.service\"\n }\n }\n'''\n\n\nimport re\nfrom ansible.module_utils.basic import AnsibleModule\n\n\nclass BaseService(object):\n\n def __init__(self, module):\n self.module = module\n self.incomplete_warning = False\n\n\nclass ServiceScanService(BaseService):\n\n def gather_services(self):\n services = {}\n service_path = self.module.get_bin_path(\"service\")\n if service_path is None:\n return None\n initctl_path = self.module.get_bin_path(\"initctl\")\n chkconfig_path = self.module.get_bin_path(\"chkconfig\")\n\n # sysvinit\n if service_path is not None and chkconfig_path is None:\n rc, stdout, stderr = self.module.run_command(\"%s --status-all 2>&1 | grep -E \\\"\\\\[ (\\\\+|\\\\-) \\\\]\\\"\" % service_path, use_unsafe_shell=True)\n for line in stdout.split(\"\\n\"):\n line_data = line.split()\n if len(line_data) < 4:\n continue # Skipping because we expected more data\n service_name = \" \".join(line_data[3:])\n if line_data[1] == \"+\":\n service_state = \"running\"\n else:\n service_state = \"stopped\"\n services[service_name] = {\"name\": service_name, \"state\": service_state, \"source\": \"sysv\"}\n\n # Upstart\n if 
initctl_path is not None and chkconfig_path is None:\n p = re.compile(r'^\\s?(?P<name>.*)\\s(?P<goal>\\w+)\\/(?P<state>\\w+)(\\,\\sprocess\\s(?P<pid>[0-9]+))?\\s*$')\n rc, stdout, stderr = self.module.run_command(\"%s list\" % initctl_path)\n real_stdout = stdout.replace(\"\\r\", \"\")\n for line in real_stdout.split(\"\\n\"):\n m = p.match(line)\n if not m:\n continue\n service_name = m.group('name')\n service_goal = m.group('goal')\n service_state = m.group('state')\n if m.group('pid'):\n pid = m.group('pid')\n else:\n pid = None # NOQA\n payload = {\"name\": service_name, \"state\": service_state, \"goal\": service_goal, \"source\": \"upstart\"}\n services[service_name] = payload\n\n # RH sysvinit\n elif chkconfig_path is not None:\n # print '%s --status-all | grep -E \"is (running|stopped)\"' % service_path\n p = re.compile(\n r'(?P<service>.*?)\\s+[0-9]:(?P<rl0>on|off)\\s+[0-9]:(?P<rl1>on|off)\\s+[0-9]:(?P<rl2>on|off)\\s+'\n r'[0-9]:(?P<rl3>on|off)\\s+[0-9]:(?P<rl4>on|off)\\s+[0-9]:(?P<rl5>on|off)\\s+[0-9]:(?P<rl6>on|off)')\n rc, stdout, stderr = self.module.run_command('%s' % chkconfig_path, use_unsafe_shell=True)\n # Check for special cases where stdout does not fit pattern\n match_any = False\n for line in stdout.split('\\n'):\n if p.match(line):\n match_any = True\n if not match_any:\n p_simple = re.compile(r'(?P<service>.*?)\\s+(?P<rl0>on|off)')\n match_any = False\n for line in stdout.split('\\n'):\n if p_simple.match(line):\n match_any = True\n if match_any:\n # Try extra flags \" -l --allservices\" needed for SLES11\n rc, stdout, stderr = self.module.run_command('%s -l --allservices' % chkconfig_path, use_unsafe_shell=True)\n elif '--list' in stderr:\n # Extra flag needed for RHEL5\n rc, stdout, stderr = self.module.run_command('%s --list' % chkconfig_path, use_unsafe_shell=True)\n for line in stdout.split('\\n'):\n m = p.match(line)\n if m:\n service_name = m.group('service')\n service_state = 'stopped'\n if m.group('rl3') == 'on':\n rc, stdout, stderr = self.module.run_command('%s %s status' % (service_path, service_name), use_unsafe_shell=True)\n service_state = rc\n if rc in (0,):\n service_state = 'running'\n # elif rc in (1,3):\n else:\n if 'root' in stderr or 'permission' in stderr.lower() or 'not in sudoers' in stderr.lower():\n self.incomplete_warning = True\n continue\n else:\n service_state = 'stopped'\n service_data = {\"name\": service_name, \"state\": service_state, \"source\": \"sysv\"}\n services[service_name] = service_data\n return services\n\n\nclass SystemctlScanService(BaseService):\n\n def systemd_enabled(self):\n # Check if init is the systemd command, using comm as cmdline could be symlink\n try:\n f = open('/proc/1/comm', 'r')\n except IOError:\n # If comm doesn't exist, old kernel, no systemd\n return False\n for line in f:\n if 'systemd' in line:\n return True\n return False\n\n def gather_services(self):\n services = {}\n if not self.systemd_enabled():\n return None\n systemctl_path = self.module.get_bin_path(\"systemctl\", opt_dirs=[\"/usr/bin\", \"/usr/local/bin\"])\n if systemctl_path is None:\n return None\n rc, stdout, stderr = self.module.run_command(\"%s list-unit-files --type=service | tail -n +2 | head -n -2\" % systemctl_path, use_unsafe_shell=True)\n for line in stdout.split(\"\\n\"):\n line_data = line.split()\n if len(line_data) != 2:\n continue\n if line_data[1] == \"enabled\":\n state_val = \"running\"\n else:\n state_val = \"stopped\"\n services[line_data[0]] = {\"name\": line_data[0], \"state\": state_val, \"source\": \"systemd\"}\n 
return services\n\n\ndef main():\n module = AnsibleModule(argument_spec=dict())\n service_modules = (ServiceScanService, SystemctlScanService)\n all_services = {}\n incomplete_warning = False\n for svc_module in service_modules:\n svcmod = svc_module(module)\n svc = svcmod.gather_services()\n if svc is not None:\n all_services.update(svc)\n if svcmod.incomplete_warning:\n incomplete_warning = True\n if len(all_services) == 0:\n results = dict(skipped=True, msg=\"Failed to find any services. Sometimes this is due to insufficient privileges.\")\n else:\n results = dict(ansible_facts=dict(services=all_services))\n if incomplete_warning:\n results['msg'] = \"WARNING: Could not find status for all services. Sometimes this is due to insufficient privileges.\"\n module.exit_json(**results)\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/system/service_facts.py"}], "after_files": [{"content": "#!/usr/bin/python\n# (c) 2017, Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\n# originally copied from AWX's scan_services module to bring this functionality\n# into Core\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\nDOCUMENTATION = '''\n---\nmodule: service_facts\nshort_description: Return service state information as fact data\ndescription:\n - Return service state information as fact data for various service management utilities\nversion_added: \"2.5\"\nrequirements: [\"Any of the following supported init systems: systemd, sysv, upstart\"]\n\nauthor:\n - Matthew Jones\n - Adam Miller (@maxamillion)\n'''\n\nEXAMPLES = '''\n- name: populate service facts\n service_facts:\n\n- debug:\n var: ansible_facts.services\n\n'''\n\nRETURN = '''\nansible_facts:\n description: facts to add to ansible_facts about the services on the system\n returned: always\n type: complex\n contains:\n \"services\": {\n \"network\": {\n \"source\": \"sysv\",\n \"state\": \"running\",\n \"name\": \"network\"\n },\n arp-ethers.service: {\n \"source\": \"systemd\",\n \"state\": \"stopped\",\n \"name\": \"arp-ethers.service\"\n }\n }\n'''\n\n\nimport re\nfrom ansible.module_utils.basic import AnsibleModule\n\n\nclass BaseService(object):\n\n def __init__(self, module):\n self.module = module\n self.incomplete_warning = False\n\n\nclass ServiceScanService(BaseService):\n\n def gather_services(self):\n services = {}\n service_path = self.module.get_bin_path(\"service\")\n if service_path is None:\n return None\n initctl_path = self.module.get_bin_path(\"initctl\")\n chkconfig_path = self.module.get_bin_path(\"chkconfig\")\n\n # sysvinit\n if service_path is not None and chkconfig_path is None:\n rc, stdout, stderr = self.module.run_command(\"%s --status-all 2>&1 | grep -E \\\"\\\\[ (\\\\+|\\\\-) \\\\]\\\"\" % service_path, use_unsafe_shell=True)\n for line in stdout.split(\"\\n\"):\n line_data = line.split()\n if len(line_data) < 4:\n continue # Skipping because we expected more data\n service_name = \" \".join(line_data[3:])\n if line_data[1] == \"+\":\n service_state = \"running\"\n else:\n service_state = \"stopped\"\n services[service_name] = {\"name\": service_name, \"state\": service_state, \"source\": \"sysv\"}\n\n # Upstart\n if initctl_path is not None and chkconfig_path is None:\n p = 
re.compile(r'^\\s?(?P<name>.*)\\s(?P<goal>\\w+)\\/(?P<state>\\w+)(\\,\\sprocess\\s(?P<pid>[0-9]+))?\\s*$')\n rc, stdout, stderr = self.module.run_command(\"%s list\" % initctl_path)\n real_stdout = stdout.replace(\"\\r\", \"\")\n for line in real_stdout.split(\"\\n\"):\n m = p.match(line)\n if not m:\n continue\n service_name = m.group('name')\n service_goal = m.group('goal')\n service_state = m.group('state')\n if m.group('pid'):\n pid = m.group('pid')\n else:\n pid = None # NOQA\n payload = {\"name\": service_name, \"state\": service_state, \"goal\": service_goal, \"source\": \"upstart\"}\n services[service_name] = payload\n\n # RH sysvinit\n elif chkconfig_path is not None:\n # print '%s --status-all | grep -E \"is (running|stopped)\"' % service_path\n p = re.compile(\n r'(?P<service>.*?)\\s+[0-9]:(?P<rl0>on|off)\\s+[0-9]:(?P<rl1>on|off)\\s+[0-9]:(?P<rl2>on|off)\\s+'\n r'[0-9]:(?P<rl3>on|off)\\s+[0-9]:(?P<rl4>on|off)\\s+[0-9]:(?P<rl5>on|off)\\s+[0-9]:(?P<rl6>on|off)')\n rc, stdout, stderr = self.module.run_command('%s' % chkconfig_path, use_unsafe_shell=True)\n # Check for special cases where stdout does not fit pattern\n match_any = False\n for line in stdout.split('\\n'):\n if p.match(line):\n match_any = True\n if not match_any:\n p_simple = re.compile(r'(?P<service>.*?)\\s+(?P<rl0>on|off)')\n match_any = False\n for line in stdout.split('\\n'):\n if p_simple.match(line):\n match_any = True\n if match_any:\n # Try extra flags \" -l --allservices\" needed for SLES11\n rc, stdout, stderr = self.module.run_command('%s -l --allservices' % chkconfig_path, use_unsafe_shell=True)\n elif '--list' in stderr:\n # Extra flag needed for RHEL5\n rc, stdout, stderr = self.module.run_command('%s --list' % chkconfig_path, use_unsafe_shell=True)\n for line in stdout.split('\\n'):\n m = p.match(line)\n if m:\n service_name = m.group('service')\n service_state = 'stopped'\n if m.group('rl3') == 'on':\n rc, stdout, stderr = self.module.run_command('%s %s status' % (service_path, service_name), use_unsafe_shell=True)\n service_state = rc\n if rc in (0,):\n service_state = 'running'\n # elif rc in (1,3):\n else:\n if 'root' in stderr or 'permission' in stderr.lower() or 'not in sudoers' in stderr.lower():\n self.incomplete_warning = True\n continue\n else:\n service_state = 'stopped'\n service_data = {\"name\": service_name, \"state\": service_state, \"source\": \"sysv\"}\n services[service_name] = service_data\n return services\n\n\nclass SystemctlScanService(BaseService):\n\n def systemd_enabled(self):\n # Check if init is the systemd command, using comm as cmdline could be symlink\n try:\n f = open('/proc/1/comm', 'r')\n except IOError:\n # If comm doesn't exist, old kernel, no systemd\n return False\n for line in f:\n if 'systemd' in line:\n return True\n return False\n\n def gather_services(self):\n services = {}\n if not self.systemd_enabled():\n return None\n systemctl_path = self.module.get_bin_path(\"systemctl\", opt_dirs=[\"/usr/bin\", \"/usr/local/bin\"])\n if systemctl_path is None:\n return None\n rc, stdout, stderr = self.module.run_command(\"%s list-units --no-pager --type service --all\" % systemctl_path, use_unsafe_shell=True)\n for line in [svc_line for svc_line in stdout.split('\\n') if '.service' in svc_line and 'not-found' not in svc_line]:\n service_name = line.split()[0]\n if \"running\" in line:\n state_val = \"running\"\n else:\n if 'failed' in line:\n service_name = line.split()[1]\n state_val = \"stopped\"\n services[service_name] = {\"name\": service_name, \"state\": state_val, 
\"source\": \"systemd\"}\n return services\n\n\ndef main():\n module = AnsibleModule(argument_spec=dict())\n service_modules = (ServiceScanService, SystemctlScanService)\n all_services = {}\n incomplete_warning = False\n for svc_module in service_modules:\n svcmod = svc_module(module)\n svc = svcmod.gather_services()\n if svc is not None:\n all_services.update(svc)\n if svcmod.incomplete_warning:\n incomplete_warning = True\n if len(all_services) == 0:\n results = dict(skipped=True, msg=\"Failed to find any services. Sometimes this is due to insufficient privileges.\")\n else:\n results = dict(ansible_facts=dict(services=all_services))\n if incomplete_warning:\n results['msg'] = \"WARNING: Could not find status for all services. Sometimes this is due to insufficient privileges.\"\n module.exit_json(**results)\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/system/service_facts.py"}]} | 3,492 | 376 |
gh_patches_debug_11398 | rasdani/github-patches | git_diff | dotkom__onlineweb4-741 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ability to format company-url

[screenshot: company profile page showing the raw, unformatted website URL]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/companyprofile/models.py`
Content:
```
1 from django.db import models
2 from django.utils.translation import ugettext_lazy as _
3 from filebrowser.fields import FileBrowseField
4
5 class Company(models.Model):
6
7 IMAGE_FOLDER = "images/companies"
8 IMAGE_EXTENSIONS = ['.jpg', '.jpeg', '.gif', '.png', '.tif', '.tiff']
9
10
11 name = models.CharField(_(u"bedriftsnavn"), max_length=100)
12 short_description = models.TextField(_(u"kort beskrivelse"), max_length=200)
13 long_description = models.TextField(_(u"utdypende beskrivelse"), blank=True, null=True)
14 image = FileBrowseField(_(u"bilde"),
15 max_length=200, directory=IMAGE_FOLDER,
16 extensions=IMAGE_EXTENSIONS, null=False, blank=False)
17 site = models.URLField(_(u"hjemmeside"))
18 email_address = models.EmailField(_(u"epostaddresse"), max_length=75, blank=True, null=True)
19 phone_number = models.CharField(_(u"telefonnummer"), max_length=20, blank=True, null=True)
20
21 def __unicode__(self):
22 return self.name
23
24 class Meta:
25 verbose_name = _(u"Bedrift")
26 verbose_name_plural = _(u"Bedrifter")
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/companyprofile/models.py b/apps/companyprofile/models.py
--- a/apps/companyprofile/models.py
+++ b/apps/companyprofile/models.py
@@ -14,7 +14,7 @@
image = FileBrowseField(_(u"bilde"),
max_length=200, directory=IMAGE_FOLDER,
extensions=IMAGE_EXTENSIONS, null=False, blank=False)
- site = models.URLField(_(u"hjemmeside"))
+ site = models.CharField(_(u"hjemmeside"), max_length=100)
email_address = models.EmailField(_(u"epostaddresse"), max_length=75, blank=True, null=True)
phone_number = models.CharField(_(u"telefonnummer"), max_length=20, blank=True, null=True)
| {"golden_diff": "diff --git a/apps/companyprofile/models.py b/apps/companyprofile/models.py\n--- a/apps/companyprofile/models.py\n+++ b/apps/companyprofile/models.py\n@@ -14,7 +14,7 @@\n image = FileBrowseField(_(u\"bilde\"), \n max_length=200, directory=IMAGE_FOLDER,\n extensions=IMAGE_EXTENSIONS, null=False, blank=False)\n- site = models.URLField(_(u\"hjemmeside\"))\n+ site = models.CharField(_(u\"hjemmeside\"), max_length=100)\n email_address = models.EmailField(_(u\"epostaddresse\"), max_length=75, blank=True, null=True)\n phone_number = models.CharField(_(u\"telefonnummer\"), max_length=20, blank=True, null=True)\n", "issue": "Ability to format company-url\n\nLike in this case, this ugly url should be formated to vimsa.no or the like. \n\n", "before_files": [{"content": "from django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom filebrowser.fields import FileBrowseField\n\nclass Company(models.Model):\n\n IMAGE_FOLDER = \"images/companies\"\n IMAGE_EXTENSIONS = ['.jpg', '.jpeg', '.gif', '.png', '.tif', '.tiff']\n\n\n name = models.CharField(_(u\"bedriftsnavn\"), max_length=100)\n short_description = models.TextField(_(u\"kort beskrivelse\"), max_length=200)\n long_description = models.TextField(_(u\"utdypende beskrivelse\"), blank=True, null=True)\n image = FileBrowseField(_(u\"bilde\"), \n max_length=200, directory=IMAGE_FOLDER,\n extensions=IMAGE_EXTENSIONS, null=False, blank=False)\n site = models.URLField(_(u\"hjemmeside\"))\n email_address = models.EmailField(_(u\"epostaddresse\"), max_length=75, blank=True, null=True)\n phone_number = models.CharField(_(u\"telefonnummer\"), max_length=20, blank=True, null=True)\n\n def __unicode__(self):\n return self.name\n\n class Meta:\n verbose_name = _(u\"Bedrift\")\n verbose_name_plural = _(u\"Bedrifter\")\n", "path": "apps/companyprofile/models.py"}], "after_files": [{"content": "from django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom filebrowser.fields import FileBrowseField\n\nclass Company(models.Model):\n\n IMAGE_FOLDER = \"images/companies\"\n IMAGE_EXTENSIONS = ['.jpg', '.jpeg', '.gif', '.png', '.tif', '.tiff']\n\n\n name = models.CharField(_(u\"bedriftsnavn\"), max_length=100)\n short_description = models.TextField(_(u\"kort beskrivelse\"), max_length=200)\n long_description = models.TextField(_(u\"utdypende beskrivelse\"), blank=True, null=True)\n image = FileBrowseField(_(u\"bilde\"), \n max_length=200, directory=IMAGE_FOLDER,\n extensions=IMAGE_EXTENSIONS, null=False, blank=False)\n site = models.CharField(_(u\"hjemmeside\"), max_length=100)\n email_address = models.EmailField(_(u\"epostaddresse\"), max_length=75, blank=True, null=True)\n phone_number = models.CharField(_(u\"telefonnummer\"), max_length=20, blank=True, null=True)\n\n def __unicode__(self):\n return self.name\n\n class Meta:\n verbose_name = _(u\"Bedrift\")\n verbose_name_plural = _(u\"Bedrifter\")\n", "path": "apps/companyprofile/models.py"}]} | 699 | 173 |
gh_patches_debug_40688 | rasdani/github-patches | git_diff | localstack__localstack-4875 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SSM - labeling more than 10 parameters does not work right
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Adding and labeling more than 10 parameters only labels (or only returns) the first 10 labeled parameters.
### Expected Behavior
Adding and labeling more than 10 parameters should add & then return all the parameters.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker-compose.yml
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
params = [ ... ]  # create a list of **more** than 10 parameters (Name + Value), all under a common base_path
for param in params:
    path = param["Name"]
    # add the parameter (returns 200)
    res = ssm.put_parameter(Name=path, Value=param["Value"], Type='String')
    # label the returned version with 'latest' (returns 200)
    label_res = ssm.label_parameter_version(Name=path, ParameterVersion=res['Version'], Labels=['latest'])
    # get the added parameter (returns the parameter successfully)
    prm = ssm.get_parameter(Name=path)

# get all parameters with the 'latest' label - this returns only the first 10 parameters (although `MaxResults` is 100),
# and also doesn't return a `NextToken` property to iterate through the next batch of parameters.
ps = ssm.get_parameters_by_path(Path=base_path,
                                ParameterFilters=[ { 'Key': 'Label', 'Values': [ 'latest' ] } ],
                                MaxResults=100)
```
### Environment
```markdown
- OS: Windows 10
- LocalStack: 0.12.17.5
```
### Anything else?
_No response_
--- END ISSUE ---
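Against real AWS, `get_parameters_by_path` is a paged API, so callers normally follow `NextToken` or let boto3's paginator do it; the reproduction above is effectively relying on that behaviour, which is why the missing `NextToken` matters. A sketch of the paginated read (client setup kept minimal; point `endpoint_url` at LocalStack when testing locally) is:

```python
import boto3

ssm = boto3.client("ssm")  # add endpoint_url=... to target LocalStack


def get_latest_by_path(base_path):
    """Collect every parameter under base_path carrying the 'latest' label, across all pages."""
    paginator = ssm.get_paginator("get_parameters_by_path")
    pages = paginator.paginate(
        Path=base_path,
        Recursive=True,
        ParameterFilters=[{"Key": "Label", "Values": ["latest"]}],
    )
    return [param for page in pages for param in page["Parameters"]]
```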
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `localstack/services/ssm/ssm_listener.py`
Content:
```
1 import json
2 import logging
3 import time
4 import traceback
5
6 from requests.models import Request
7
8 from localstack.utils.aws import aws_stack
9 from localstack.utils.bootstrap import is_api_enabled
10 from localstack.utils.common import to_str
11 from localstack.utils.persistence import PersistingProxyListener
12
13 # logger
14 LOG = logging.getLogger(__name__)
15
16 ACTION_PUT_PARAM = "AmazonSSM.PutParameter"
17 ACTION_GET_PARAM = "AmazonSSM.GetParameter"
18 ACTION_GET_PARAMS = "AmazonSSM.GetParameters"
19 ACTION_DELETE_PARAM = "AmazonSSM.DeleteParameter"
20 ACTION_GET_PARAMS_BY_PATH = "AmazonSSM.GetParametersByPath"
21 ACTION_LABEL_PARAM_VERSION = "AmazonSSM.LabelParameterVersion"
22
23 # maps SSM target header names to operation names used in EventBridge event notifications
24 EVENT_BRIDGE_OPERATIONS = {
25 ACTION_PUT_PARAM: "Create",
26 ACTION_DELETE_PARAM: "Delete",
27 ACTION_LABEL_PARAM_VERSION: "LabelParameterVersion",
28 }
29
30
31 def normalize_name(param_name):
32 param_name = param_name.strip("/")
33 param_name = param_name.replace("//", "/")
34 if "/" in param_name:
35 param_name = "/%s" % param_name
36 return param_name
37
38
39 def get_secrets_information(name, resource_name):
40 client = aws_stack.connect_to_service("secretsmanager")
41 try:
42 secret_info = client.get_secret_value(SecretId=resource_name)
43
44 del secret_info["ResponseMetadata"]
45 created_date_timestamp = time.mktime(secret_info["CreatedDate"].timetuple())
46 secret_info["CreatedDate"] = created_date_timestamp
47 result = {
48 "Parameter": {
49 "SourceResult": json.dumps(secret_info, default=str),
50 "Name": name,
51 "Value": secret_info.get("SecretString"),
52 "Type": "SecureString",
53 "LastModifiedDate": created_date_timestamp,
54 }
55 }
56 return result
57 except client.exceptions.ResourceNotFoundException:
58 return None
59
60
61 def has_secrets(names):
62 for name in names:
63 if name.startswith("/aws/reference/secretsmanager"):
64 return True
65
66
67 def get_params_and_secrets(names):
68 ssm_client = aws_stack.connect_to_service("ssm")
69 result = {"Parameters": [], "InvalidParameters": []}
70 secrets_prefix = "/aws/reference/secretsmanager"
71
72 for name in names:
73 if name.startswith(secrets_prefix):
74 secret = get_secrets_information(name, name[len(secrets_prefix) + 1 :])
75 if secret is not None:
76 secret = secret["Parameter"]
77 result["Parameters"].append(secret)
78 else:
79 result["InvalidParameters"].append(name)
80 else:
81 try:
82 param = ssm_client.get_parameter(Name=name)
83 param["Parameter"]["LastModifiedDate"] = time.mktime(
84 param["Parameter"]["LastModifiedDate"].timetuple()
85 )
86 result["Parameters"].append(param["Parameter"])
87 except ssm_client.exceptions.ParameterNotFound:
88 result["InvalidParameters"].append(name)
89
90 return result
91
92
93 def get_params_by_path_with_labels(
94 path="", param_filters=None, labels_to_filter=None, recursive=False
95 ):
96 ssm_client = aws_stack.connect_to_service("ssm")
97 result = {"Parameters": []}
98 filters = [{"Key": "Path", "Values": [path]}]
99 filters.extend(param_filters)
100 if recursive:
101 filters[0]["Option"] = "Recursive"
102
103 def filter_by_label(param, labels):
104 for label in param["Labels"]:
105 if label in labels:
106 return param
107
108 try:
109 # Get all the params in the path
110 params_in_path = ssm_client.describe_parameters(ParameterFilters=filters)["Parameters"]
111
112 # Get parameter with all its labels (for all the parameters in params_in_path)
113 # Labels of the parameters can be obtained by calling get_parameter_history with parameter name
114 all_params = []
115 for params in params_in_path:
116 all_params.extend(ssm_client.get_parameter_history(Name=params["Name"])["Parameters"])
117
118 # Filter the params with matched labels
119 filtered_params = list(
120 filter(
121 lambda param: filter_by_label(param=param, labels=labels_to_filter),
122 all_params,
123 )
124 )
125
126 # Get details of the filtered params to return
127 # This step is needed because get_parameter_history doesn't return parameter's ARN
128 details_of_filtered_params = list(
129 map(
130 lambda param: ssm_client.get_parameter(Name=param["Name"])["Parameter"],
131 filtered_params,
132 )
133 )
134 result["Parameters"].extend(details_of_filtered_params)
135 except Exception as e:
136 LOG.info(
137 "Unable to get SSM parameters by path and filter by labels : %s %s"
138 % (e, traceback.format_exc())
139 )
140 raise e
141 return result
142
143
144 def notify_event_subscribers(data, target_header):
145 """Publish an EventBridge event to notify subscribers of changes."""
146 if not is_api_enabled("events"):
147 return
148 events = aws_stack.connect_to_service("events")
149 operation = EVENT_BRIDGE_OPERATIONS.get(target_header)
150 if not operation:
151 LOG.warning(
152 "Unexpected target header '%s' when sending EventBridge notification from SSM",
153 target_header,
154 )
155 detail = {"name": data["Name"], "operation": operation}
156 event = {
157 "Source": "aws.ssm",
158 "Detail": json.dumps(detail),
159 "DetailType": "Parameter Store Change",
160 }
161 events.put_events(Entries=[event])
162
163
164 class ProxyListenerSSM(PersistingProxyListener):
165 def api_name(self):
166 return "ssm"
167
168 def forward_request(self, method, path, data, headers):
169 if method == "OPTIONS":
170 return 200
171
172 target = headers.get("X-Amz-Target")
173 data_orig = data
174 if method == "POST" and target:
175 data = json.loads(to_str(data))
176 if target == ACTION_GET_PARAMS:
177 names = data["Names"] = data.get("Names") or []
178 if has_secrets(names):
179 return get_params_and_secrets(names)
180 else:
181 for i in range(len(names)):
182 names[i] = normalize_name(names[i])
183 elif target in [ACTION_PUT_PARAM, ACTION_GET_PARAM]:
184 name = data.get("Name") or ""
185 data["Name"] = normalize_name(name)
186 if target == ACTION_GET_PARAM:
187 details = name.split("/")
188 if len(details) > 4:
189 service = details[3]
190 if service == "secretsmanager":
191 resource_name = "/".join(details[4:])
192 secret = get_secrets_information(name, resource_name)
193 if secret is not None:
194 return secret
195 elif target == ACTION_GET_PARAMS_BY_PATH and data.get("ParameterFilters"):
196 params_filters = data.get("ParameterFilters") or []
197 labels = []
198 for filter in params_filters:
199 if filter["Key"] == "Label":
200 labels = filter["Values"]
201 params_filters.remove(filter)
202 if labels:
203 path = data.get("Path")
204 recursive = data.get("Recursive") or False
205 return get_params_by_path_with_labels(
206 path=path,
207 param_filters=params_filters,
208 labels_to_filter=labels,
209 recursive=recursive,
210 )
211 # send event notifications
212 if target in EVENT_BRIDGE_OPERATIONS:
213 notify_event_subscribers(data, target)
214 # forward request
215 data = json.dumps(data)
216 if data != data_orig:
217 return Request(data=data, headers=headers, method=method)
218
219 return True
220
221
222 # instantiate listener
223 UPDATE_SSM = ProxyListenerSSM()
224
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
 | diff --git a/localstack/services/ssm/ssm_listener.py b/localstack/services/ssm/ssm_listener.py
--- a/localstack/services/ssm/ssm_listener.py
+++ b/localstack/services/ssm/ssm_listener.py
@@ -1,7 +1,6 @@
import json
import logging
import time
-import traceback
from requests.models import Request
@@ -90,57 +89,6 @@
return result
-def get_params_by_path_with_labels(
- path="", param_filters=None, labels_to_filter=None, recursive=False
-):
- ssm_client = aws_stack.connect_to_service("ssm")
- result = {"Parameters": []}
- filters = [{"Key": "Path", "Values": [path]}]
- filters.extend(param_filters)
- if recursive:
- filters[0]["Option"] = "Recursive"
-
- def filter_by_label(param, labels):
- for label in param["Labels"]:
- if label in labels:
- return param
-
- try:
- # Get all the params in the path
- params_in_path = ssm_client.describe_parameters(ParameterFilters=filters)["Parameters"]
-
- # Get parameter with all its labels (for all the parameters in params_in_path)
- # Labels of the parameters can be obtained by calling get_parameter_history with parameter name
- all_params = []
- for params in params_in_path:
- all_params.extend(ssm_client.get_parameter_history(Name=params["Name"])["Parameters"])
-
- # Filter the params with matched labels
- filtered_params = list(
- filter(
- lambda param: filter_by_label(param=param, labels=labels_to_filter),
- all_params,
- )
- )
-
- # Get details of the filtered params to return
- # This step is needed because get_parameter_history doesn't return parameter's ARN
- details_of_filtered_params = list(
- map(
- lambda param: ssm_client.get_parameter(Name=param["Name"])["Parameter"],
- filtered_params,
- )
- )
- result["Parameters"].extend(details_of_filtered_params)
- except Exception as e:
- LOG.info(
- "Unable to get SSM parameters by path and filter by labels : %s %s"
- % (e, traceback.format_exc())
- )
- raise e
- return result
-
-
def notify_event_subscribers(data, target_header):
"""Publish an EventBridge event to notify subscribers of changes."""
if not is_api_enabled("events"):
@@ -192,22 +140,7 @@
secret = get_secrets_information(name, resource_name)
if secret is not None:
return secret
- elif target == ACTION_GET_PARAMS_BY_PATH and data.get("ParameterFilters"):
- params_filters = data.get("ParameterFilters") or []
- labels = []
- for filter in params_filters:
- if filter["Key"] == "Label":
- labels = filter["Values"]
- params_filters.remove(filter)
- if labels:
- path = data.get("Path")
- recursive = data.get("Recursive") or False
- return get_params_by_path_with_labels(
- path=path,
- param_filters=params_filters,
- labels_to_filter=labels,
- recursive=recursive,
- )
+
# send event notifications
if target in EVENT_BRIDGE_OPERATIONS:
notify_event_subscribers(data, target)
| {"golden_diff": "diff --git a/localstack/services/ssm/ssm_listener.py b/localstack/services/ssm/ssm_listener.py\n--- a/localstack/services/ssm/ssm_listener.py\n+++ b/localstack/services/ssm/ssm_listener.py\n@@ -1,7 +1,6 @@\n import json\n import logging\n import time\n-import traceback\n \n from requests.models import Request\n \n@@ -90,57 +89,6 @@\n return result\n \n \n-def get_params_by_path_with_labels(\n- path=\"\", param_filters=None, labels_to_filter=None, recursive=False\n-):\n- ssm_client = aws_stack.connect_to_service(\"ssm\")\n- result = {\"Parameters\": []}\n- filters = [{\"Key\": \"Path\", \"Values\": [path]}]\n- filters.extend(param_filters)\n- if recursive:\n- filters[0][\"Option\"] = \"Recursive\"\n-\n- def filter_by_label(param, labels):\n- for label in param[\"Labels\"]:\n- if label in labels:\n- return param\n-\n- try:\n- # Get all the params in the path\n- params_in_path = ssm_client.describe_parameters(ParameterFilters=filters)[\"Parameters\"]\n-\n- # Get parameter with all its labels (for all the parameters in params_in_path)\n- # Labels of the parameters can be obtained by calling get_parameter_history with parameter name\n- all_params = []\n- for params in params_in_path:\n- all_params.extend(ssm_client.get_parameter_history(Name=params[\"Name\"])[\"Parameters\"])\n-\n- # Filter the params with matched labels\n- filtered_params = list(\n- filter(\n- lambda param: filter_by_label(param=param, labels=labels_to_filter),\n- all_params,\n- )\n- )\n-\n- # Get details of the filtered params to return\n- # This step is needed because get_parameter_history doesn't return parameter's ARN\n- details_of_filtered_params = list(\n- map(\n- lambda param: ssm_client.get_parameter(Name=param[\"Name\"])[\"Parameter\"],\n- filtered_params,\n- )\n- )\n- result[\"Parameters\"].extend(details_of_filtered_params)\n- except Exception as e:\n- LOG.info(\n- \"Unable to get SSM parameters by path and filter by labels : %s %s\"\n- % (e, traceback.format_exc())\n- )\n- raise e\n- return result\n-\n-\n def notify_event_subscribers(data, target_header):\n \"\"\"Publish an EventBridge event to notify subscribers of changes.\"\"\"\n if not is_api_enabled(\"events\"):\n@@ -192,22 +140,7 @@\n secret = get_secrets_information(name, resource_name)\n if secret is not None:\n return secret\n- elif target == ACTION_GET_PARAMS_BY_PATH and data.get(\"ParameterFilters\"):\n- params_filters = data.get(\"ParameterFilters\") or []\n- labels = []\n- for filter in params_filters:\n- if filter[\"Key\"] == \"Label\":\n- labels = filter[\"Values\"]\n- params_filters.remove(filter)\n- if labels:\n- path = data.get(\"Path\")\n- recursive = data.get(\"Recursive\") or False\n- return get_params_by_path_with_labels(\n- path=path,\n- param_filters=params_filters,\n- labels_to_filter=labels,\n- recursive=recursive,\n- )\n+\n # send event notifications\n if target in EVENT_BRIDGE_OPERATIONS:\n notify_event_subscribers(data, target)\n", "issue": "SSM - labeling more than 10 parameters does not work right\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nAdding and labeling more than 10 parameters only labels (or only returns) the first 10 labeled parameters.\r\n\r\n### Expected Behavior\r\n\r\nAdding and labeling more than 10 parameters should add & then return all the parameters.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., 
`bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\ndocker-compose.yml\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n```\r\nparams = [ ... ] # create a list of **more** than 10 parameters ( Name + Value)\r\nfor param of params:\r\n # add the parameter (returns 200)\r\n res = ssm.put_parameter(Name=param[\"Name\"], Value=param[\"Value], Type='String')\r\n\r\n # label the returned version with 'latest' (returns 200)\r\n label_res = ssm.label_parameter_version(Name=path, ParameterVersion=res['Version'], Labels=['latest'])\r\n\r\n # get the added parameter (returns the parameter successfully)\r\n prm = ssm.get_parameter(Name=path)\r\n\r\n# get all parameters with the 'latest' label - this returns the first 10 parameters (although `MaxResults` is 100), and also doesn't return a `NextToken` property to iterate through the next batch of parameters.\r\nps = ssm.get_parameters_by_path(Path=base_path,\r\n ParameterFilters=[ { 'Key': 'Label', 'Values': [ 'latest' ] } ],\r\n MaxResults=100) \r\n\r\n```\r\n\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: Windows 10\r\n- LocalStack: 0.12.17.5\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\n_No response_\n", "before_files": [{"content": "import json\nimport logging\nimport time\nimport traceback\n\nfrom requests.models import Request\n\nfrom localstack.utils.aws import aws_stack\nfrom localstack.utils.bootstrap import is_api_enabled\nfrom localstack.utils.common import to_str\nfrom localstack.utils.persistence import PersistingProxyListener\n\n# logger\nLOG = logging.getLogger(__name__)\n\nACTION_PUT_PARAM = \"AmazonSSM.PutParameter\"\nACTION_GET_PARAM = \"AmazonSSM.GetParameter\"\nACTION_GET_PARAMS = \"AmazonSSM.GetParameters\"\nACTION_DELETE_PARAM = \"AmazonSSM.DeleteParameter\"\nACTION_GET_PARAMS_BY_PATH = \"AmazonSSM.GetParametersByPath\"\nACTION_LABEL_PARAM_VERSION = \"AmazonSSM.LabelParameterVersion\"\n\n# maps SSM target header names to operation names used in EventBridge event notifications\nEVENT_BRIDGE_OPERATIONS = {\n ACTION_PUT_PARAM: \"Create\",\n ACTION_DELETE_PARAM: \"Delete\",\n ACTION_LABEL_PARAM_VERSION: \"LabelParameterVersion\",\n}\n\n\ndef normalize_name(param_name):\n param_name = param_name.strip(\"/\")\n param_name = param_name.replace(\"//\", \"/\")\n if \"/\" in param_name:\n param_name = \"/%s\" % param_name\n return param_name\n\n\ndef get_secrets_information(name, resource_name):\n client = aws_stack.connect_to_service(\"secretsmanager\")\n try:\n secret_info = client.get_secret_value(SecretId=resource_name)\n\n del secret_info[\"ResponseMetadata\"]\n created_date_timestamp = time.mktime(secret_info[\"CreatedDate\"].timetuple())\n secret_info[\"CreatedDate\"] = created_date_timestamp\n result = {\n \"Parameter\": {\n \"SourceResult\": json.dumps(secret_info, default=str),\n \"Name\": name,\n \"Value\": secret_info.get(\"SecretString\"),\n \"Type\": \"SecureString\",\n \"LastModifiedDate\": created_date_timestamp,\n }\n }\n return result\n except client.exceptions.ResourceNotFoundException:\n return None\n\n\ndef has_secrets(names):\n for name in names:\n if name.startswith(\"/aws/reference/secretsmanager\"):\n return True\n\n\ndef get_params_and_secrets(names):\n ssm_client = aws_stack.connect_to_service(\"ssm\")\n result = {\"Parameters\": [], \"InvalidParameters\": []}\n secrets_prefix = \"/aws/reference/secretsmanager\"\n\n for name in names:\n if name.startswith(secrets_prefix):\n secret = get_secrets_information(name, name[len(secrets_prefix) + 1 
:])\n if secret is not None:\n secret = secret[\"Parameter\"]\n result[\"Parameters\"].append(secret)\n else:\n result[\"InvalidParameters\"].append(name)\n else:\n try:\n param = ssm_client.get_parameter(Name=name)\n param[\"Parameter\"][\"LastModifiedDate\"] = time.mktime(\n param[\"Parameter\"][\"LastModifiedDate\"].timetuple()\n )\n result[\"Parameters\"].append(param[\"Parameter\"])\n except ssm_client.exceptions.ParameterNotFound:\n result[\"InvalidParameters\"].append(name)\n\n return result\n\n\ndef get_params_by_path_with_labels(\n path=\"\", param_filters=None, labels_to_filter=None, recursive=False\n):\n ssm_client = aws_stack.connect_to_service(\"ssm\")\n result = {\"Parameters\": []}\n filters = [{\"Key\": \"Path\", \"Values\": [path]}]\n filters.extend(param_filters)\n if recursive:\n filters[0][\"Option\"] = \"Recursive\"\n\n def filter_by_label(param, labels):\n for label in param[\"Labels\"]:\n if label in labels:\n return param\n\n try:\n # Get all the params in the path\n params_in_path = ssm_client.describe_parameters(ParameterFilters=filters)[\"Parameters\"]\n\n # Get parameter with all its labels (for all the parameters in params_in_path)\n # Labels of the parameters can be obtained by calling get_parameter_history with parameter name\n all_params = []\n for params in params_in_path:\n all_params.extend(ssm_client.get_parameter_history(Name=params[\"Name\"])[\"Parameters\"])\n\n # Filter the params with matched labels\n filtered_params = list(\n filter(\n lambda param: filter_by_label(param=param, labels=labels_to_filter),\n all_params,\n )\n )\n\n # Get details of the filtered params to return\n # This step is needed because get_parameter_history doesn't return parameter's ARN\n details_of_filtered_params = list(\n map(\n lambda param: ssm_client.get_parameter(Name=param[\"Name\"])[\"Parameter\"],\n filtered_params,\n )\n )\n result[\"Parameters\"].extend(details_of_filtered_params)\n except Exception as e:\n LOG.info(\n \"Unable to get SSM parameters by path and filter by labels : %s %s\"\n % (e, traceback.format_exc())\n )\n raise e\n return result\n\n\ndef notify_event_subscribers(data, target_header):\n \"\"\"Publish an EventBridge event to notify subscribers of changes.\"\"\"\n if not is_api_enabled(\"events\"):\n return\n events = aws_stack.connect_to_service(\"events\")\n operation = EVENT_BRIDGE_OPERATIONS.get(target_header)\n if not operation:\n LOG.warning(\n \"Unexpected target header '%s' when sending EventBridge notification from SSM\",\n target_header,\n )\n detail = {\"name\": data[\"Name\"], \"operation\": operation}\n event = {\n \"Source\": \"aws.ssm\",\n \"Detail\": json.dumps(detail),\n \"DetailType\": \"Parameter Store Change\",\n }\n events.put_events(Entries=[event])\n\n\nclass ProxyListenerSSM(PersistingProxyListener):\n def api_name(self):\n return \"ssm\"\n\n def forward_request(self, method, path, data, headers):\n if method == \"OPTIONS\":\n return 200\n\n target = headers.get(\"X-Amz-Target\")\n data_orig = data\n if method == \"POST\" and target:\n data = json.loads(to_str(data))\n if target == ACTION_GET_PARAMS:\n names = data[\"Names\"] = data.get(\"Names\") or []\n if has_secrets(names):\n return get_params_and_secrets(names)\n else:\n for i in range(len(names)):\n names[i] = normalize_name(names[i])\n elif target in [ACTION_PUT_PARAM, ACTION_GET_PARAM]:\n name = data.get(\"Name\") or \"\"\n data[\"Name\"] = normalize_name(name)\n if target == ACTION_GET_PARAM:\n details = name.split(\"/\")\n if len(details) > 4:\n service = 
details[3]\n if service == \"secretsmanager\":\n resource_name = \"/\".join(details[4:])\n secret = get_secrets_information(name, resource_name)\n if secret is not None:\n return secret\n elif target == ACTION_GET_PARAMS_BY_PATH and data.get(\"ParameterFilters\"):\n params_filters = data.get(\"ParameterFilters\") or []\n labels = []\n for filter in params_filters:\n if filter[\"Key\"] == \"Label\":\n labels = filter[\"Values\"]\n params_filters.remove(filter)\n if labels:\n path = data.get(\"Path\")\n recursive = data.get(\"Recursive\") or False\n return get_params_by_path_with_labels(\n path=path,\n param_filters=params_filters,\n labels_to_filter=labels,\n recursive=recursive,\n )\n # send event notifications\n if target in EVENT_BRIDGE_OPERATIONS:\n notify_event_subscribers(data, target)\n # forward request\n data = json.dumps(data)\n if data != data_orig:\n return Request(data=data, headers=headers, method=method)\n\n return True\n\n\n# instantiate listener\nUPDATE_SSM = ProxyListenerSSM()\n", "path": "localstack/services/ssm/ssm_listener.py"}], "after_files": [{"content": "import json\nimport logging\nimport time\n\nfrom requests.models import Request\n\nfrom localstack.utils.aws import aws_stack\nfrom localstack.utils.bootstrap import is_api_enabled\nfrom localstack.utils.common import to_str\nfrom localstack.utils.persistence import PersistingProxyListener\n\n# logger\nLOG = logging.getLogger(__name__)\n\nACTION_PUT_PARAM = \"AmazonSSM.PutParameter\"\nACTION_GET_PARAM = \"AmazonSSM.GetParameter\"\nACTION_GET_PARAMS = \"AmazonSSM.GetParameters\"\nACTION_DELETE_PARAM = \"AmazonSSM.DeleteParameter\"\nACTION_GET_PARAMS_BY_PATH = \"AmazonSSM.GetParametersByPath\"\nACTION_LABEL_PARAM_VERSION = \"AmazonSSM.LabelParameterVersion\"\n\n# maps SSM target header names to operation names used in EventBridge event notifications\nEVENT_BRIDGE_OPERATIONS = {\n ACTION_PUT_PARAM: \"Create\",\n ACTION_DELETE_PARAM: \"Delete\",\n ACTION_LABEL_PARAM_VERSION: \"LabelParameterVersion\",\n}\n\n\ndef normalize_name(param_name):\n param_name = param_name.strip(\"/\")\n param_name = param_name.replace(\"//\", \"/\")\n if \"/\" in param_name:\n param_name = \"/%s\" % param_name\n return param_name\n\n\ndef get_secrets_information(name, resource_name):\n client = aws_stack.connect_to_service(\"secretsmanager\")\n try:\n secret_info = client.get_secret_value(SecretId=resource_name)\n\n del secret_info[\"ResponseMetadata\"]\n created_date_timestamp = time.mktime(secret_info[\"CreatedDate\"].timetuple())\n secret_info[\"CreatedDate\"] = created_date_timestamp\n result = {\n \"Parameter\": {\n \"SourceResult\": json.dumps(secret_info, default=str),\n \"Name\": name,\n \"Value\": secret_info.get(\"SecretString\"),\n \"Type\": \"SecureString\",\n \"LastModifiedDate\": created_date_timestamp,\n }\n }\n return result\n except client.exceptions.ResourceNotFoundException:\n return None\n\n\ndef has_secrets(names):\n for name in names:\n if name.startswith(\"/aws/reference/secretsmanager\"):\n return True\n\n\ndef get_params_and_secrets(names):\n ssm_client = aws_stack.connect_to_service(\"ssm\")\n result = {\"Parameters\": [], \"InvalidParameters\": []}\n secrets_prefix = \"/aws/reference/secretsmanager\"\n\n for name in names:\n if name.startswith(secrets_prefix):\n secret = get_secrets_information(name, name[len(secrets_prefix) + 1 :])\n if secret is not None:\n secret = secret[\"Parameter\"]\n result[\"Parameters\"].append(secret)\n else:\n result[\"InvalidParameters\"].append(name)\n else:\n try:\n param = 
ssm_client.get_parameter(Name=name)\n param[\"Parameter\"][\"LastModifiedDate\"] = time.mktime(\n param[\"Parameter\"][\"LastModifiedDate\"].timetuple()\n )\n result[\"Parameters\"].append(param[\"Parameter\"])\n except ssm_client.exceptions.ParameterNotFound:\n result[\"InvalidParameters\"].append(name)\n\n return result\n\n\ndef notify_event_subscribers(data, target_header):\n \"\"\"Publish an EventBridge event to notify subscribers of changes.\"\"\"\n if not is_api_enabled(\"events\"):\n return\n events = aws_stack.connect_to_service(\"events\")\n operation = EVENT_BRIDGE_OPERATIONS.get(target_header)\n if not operation:\n LOG.warning(\n \"Unexpected target header '%s' when sending EventBridge notification from SSM\",\n target_header,\n )\n detail = {\"name\": data[\"Name\"], \"operation\": operation}\n event = {\n \"Source\": \"aws.ssm\",\n \"Detail\": json.dumps(detail),\n \"DetailType\": \"Parameter Store Change\",\n }\n events.put_events(Entries=[event])\n\n\nclass ProxyListenerSSM(PersistingProxyListener):\n def api_name(self):\n return \"ssm\"\n\n def forward_request(self, method, path, data, headers):\n if method == \"OPTIONS\":\n return 200\n\n target = headers.get(\"X-Amz-Target\")\n data_orig = data\n if method == \"POST\" and target:\n data = json.loads(to_str(data))\n if target == ACTION_GET_PARAMS:\n names = data[\"Names\"] = data.get(\"Names\") or []\n if has_secrets(names):\n return get_params_and_secrets(names)\n else:\n for i in range(len(names)):\n names[i] = normalize_name(names[i])\n elif target in [ACTION_PUT_PARAM, ACTION_GET_PARAM]:\n name = data.get(\"Name\") or \"\"\n data[\"Name\"] = normalize_name(name)\n if target == ACTION_GET_PARAM:\n details = name.split(\"/\")\n if len(details) > 4:\n service = details[3]\n if service == \"secretsmanager\":\n resource_name = \"/\".join(details[4:])\n secret = get_secrets_information(name, resource_name)\n if secret is not None:\n return secret\n\n # send event notifications\n if target in EVENT_BRIDGE_OPERATIONS:\n notify_event_subscribers(data, target)\n # forward request\n data = json.dumps(data)\n if data != data_orig:\n return Request(data=data, headers=headers, method=method)\n\n return True\n\n\n# instantiate listener\nUPDATE_SSM = ProxyListenerSSM()\n", "path": "localstack/services/ssm/ssm_listener.py"}]} | 2,866 | 750 |
gh_patches_debug_578 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-537 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Radiology DataLoader takes up a *lot* memory during certain conditions
**Describe the bug**
During sanity checking of subjects, the queue construction seems to take up a lot of memory.
**To Reproduce**
Steps to reproduce the behavior:
1. Have a ridiculous number of subjects on a small machine (e.g., 10k on a machine with 16G RAM)
2. Start training on rad mode
4. See error sometime during/after queue construction:
```bash
## last message
Constructing queue for train data: 100%|██████████| 8681/8681 [07:57<00:00, 18.19it/s]
## failure with message related to exceeded RAM usage
```
**Expected behavior**
There should not be any failure at this stage.
**Screenshots**
N.A>
**GaNDLF Version**
<!-- Put the output of the following command:
python -c 'import GANDLF as g;print(g.__version__)'
-->
0.0.16-dev
**Desktop (please complete the following information):**
CentOS 7
**Additional context**
N.A.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 """The setup script."""
4
5
6 import os
7 from setuptools import setup, find_packages
8 from setuptools.command.install import install
9 from setuptools.command.develop import develop
10 from setuptools.command.egg_info import egg_info
11
12 with open("README.md") as readme_file:
13 readme = readme_file.read()
14
15
16 def git_submodule_update():
17 ## submodule update
18 os.system("git submodule update --init --recursive")
19
20
21 class CustomInstallCommand(install):
22 def run(self):
23 install.run(self)
24 git_submodule_update()
25
26
27 class CustomDevelopCommand(develop):
28 def run(self):
29 develop.run(self)
30 git_submodule_update()
31
32
33 class CustomEggInfoCommand(egg_info):
34 def run(self):
35 egg_info.run(self)
36 git_submodule_update()
37
38
39 # read version.py
40 import sys, re
41
42 try:
43 filepath = "GANDLF/version.py"
44 version_file = open(filepath)
45 (__version__,) = re.findall('__version__ = "(.*)"', version_file.read())
46
47 except Exception as error:
48 __version__ = "0.0.1"
49 sys.stderr.write("Warning: Could not open '%s' due %s\n" % (filepath, error))
50
51 requirements = [
52 "black",
53 "numpy==1.22.0",
54 "scipy",
55 "SimpleITK!=2.0.*",
56 "torchvision",
57 "tqdm",
58 "torchio==0.18.75",
59 "pandas",
60 "scikit-learn>=0.23.2",
61 "scikit-image>=0.19.1",
62 'pickle5>=0.0.11; python_version < "3.8.0"',
63 "setuptools",
64 "seaborn",
65 "pyyaml",
66 "tiffslide",
67 "matplotlib",
68 "requests>=2.25.0",
69 "pytest",
70 "coverage",
71 "pytest-cov",
72 "psutil",
73 "medcam",
74 "opencv-python",
75 "torchmetrics==0.5.1", # newer versions have changed api for f1 invocation
76 "OpenPatchMiner==0.1.8",
77 "zarr==2.10.3",
78 "pydicom",
79 "onnx",
80 "torchinfo==1.7.0",
81 "segmentation-models-pytorch==0.3.0",
82 "ACSConv==0.1.1",
83 ]
84
85 # pytorch doesn't have LTS support on OSX - https://github.com/mlcommons/GaNDLF/issues/389
86 if sys.platform == "darwin":
87 requirements.append("torch==1.11.0")
88 else:
89 requirements.append("torch==1.11.0")
90
91 setup(
92 name="GANDLF",
93 version=__version__,
94 author="MLCommons",
95 author_email="[email protected]",
96 python_requires=">=3.7",
97 packages=find_packages(),
98 cmdclass={ # this ensures git_submodule_update is called during install
99 "install": CustomInstallCommand,
100 "develop": CustomDevelopCommand,
101 "egg_info": CustomEggInfoCommand,
102 },
103 scripts=[
104 "gandlf_run",
105 "gandlf_constructCSV",
106 "gandlf_collectStats",
107 "gandlf_patchMiner",
108 "gandlf_preprocess",
109 "gandlf_anonymizer",
110 "gandlf_verifyInstall",
111 "gandlf_configGenerator",
112 ],
113 classifiers=[
114 "Development Status :: 3 - Alpha",
115 "Intended Audience :: Science/Research",
116 "License :: OSI Approved :: Apache Software License",
117 "Natural Language :: English",
118 "Operating System :: OS Independent",
119 "Programming Language :: Python :: 3.7",
120 "Programming Language :: Python :: 3.8",
121 "Programming Language :: Python :: 3.9",
122 "Programming Language :: Python :: 3.10",
123 "Topic :: Scientific/Engineering :: Medical Science Apps",
124 ],
125 description=(
126 "PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging."
127 ),
128 install_requires=requirements,
129 license="Apache-2.0",
130 long_description=readme,
131 long_description_content_type="text/markdown",
132 include_package_data=True,
133 keywords="semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch",
134 zip_safe=False,
135 )
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -53,6 +53,7 @@
"numpy==1.22.0",
"scipy",
"SimpleITK!=2.0.*",
+ "SimpleITK!=2.2.1", # https://github.com/mlcommons/GaNDLF/issues/536
"torchvision",
"tqdm",
"torchio==0.18.75",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -53,6 +53,7 @@\n \"numpy==1.22.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n+ \"SimpleITK!=2.2.1\", # https://github.com/mlcommons/GaNDLF/issues/536\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.75\",\n", "issue": "Radiology DataLoader takes up a *lot* memory during certain conditions\n**Describe the bug**\r\nDuring sanity checking of subjects, the queue construction seems to take up a lot of memory.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Have a ridiculous number of subjects on a small machine (e.g., 10k on a machine with 16G RAM)\r\n2. Start training on rad mode\r\n4. See error sometime during/after queue construction:\r\n```bash\r\n## last message\r\nConstructing queue for train data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 8681/8681 [07:57<00:00, 18.19it/s]\r\n## failure with message related to exceeded RAM usage\r\n```\r\n\r\n**Expected behavior**\r\nThere should not be any failure at this stage.\r\n\r\n**Screenshots**\r\nN.A>\r\n\r\n**GaNDLF Version**\r\n<!-- Put the output of the following command:\r\npython -c 'import GANDLF as g;print(g.__version__)'\r\n-->\r\n0.0.16-dev\r\n\r\n**Desktop (please complete the following information):**\r\nCentOS 7\r\n\r\n**Additional context**\r\nN.A.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\nwith open(\"README.md\") as readme_file:\n readme = readme_file.read()\n\n\ndef git_submodule_update():\n ## submodule update\n os.system(\"git submodule update --init --recursive\")\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n git_submodule_update()\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n git_submodule_update()\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n git_submodule_update()\n\n\n# read version.py\nimport sys, re\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\nrequirements = [\n \"black\",\n \"numpy==1.22.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.75\",\n \"pandas\",\n \"scikit-learn>=0.23.2\",\n \"scikit-image>=0.19.1\",\n 'pickle5>=0.0.11; python_version < \"3.8.0\"',\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics==0.5.1\", # newer versions have changed api for f1 invocation\n \"OpenPatchMiner==0.1.8\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n \"torchinfo==1.7.0\",\n \"segmentation-models-pytorch==0.3.0\",\n \"ACSConv==0.1.1\",\n]\n\n# pytorch doesn't have LTS support on OSX - https://github.com/mlcommons/GaNDLF/issues/389\nif sys.platform == \"darwin\":\n requirements.append(\"torch==1.11.0\")\nelse:\n requirements.append(\"torch==1.11.0\")\n\nsetup(\n name=\"GANDLF\",\n version=__version__,\n author=\"MLCommons\",\n author_email=\"[email protected]\",\n python_requires=\">=3.7\",\n 
packages=find_packages(),\n cmdclass={ # this ensures git_submodule_update is called during install\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n \"gandlf_configGenerator\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"Apache-2.0\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch\",\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\nwith open(\"README.md\") as readme_file:\n readme = readme_file.read()\n\n\ndef git_submodule_update():\n ## submodule update\n os.system(\"git submodule update --init --recursive\")\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n git_submodule_update()\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n git_submodule_update()\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n git_submodule_update()\n\n\n# read version.py\nimport sys, re\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\nrequirements = [\n \"black\",\n \"numpy==1.22.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"SimpleITK!=2.2.1\", # https://github.com/mlcommons/GaNDLF/issues/536\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.75\",\n \"pandas\",\n \"scikit-learn>=0.23.2\",\n \"scikit-image>=0.19.1\",\n 'pickle5>=0.0.11; python_version < \"3.8.0\"',\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics==0.5.1\", # newer versions have changed api for f1 invocation\n \"OpenPatchMiner==0.1.8\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n \"torchinfo==1.7.0\",\n \"segmentation-models-pytorch==0.3.0\",\n \"ACSConv==0.1.1\",\n]\n\n# pytorch doesn't have LTS support on OSX - https://github.com/mlcommons/GaNDLF/issues/389\nif sys.platform == \"darwin\":\n requirements.append(\"torch==1.11.0\")\nelse:\n 
requirements.append(\"torch==1.11.0\")\n\nsetup(\n name=\"GANDLF\",\n version=__version__,\n author=\"MLCommons\",\n author_email=\"[email protected]\",\n python_requires=\">=3.7\",\n packages=find_packages(),\n cmdclass={ # this ensures git_submodule_update is called during install\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n \"gandlf_configGenerator\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"Apache-2.0\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging, clinical-workflows, deep-learning, pytorch\",\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 1,787 | 114 |
gh_patches_debug_2803 | rasdani/github-patches | git_diff | sopel-irc__sopel-1261 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Results from currency conversion should be rounded to 2 decimals
At the moment, the results of a currency conversion query is reported as a float:
< mynick> .cur 100 SEK to USD
< mybot> 100.0 SEK (Swedish krona) = 12.202017114914426 USD (US dollar)
As far as I know, no currency reports more than 2 decimals, and some even less (JPY comes to mind). Rounding the result to 2 decimals will result in more readable output.
< mynick> .cur 100 SEK to USD
< mybot> 100.0 SEK (Swedish krona) = 12.20 USD (US dollar)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/currency.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2013 Elsie Powell, embolalia.com
3 # Licensed under the Eiffel Forum License 2
4 from __future__ import unicode_literals, absolute_import, print_function, division
5
6 import re
7
8 from requests import get
9 from sopel.module import commands, example, NOLIMIT
10
11 # The Canadian central bank has better exchange rate data than the Fed, the
12 # Bank of England, or the European Central Bank. Who knew?
13 base_url = 'http://www.bankofcanada.ca/stats/assets/rates_rss/noon/en_{}.xml'
14 regex = re.compile(r'''
15 (\d+(?:\.\d+)?) # Decimal number
16 \s*([a-zA-Z]{3}) # 3-letter currency code
17 \s+(?:in|as|of|to)\s+ # preposition
18 ([a-zA-Z]{3}) # 3-letter currency code
19 ''', re.VERBOSE)
20
21
22 def get_rate(code):
23 code = code.upper()
24 if code == 'CAD':
25 return 1, 'Canadian Dollar'
26 elif code == 'BTC':
27 btc_rate = get('https://apiv2.bitcoinaverage.com/indices/global/ticker/BTCCAD')
28 rates = btc_rate.json()
29 return 1 / rates['averages']['day'], 'Bitcoin—24hr average'
30
31 data = get("http://www.bankofcanada.ca/valet/observations/FX{}CAD/json".format(code))
32 name = data.json()['seriesDetail']['FX{}CAD'.format(code)]['description']
33 name = name.split(" to Canadian")[0]
34 json = data.json()['observations']
35 for element in reversed(json):
36 if 'v' in element['FX{}CAD'.format(code)]:
37 return 1 / float(element['FX{}CAD'.format(code)]['v']), name
38
39
40 @commands('cur', 'currency', 'exchange')
41 @example('.cur 20 EUR in USD')
42 def exchange(bot, trigger):
43 """Show the exchange rate between two currencies"""
44 if not trigger.group(2):
45 return bot.reply("No search term. An example: .cur 20 EUR in USD")
46 match = regex.match(trigger.group(2))
47 if not match:
48 # It's apologetic, because it's using Canadian data.
49 bot.reply("Sorry, I didn't understand the input.")
50 return NOLIMIT
51
52 amount, of, to = match.groups()
53 try:
54 amount = float(amount)
55 except:
56 bot.reply("Sorry, I didn't understand the input.")
57 display(bot, amount, of, to)
58
59
60 def display(bot, amount, of, to):
61 if not amount:
62 bot.reply("Zero is zero, no matter what country you're in.")
63 try:
64 of_rate, of_name = get_rate(of)
65 if not of_name:
66 bot.reply("Unknown currency: %s" % of)
67 return
68 to_rate, to_name = get_rate(to)
69 if not to_name:
70 bot.reply("Unknown currency: %s" % to)
71 return
72 except Exception:
73 bot.reply("Something went wrong while I was getting the exchange rate.")
74 return NOLIMIT
75
76 result = amount / of_rate * to_rate
77 bot.say("{} {} ({}) = {} {} ({})".format(amount, of.upper(), of_name,
78 result, to.upper(), to_name))
79
80
81 @commands('btc', 'bitcoin')
82 @example('.btc 20 EUR')
83 def bitcoin(bot, trigger):
84 #if 2 args, 1st is number and 2nd is currency. If 1 arg, it's either the number or the currency.
85 to = trigger.group(4)
86 amount = trigger.group(3)
87 if not to:
88 to = trigger.group(3) or 'USD'
89 amount = 1
90
91 try:
92 amount = float(amount)
93 except:
94 bot.reply("Sorry, I didn't understand the input.")
95 return NOLIMIT
96
97 display(bot, amount, 'BTC', to)
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sopel/modules/currency.py b/sopel/modules/currency.py
--- a/sopel/modules/currency.py
+++ b/sopel/modules/currency.py
@@ -74,7 +74,7 @@
return NOLIMIT
result = amount / of_rate * to_rate
- bot.say("{} {} ({}) = {} {} ({})".format(amount, of.upper(), of_name,
+ bot.say("{:.2f} {} ({}) = {:.2f} {} ({})".format(amount, of.upper(), of_name,
result, to.upper(), to_name))
| {"golden_diff": "diff --git a/sopel/modules/currency.py b/sopel/modules/currency.py\n--- a/sopel/modules/currency.py\n+++ b/sopel/modules/currency.py\n@@ -74,7 +74,7 @@\n return NOLIMIT\n \n result = amount / of_rate * to_rate\n- bot.say(\"{} {} ({}) = {} {} ({})\".format(amount, of.upper(), of_name,\n+ bot.say(\"{:.2f} {} ({}) = {:.2f} {} ({})\".format(amount, of.upper(), of_name,\n result, to.upper(), to_name))\n", "issue": "Results from currency conversion should be rounded to 2 decimals\nAt the moment, the results of a currency conversion query is reported as a float:\r\n\r\n < mynick> .cur 100 SEK to USD\r\n < mybot> 100.0 SEK (Swedish krona) = 12.202017114914426 USD (US dollar)\r\n\r\nAs far as I know, no currency reports more than 2 decimals, and some even less (JPY comes to mind). Rounding the result to 2 decimals will result in more readable output.\r\n\r\n < mynick> .cur 100 SEK to USD\r\n < mybot> 100.0 SEK (Swedish krona) = 12.20 USD (US dollar)\r\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2013 Elsie Powell, embolalia.com\n# Licensed under the Eiffel Forum License 2\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport re\n\nfrom requests import get\nfrom sopel.module import commands, example, NOLIMIT\n\n# The Canadian central bank has better exchange rate data than the Fed, the\n# Bank of England, or the European Central Bank. Who knew?\nbase_url = 'http://www.bankofcanada.ca/stats/assets/rates_rss/noon/en_{}.xml'\nregex = re.compile(r'''\n (\\d+(?:\\.\\d+)?) # Decimal number\n \\s*([a-zA-Z]{3}) # 3-letter currency code\n \\s+(?:in|as|of|to)\\s+ # preposition\n ([a-zA-Z]{3}) # 3-letter currency code\n ''', re.VERBOSE)\n\n\ndef get_rate(code):\n code = code.upper()\n if code == 'CAD':\n return 1, 'Canadian Dollar'\n elif code == 'BTC':\n btc_rate = get('https://apiv2.bitcoinaverage.com/indices/global/ticker/BTCCAD')\n rates = btc_rate.json()\n return 1 / rates['averages']['day'], 'Bitcoin\u201424hr average'\n\n data = get(\"http://www.bankofcanada.ca/valet/observations/FX{}CAD/json\".format(code))\n name = data.json()['seriesDetail']['FX{}CAD'.format(code)]['description']\n name = name.split(\" to Canadian\")[0]\n json = data.json()['observations']\n for element in reversed(json):\n if 'v' in element['FX{}CAD'.format(code)]:\n return 1 / float(element['FX{}CAD'.format(code)]['v']), name\n\n\n@commands('cur', 'currency', 'exchange')\n@example('.cur 20 EUR in USD')\ndef exchange(bot, trigger):\n \"\"\"Show the exchange rate between two currencies\"\"\"\n if not trigger.group(2):\n return bot.reply(\"No search term. 
An example: .cur 20 EUR in USD\")\n match = regex.match(trigger.group(2))\n if not match:\n # It's apologetic, because it's using Canadian data.\n bot.reply(\"Sorry, I didn't understand the input.\")\n return NOLIMIT\n\n amount, of, to = match.groups()\n try:\n amount = float(amount)\n except:\n bot.reply(\"Sorry, I didn't understand the input.\")\n display(bot, amount, of, to)\n\n\ndef display(bot, amount, of, to):\n if not amount:\n bot.reply(\"Zero is zero, no matter what country you're in.\")\n try:\n of_rate, of_name = get_rate(of)\n if not of_name:\n bot.reply(\"Unknown currency: %s\" % of)\n return\n to_rate, to_name = get_rate(to)\n if not to_name:\n bot.reply(\"Unknown currency: %s\" % to)\n return\n except Exception:\n bot.reply(\"Something went wrong while I was getting the exchange rate.\")\n return NOLIMIT\n\n result = amount / of_rate * to_rate\n bot.say(\"{} {} ({}) = {} {} ({})\".format(amount, of.upper(), of_name,\n result, to.upper(), to_name))\n\n\n@commands('btc', 'bitcoin')\n@example('.btc 20 EUR')\ndef bitcoin(bot, trigger):\n #if 2 args, 1st is number and 2nd is currency. If 1 arg, it's either the number or the currency.\n to = trigger.group(4)\n amount = trigger.group(3)\n if not to:\n to = trigger.group(3) or 'USD'\n amount = 1\n\n try:\n amount = float(amount)\n except:\n bot.reply(\"Sorry, I didn't understand the input.\")\n return NOLIMIT\n\n display(bot, amount, 'BTC', to)\n", "path": "sopel/modules/currency.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2013 Elsie Powell, embolalia.com\n# Licensed under the Eiffel Forum License 2\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport re\n\nfrom requests import get\nfrom sopel.module import commands, example, NOLIMIT\n\n# The Canadian central bank has better exchange rate data than the Fed, the\n# Bank of England, or the European Central Bank. Who knew?\nbase_url = 'http://www.bankofcanada.ca/stats/assets/rates_rss/noon/en_{}.xml'\nregex = re.compile(r'''\n (\\d+(?:\\.\\d+)?) # Decimal number\n \\s*([a-zA-Z]{3}) # 3-letter currency code\n \\s+(?:in|as|of|to)\\s+ # preposition\n ([a-zA-Z]{3}) # 3-letter currency code\n ''', re.VERBOSE)\n\n\ndef get_rate(code):\n code = code.upper()\n if code == 'CAD':\n return 1, 'Canadian Dollar'\n elif code == 'BTC':\n btc_rate = get('https://apiv2.bitcoinaverage.com/indices/global/ticker/BTCCAD')\n rates = btc_rate.json()\n return 1 / rates['averages']['day'], 'Bitcoin\u201424hr average'\n\n data = get(\"http://www.bankofcanada.ca/valet/observations/FX{}CAD/json\".format(code))\n name = data.json()['seriesDetail']['FX{}CAD'.format(code)]['description']\n name = name.split(\" to Canadian\")[0]\n json = data.json()['observations']\n for element in reversed(json):\n if 'v' in element['FX{}CAD'.format(code)]:\n return 1 / float(element['FX{}CAD'.format(code)]['v']), name\n\n\n@commands('cur', 'currency', 'exchange')\n@example('.cur 20 EUR in USD')\ndef exchange(bot, trigger):\n \"\"\"Show the exchange rate between two currencies\"\"\"\n if not trigger.group(2):\n return bot.reply(\"No search term. 
An example: .cur 20 EUR in USD\")\n match = regex.match(trigger.group(2))\n if not match:\n # It's apologetic, because it's using Canadian data.\n bot.reply(\"Sorry, I didn't understand the input.\")\n return NOLIMIT\n\n amount, of, to = match.groups()\n try:\n amount = float(amount)\n except:\n bot.reply(\"Sorry, I didn't understand the input.\")\n display(bot, amount, of, to)\n\n\ndef display(bot, amount, of, to):\n if not amount:\n bot.reply(\"Zero is zero, no matter what country you're in.\")\n try:\n of_rate, of_name = get_rate(of)\n if not of_name:\n bot.reply(\"Unknown currency: %s\" % of)\n return\n to_rate, to_name = get_rate(to)\n if not to_name:\n bot.reply(\"Unknown currency: %s\" % to)\n return\n except Exception:\n bot.reply(\"Something went wrong while I was getting the exchange rate.\")\n return NOLIMIT\n\n result = amount / of_rate * to_rate\n bot.say(\"{:.2f} {} ({}) = {:.2f} {} ({})\".format(amount, of.upper(), of_name,\n result, to.upper(), to_name))\n\n\n@commands('btc', 'bitcoin')\n@example('.btc 20 EUR')\ndef bitcoin(bot, trigger):\n #if 2 args, 1st is number and 2nd is currency. If 1 arg, it's either the number or the currency.\n to = trigger.group(4)\n amount = trigger.group(3)\n if not to:\n to = trigger.group(3) or 'USD'\n amount = 1\n\n try:\n amount = float(amount)\n except:\n bot.reply(\"Sorry, I didn't understand the input.\")\n return NOLIMIT\n\n display(bot, amount, 'BTC', to)\n", "path": "sopel/modules/currency.py"}]} | 1,511 | 131 |
gh_patches_debug_1569 | rasdani/github-patches | git_diff | voicepaw__so-vits-svc-fork-1139 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
scipy ImportError on multiple platforms
### Describe the bug
When attempting to Infer, the process is cancelled with the following error message:
[10:53:57] ERROR [10:53:57] cannot import name 'kaiser' from 'scipy.signal' gui.py:764
(C:\Users\Marcello\AppData\Roaming\so-vits-svc-fork\venv\lib\site-packages\scipy\signal\_
_init__.py)
Traceback (most recent call last):
File
"C:\Users\Marcello\AppData\Roaming\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fo
rk\gui.py", line 723, in main
from so_vits_svc_fork.inference.main import infer
File
"C:\Users\Marcello\AppData\Roaming\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fo
rk\inference\main.py", line 14, in <module>
from so_vits_svc_fork.inference.core import RealtimeVC, RealtimeVC2, Svc
File
"C:\Users\Marcello\AppData\Roaming\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fo
rk\inference\core.py", line 18, in <module>
from ..modules.synthesizers import SynthesizerTrn
File
"C:\Users\Marcello\AppData\Roaming\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fo
rk\modules\synthesizers.py", line 13, in <module>
from so_vits_svc_fork.modules.decoders.mb_istft import (
File
"C:\Users\Marcello\AppData\Roaming\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fo
rk\modules\decoders\mb_istft\__init__.py", line 1, in <module>
from ._generators import (
File
"C:\Users\Marcello\AppData\Roaming\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fo
rk\modules\decoders\mb_istft\_generators.py", line 11, in <module>
from ._pqmf import PQMF
File
"C:\Users\Marcello\AppData\Roaming\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fo
rk\modules\decoders\mb_istft\_pqmf.py", line 9, in <module>
from scipy.signal import kaiser
ImportError: cannot import name 'kaiser' from 'scipy.signal'
(C:\Users\Marcello\AppData\Roaming\so-vits-svc-fork\venv\lib\site-packages\scipy\signal\_
_init__.py)
### To Reproduce
1. Install so-vits-svc-fork using option 1, 2 or 3 natively or within Anaconda
2. Select model & config
3. Select input file, click on Infer
### Additional context
The same error message appears on every operating system I've tried.
### Version
4.1.58
### Platform
Windows 11 / MacOS Sonoma 14.1.1 / Anaconda3
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct.
### No Duplicate
- [X] I have checked existing issues to avoid duplicates.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/so_vits_svc_fork/modules/decoders/mb_istft/_pqmf.py`
Content:
```
1 # Copyright 2020 Tomoki Hayashi
2 # MIT License (https://opensource.org/licenses/MIT)
3
4 """Pseudo QMF modules."""
5
6 import numpy as np
7 import torch
8 import torch.nn.functional as F
9 from scipy.signal import kaiser
10
11
12 def design_prototype_filter(taps=62, cutoff_ratio=0.15, beta=9.0):
13 """Design prototype filter for PQMF.
14 This method is based on `A Kaiser window approach for the design of prototype
15 filters of cosine modulated filterbanks`_.
16 Args:
17 taps (int): The number of filter taps.
18 cutoff_ratio (float): Cut-off frequency ratio.
19 beta (float): Beta coefficient for kaiser window.
20 Returns:
21 ndarray: Impluse response of prototype filter (taps + 1,).
22 .. _`A Kaiser window approach for the design of prototype filters of cosine modulated filterbanks`:
23 https://ieeexplore.ieee.org/abstract/document/681427
24 """
25 # check the arguments are valid
26 assert taps % 2 == 0, "The number of taps mush be even number."
27 assert 0.0 < cutoff_ratio < 1.0, "Cutoff ratio must be > 0.0 and < 1.0."
28
29 # make initial filter
30 omega_c = np.pi * cutoff_ratio
31 with np.errstate(invalid="ignore"):
32 h_i = np.sin(omega_c * (np.arange(taps + 1) - 0.5 * taps)) / (
33 np.pi * (np.arange(taps + 1) - 0.5 * taps)
34 )
35 h_i[taps // 2] = np.cos(0) * cutoff_ratio # fix nan due to indeterminate form
36
37 # apply kaiser window
38 w = kaiser(taps + 1, beta)
39 h = h_i * w
40
41 return h
42
43
44 class PQMF(torch.nn.Module):
45 """PQMF module.
46 This module is based on `Near-perfect-reconstruction pseudo-QMF banks`_.
47 .. _`Near-perfect-reconstruction pseudo-QMF banks`:
48 https://ieeexplore.ieee.org/document/258122
49 """
50
51 def __init__(self, device, subbands=8, taps=62, cutoff_ratio=0.15, beta=9.0):
52 """Initialize PQMF module.
53 Args:
54 subbands (int): The number of subbands.
55 taps (int): The number of filter taps.
56 cutoff_ratio (float): Cut-off frequency ratio.
57 beta (float): Beta coefficient for kaiser window.
58 """
59 super().__init__()
60
61 # define filter coefficient
62 h_proto = design_prototype_filter(taps, cutoff_ratio, beta)
63 h_analysis = np.zeros((subbands, len(h_proto)))
64 h_synthesis = np.zeros((subbands, len(h_proto)))
65 for k in range(subbands):
66 h_analysis[k] = (
67 2
68 * h_proto
69 * np.cos(
70 (2 * k + 1)
71 * (np.pi / (2 * subbands))
72 * (np.arange(taps + 1) - ((taps - 1) / 2))
73 + (-1) ** k * np.pi / 4
74 )
75 )
76 h_synthesis[k] = (
77 2
78 * h_proto
79 * np.cos(
80 (2 * k + 1)
81 * (np.pi / (2 * subbands))
82 * (np.arange(taps + 1) - ((taps - 1) / 2))
83 - (-1) ** k * np.pi / 4
84 )
85 )
86
87 # convert to tensor
88 analysis_filter = torch.from_numpy(h_analysis).float().unsqueeze(1).to(device)
89 synthesis_filter = torch.from_numpy(h_synthesis).float().unsqueeze(0).to(device)
90
91 # register coefficients as buffer
92 self.register_buffer("analysis_filter", analysis_filter)
93 self.register_buffer("synthesis_filter", synthesis_filter)
94
95 # filter for downsampling & upsampling
96 updown_filter = torch.zeros((subbands, subbands, subbands)).float().to(device)
97 for k in range(subbands):
98 updown_filter[k, k, 0] = 1.0
99 self.register_buffer("updown_filter", updown_filter)
100 self.subbands = subbands
101
102 # keep padding info
103 self.pad_fn = torch.nn.ConstantPad1d(taps // 2, 0.0)
104
105 def analysis(self, x):
106 """Analysis with PQMF.
107 Args:
108 x (Tensor): Input tensor (B, 1, T).
109 Returns:
110 Tensor: Output tensor (B, subbands, T // subbands).
111 """
112 x = F.conv1d(self.pad_fn(x), self.analysis_filter)
113 return F.conv1d(x, self.updown_filter, stride=self.subbands)
114
115 def synthesis(self, x):
116 """Synthesis with PQMF.
117 Args:
118 x (Tensor): Input tensor (B, subbands, T // subbands).
119 Returns:
120 Tensor: Output tensor (B, 1, T).
121 """
122 # NOTE(kan-bayashi): Power will be dreased so here multiply by # subbands.
123 # Not sure this is the correct way, it is better to check again.
124 # TODO(kan-bayashi): Understand the reconstruction procedure
125 x = F.conv_transpose1d(
126 x, self.updown_filter * self.subbands, stride=self.subbands
127 )
128 return F.conv1d(self.pad_fn(x), self.synthesis_filter)
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/so_vits_svc_fork/modules/decoders/mb_istft/_pqmf.py b/src/so_vits_svc_fork/modules/decoders/mb_istft/_pqmf.py
--- a/src/so_vits_svc_fork/modules/decoders/mb_istft/_pqmf.py
+++ b/src/so_vits_svc_fork/modules/decoders/mb_istft/_pqmf.py
@@ -6,7 +6,7 @@
import numpy as np
import torch
import torch.nn.functional as F
-from scipy.signal import kaiser
+from scipy.signal.windows import kaiser
def design_prototype_filter(taps=62, cutoff_ratio=0.15, beta=9.0):
| {"golden_diff": "diff --git a/src/so_vits_svc_fork/modules/decoders/mb_istft/_pqmf.py b/src/so_vits_svc_fork/modules/decoders/mb_istft/_pqmf.py\n--- a/src/so_vits_svc_fork/modules/decoders/mb_istft/_pqmf.py\n+++ b/src/so_vits_svc_fork/modules/decoders/mb_istft/_pqmf.py\n@@ -6,7 +6,7 @@\n import numpy as np\n import torch\n import torch.nn.functional as F\n-from scipy.signal import kaiser\n+from scipy.signal.windows import kaiser\n \n \n def design_prototype_filter(taps=62, cutoff_ratio=0.15, beta=9.0):\n", "issue": "scipy ImportError on multiple platforms\n### Describe the bug\n\nWhen attempting to Infer, the process is cancelled with the following error message:\r\n\r\n[10:53:57] ERROR [10:53:57] cannot import name 'kaiser' from 'scipy.signal' gui.py:764\r\n (C:\\Users\\Marcello\\AppData\\Roaming\\so-vits-svc-fork\\venv\\lib\\site-packages\\scipy\\signal\\_\r\n _init__.py)\r\n Traceback (most recent call last):\r\n File\r\n \"C:\\Users\\Marcello\\AppData\\Roaming\\so-vits-svc-fork\\venv\\lib\\site-packages\\so_vits_svc_fo\r\n rk\\gui.py\", line 723, in main\r\n from so_vits_svc_fork.inference.main import infer\r\n File\r\n \"C:\\Users\\Marcello\\AppData\\Roaming\\so-vits-svc-fork\\venv\\lib\\site-packages\\so_vits_svc_fo\r\n rk\\inference\\main.py\", line 14, in <module>\r\n from so_vits_svc_fork.inference.core import RealtimeVC, RealtimeVC2, Svc\r\n File\r\n \"C:\\Users\\Marcello\\AppData\\Roaming\\so-vits-svc-fork\\venv\\lib\\site-packages\\so_vits_svc_fo\r\n rk\\inference\\core.py\", line 18, in <module>\r\n from ..modules.synthesizers import SynthesizerTrn\r\n File\r\n \"C:\\Users\\Marcello\\AppData\\Roaming\\so-vits-svc-fork\\venv\\lib\\site-packages\\so_vits_svc_fo\r\n rk\\modules\\synthesizers.py\", line 13, in <module>\r\n from so_vits_svc_fork.modules.decoders.mb_istft import (\r\n File\r\n \"C:\\Users\\Marcello\\AppData\\Roaming\\so-vits-svc-fork\\venv\\lib\\site-packages\\so_vits_svc_fo\r\n rk\\modules\\decoders\\mb_istft\\__init__.py\", line 1, in <module>\r\n from ._generators import (\r\n File\r\n \"C:\\Users\\Marcello\\AppData\\Roaming\\so-vits-svc-fork\\venv\\lib\\site-packages\\so_vits_svc_fo\r\n rk\\modules\\decoders\\mb_istft\\_generators.py\", line 11, in <module>\r\n from ._pqmf import PQMF\r\n File\r\n \"C:\\Users\\Marcello\\AppData\\Roaming\\so-vits-svc-fork\\venv\\lib\\site-packages\\so_vits_svc_fo\r\n rk\\modules\\decoders\\mb_istft\\_pqmf.py\", line 9, in <module>\r\n from scipy.signal import kaiser\r\n ImportError: cannot import name 'kaiser' from 'scipy.signal'\r\n (C:\\Users\\Marcello\\AppData\\Roaming\\so-vits-svc-fork\\venv\\lib\\site-packages\\scipy\\signal\\_\r\n _init__.py)\r\n\n\n### To Reproduce\n\n1. Install so-vits-svc-fork using option 1, 2 or 3 natively or within Anaconda\r\n2. Select model & config\r\n3. 
Select input file, click on Infer \n\n### Additional context\n\nThe same error message appears on every operating system I've tried.\n\n### Version\n\n4.1.58\n\n### Platform\n\nWindows 11 / MacOS Sonoma 14.1.1 / Anaconda3\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct.\n\n### No Duplicate\n\n- [X] I have checked existing issues to avoid duplicates.\n", "before_files": [{"content": "# Copyright 2020 Tomoki Hayashi\n# MIT License (https://opensource.org/licenses/MIT)\n\n\"\"\"Pseudo QMF modules.\"\"\"\n\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nfrom scipy.signal import kaiser\n\n\ndef design_prototype_filter(taps=62, cutoff_ratio=0.15, beta=9.0):\n \"\"\"Design prototype filter for PQMF.\n This method is based on `A Kaiser window approach for the design of prototype\n filters of cosine modulated filterbanks`_.\n Args:\n taps (int): The number of filter taps.\n cutoff_ratio (float): Cut-off frequency ratio.\n beta (float): Beta coefficient for kaiser window.\n Returns:\n ndarray: Impluse response of prototype filter (taps + 1,).\n .. _`A Kaiser window approach for the design of prototype filters of cosine modulated filterbanks`:\n https://ieeexplore.ieee.org/abstract/document/681427\n \"\"\"\n # check the arguments are valid\n assert taps % 2 == 0, \"The number of taps mush be even number.\"\n assert 0.0 < cutoff_ratio < 1.0, \"Cutoff ratio must be > 0.0 and < 1.0.\"\n\n # make initial filter\n omega_c = np.pi * cutoff_ratio\n with np.errstate(invalid=\"ignore\"):\n h_i = np.sin(omega_c * (np.arange(taps + 1) - 0.5 * taps)) / (\n np.pi * (np.arange(taps + 1) - 0.5 * taps)\n )\n h_i[taps // 2] = np.cos(0) * cutoff_ratio # fix nan due to indeterminate form\n\n # apply kaiser window\n w = kaiser(taps + 1, beta)\n h = h_i * w\n\n return h\n\n\nclass PQMF(torch.nn.Module):\n \"\"\"PQMF module.\n This module is based on `Near-perfect-reconstruction pseudo-QMF banks`_.\n .. 
_`Near-perfect-reconstruction pseudo-QMF banks`:\n https://ieeexplore.ieee.org/document/258122\n \"\"\"\n\n def __init__(self, device, subbands=8, taps=62, cutoff_ratio=0.15, beta=9.0):\n \"\"\"Initialize PQMF module.\n Args:\n subbands (int): The number of subbands.\n taps (int): The number of filter taps.\n cutoff_ratio (float): Cut-off frequency ratio.\n beta (float): Beta coefficient for kaiser window.\n \"\"\"\n super().__init__()\n\n # define filter coefficient\n h_proto = design_prototype_filter(taps, cutoff_ratio, beta)\n h_analysis = np.zeros((subbands, len(h_proto)))\n h_synthesis = np.zeros((subbands, len(h_proto)))\n for k in range(subbands):\n h_analysis[k] = (\n 2\n * h_proto\n * np.cos(\n (2 * k + 1)\n * (np.pi / (2 * subbands))\n * (np.arange(taps + 1) - ((taps - 1) / 2))\n + (-1) ** k * np.pi / 4\n )\n )\n h_synthesis[k] = (\n 2\n * h_proto\n * np.cos(\n (2 * k + 1)\n * (np.pi / (2 * subbands))\n * (np.arange(taps + 1) - ((taps - 1) / 2))\n - (-1) ** k * np.pi / 4\n )\n )\n\n # convert to tensor\n analysis_filter = torch.from_numpy(h_analysis).float().unsqueeze(1).to(device)\n synthesis_filter = torch.from_numpy(h_synthesis).float().unsqueeze(0).to(device)\n\n # register coefficients as buffer\n self.register_buffer(\"analysis_filter\", analysis_filter)\n self.register_buffer(\"synthesis_filter\", synthesis_filter)\n\n # filter for downsampling & upsampling\n updown_filter = torch.zeros((subbands, subbands, subbands)).float().to(device)\n for k in range(subbands):\n updown_filter[k, k, 0] = 1.0\n self.register_buffer(\"updown_filter\", updown_filter)\n self.subbands = subbands\n\n # keep padding info\n self.pad_fn = torch.nn.ConstantPad1d(taps // 2, 0.0)\n\n def analysis(self, x):\n \"\"\"Analysis with PQMF.\n Args:\n x (Tensor): Input tensor (B, 1, T).\n Returns:\n Tensor: Output tensor (B, subbands, T // subbands).\n \"\"\"\n x = F.conv1d(self.pad_fn(x), self.analysis_filter)\n return F.conv1d(x, self.updown_filter, stride=self.subbands)\n\n def synthesis(self, x):\n \"\"\"Synthesis with PQMF.\n Args:\n x (Tensor): Input tensor (B, subbands, T // subbands).\n Returns:\n Tensor: Output tensor (B, 1, T).\n \"\"\"\n # NOTE(kan-bayashi): Power will be dreased so here multiply by # subbands.\n # Not sure this is the correct way, it is better to check again.\n # TODO(kan-bayashi): Understand the reconstruction procedure\n x = F.conv_transpose1d(\n x, self.updown_filter * self.subbands, stride=self.subbands\n )\n return F.conv1d(self.pad_fn(x), self.synthesis_filter)\n", "path": "src/so_vits_svc_fork/modules/decoders/mb_istft/_pqmf.py"}], "after_files": [{"content": "# Copyright 2020 Tomoki Hayashi\n# MIT License (https://opensource.org/licenses/MIT)\n\n\"\"\"Pseudo QMF modules.\"\"\"\n\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nfrom scipy.signal.windows import kaiser\n\n\ndef design_prototype_filter(taps=62, cutoff_ratio=0.15, beta=9.0):\n \"\"\"Design prototype filter for PQMF.\n This method is based on `A Kaiser window approach for the design of prototype\n filters of cosine modulated filterbanks`_.\n Args:\n taps (int): The number of filter taps.\n cutoff_ratio (float): Cut-off frequency ratio.\n beta (float): Beta coefficient for kaiser window.\n Returns:\n ndarray: Impluse response of prototype filter (taps + 1,).\n .. 
_`A Kaiser window approach for the design of prototype filters of cosine modulated filterbanks`:\n https://ieeexplore.ieee.org/abstract/document/681427\n \"\"\"\n # check the arguments are valid\n assert taps % 2 == 0, \"The number of taps mush be even number.\"\n assert 0.0 < cutoff_ratio < 1.0, \"Cutoff ratio must be > 0.0 and < 1.0.\"\n\n # make initial filter\n omega_c = np.pi * cutoff_ratio\n with np.errstate(invalid=\"ignore\"):\n h_i = np.sin(omega_c * (np.arange(taps + 1) - 0.5 * taps)) / (\n np.pi * (np.arange(taps + 1) - 0.5 * taps)\n )\n h_i[taps // 2] = np.cos(0) * cutoff_ratio # fix nan due to indeterminate form\n\n # apply kaiser window\n w = kaiser(taps + 1, beta)\n h = h_i * w\n\n return h\n\n\nclass PQMF(torch.nn.Module):\n \"\"\"PQMF module.\n This module is based on `Near-perfect-reconstruction pseudo-QMF banks`_.\n .. _`Near-perfect-reconstruction pseudo-QMF banks`:\n https://ieeexplore.ieee.org/document/258122\n \"\"\"\n\n def __init__(self, device, subbands=8, taps=62, cutoff_ratio=0.15, beta=9.0):\n \"\"\"Initialize PQMF module.\n Args:\n subbands (int): The number of subbands.\n taps (int): The number of filter taps.\n cutoff_ratio (float): Cut-off frequency ratio.\n beta (float): Beta coefficient for kaiser window.\n \"\"\"\n super().__init__()\n\n # define filter coefficient\n h_proto = design_prototype_filter(taps, cutoff_ratio, beta)\n h_analysis = np.zeros((subbands, len(h_proto)))\n h_synthesis = np.zeros((subbands, len(h_proto)))\n for k in range(subbands):\n h_analysis[k] = (\n 2\n * h_proto\n * np.cos(\n (2 * k + 1)\n * (np.pi / (2 * subbands))\n * (np.arange(taps + 1) - ((taps - 1) / 2))\n + (-1) ** k * np.pi / 4\n )\n )\n h_synthesis[k] = (\n 2\n * h_proto\n * np.cos(\n (2 * k + 1)\n * (np.pi / (2 * subbands))\n * (np.arange(taps + 1) - ((taps - 1) / 2))\n - (-1) ** k * np.pi / 4\n )\n )\n\n # convert to tensor\n analysis_filter = torch.from_numpy(h_analysis).float().unsqueeze(1).to(device)\n synthesis_filter = torch.from_numpy(h_synthesis).float().unsqueeze(0).to(device)\n\n # register coefficients as buffer\n self.register_buffer(\"analysis_filter\", analysis_filter)\n self.register_buffer(\"synthesis_filter\", synthesis_filter)\n\n # filter for downsampling & upsampling\n updown_filter = torch.zeros((subbands, subbands, subbands)).float().to(device)\n for k in range(subbands):\n updown_filter[k, k, 0] = 1.0\n self.register_buffer(\"updown_filter\", updown_filter)\n self.subbands = subbands\n\n # keep padding info\n self.pad_fn = torch.nn.ConstantPad1d(taps // 2, 0.0)\n\n def analysis(self, x):\n \"\"\"Analysis with PQMF.\n Args:\n x (Tensor): Input tensor (B, 1, T).\n Returns:\n Tensor: Output tensor (B, subbands, T // subbands).\n \"\"\"\n x = F.conv1d(self.pad_fn(x), self.analysis_filter)\n return F.conv1d(x, self.updown_filter, stride=self.subbands)\n\n def synthesis(self, x):\n \"\"\"Synthesis with PQMF.\n Args:\n x (Tensor): Input tensor (B, subbands, T // subbands).\n Returns:\n Tensor: Output tensor (B, 1, T).\n \"\"\"\n # NOTE(kan-bayashi): Power will be dreased so here multiply by # subbands.\n # Not sure this is the correct way, it is better to check again.\n # TODO(kan-bayashi): Understand the reconstruction procedure\n x = F.conv_transpose1d(\n x, self.updown_filter * self.subbands, stride=self.subbands\n )\n return F.conv1d(self.pad_fn(x), self.synthesis_filter)\n", "path": "src/so_vits_svc_fork/modules/decoders/mb_istft/_pqmf.py"}]} | 2,632 | 163 |
gh_patches_debug_2768 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-2117 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
valid_token_issuers incorrectly populated during SingleTenant validation
### [Github issues](https://github.com/Microsoft/botbuilder-python) should be used for bugs and feature requests. Use [Stack Overflow](https://stackoverflow.com/questions/tagged/botframework) for general "how-to" questions.
## Version
4.15.0
## Describe the bug
Tenant specific token issuers (valid_token_issuers) are added as a nested element, instead of individual issuers.
## To Reproduce
Steps to reproduce the behavior:
Authenticating with app_type 'SingleTenant' will result in unauthorized requests.
## Expected behavior
Tenant specific token issues are treated as valid.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botframework-connector/botframework/connector/auth/skill_validation.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from datetime import timedelta
5 from typing import Dict, Union
6
7 import jwt
8
9 from .authentication_configuration import AuthenticationConfiguration
10 from .authentication_constants import AuthenticationConstants
11 from .claims_identity import ClaimsIdentity
12 from .credential_provider import CredentialProvider
13 from .government_constants import GovernmentConstants
14 from .verify_options import VerifyOptions
15 from .jwt_token_extractor import JwtTokenExtractor
16 from .channel_provider import ChannelProvider
17
18
19 class SkillValidation:
20 # TODO: Remove circular dependcies after C# refactor
21 # pylint: disable=import-outside-toplevel
22
23 """
24 Validates JWT tokens sent to and from a Skill.
25 """
26
27 @staticmethod
28 def is_skill_token(auth_header: str) -> bool:
29 """
30 Determines if a given Auth header is from from a skill to bot or bot to skill request.
31 :param auth_header: Bearer Token, in the "Bearer [Long String]" Format.
32 :return bool:
33 """
34 from .jwt_token_validation import JwtTokenValidation
35
36 if not JwtTokenValidation.is_valid_token_format(auth_header):
37 return False
38
39 bearer_token = auth_header.split(" ")[1]
40
41 # Parse the Big Long String into an actual token.
42 token = jwt.decode(bearer_token, options={"verify_signature": False})
43 return SkillValidation.is_skill_claim(token)
44
45 @staticmethod
46 def is_skill_claim(claims: Dict[str, object]) -> bool:
47 """
48 Checks if the given list of claims represents a skill.
49 :param claims: A dict of claims.
50 :return bool:
51 """
52 if (
53 claims.get(AuthenticationConstants.APP_ID_CLAIM, None)
54 == AuthenticationConstants.ANONYMOUS_SKILL_APP_ID
55 ):
56 return True
57
58 if AuthenticationConstants.VERSION_CLAIM not in claims:
59 return False
60
61 audience = claims.get(AuthenticationConstants.AUDIENCE_CLAIM)
62
63 # The audience is https://api.botframework.com and not an appId.
64 if (
65 not audience
66 or audience == AuthenticationConstants.TO_BOT_FROM_CHANNEL_TOKEN_ISSUER
67 ):
68 return False
69
70 from .jwt_token_validation import JwtTokenValidation
71
72 app_id = JwtTokenValidation.get_app_id_from_claims(claims)
73
74 if not app_id:
75 return False
76
77 # Skill claims must contain and app ID and the AppID must be different than the audience.
78 return app_id != audience
79
80 @staticmethod
81 async def authenticate_channel_token(
82 auth_header: str,
83 credentials: CredentialProvider,
84 channel_service_or_provider: Union[str, ChannelProvider],
85 channel_id: str,
86 auth_configuration: AuthenticationConfiguration,
87 ) -> ClaimsIdentity:
88 if auth_configuration is None:
89 raise Exception(
90 "auth_configuration cannot be None in SkillValidation.authenticate_channel_token"
91 )
92
93 from .jwt_token_validation import JwtTokenValidation
94
95 if isinstance(channel_service_or_provider, ChannelProvider):
96 is_gov = channel_service_or_provider.is_government()
97 else:
98 is_gov = JwtTokenValidation.is_government(channel_service_or_provider)
99
100 open_id_metadata_url = (
101 GovernmentConstants.TO_BOT_FROM_EMULATOR_OPENID_METADATA_URL
102 if is_gov
103 else AuthenticationConstants.TO_BOT_FROM_EMULATOR_OPENID_METADATA_URL
104 )
105
106 token_validation_parameters = VerifyOptions(
107 issuer=[
108 "https://sts.windows.net/d6d49420-f39b-4df7-a1dc-d59a935871db/", # v3.1, 1.0 token
109 "https://login.microsoftonline.com/d6d49420-f39b-4df7-a1dc-d59a935871db/v2.0", # v3.1, 2.0 token
110 "https://sts.windows.net/f8cdef31-a31e-4b4a-93e4-5f571e91255a/", # v3.2, 1.0 token
111 "https://login.microsoftonline.com/f8cdef31-a31e-4b4a-93e4-5f571e91255a/v2.0", # v3.2, 2.0 token
112 "https://sts.windows.net/cab8a31a-1906-4287-a0d8-4eef66b95f6e/", # US Gov, 1.0 token
113 "https://login.microsoftonline.us/cab8a31a-1906-4287-a0d8-4eef66b95f6e/v2.0", # US Gov, 2.0 token
114 "https://login.microsoftonline.us/f8cdef31-a31e-4b4a-93e4-5f571e91255a/", # US Gov, 1.0 token
115 "https://login.microsoftonline.us/f8cdef31-a31e-4b4a-93e4-5f571e91255a/v2.0", # US Gov, 2.0 token
116 ],
117 audience=None,
118 clock_tolerance=timedelta(minutes=5),
119 ignore_expiration=False,
120 )
121
122 if auth_configuration.valid_token_issuers:
123 token_validation_parameters.issuer.append(
124 auth_configuration.valid_token_issuers
125 )
126
127 token_extractor = JwtTokenExtractor(
128 token_validation_parameters,
129 open_id_metadata_url,
130 AuthenticationConstants.ALLOWED_SIGNING_ALGORITHMS,
131 )
132
133 identity = await token_extractor.get_identity_from_auth_header(
134 auth_header, channel_id, auth_configuration.required_endorsements
135 )
136 await SkillValidation._validate_identity(identity, credentials)
137
138 return identity
139
140 @staticmethod
141 def create_anonymous_skill_claim():
142 """
143 Creates a ClaimsIdentity for an anonymous (unauthenticated) skill.
144 :return ClaimsIdentity:
145 """
146 return ClaimsIdentity(
147 {
148 AuthenticationConstants.APP_ID_CLAIM: AuthenticationConstants.ANONYMOUS_SKILL_APP_ID
149 },
150 True,
151 AuthenticationConstants.ANONYMOUS_AUTH_TYPE,
152 )
153
154 @staticmethod
155 async def _validate_identity(
156 identity: ClaimsIdentity, credentials: CredentialProvider
157 ):
158 if not identity:
159 # No valid identity. Not Authorized.
160 raise PermissionError("Invalid Identity")
161
162 if not identity.is_authenticated:
163 # The token is in some way invalid. Not Authorized.
164 raise PermissionError("Token Not Authenticated")
165
166 version_claim = identity.claims.get(AuthenticationConstants.VERSION_CLAIM)
167 if not version_claim:
168 # No version claim
169 raise PermissionError(
170 f"'{AuthenticationConstants.VERSION_CLAIM}' claim is required on skill Tokens."
171 )
172
173 # Look for the "aud" claim, but only if issued from the Bot Framework
174 audience_claim = identity.claims.get(AuthenticationConstants.AUDIENCE_CLAIM)
175 if not audience_claim:
176 # Claim is not present or doesn't have a value. Not Authorized.
177 raise PermissionError(
178 f"'{AuthenticationConstants.AUDIENCE_CLAIM}' claim is required on skill Tokens."
179 )
180
181 if not await credentials.is_valid_appid(audience_claim):
182 # The AppId is not valid. Not Authorized.
183 raise PermissionError("Invalid audience.")
184
185 from .jwt_token_validation import JwtTokenValidation
186
187 app_id = JwtTokenValidation.get_app_id_from_claims(identity.claims)
188 if not app_id:
189 # Invalid AppId
190 raise PermissionError("Invalid app_id.")
191
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botframework-connector/botframework/connector/auth/skill_validation.py b/libraries/botframework-connector/botframework/connector/auth/skill_validation.py
--- a/libraries/botframework-connector/botframework/connector/auth/skill_validation.py
+++ b/libraries/botframework-connector/botframework/connector/auth/skill_validation.py
@@ -120,7 +120,7 @@
)
if auth_configuration.valid_token_issuers:
- token_validation_parameters.issuer.append(
+ token_validation_parameters.issuer.extend(
auth_configuration.valid_token_issuers
)
| {"golden_diff": "diff --git a/libraries/botframework-connector/botframework/connector/auth/skill_validation.py b/libraries/botframework-connector/botframework/connector/auth/skill_validation.py\n--- a/libraries/botframework-connector/botframework/connector/auth/skill_validation.py\n+++ b/libraries/botframework-connector/botframework/connector/auth/skill_validation.py\n@@ -120,7 +120,7 @@\n )\n \n if auth_configuration.valid_token_issuers:\n- token_validation_parameters.issuer.append(\n+ token_validation_parameters.issuer.extend(\n auth_configuration.valid_token_issuers\n )\n", "issue": "valid_token_issuers incorrectly populated during SingleTenant validation\n### [Github issues](https://github.com/Microsoft/botbuilder-python) should be used for bugs and feature requests. Use [Stack Overflow](https://stackoverflow.com/questions/tagged/botframework) for general \"how-to\" questions. \r\n\r\n## Version\r\n4.15.0\r\n\r\n## Describe the bug\r\nTenant specific token issuers (valid_token_issuers) are added as a nested element, instead of individual issuers.\r\n\r\n## To Reproduce\r\nSteps to reproduce the behavior:\r\nAuthenticating with app_type 'SingleTenant' will result in unauthorized requests.\r\n\r\n## Expected behavior\r\nTenant specific token issues are treated as valid.\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom datetime import timedelta\nfrom typing import Dict, Union\n\nimport jwt\n\nfrom .authentication_configuration import AuthenticationConfiguration\nfrom .authentication_constants import AuthenticationConstants\nfrom .claims_identity import ClaimsIdentity\nfrom .credential_provider import CredentialProvider\nfrom .government_constants import GovernmentConstants\nfrom .verify_options import VerifyOptions\nfrom .jwt_token_extractor import JwtTokenExtractor\nfrom .channel_provider import ChannelProvider\n\n\nclass SkillValidation:\n # TODO: Remove circular dependcies after C# refactor\n # pylint: disable=import-outside-toplevel\n\n \"\"\"\n Validates JWT tokens sent to and from a Skill.\n \"\"\"\n\n @staticmethod\n def is_skill_token(auth_header: str) -> bool:\n \"\"\"\n Determines if a given Auth header is from from a skill to bot or bot to skill request.\n :param auth_header: Bearer Token, in the \"Bearer [Long String]\" Format.\n :return bool:\n \"\"\"\n from .jwt_token_validation import JwtTokenValidation\n\n if not JwtTokenValidation.is_valid_token_format(auth_header):\n return False\n\n bearer_token = auth_header.split(\" \")[1]\n\n # Parse the Big Long String into an actual token.\n token = jwt.decode(bearer_token, options={\"verify_signature\": False})\n return SkillValidation.is_skill_claim(token)\n\n @staticmethod\n def is_skill_claim(claims: Dict[str, object]) -> bool:\n \"\"\"\n Checks if the given list of claims represents a skill.\n :param claims: A dict of claims.\n :return bool:\n \"\"\"\n if (\n claims.get(AuthenticationConstants.APP_ID_CLAIM, None)\n == AuthenticationConstants.ANONYMOUS_SKILL_APP_ID\n ):\n return True\n\n if AuthenticationConstants.VERSION_CLAIM not in claims:\n return False\n\n audience = claims.get(AuthenticationConstants.AUDIENCE_CLAIM)\n\n # The audience is https://api.botframework.com and not an appId.\n if (\n not audience\n or audience == AuthenticationConstants.TO_BOT_FROM_CHANNEL_TOKEN_ISSUER\n ):\n return False\n\n from .jwt_token_validation import JwtTokenValidation\n\n app_id = JwtTokenValidation.get_app_id_from_claims(claims)\n\n if not 
app_id:\n return False\n\n # Skill claims must contain and app ID and the AppID must be different than the audience.\n return app_id != audience\n\n @staticmethod\n async def authenticate_channel_token(\n auth_header: str,\n credentials: CredentialProvider,\n channel_service_or_provider: Union[str, ChannelProvider],\n channel_id: str,\n auth_configuration: AuthenticationConfiguration,\n ) -> ClaimsIdentity:\n if auth_configuration is None:\n raise Exception(\n \"auth_configuration cannot be None in SkillValidation.authenticate_channel_token\"\n )\n\n from .jwt_token_validation import JwtTokenValidation\n\n if isinstance(channel_service_or_provider, ChannelProvider):\n is_gov = channel_service_or_provider.is_government()\n else:\n is_gov = JwtTokenValidation.is_government(channel_service_or_provider)\n\n open_id_metadata_url = (\n GovernmentConstants.TO_BOT_FROM_EMULATOR_OPENID_METADATA_URL\n if is_gov\n else AuthenticationConstants.TO_BOT_FROM_EMULATOR_OPENID_METADATA_URL\n )\n\n token_validation_parameters = VerifyOptions(\n issuer=[\n \"https://sts.windows.net/d6d49420-f39b-4df7-a1dc-d59a935871db/\", # v3.1, 1.0 token\n \"https://login.microsoftonline.com/d6d49420-f39b-4df7-a1dc-d59a935871db/v2.0\", # v3.1, 2.0 token\n \"https://sts.windows.net/f8cdef31-a31e-4b4a-93e4-5f571e91255a/\", # v3.2, 1.0 token\n \"https://login.microsoftonline.com/f8cdef31-a31e-4b4a-93e4-5f571e91255a/v2.0\", # v3.2, 2.0 token\n \"https://sts.windows.net/cab8a31a-1906-4287-a0d8-4eef66b95f6e/\", # US Gov, 1.0 token\n \"https://login.microsoftonline.us/cab8a31a-1906-4287-a0d8-4eef66b95f6e/v2.0\", # US Gov, 2.0 token\n \"https://login.microsoftonline.us/f8cdef31-a31e-4b4a-93e4-5f571e91255a/\", # US Gov, 1.0 token\n \"https://login.microsoftonline.us/f8cdef31-a31e-4b4a-93e4-5f571e91255a/v2.0\", # US Gov, 2.0 token\n ],\n audience=None,\n clock_tolerance=timedelta(minutes=5),\n ignore_expiration=False,\n )\n\n if auth_configuration.valid_token_issuers:\n token_validation_parameters.issuer.append(\n auth_configuration.valid_token_issuers\n )\n\n token_extractor = JwtTokenExtractor(\n token_validation_parameters,\n open_id_metadata_url,\n AuthenticationConstants.ALLOWED_SIGNING_ALGORITHMS,\n )\n\n identity = await token_extractor.get_identity_from_auth_header(\n auth_header, channel_id, auth_configuration.required_endorsements\n )\n await SkillValidation._validate_identity(identity, credentials)\n\n return identity\n\n @staticmethod\n def create_anonymous_skill_claim():\n \"\"\"\n Creates a ClaimsIdentity for an anonymous (unauthenticated) skill.\n :return ClaimsIdentity:\n \"\"\"\n return ClaimsIdentity(\n {\n AuthenticationConstants.APP_ID_CLAIM: AuthenticationConstants.ANONYMOUS_SKILL_APP_ID\n },\n True,\n AuthenticationConstants.ANONYMOUS_AUTH_TYPE,\n )\n\n @staticmethod\n async def _validate_identity(\n identity: ClaimsIdentity, credentials: CredentialProvider\n ):\n if not identity:\n # No valid identity. Not Authorized.\n raise PermissionError(\"Invalid Identity\")\n\n if not identity.is_authenticated:\n # The token is in some way invalid. 
Not Authorized.\n raise PermissionError(\"Token Not Authenticated\")\n\n version_claim = identity.claims.get(AuthenticationConstants.VERSION_CLAIM)\n if not version_claim:\n # No version claim\n raise PermissionError(\n f\"'{AuthenticationConstants.VERSION_CLAIM}' claim is required on skill Tokens.\"\n )\n\n # Look for the \"aud\" claim, but only if issued from the Bot Framework\n audience_claim = identity.claims.get(AuthenticationConstants.AUDIENCE_CLAIM)\n if not audience_claim:\n # Claim is not present or doesn't have a value. Not Authorized.\n raise PermissionError(\n f\"'{AuthenticationConstants.AUDIENCE_CLAIM}' claim is required on skill Tokens.\"\n )\n\n if not await credentials.is_valid_appid(audience_claim):\n # The AppId is not valid. Not Authorized.\n raise PermissionError(\"Invalid audience.\")\n\n from .jwt_token_validation import JwtTokenValidation\n\n app_id = JwtTokenValidation.get_app_id_from_claims(identity.claims)\n if not app_id:\n # Invalid AppId\n raise PermissionError(\"Invalid app_id.\")\n", "path": "libraries/botframework-connector/botframework/connector/auth/skill_validation.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom datetime import timedelta\nfrom typing import Dict, Union\n\nimport jwt\n\nfrom .authentication_configuration import AuthenticationConfiguration\nfrom .authentication_constants import AuthenticationConstants\nfrom .claims_identity import ClaimsIdentity\nfrom .credential_provider import CredentialProvider\nfrom .government_constants import GovernmentConstants\nfrom .verify_options import VerifyOptions\nfrom .jwt_token_extractor import JwtTokenExtractor\nfrom .channel_provider import ChannelProvider\n\n\nclass SkillValidation:\n # TODO: Remove circular dependcies after C# refactor\n # pylint: disable=import-outside-toplevel\n\n \"\"\"\n Validates JWT tokens sent to and from a Skill.\n \"\"\"\n\n @staticmethod\n def is_skill_token(auth_header: str) -> bool:\n \"\"\"\n Determines if a given Auth header is from from a skill to bot or bot to skill request.\n :param auth_header: Bearer Token, in the \"Bearer [Long String]\" Format.\n :return bool:\n \"\"\"\n from .jwt_token_validation import JwtTokenValidation\n\n if not JwtTokenValidation.is_valid_token_format(auth_header):\n return False\n\n bearer_token = auth_header.split(\" \")[1]\n\n # Parse the Big Long String into an actual token.\n token = jwt.decode(bearer_token, options={\"verify_signature\": False})\n return SkillValidation.is_skill_claim(token)\n\n @staticmethod\n def is_skill_claim(claims: Dict[str, object]) -> bool:\n \"\"\"\n Checks if the given list of claims represents a skill.\n :param claims: A dict of claims.\n :return bool:\n \"\"\"\n if (\n claims.get(AuthenticationConstants.APP_ID_CLAIM, None)\n == AuthenticationConstants.ANONYMOUS_SKILL_APP_ID\n ):\n return True\n\n if AuthenticationConstants.VERSION_CLAIM not in claims:\n return False\n\n audience = claims.get(AuthenticationConstants.AUDIENCE_CLAIM)\n\n # The audience is https://api.botframework.com and not an appId.\n if (\n not audience\n or audience == AuthenticationConstants.TO_BOT_FROM_CHANNEL_TOKEN_ISSUER\n ):\n return False\n\n from .jwt_token_validation import JwtTokenValidation\n\n app_id = JwtTokenValidation.get_app_id_from_claims(claims)\n\n if not app_id:\n return False\n\n # Skill claims must contain and app ID and the AppID must be different than the audience.\n return app_id != audience\n\n @staticmethod\n async def 
authenticate_channel_token(\n auth_header: str,\n credentials: CredentialProvider,\n channel_service_or_provider: Union[str, ChannelProvider],\n channel_id: str,\n auth_configuration: AuthenticationConfiguration,\n ) -> ClaimsIdentity:\n if auth_configuration is None:\n raise Exception(\n \"auth_configuration cannot be None in SkillValidation.authenticate_channel_token\"\n )\n\n from .jwt_token_validation import JwtTokenValidation\n\n if isinstance(channel_service_or_provider, ChannelProvider):\n is_gov = channel_service_or_provider.is_government()\n else:\n is_gov = JwtTokenValidation.is_government(channel_service_or_provider)\n\n open_id_metadata_url = (\n GovernmentConstants.TO_BOT_FROM_EMULATOR_OPENID_METADATA_URL\n if is_gov\n else AuthenticationConstants.TO_BOT_FROM_EMULATOR_OPENID_METADATA_URL\n )\n\n token_validation_parameters = VerifyOptions(\n issuer=[\n \"https://sts.windows.net/d6d49420-f39b-4df7-a1dc-d59a935871db/\", # v3.1, 1.0 token\n \"https://login.microsoftonline.com/d6d49420-f39b-4df7-a1dc-d59a935871db/v2.0\", # v3.1, 2.0 token\n \"https://sts.windows.net/f8cdef31-a31e-4b4a-93e4-5f571e91255a/\", # v3.2, 1.0 token\n \"https://login.microsoftonline.com/f8cdef31-a31e-4b4a-93e4-5f571e91255a/v2.0\", # v3.2, 2.0 token\n \"https://sts.windows.net/cab8a31a-1906-4287-a0d8-4eef66b95f6e/\", # US Gov, 1.0 token\n \"https://login.microsoftonline.us/cab8a31a-1906-4287-a0d8-4eef66b95f6e/v2.0\", # US Gov, 2.0 token\n \"https://login.microsoftonline.us/f8cdef31-a31e-4b4a-93e4-5f571e91255a/\", # US Gov, 1.0 token\n \"https://login.microsoftonline.us/f8cdef31-a31e-4b4a-93e4-5f571e91255a/v2.0\", # US Gov, 2.0 token\n ],\n audience=None,\n clock_tolerance=timedelta(minutes=5),\n ignore_expiration=False,\n )\n\n if auth_configuration.valid_token_issuers:\n token_validation_parameters.issuer.extend(\n auth_configuration.valid_token_issuers\n )\n\n token_extractor = JwtTokenExtractor(\n token_validation_parameters,\n open_id_metadata_url,\n AuthenticationConstants.ALLOWED_SIGNING_ALGORITHMS,\n )\n\n identity = await token_extractor.get_identity_from_auth_header(\n auth_header, channel_id, auth_configuration.required_endorsements\n )\n await SkillValidation._validate_identity(identity, credentials)\n\n return identity\n\n @staticmethod\n def create_anonymous_skill_claim():\n \"\"\"\n Creates a ClaimsIdentity for an anonymous (unauthenticated) skill.\n :return ClaimsIdentity:\n \"\"\"\n return ClaimsIdentity(\n {\n AuthenticationConstants.APP_ID_CLAIM: AuthenticationConstants.ANONYMOUS_SKILL_APP_ID\n },\n True,\n AuthenticationConstants.ANONYMOUS_AUTH_TYPE,\n )\n\n @staticmethod\n async def _validate_identity(\n identity: ClaimsIdentity, credentials: CredentialProvider\n ):\n if not identity:\n # No valid identity. Not Authorized.\n raise PermissionError(\"Invalid Identity\")\n\n if not identity.is_authenticated:\n # The token is in some way invalid. Not Authorized.\n raise PermissionError(\"Token Not Authenticated\")\n\n version_claim = identity.claims.get(AuthenticationConstants.VERSION_CLAIM)\n if not version_claim:\n # No version claim\n raise PermissionError(\n f\"'{AuthenticationConstants.VERSION_CLAIM}' claim is required on skill Tokens.\"\n )\n\n # Look for the \"aud\" claim, but only if issued from the Bot Framework\n audience_claim = identity.claims.get(AuthenticationConstants.AUDIENCE_CLAIM)\n if not audience_claim:\n # Claim is not present or doesn't have a value. 
Not Authorized.\n raise PermissionError(\n f\"'{AuthenticationConstants.AUDIENCE_CLAIM}' claim is required on skill Tokens.\"\n )\n\n if not await credentials.is_valid_appid(audience_claim):\n # The AppId is not valid. Not Authorized.\n raise PermissionError(\"Invalid audience.\")\n\n from .jwt_token_validation import JwtTokenValidation\n\n app_id = JwtTokenValidation.get_app_id_from_claims(identity.claims)\n if not app_id:\n # Invalid AppId\n raise PermissionError(\"Invalid app_id.\")\n", "path": "libraries/botframework-connector/botframework/connector/auth/skill_validation.py"}]} | 2,558 | 136 |
gh_patches_debug_445 | rasdani/github-patches | git_diff | docker__docker-py-1156 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Requests v2.11.0 causes breakage
This is a known issue within requests and looks like it will be fixed in v2.11.1. Documenting for the benefit of anyone else who runs into this :).
After a recent pip upgrade I found that docker-py was passing along an error from requests:
File "/usr/local/lib/python2.7/dist-packages/docker/client.py", line 307, in _stream_raw_result
for out in response.iter_content(chunk_size=1, decode_unicode=True):
File "/usr/local/lib/python2.7/dist-packages/requests/utils.py", line 372, in stream_decode_response_unicode
raise UnicodeError("Unable to decode contents with encoding %s." % encoding)
UnicodeError: Unable to decode contents with encoding None.
This has already been reported to requests (https://github.com/kennethreitz/requests/issues/3481) and fixed (https://github.com/kennethreitz/requests/commit/d7f56ba9383575a6b7d361db0123a93c70a2b42f) for the next version.
If you are running into this issue, the easiest fix for now appears to be reverting to a pre 2.11 version of requests.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 from setuptools import setup
6
7
8 ROOT_DIR = os.path.dirname(__file__)
9 SOURCE_DIR = os.path.join(ROOT_DIR)
10
11 requirements = [
12 'requests >= 2.5.2',
13 'six >= 1.4.0',
14 'websocket-client >= 0.32.0',
15 ]
16
17 if sys.platform == 'win32':
18 requirements.append('pypiwin32 >= 219')
19
20 extras_require = {
21 ':python_version < "3.5"': 'backports.ssl_match_hostname >= 3.5',
22 ':python_version < "3.3"': 'ipaddress >= 1.0.16',
23 }
24
25 version = None
26 exec(open('docker/version.py').read())
27
28 with open('./test-requirements.txt') as test_reqs_txt:
29 test_requirements = [line for line in test_reqs_txt]
30
31
32 setup(
33 name="docker-py",
34 version=version,
35 description="Python client for Docker.",
36 url='https://github.com/docker/docker-py/',
37 packages=[
38 'docker', 'docker.api', 'docker.auth', 'docker.transport',
39 'docker.utils', 'docker.utils.ports', 'docker.ssladapter'
40 ],
41 install_requires=requirements,
42 tests_require=test_requirements,
43 extras_require=extras_require,
44 zip_safe=False,
45 test_suite='tests',
46 classifiers=[
47 'Development Status :: 4 - Beta',
48 'Environment :: Other Environment',
49 'Intended Audience :: Developers',
50 'Operating System :: OS Independent',
51 'Programming Language :: Python',
52 'Programming Language :: Python :: 2',
53 'Programming Language :: Python :: 2.6',
54 'Programming Language :: Python :: 2.7',
55 'Programming Language :: Python :: 3',
56 'Programming Language :: Python :: 3.3',
57 'Programming Language :: Python :: 3.4',
58 'Programming Language :: Python :: 3.5',
59 'Topic :: Utilities',
60 'License :: OSI Approved :: Apache Software License',
61 ],
62 )
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
SOURCE_DIR = os.path.join(ROOT_DIR)
requirements = [
- 'requests >= 2.5.2',
+ 'requests >= 2.5.2, < 2.11',
'six >= 1.4.0',
'websocket-client >= 0.32.0',
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n SOURCE_DIR = os.path.join(ROOT_DIR)\n \n requirements = [\n- 'requests >= 2.5.2',\n+ 'requests >= 2.5.2, < 2.11',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n ]\n", "issue": "Requests v2.11.0 causes breakage\nThis is a known issue within requests and looks like it will be fixed in v2.11.1. Documenting for the benefit of anyone else who runs into this :).\n\nAfter a recent pip upgrade I found that docker-py was passing along an error from requests:\n\nFile \"/usr/local/lib/python2.7/dist-packages/docker/client.py\", line 307, in _stream_raw_result\n for out in response.iter_content(chunk_size=1, decode_unicode=True):\n File \"/usr/local/lib/python2.7/dist-packages/requests/utils.py\", line 372, in stream_decode_response_unicode\n raise UnicodeError(\"Unable to decode contents with encoding %s.\" % encoding)\nUnicodeError: Unable to decode contents with encoding None.\n\nThis has already been reported to requests (https://github.com/kennethreitz/requests/issues/3481) and fixed (https://github.com/kennethreitz/requests/commit/d7f56ba9383575a6b7d361db0123a93c70a2b42f) for the next version.\n\nIf you are running into this issue, the easiest fix for now appears to be reverting to a pre 2.11 version of requests.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'requests >= 2.5.2',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n]\n\nif sys.platform == 'win32':\n requirements.append('pypiwin32 >= 219')\n\nextras_require = {\n ':python_version < \"3.5\"': 'backports.ssl_match_hostname >= 3.5',\n ':python_version < \"3.3\"': 'ipaddress >= 1.0.16',\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nsetup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n 'docker.utils', 'docker.utils.ports', 'docker.ssladapter'\n ],\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require=extras_require,\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'requests >= 2.5.2, < 2.11',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n]\n\nif sys.platform == 'win32':\n requirements.append('pypiwin32 >= 219')\n\nextras_require = {\n ':python_version < \"3.5\"': 'backports.ssl_match_hostname >= 3.5',\n ':python_version < \"3.3\"': 'ipaddress >= 
1.0.16',\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nsetup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n 'docker.utils', 'docker.utils.ports', 'docker.ssladapter'\n ],\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require=extras_require,\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n)\n", "path": "setup.py"}]} | 1,101 | 99 |
gh_patches_debug_50128 | rasdani/github-patches | git_diff | optuna__optuna-50 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`pfnopt.minimize` fails under `strorage=None` (default)
```python
import pfnopt
def obj(client):
x = client.sample_uniform('x', 0.1, 0.2)
return x
def main():
pfnopt.minimize(obj, n_trials=2)
if __name__ == '__main__':
main()
```
```
AttributeError: 'NoneType' object has no attribute 'get_study_uuid_from_id'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pfnopt/study.py`
Content:
```
1 import datetime
2 import multiprocessing
3 import multiprocessing.pool
4 from typing import Any # NOQA
5 from typing import Callable # NOQA
6 from typing import Dict # NOQA
7 from typing import Iterable # NOQA
8 from typing import List # NOQA
9 from typing import Optional # NOQA
10
11 from pfnopt import client as client_module
12 from pfnopt import pruners
13 from pfnopt import samplers
14 from pfnopt import storages
15 from pfnopt import trial # NOQA
16
17 ObjectiveFuncType = Callable[[client_module.BaseClient], float]
18
19
20 class Study(object):
21
22 def __init__(
23 self,
24 study_uuid, # type: str
25 storage, # type: storages.BaseStorage
26 sampler=None, # type: samplers.BaseSampler
27 pruner=None, # type: pruners.BasePruner
28 ):
29 # type: (...) -> None
30
31 self.study_uuid = study_uuid
32 self.storage = storage
33 self.sampler = sampler or samplers.TPESampler()
34 self.pruner = pruner or pruners.MedianPruner()
35
36 self.study_id = storage.get_study_id_from_uuid(study_uuid)
37
38 @property
39 def best_params(self):
40 # type: () -> Dict[str, Any]
41
42 return self.best_trial.params
43
44 @property
45 def best_value(self):
46 # type: () -> float
47
48 return self.best_trial.value
49
50 @property
51 def best_trial(self):
52 # type: () -> trial.Trial
53
54 return self.storage.get_best_trial(self.study_id)
55
56 @property
57 def trials(self):
58 # type: () -> List[trial.Trial]
59
60 return self.storage.get_all_trials(self.study_id)
61
62 def run(self, func, n_trials=None, timeout_seconds=None, n_jobs=1):
63 # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None
64
65 if n_jobs == 1:
66 self._run_sequential(func, n_trials, timeout_seconds)
67 else:
68 self._run_parallel(func, n_trials, timeout_seconds, n_jobs)
69
70 def _run_sequential(self, func, n_trials, timeout_seconds):
71 # type: (ObjectiveFuncType, Optional[int], Optional[float]) -> None
72
73 i_trial = 0
74 time_start = datetime.datetime.now()
75 while True:
76 if n_trials is not None:
77 if i_trial >= n_trials:
78 break
79 i_trial += 1
80
81 if timeout_seconds is not None:
82 elapsed_seconds = (datetime.datetime.now() - time_start).total_seconds()
83 if elapsed_seconds >= timeout_seconds:
84 break
85
86 trial_id = self.storage.create_new_trial_id(self.study_id)
87 client = client_module.LocalClient(self, trial_id)
88 result = func(client)
89 client.complete(result)
90
91 def _run_parallel(self, func, n_trials, timeout_seconds, n_jobs):
92 # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None
93
94 if isinstance(self.storage, storages.RDBStorage):
95 raise TypeError('Parallel run with RDBStorage is not supported.')
96
97 if n_jobs == -1:
98 n_jobs = multiprocessing.cpu_count()
99
100 pool = multiprocessing.pool.ThreadPool(n_jobs) # type: ignore
101
102 def f(_):
103 trial_id = self.storage.create_new_trial_id(self.study_id)
104 client = client_module.LocalClient(self, trial_id)
105 result = func(client)
106 client.complete(result)
107
108 self.start_datetime = datetime.datetime.now()
109
110 if n_trials is not None:
111 ite = range(n_trials) # type: Iterable[int]
112 else:
113 ite = iter(int, 1) # Infinite iterator
114
115 imap_ite = pool.imap(f, ite, chunksize=1)
116 while True:
117 if timeout_seconds is None:
118 to = None
119 else:
120 elapsed_timedelta = datetime.datetime.now() - self.start_datetime
121 elapsed_seconds = elapsed_timedelta.total_seconds()
122 to = (timeout_seconds - elapsed_seconds)
123
124 try:
125 imap_ite.next(timeout=to) # type: ignore
126 except (StopIteration, multiprocessing.TimeoutError): # type: ignore
127 break
128
129 pool.terminate()
130
131
132 def minimize(
133 func, # type: ObjectiveFuncType
134 n_trials=None, # type: Optional[int]
135 timeout_seconds=None, # type: Optional[float]
136 n_jobs=1, # type: int
137 storage=None, # type: storages.BaseStorage
138 sampler=None, # type: samplers.BaseSampler
139 pruner=None, # type: pruners.BasePruner
140 study=None, # type: Study
141 ):
142 # type: (...) -> Study
143
144 study = study or create_new_study(storage=storage, sampler=sampler, pruner=pruner)
145 study.run(func, n_trials, timeout_seconds, n_jobs)
146 return study
147
148
149 # TODO(akiba): implement me
150 def maximize():
151 raise NotImplementedError
152
153
154 def create_new_study(storage, sampler=None, pruner=None):
155 # type: (storages.BaseStorage, samplers.BaseSampler, pruners.BasePruner) -> Study
156 study_uuid = storage.get_study_uuid_from_id(storage.create_new_study_id())
157 return Study(study_uuid=study_uuid, storage=storage, sampler=sampler, pruner=pruner)
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pfnopt/study.py b/pfnopt/study.py
--- a/pfnopt/study.py
+++ b/pfnopt/study.py
@@ -140,7 +140,7 @@
study=None, # type: Study
):
# type: (...) -> Study
-
+ storage = storage or storages.InMemoryStorage()
study = study or create_new_study(storage=storage, sampler=sampler, pruner=pruner)
study.run(func, n_trials, timeout_seconds, n_jobs)
return study
| {"golden_diff": "diff --git a/pfnopt/study.py b/pfnopt/study.py\n--- a/pfnopt/study.py\n+++ b/pfnopt/study.py\n@@ -140,7 +140,7 @@\n study=None, # type: Study\n ):\n # type: (...) -> Study\n-\n+ storage = storage or storages.InMemoryStorage()\n study = study or create_new_study(storage=storage, sampler=sampler, pruner=pruner)\n study.run(func, n_trials, timeout_seconds, n_jobs)\n return study\n", "issue": "`pfnopt.minimize` fails under `strorage=None` (default)\n```python\r\nimport pfnopt\r\n\r\n\r\ndef obj(client):\r\n x = client.sample_uniform('x', 0.1, 0.2)\r\n return x\r\n\r\n\r\ndef main():\r\n pfnopt.minimize(obj, n_trials=2)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\n```\r\nAttributeError: 'NoneType' object has no attribute 'get_study_uuid_from_id'\r\n```\n", "before_files": [{"content": "import datetime\nimport multiprocessing\nimport multiprocessing.pool\nfrom typing import Any # NOQA\nfrom typing import Callable # NOQA\nfrom typing import Dict # NOQA\nfrom typing import Iterable # NOQA\nfrom typing import List # NOQA\nfrom typing import Optional # NOQA\n\nfrom pfnopt import client as client_module\nfrom pfnopt import pruners\nfrom pfnopt import samplers\nfrom pfnopt import storages\nfrom pfnopt import trial # NOQA\n\nObjectiveFuncType = Callable[[client_module.BaseClient], float]\n\n\nclass Study(object):\n\n def __init__(\n self,\n study_uuid, # type: str\n storage, # type: storages.BaseStorage\n sampler=None, # type: samplers.BaseSampler\n pruner=None, # type: pruners.BasePruner\n ):\n # type: (...) -> None\n\n self.study_uuid = study_uuid\n self.storage = storage\n self.sampler = sampler or samplers.TPESampler()\n self.pruner = pruner or pruners.MedianPruner()\n\n self.study_id = storage.get_study_id_from_uuid(study_uuid)\n\n @property\n def best_params(self):\n # type: () -> Dict[str, Any]\n\n return self.best_trial.params\n\n @property\n def best_value(self):\n # type: () -> float\n\n return self.best_trial.value\n\n @property\n def best_trial(self):\n # type: () -> trial.Trial\n\n return self.storage.get_best_trial(self.study_id)\n\n @property\n def trials(self):\n # type: () -> List[trial.Trial]\n\n return self.storage.get_all_trials(self.study_id)\n\n def run(self, func, n_trials=None, timeout_seconds=None, n_jobs=1):\n # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None\n\n if n_jobs == 1:\n self._run_sequential(func, n_trials, timeout_seconds)\n else:\n self._run_parallel(func, n_trials, timeout_seconds, n_jobs)\n\n def _run_sequential(self, func, n_trials, timeout_seconds):\n # type: (ObjectiveFuncType, Optional[int], Optional[float]) -> None\n\n i_trial = 0\n time_start = datetime.datetime.now()\n while True:\n if n_trials is not None:\n if i_trial >= n_trials:\n break\n i_trial += 1\n\n if timeout_seconds is not None:\n elapsed_seconds = (datetime.datetime.now() - time_start).total_seconds()\n if elapsed_seconds >= timeout_seconds:\n break\n\n trial_id = self.storage.create_new_trial_id(self.study_id)\n client = client_module.LocalClient(self, trial_id)\n result = func(client)\n client.complete(result)\n\n def _run_parallel(self, func, n_trials, timeout_seconds, n_jobs):\n # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None\n\n if isinstance(self.storage, storages.RDBStorage):\n raise TypeError('Parallel run with RDBStorage is not supported.')\n\n if n_jobs == -1:\n n_jobs = multiprocessing.cpu_count()\n\n pool = multiprocessing.pool.ThreadPool(n_jobs) # type: ignore\n\n def f(_):\n trial_id = 
self.storage.create_new_trial_id(self.study_id)\n client = client_module.LocalClient(self, trial_id)\n result = func(client)\n client.complete(result)\n\n self.start_datetime = datetime.datetime.now()\n\n if n_trials is not None:\n ite = range(n_trials) # type: Iterable[int]\n else:\n ite = iter(int, 1) # Infinite iterator\n\n imap_ite = pool.imap(f, ite, chunksize=1)\n while True:\n if timeout_seconds is None:\n to = None\n else:\n elapsed_timedelta = datetime.datetime.now() - self.start_datetime\n elapsed_seconds = elapsed_timedelta.total_seconds()\n to = (timeout_seconds - elapsed_seconds)\n\n try:\n imap_ite.next(timeout=to) # type: ignore\n except (StopIteration, multiprocessing.TimeoutError): # type: ignore\n break\n\n pool.terminate()\n\n\ndef minimize(\n func, # type: ObjectiveFuncType\n n_trials=None, # type: Optional[int]\n timeout_seconds=None, # type: Optional[float]\n n_jobs=1, # type: int\n storage=None, # type: storages.BaseStorage\n sampler=None, # type: samplers.BaseSampler\n pruner=None, # type: pruners.BasePruner\n study=None, # type: Study\n):\n # type: (...) -> Study\n\n study = study or create_new_study(storage=storage, sampler=sampler, pruner=pruner)\n study.run(func, n_trials, timeout_seconds, n_jobs)\n return study\n\n\n# TODO(akiba): implement me\ndef maximize():\n raise NotImplementedError\n\n\ndef create_new_study(storage, sampler=None, pruner=None):\n # type: (storages.BaseStorage, samplers.BaseSampler, pruners.BasePruner) -> Study\n study_uuid = storage.get_study_uuid_from_id(storage.create_new_study_id())\n return Study(study_uuid=study_uuid, storage=storage, sampler=sampler, pruner=pruner)\n", "path": "pfnopt/study.py"}], "after_files": [{"content": "import datetime\nimport multiprocessing\nimport multiprocessing.pool\nfrom typing import Any # NOQA\nfrom typing import Callable # NOQA\nfrom typing import Dict # NOQA\nfrom typing import Iterable # NOQA\nfrom typing import List # NOQA\nfrom typing import Optional # NOQA\n\nfrom pfnopt import client as client_module\nfrom pfnopt import pruners\nfrom pfnopt import samplers\nfrom pfnopt import storages\nfrom pfnopt import trial # NOQA\n\nObjectiveFuncType = Callable[[client_module.BaseClient], float]\n\n\nclass Study(object):\n\n def __init__(\n self,\n study_uuid, # type: str\n storage, # type: storages.BaseStorage\n sampler=None, # type: samplers.BaseSampler\n pruner=None, # type: pruners.BasePruner\n ):\n # type: (...) 
-> None\n\n self.study_uuid = study_uuid\n self.storage = storage\n self.sampler = sampler or samplers.TPESampler()\n self.pruner = pruner or pruners.MedianPruner()\n\n self.study_id = storage.get_study_id_from_uuid(study_uuid)\n\n @property\n def best_params(self):\n # type: () -> Dict[str, Any]\n\n return self.best_trial.params\n\n @property\n def best_value(self):\n # type: () -> float\n\n return self.best_trial.value\n\n @property\n def best_trial(self):\n # type: () -> trial.Trial\n\n return self.storage.get_best_trial(self.study_id)\n\n @property\n def trials(self):\n # type: () -> List[trial.Trial]\n\n return self.storage.get_all_trials(self.study_id)\n\n def run(self, func, n_trials=None, timeout_seconds=None, n_jobs=1):\n # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None\n\n if n_jobs == 1:\n self._run_sequential(func, n_trials, timeout_seconds)\n else:\n self._run_parallel(func, n_trials, timeout_seconds, n_jobs)\n\n def _run_sequential(self, func, n_trials, timeout_seconds):\n # type: (ObjectiveFuncType, Optional[int], Optional[float]) -> None\n\n i_trial = 0\n time_start = datetime.datetime.now()\n while True:\n if n_trials is not None:\n if i_trial >= n_trials:\n break\n i_trial += 1\n\n if timeout_seconds is not None:\n elapsed_seconds = (datetime.datetime.now() - time_start).total_seconds()\n if elapsed_seconds >= timeout_seconds:\n break\n\n trial_id = self.storage.create_new_trial_id(self.study_id)\n client = client_module.LocalClient(self, trial_id)\n result = func(client)\n client.complete(result)\n\n def _run_parallel(self, func, n_trials, timeout_seconds, n_jobs):\n # type: (ObjectiveFuncType, Optional[int], Optional[float], int) -> None\n\n if isinstance(self.storage, storages.RDBStorage):\n raise TypeError('Parallel run with RDBStorage is not supported.')\n\n if n_jobs == -1:\n n_jobs = multiprocessing.cpu_count()\n\n pool = multiprocessing.pool.ThreadPool(n_jobs) # type: ignore\n\n def f(_):\n trial_id = self.storage.create_new_trial_id(self.study_id)\n client = client_module.LocalClient(self, trial_id)\n result = func(client)\n client.complete(result)\n\n self.start_datetime = datetime.datetime.now()\n\n if n_trials is not None:\n ite = range(n_trials) # type: Iterable[int]\n else:\n ite = iter(int, 1) # Infinite iterator\n\n imap_ite = pool.imap(f, ite, chunksize=1)\n while True:\n if timeout_seconds is None:\n to = None\n else:\n elapsed_timedelta = datetime.datetime.now() - self.start_datetime\n elapsed_seconds = elapsed_timedelta.total_seconds()\n to = (timeout_seconds - elapsed_seconds)\n\n try:\n imap_ite.next(timeout=to) # type: ignore\n except (StopIteration, multiprocessing.TimeoutError): # type: ignore\n break\n\n pool.terminate()\n\n\ndef minimize(\n func, # type: ObjectiveFuncType\n n_trials=None, # type: Optional[int]\n timeout_seconds=None, # type: Optional[float]\n n_jobs=1, # type: int\n storage=None, # type: storages.BaseStorage\n sampler=None, # type: samplers.BaseSampler\n pruner=None, # type: pruners.BasePruner\n study=None, # type: Study\n):\n # type: (...) 
-> Study\n storage = storage or storages.InMemoryStorage()\n study = study or create_new_study(storage=storage, sampler=sampler, pruner=pruner)\n study.run(func, n_trials, timeout_seconds, n_jobs)\n return study\n\n\n# TODO(akiba): implement me\ndef maximize():\n raise NotImplementedError\n\n\ndef create_new_study(storage, sampler=None, pruner=None):\n # type: (storages.BaseStorage, samplers.BaseSampler, pruners.BasePruner) -> Study\n study_uuid = storage.get_study_uuid_from_id(storage.create_new_study_id())\n return Study(study_uuid=study_uuid, storage=storage, sampler=sampler, pruner=pruner)\n", "path": "pfnopt/study.py"}]} | 1,916 | 126 |
gh_patches_debug_30893 | rasdani/github-patches | git_diff | microsoft__AzureTRE-1764 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change firewall bundle to read rules from resource properties
In order for the firewall to be updated by the API / other resources as part of pipelines, it must read the 'configurable' properties (parts of the firewall we want to allow a user/process to update) from the resource properties in cosmos.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `resource_processor/resources/commands.py`
Content:
```
1 import asyncio
2 import json
3 import logging
4 from resources.helpers import get_installation_id
5 from shared.logging import shell_output_logger
6
7
8 def azure_login_command(config):
9 # Use a Service Principal when running locally
10 local_login = f"az login --service-principal --username {config['arm_client_id']} --password {config['arm_client_secret']} --tenant {config['arm_tenant_id']}"
11
12 # Use the Managed Identity when in VMSS context
13 vmss_login = f"az login --identity -u {config['vmss_msi_id']}"
14
15 command = vmss_login if config["vmss_msi_id"] else local_login
16 return command
17
18
19 def azure_acr_login_command(config):
20 return f"az acr login --name {config['registry_server'].replace('.azurecr.io','')}"
21
22
23 async def build_porter_command(config, logger, msg_body, custom_action=False):
24 porter_parameter_keys = await get_porter_parameter_keys(config, logger, msg_body)
25 porter_parameters = ""
26
27 if porter_parameter_keys is None:
28 logger.warning("Unknown porter parameters - explain probably failed.")
29 else:
30 for parameter_name in porter_parameter_keys:
31 # try to find the param in order of priorities:
32 parameter_value = None
33
34 # 1. msg parameters collection
35 if parameter_name in msg_body["parameters"]:
36 parameter_value = msg_body["parameters"][parameter_name]
37
38 # 2. config (e.g. terraform state env vars)
39 elif parameter_name in config:
40 parameter_value = config[parameter_name]
41
42 # 3. msg body root (e.g. id of the resource)
43 elif parameter_name in msg_body:
44 parameter_value = msg_body[parameter_name]
45
46 # if still not found, might be a special case
47 # (we give a chance to the method above to allow override of the special handeling done below)
48 else:
49 parameter_value = get_special_porter_param_value(config, parameter_name, msg_body)
50
51 # only append if we have a value, porter will complain anyway about missing parameters
52 if parameter_value is not None:
53 porter_parameters = porter_parameters + f" --param {parameter_name}=\"{parameter_value}\""
54
55 installation_id = get_installation_id(msg_body)
56
57 command_line = [f"{azure_login_command(config)} && {azure_acr_login_command(config)} && porter "
58 # If a custom action (i.e. not install, uninstall, upgrade) we need to use 'invoke'
59 f"{'invoke --action ' if custom_action else ''}"
60 f"{msg_body['action']} \"{installation_id}\" "
61 f" --reference {config['registry_server']}/{msg_body['name']}:v{msg_body['version']}"
62 f" {porter_parameters} --allow-docker-host-access --force"
63 f" --cred ./vmss_porter/arm_auth_local_debugging.json"
64 f" --cred ./vmss_porter/aad_auth.json"
65 f" && porter show {installation_id}"]
66 return command_line
67
68
69 async def build_porter_command_for_outputs(msg_body):
70 installation_id = get_installation_id(msg_body)
71 # we only need "real" outputs and use jq to remove the logs which are big
72 command_line = [f"porter show {installation_id} --output json | jq -c '. | select(.Outputs!=null) | .Outputs | del (.[] | select(.Name==\"io.cnab.outputs.invocationImageLogs\"))'"]
73 return command_line
74
75
76 async def get_porter_parameter_keys(config, logger, msg_body):
77 command = [f"{azure_login_command(config)} >/dev/null && \
78 {azure_acr_login_command(config)} >/dev/null && \
79 porter explain --reference {config['registry_server']}/{msg_body['name']}:v{msg_body['version']} -ojson"]
80
81 proc = await asyncio.create_subprocess_shell(
82 ''.join(command),
83 stdout=asyncio.subprocess.PIPE,
84 stderr=asyncio.subprocess.PIPE,
85 env=config["porter_env"])
86
87 stdout, stderr = await proc.communicate()
88 logging.info(f'get_porter_parameter_keys exited with {proc.returncode}')
89 result_stdout = None
90 result_stderr = None
91
92 if stdout:
93 result_stdout = stdout.decode()
94 porter_explain_parameters = json.loads(result_stdout)["parameters"]
95 porter_parameter_keys = [item["name"] for item in porter_explain_parameters]
96 return porter_parameter_keys
97 if stderr:
98 result_stderr = stderr.decode()
99 shell_output_logger(result_stderr, '[stderr]', logger, logging.WARN)
100
101
102 def get_special_porter_param_value(config, parameter_name: str, msg_body):
103 # some parameters might not have identical names and this comes to handle that
104 if parameter_name == "mgmt_acr_name":
105 return config["registry_server"].replace('.azurecr.io', '')
106 if parameter_name == "mgmt_resource_group_name":
107 return config["tfstate_resource_group_name"]
108 if parameter_name == "workspace_id":
109 return msg_body.get("workspaceId") # not included in all messages
110 if parameter_name == "parent_service_id":
111 return msg_body.get("parentWorkspaceServiceId") # not included in all messages
112
```
Path: `api_app/models/domain/resource_template.py`
Content:
```
1 from typing import Dict, Any, List, Optional
2
3 from pydantic import Field
4
5 from models.domain.azuretremodel import AzureTREModel
6 from models.domain.resource import ResourceType
7
8
9 class Property(AzureTREModel):
10 type: str = Field(title="Property type")
11 title: str = Field("", title="Property description")
12 description: str = Field("", title="Property description")
13 default: Any = Field(None, title="Default value for the property")
14 enum: Optional[List[str]] = Field(None, title="Enum values")
15 const: Optional[Any] = Field(None, title="Constant value")
16 multipleOf: Optional[float] = Field(None, title="Multiple of")
17 maximum: Optional[float] = Field(None, title="Maximum value")
18 exclusiveMaximum: Optional[float] = Field(None, title="Exclusive maximum value")
19 minimum: Optional[float] = Field(None, title="Minimum value")
20 exclusiveMinimum: Optional[float] = Field(None, title="Exclusive minimum value")
21 maxLength: Optional[int] = Field(None, title="Maximum length")
22 minLength: Optional[int] = Field(None, title="Minimum length")
23 pattern: Optional[str] = Field(None, title="Pattern")
24 updateable: Optional[bool] = Field(None, title="Indicates that the field can be updated")
25 readOnly: Optional[bool] = Field(None, title="Indicates the field is read-only")
26
27
28 class CustomAction(AzureTREModel):
29 name: str = Field(None, title="Custom action name")
30 description: str = Field("", title="Action description")
31
32
33 class PipelineStepProperty(AzureTREModel):
34 name: str = Field(title="name", description="name of the property to update")
35 type: str = Field(title="type", description="data type of the property to update")
36 value: str = Field(title="value", description="value to use in substitution for the property to update")
37
38
39 class PipelineStep(AzureTREModel):
40 stepId: Optional[str] = Field(title="stepId", description="Unique id identifying the step")
41 stepTitle: Optional[str] = Field(title="stepTitle", description="Human readable title of what the step is for")
42 resourceTemplateName: Optional[str] = Field(title="resourceTemplateName", description="Name of the template for the resource under change")
43 resourceType: Optional[ResourceType] = Field(title="resourceType", description="Type of resource under change")
44 resourceAction: Optional[str] = Field(title="resourceAction", description="Action - install / upgrade / uninstall etc")
45 properties: Optional[List[PipelineStepProperty]]
46
47
48 class Pipeline(AzureTREModel):
49 install: Optional[List[PipelineStep]]
50 upgrade: Optional[List[PipelineStep]]
51 uninstall: Optional[List[PipelineStep]]
52
53
54 class ResourceTemplate(AzureTREModel):
55 id: str
56 name: str = Field(title="Unique template name")
57 title: str = Field("", title="Template title or friendly name")
58 description: str = Field(title="Template description")
59 version: str = Field(title="Template version")
60 resourceType: ResourceType = Field(title="Type of resource this template is for (workspace/service)")
61 current: bool = Field(title="Is this the current version of this template")
62 type: str = "object"
63 required: List[str] = Field(title="List of properties which must be provided")
64 properties: Dict[str, Property] = Field(title="Template properties")
65 actions: List[CustomAction] = Field(default=[], title="Template actions")
66 customActions: List[CustomAction] = Field(default=[], title="Template custom actions")
67 pipeline: Optional[Pipeline] = Field(default=None, title="Template pipeline to define updates to other resources")
68
69 # setting this to false means if extra, unexpected fields are supplied, the request is invalidated
70 additionalProperties: bool = Field(default=False, title="Prevent unspecified properties being applied")
71
```
Path: `api_app/_version.py`
Content:
```
1 __version__ = "0.2.23"
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/api_app/_version.py b/api_app/_version.py
--- a/api_app/_version.py
+++ b/api_app/_version.py
@@ -1 +1 @@
-__version__ = "0.2.23"
+__version__ = "0.2.24"
diff --git a/api_app/models/domain/resource_template.py b/api_app/models/domain/resource_template.py
--- a/api_app/models/domain/resource_template.py
+++ b/api_app/models/domain/resource_template.py
@@ -23,6 +23,7 @@
pattern: Optional[str] = Field(None, title="Pattern")
updateable: Optional[bool] = Field(None, title="Indicates that the field can be updated")
readOnly: Optional[bool] = Field(None, title="Indicates the field is read-only")
+ items: Optional[dict] = None # items can contain sub-properties
class CustomAction(AzureTREModel):
diff --git a/resource_processor/resources/commands.py b/resource_processor/resources/commands.py
--- a/resource_processor/resources/commands.py
+++ b/resource_processor/resources/commands.py
@@ -1,6 +1,8 @@
import asyncio
import json
import logging
+import base64
+
from resources.helpers import get_installation_id
from shared.logging import shell_output_logger
@@ -50,6 +52,13 @@
# only append if we have a value, porter will complain anyway about missing parameters
if parameter_value is not None:
+ if isinstance(parameter_value, dict) or isinstance(parameter_value, list):
+ # base64 encode complex types to pass in safely
+ val = json.dumps(parameter_value)
+ val_bytes = val.encode("ascii")
+ val_base64_bytes = base64.b64encode(val_bytes)
+ parameter_value = val_base64_bytes.decode("ascii")
+
porter_parameters = porter_parameters + f" --param {parameter_name}=\"{parameter_value}\""
installation_id = get_installation_id(msg_body)
| {"golden_diff": "diff --git a/api_app/_version.py b/api_app/_version.py\n--- a/api_app/_version.py\n+++ b/api_app/_version.py\n@@ -1 +1 @@\n-__version__ = \"0.2.23\"\n+__version__ = \"0.2.24\"\ndiff --git a/api_app/models/domain/resource_template.py b/api_app/models/domain/resource_template.py\n--- a/api_app/models/domain/resource_template.py\n+++ b/api_app/models/domain/resource_template.py\n@@ -23,6 +23,7 @@\n pattern: Optional[str] = Field(None, title=\"Pattern\")\n updateable: Optional[bool] = Field(None, title=\"Indicates that the field can be updated\")\n readOnly: Optional[bool] = Field(None, title=\"Indicates the field is read-only\")\n+ items: Optional[dict] = None # items can contain sub-properties\n \n \n class CustomAction(AzureTREModel):\ndiff --git a/resource_processor/resources/commands.py b/resource_processor/resources/commands.py\n--- a/resource_processor/resources/commands.py\n+++ b/resource_processor/resources/commands.py\n@@ -1,6 +1,8 @@\n import asyncio\n import json\n import logging\n+import base64\n+\n from resources.helpers import get_installation_id\n from shared.logging import shell_output_logger\n \n@@ -50,6 +52,13 @@\n \n # only append if we have a value, porter will complain anyway about missing parameters\n if parameter_value is not None:\n+ if isinstance(parameter_value, dict) or isinstance(parameter_value, list):\n+ # base64 encode complex types to pass in safely\n+ val = json.dumps(parameter_value)\n+ val_bytes = val.encode(\"ascii\")\n+ val_base64_bytes = base64.b64encode(val_bytes)\n+ parameter_value = val_base64_bytes.decode(\"ascii\")\n+\n porter_parameters = porter_parameters + f\" --param {parameter_name}=\\\"{parameter_value}\\\"\"\n \n installation_id = get_installation_id(msg_body)\n", "issue": "Change firewall bundle to read rules from resource properties\nIn order for the firewall to be updated by the API / other resources as part of pipelines, it must read the 'configurable' properties (parts of the firewall we want to allow a user/process to update) from the resource properties in cosmos. \n", "before_files": [{"content": "import asyncio\nimport json\nimport logging\nfrom resources.helpers import get_installation_id\nfrom shared.logging import shell_output_logger\n\n\ndef azure_login_command(config):\n # Use a Service Principal when running locally\n local_login = f\"az login --service-principal --username {config['arm_client_id']} --password {config['arm_client_secret']} --tenant {config['arm_tenant_id']}\"\n\n # Use the Managed Identity when in VMSS context\n vmss_login = f\"az login --identity -u {config['vmss_msi_id']}\"\n\n command = vmss_login if config[\"vmss_msi_id\"] else local_login\n return command\n\n\ndef azure_acr_login_command(config):\n return f\"az acr login --name {config['registry_server'].replace('.azurecr.io','')}\"\n\n\nasync def build_porter_command(config, logger, msg_body, custom_action=False):\n porter_parameter_keys = await get_porter_parameter_keys(config, logger, msg_body)\n porter_parameters = \"\"\n\n if porter_parameter_keys is None:\n logger.warning(\"Unknown porter parameters - explain probably failed.\")\n else:\n for parameter_name in porter_parameter_keys:\n # try to find the param in order of priorities:\n parameter_value = None\n\n # 1. msg parameters collection\n if parameter_name in msg_body[\"parameters\"]:\n parameter_value = msg_body[\"parameters\"][parameter_name]\n\n # 2. config (e.g. terraform state env vars)\n elif parameter_name in config:\n parameter_value = config[parameter_name]\n\n # 3. 
msg body root (e.g. id of the resource)\n elif parameter_name in msg_body:\n parameter_value = msg_body[parameter_name]\n\n # if still not found, might be a special case\n # (we give a chance to the method above to allow override of the special handeling done below)\n else:\n parameter_value = get_special_porter_param_value(config, parameter_name, msg_body)\n\n # only append if we have a value, porter will complain anyway about missing parameters\n if parameter_value is not None:\n porter_parameters = porter_parameters + f\" --param {parameter_name}=\\\"{parameter_value}\\\"\"\n\n installation_id = get_installation_id(msg_body)\n\n command_line = [f\"{azure_login_command(config)} && {azure_acr_login_command(config)} && porter \"\n # If a custom action (i.e. not install, uninstall, upgrade) we need to use 'invoke'\n f\"{'invoke --action ' if custom_action else ''}\"\n f\"{msg_body['action']} \\\"{installation_id}\\\" \"\n f\" --reference {config['registry_server']}/{msg_body['name']}:v{msg_body['version']}\"\n f\" {porter_parameters} --allow-docker-host-access --force\"\n f\" --cred ./vmss_porter/arm_auth_local_debugging.json\"\n f\" --cred ./vmss_porter/aad_auth.json\"\n f\" && porter show {installation_id}\"]\n return command_line\n\n\nasync def build_porter_command_for_outputs(msg_body):\n installation_id = get_installation_id(msg_body)\n # we only need \"real\" outputs and use jq to remove the logs which are big\n command_line = [f\"porter show {installation_id} --output json | jq -c '. | select(.Outputs!=null) | .Outputs | del (.[] | select(.Name==\\\"io.cnab.outputs.invocationImageLogs\\\"))'\"]\n return command_line\n\n\nasync def get_porter_parameter_keys(config, logger, msg_body):\n command = [f\"{azure_login_command(config)} >/dev/null && \\\n {azure_acr_login_command(config)} >/dev/null && \\\n porter explain --reference {config['registry_server']}/{msg_body['name']}:v{msg_body['version']} -ojson\"]\n\n proc = await asyncio.create_subprocess_shell(\n ''.join(command),\n stdout=asyncio.subprocess.PIPE,\n stderr=asyncio.subprocess.PIPE,\n env=config[\"porter_env\"])\n\n stdout, stderr = await proc.communicate()\n logging.info(f'get_porter_parameter_keys exited with {proc.returncode}')\n result_stdout = None\n result_stderr = None\n\n if stdout:\n result_stdout = stdout.decode()\n porter_explain_parameters = json.loads(result_stdout)[\"parameters\"]\n porter_parameter_keys = [item[\"name\"] for item in porter_explain_parameters]\n return porter_parameter_keys\n if stderr:\n result_stderr = stderr.decode()\n shell_output_logger(result_stderr, '[stderr]', logger, logging.WARN)\n\n\ndef get_special_porter_param_value(config, parameter_name: str, msg_body):\n # some parameters might not have identical names and this comes to handle that\n if parameter_name == \"mgmt_acr_name\":\n return config[\"registry_server\"].replace('.azurecr.io', '')\n if parameter_name == \"mgmt_resource_group_name\":\n return config[\"tfstate_resource_group_name\"]\n if parameter_name == \"workspace_id\":\n return msg_body.get(\"workspaceId\") # not included in all messages\n if parameter_name == \"parent_service_id\":\n return msg_body.get(\"parentWorkspaceServiceId\") # not included in all messages\n", "path": "resource_processor/resources/commands.py"}, {"content": "from typing import Dict, Any, List, Optional\n\nfrom pydantic import Field\n\nfrom models.domain.azuretremodel import AzureTREModel\nfrom models.domain.resource import ResourceType\n\n\nclass Property(AzureTREModel):\n type: str = 
Field(title=\"Property type\")\n title: str = Field(\"\", title=\"Property description\")\n description: str = Field(\"\", title=\"Property description\")\n default: Any = Field(None, title=\"Default value for the property\")\n enum: Optional[List[str]] = Field(None, title=\"Enum values\")\n const: Optional[Any] = Field(None, title=\"Constant value\")\n multipleOf: Optional[float] = Field(None, title=\"Multiple of\")\n maximum: Optional[float] = Field(None, title=\"Maximum value\")\n exclusiveMaximum: Optional[float] = Field(None, title=\"Exclusive maximum value\")\n minimum: Optional[float] = Field(None, title=\"Minimum value\")\n exclusiveMinimum: Optional[float] = Field(None, title=\"Exclusive minimum value\")\n maxLength: Optional[int] = Field(None, title=\"Maximum length\")\n minLength: Optional[int] = Field(None, title=\"Minimum length\")\n pattern: Optional[str] = Field(None, title=\"Pattern\")\n updateable: Optional[bool] = Field(None, title=\"Indicates that the field can be updated\")\n readOnly: Optional[bool] = Field(None, title=\"Indicates the field is read-only\")\n\n\nclass CustomAction(AzureTREModel):\n name: str = Field(None, title=\"Custom action name\")\n description: str = Field(\"\", title=\"Action description\")\n\n\nclass PipelineStepProperty(AzureTREModel):\n name: str = Field(title=\"name\", description=\"name of the property to update\")\n type: str = Field(title=\"type\", description=\"data type of the property to update\")\n value: str = Field(title=\"value\", description=\"value to use in substitution for the property to update\")\n\n\nclass PipelineStep(AzureTREModel):\n stepId: Optional[str] = Field(title=\"stepId\", description=\"Unique id identifying the step\")\n stepTitle: Optional[str] = Field(title=\"stepTitle\", description=\"Human readable title of what the step is for\")\n resourceTemplateName: Optional[str] = Field(title=\"resourceTemplateName\", description=\"Name of the template for the resource under change\")\n resourceType: Optional[ResourceType] = Field(title=\"resourceType\", description=\"Type of resource under change\")\n resourceAction: Optional[str] = Field(title=\"resourceAction\", description=\"Action - install / upgrade / uninstall etc\")\n properties: Optional[List[PipelineStepProperty]]\n\n\nclass Pipeline(AzureTREModel):\n install: Optional[List[PipelineStep]]\n upgrade: Optional[List[PipelineStep]]\n uninstall: Optional[List[PipelineStep]]\n\n\nclass ResourceTemplate(AzureTREModel):\n id: str\n name: str = Field(title=\"Unique template name\")\n title: str = Field(\"\", title=\"Template title or friendly name\")\n description: str = Field(title=\"Template description\")\n version: str = Field(title=\"Template version\")\n resourceType: ResourceType = Field(title=\"Type of resource this template is for (workspace/service)\")\n current: bool = Field(title=\"Is this the current version of this template\")\n type: str = \"object\"\n required: List[str] = Field(title=\"List of properties which must be provided\")\n properties: Dict[str, Property] = Field(title=\"Template properties\")\n actions: List[CustomAction] = Field(default=[], title=\"Template actions\")\n customActions: List[CustomAction] = Field(default=[], title=\"Template custom actions\")\n pipeline: Optional[Pipeline] = Field(default=None, title=\"Template pipeline to define updates to other resources\")\n\n # setting this to false means if extra, unexpected fields are supplied, the request is invalidated\n additionalProperties: bool = Field(default=False, title=\"Prevent 
unspecified properties being applied\")\n", "path": "api_app/models/domain/resource_template.py"}, {"content": "__version__ = \"0.2.23\"\n", "path": "api_app/_version.py"}], "after_files": [{"content": "import asyncio\nimport json\nimport logging\nimport base64\n\nfrom resources.helpers import get_installation_id\nfrom shared.logging import shell_output_logger\n\n\ndef azure_login_command(config):\n # Use a Service Principal when running locally\n local_login = f\"az login --service-principal --username {config['arm_client_id']} --password {config['arm_client_secret']} --tenant {config['arm_tenant_id']}\"\n\n # Use the Managed Identity when in VMSS context\n vmss_login = f\"az login --identity -u {config['vmss_msi_id']}\"\n\n command = vmss_login if config[\"vmss_msi_id\"] else local_login\n return command\n\n\ndef azure_acr_login_command(config):\n return f\"az acr login --name {config['registry_server'].replace('.azurecr.io','')}\"\n\n\nasync def build_porter_command(config, logger, msg_body, custom_action=False):\n porter_parameter_keys = await get_porter_parameter_keys(config, logger, msg_body)\n porter_parameters = \"\"\n\n if porter_parameter_keys is None:\n logger.warning(\"Unknown porter parameters - explain probably failed.\")\n else:\n for parameter_name in porter_parameter_keys:\n # try to find the param in order of priorities:\n parameter_value = None\n\n # 1. msg parameters collection\n if parameter_name in msg_body[\"parameters\"]:\n parameter_value = msg_body[\"parameters\"][parameter_name]\n\n # 2. config (e.g. terraform state env vars)\n elif parameter_name in config:\n parameter_value = config[parameter_name]\n\n # 3. msg body root (e.g. id of the resource)\n elif parameter_name in msg_body:\n parameter_value = msg_body[parameter_name]\n\n # if still not found, might be a special case\n # (we give a chance to the method above to allow override of the special handeling done below)\n else:\n parameter_value = get_special_porter_param_value(config, parameter_name, msg_body)\n\n # only append if we have a value, porter will complain anyway about missing parameters\n if parameter_value is not None:\n if isinstance(parameter_value, dict) or isinstance(parameter_value, list):\n # base64 encode complex types to pass in safely\n val = json.dumps(parameter_value)\n val_bytes = val.encode(\"ascii\")\n val_base64_bytes = base64.b64encode(val_bytes)\n parameter_value = val_base64_bytes.decode(\"ascii\")\n\n porter_parameters = porter_parameters + f\" --param {parameter_name}=\\\"{parameter_value}\\\"\"\n\n installation_id = get_installation_id(msg_body)\n\n command_line = [f\"{azure_login_command(config)} && {azure_acr_login_command(config)} && porter \"\n # If a custom action (i.e. not install, uninstall, upgrade) we need to use 'invoke'\n f\"{'invoke --action ' if custom_action else ''}\"\n f\"{msg_body['action']} \\\"{installation_id}\\\" \"\n f\" --reference {config['registry_server']}/{msg_body['name']}:v{msg_body['version']}\"\n f\" {porter_parameters} --allow-docker-host-access --force\"\n f\" --cred ./vmss_porter/arm_auth_local_debugging.json\"\n f\" --cred ./vmss_porter/aad_auth.json\"\n f\" && porter show {installation_id}\"]\n return command_line\n\n\nasync def build_porter_command_for_outputs(msg_body):\n installation_id = get_installation_id(msg_body)\n # we only need \"real\" outputs and use jq to remove the logs which are big\n command_line = [f\"porter show {installation_id} --output json | jq -c '. 
| select(.Outputs!=null) | .Outputs | del (.[] | select(.Name==\\\"io.cnab.outputs.invocationImageLogs\\\"))'\"]\n return command_line\n\n\nasync def get_porter_parameter_keys(config, logger, msg_body):\n command = [f\"{azure_login_command(config)} >/dev/null && \\\n {azure_acr_login_command(config)} >/dev/null && \\\n porter explain --reference {config['registry_server']}/{msg_body['name']}:v{msg_body['version']} -ojson\"]\n\n proc = await asyncio.create_subprocess_shell(\n ''.join(command),\n stdout=asyncio.subprocess.PIPE,\n stderr=asyncio.subprocess.PIPE,\n env=config[\"porter_env\"])\n\n stdout, stderr = await proc.communicate()\n logging.info(f'get_porter_parameter_keys exited with {proc.returncode}')\n result_stdout = None\n result_stderr = None\n\n if stdout:\n result_stdout = stdout.decode()\n porter_explain_parameters = json.loads(result_stdout)[\"parameters\"]\n porter_parameter_keys = [item[\"name\"] for item in porter_explain_parameters]\n return porter_parameter_keys\n if stderr:\n result_stderr = stderr.decode()\n shell_output_logger(result_stderr, '[stderr]', logger, logging.WARN)\n\n\ndef get_special_porter_param_value(config, parameter_name: str, msg_body):\n # some parameters might not have identical names and this comes to handle that\n if parameter_name == \"mgmt_acr_name\":\n return config[\"registry_server\"].replace('.azurecr.io', '')\n if parameter_name == \"mgmt_resource_group_name\":\n return config[\"tfstate_resource_group_name\"]\n if parameter_name == \"workspace_id\":\n return msg_body.get(\"workspaceId\") # not included in all messages\n if parameter_name == \"parent_service_id\":\n return msg_body.get(\"parentWorkspaceServiceId\") # not included in all messages\n", "path": "resource_processor/resources/commands.py"}, {"content": "from typing import Dict, Any, List, Optional\n\nfrom pydantic import Field\n\nfrom models.domain.azuretremodel import AzureTREModel\nfrom models.domain.resource import ResourceType\n\n\nclass Property(AzureTREModel):\n type: str = Field(title=\"Property type\")\n title: str = Field(\"\", title=\"Property description\")\n description: str = Field(\"\", title=\"Property description\")\n default: Any = Field(None, title=\"Default value for the property\")\n enum: Optional[List[str]] = Field(None, title=\"Enum values\")\n const: Optional[Any] = Field(None, title=\"Constant value\")\n multipleOf: Optional[float] = Field(None, title=\"Multiple of\")\n maximum: Optional[float] = Field(None, title=\"Maximum value\")\n exclusiveMaximum: Optional[float] = Field(None, title=\"Exclusive maximum value\")\n minimum: Optional[float] = Field(None, title=\"Minimum value\")\n exclusiveMinimum: Optional[float] = Field(None, title=\"Exclusive minimum value\")\n maxLength: Optional[int] = Field(None, title=\"Maximum length\")\n minLength: Optional[int] = Field(None, title=\"Minimum length\")\n pattern: Optional[str] = Field(None, title=\"Pattern\")\n updateable: Optional[bool] = Field(None, title=\"Indicates that the field can be updated\")\n readOnly: Optional[bool] = Field(None, title=\"Indicates the field is read-only\")\n items: Optional[dict] = None # items can contain sub-properties\n\n\nclass CustomAction(AzureTREModel):\n name: str = Field(None, title=\"Custom action name\")\n description: str = Field(\"\", title=\"Action description\")\n\n\nclass PipelineStepProperty(AzureTREModel):\n name: str = Field(title=\"name\", description=\"name of the property to update\")\n type: str = Field(title=\"type\", description=\"data type of the property to 
update\")\n value: str = Field(title=\"value\", description=\"value to use in substitution for the property to update\")\n\n\nclass PipelineStep(AzureTREModel):\n stepId: Optional[str] = Field(title=\"stepId\", description=\"Unique id identifying the step\")\n stepTitle: Optional[str] = Field(title=\"stepTitle\", description=\"Human readable title of what the step is for\")\n resourceTemplateName: Optional[str] = Field(title=\"resourceTemplateName\", description=\"Name of the template for the resource under change\")\n resourceType: Optional[ResourceType] = Field(title=\"resourceType\", description=\"Type of resource under change\")\n resourceAction: Optional[str] = Field(title=\"resourceAction\", description=\"Action - install / upgrade / uninstall etc\")\n properties: Optional[List[PipelineStepProperty]]\n\n\nclass Pipeline(AzureTREModel):\n install: Optional[List[PipelineStep]]\n upgrade: Optional[List[PipelineStep]]\n uninstall: Optional[List[PipelineStep]]\n\n\nclass ResourceTemplate(AzureTREModel):\n id: str\n name: str = Field(title=\"Unique template name\")\n title: str = Field(\"\", title=\"Template title or friendly name\")\n description: str = Field(title=\"Template description\")\n version: str = Field(title=\"Template version\")\n resourceType: ResourceType = Field(title=\"Type of resource this template is for (workspace/service)\")\n current: bool = Field(title=\"Is this the current version of this template\")\n type: str = \"object\"\n required: List[str] = Field(title=\"List of properties which must be provided\")\n properties: Dict[str, Property] = Field(title=\"Template properties\")\n actions: List[CustomAction] = Field(default=[], title=\"Template actions\")\n customActions: List[CustomAction] = Field(default=[], title=\"Template custom actions\")\n pipeline: Optional[Pipeline] = Field(default=None, title=\"Template pipeline to define updates to other resources\")\n\n # setting this to false means if extra, unexpected fields are supplied, the request is invalidated\n additionalProperties: bool = Field(default=False, title=\"Prevent unspecified properties being applied\")\n", "path": "api_app/models/domain/resource_template.py"}, {"content": "__version__ = \"0.2.24\"\n", "path": "api_app/_version.py"}]} | 2,692 | 436 |
gh_patches_debug_8977 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-1670 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] 1.9.1 OCR output mode "none" is not valid
### Description
Very similar to #1643
Updating via portainer, and with the following ENV:
PAPERLESS_OCR_CLEAN=none
the startup of the web server goes into a loop (see logs below).
### Steps to reproduce
1. Set PAPERLESS_OCR_CLEAN=none (from 1.8.0 install)
2. Update docker via portainer to latest paperless-ngx
3. Start container
### Webserver logs
```bash
Paperless-ngx docker container starting...
Creating directory /tmp/paperless
Adjusting permissions of paperless files. This may take a while.
Waiting for PostgreSQL to start...
Waiting for Redis...
Connected to Redis broker.
Apply database migrations...
SystemCheckError: System check identified some issues:
ERRORS:
?: OCR clean mode "none" is not valid
```
### Paperless-ngx version
1.9.1
### Host OS
Raspberry Pi 4B
### Installation method
Docker - official image
### Browser
Firefox
### Configuration changes
_No response_
### Other
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/paperless/checks.py`
Content:
```
1 import grp
2 import os
3 import pwd
4 import shutil
5 import stat
6
7 from django.conf import settings
8 from django.core.checks import Error
9 from django.core.checks import register
10 from django.core.checks import Warning
11
12 exists_message = "{} is set but doesn't exist."
13 exists_hint = "Create a directory at {}"
14 writeable_message = "{} is not writeable"
15 writeable_hint = (
16 "Set the permissions of {} to be writeable by the user running the "
17 "Paperless services"
18 )
19
20
21 def path_check(var, directory):
22 messages = []
23 if directory:
24 if not os.path.isdir(directory):
25 messages.append(
26 Error(exists_message.format(var), exists_hint.format(directory)),
27 )
28 else:
29 test_file = os.path.join(
30 directory,
31 f"__paperless_write_test_{os.getpid()}__",
32 )
33 try:
34 with open(test_file, "w"):
35 pass
36 except PermissionError:
37 dir_stat = os.stat(directory)
38 dir_mode = stat.filemode(dir_stat.st_mode)
39 dir_owner = pwd.getpwuid(dir_stat.st_uid).pw_name
40 dir_group = grp.getgrgid(dir_stat.st_gid).gr_name
41 messages.append(
42 Error(
43 writeable_message.format(var),
44 writeable_hint.format(
45 f"\n{dir_mode} {dir_owner} {dir_group} " f"{directory}\n",
46 ),
47 ),
48 )
49 finally:
50 if os.path.isfile(test_file):
51 os.remove(test_file)
52
53 return messages
54
55
56 @register()
57 def paths_check(app_configs, **kwargs):
58 """
59 Check the various paths for existence, readability and writeability
60 """
61
62 return (
63 path_check("PAPERLESS_DATA_DIR", settings.DATA_DIR)
64 + path_check("PAPERLESS_TRASH_DIR", settings.TRASH_DIR)
65 + path_check("PAPERLESS_MEDIA_ROOT", settings.MEDIA_ROOT)
66 + path_check("PAPERLESS_CONSUMPTION_DIR", settings.CONSUMPTION_DIR)
67 )
68
69
70 @register()
71 def binaries_check(app_configs, **kwargs):
72 """
73 Paperless requires the existence of a few binaries, so we do some checks
74 for those here.
75 """
76
77 error = "Paperless can't find {}. Without it, consumption is impossible."
78 hint = "Either it's not in your ${PATH} or it's not installed."
79
80 binaries = (settings.CONVERT_BINARY, "tesseract")
81
82 check_messages = []
83 for binary in binaries:
84 if shutil.which(binary) is None:
85 check_messages.append(Warning(error.format(binary), hint))
86
87 return check_messages
88
89
90 @register()
91 def debug_mode_check(app_configs, **kwargs):
92 if settings.DEBUG:
93 return [
94 Warning(
95 "DEBUG mode is enabled. Disable Debug mode. This is a serious "
96 "security issue, since it puts security overides in place which "
97 "are meant to be only used during development. This "
98 "also means that paperless will tell anyone various "
99 "debugging information when something goes wrong.",
100 ),
101 ]
102 else:
103 return []
104
105
106 @register()
107 def settings_values_check(app_configs, **kwargs):
108 """
109 Validates at least some of the user provided settings
110 """
111
112 def _ocrmypdf_settings_check():
113 """
114 Validates some of the arguments which will be provided to ocrmypdf
115 against the valid options. Use "ocrmypdf --help" to see the valid
116 inputs
117 """
118 msgs = []
119 if settings.OCR_OUTPUT_TYPE not in {
120 "pdfa",
121 "pdf",
122 "pdfa-1",
123 "pdfa-2",
124 "pdfa-3",
125 }:
126 msgs.append(
127 Error(f'OCR output type "{settings.OCR_OUTPUT_TYPE}" is not valid'),
128 )
129
130 if settings.OCR_MODE not in {"force", "skip", "redo", "skip_noarchive"}:
131 msgs.append(Error(f'OCR output mode "{settings.OCR_MODE}" is not valid'))
132
133 if settings.OCR_CLEAN not in {"clean", "clean-final"}:
134 msgs.append(Error(f'OCR clean mode "{settings.OCR_CLEAN}" is not valid'))
135 return msgs
136
137 def _timezone_validate():
138 """
139 Validates the user provided timezone is a valid timezone
140 """
141 try:
142 import zoneinfo
143 except ImportError: # pragma: nocover
144 import backports.zoneinfo as zoneinfo
145 msgs = []
146 if settings.TIME_ZONE not in zoneinfo.available_timezones():
147 msgs.append(
148 Error(f'Timezone "{settings.TIME_ZONE}" is not a valid timezone'),
149 )
150 return msgs
151
152 return _ocrmypdf_settings_check() + _timezone_validate()
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/paperless/checks.py b/src/paperless/checks.py
--- a/src/paperless/checks.py
+++ b/src/paperless/checks.py
@@ -130,7 +130,7 @@
if settings.OCR_MODE not in {"force", "skip", "redo", "skip_noarchive"}:
msgs.append(Error(f'OCR output mode "{settings.OCR_MODE}" is not valid'))
- if settings.OCR_CLEAN not in {"clean", "clean-final"}:
+ if settings.OCR_CLEAN not in {"clean", "clean-final", "none"}:
msgs.append(Error(f'OCR clean mode "{settings.OCR_CLEAN}" is not valid'))
return msgs
| {"golden_diff": "diff --git a/src/paperless/checks.py b/src/paperless/checks.py\n--- a/src/paperless/checks.py\n+++ b/src/paperless/checks.py\n@@ -130,7 +130,7 @@\n if settings.OCR_MODE not in {\"force\", \"skip\", \"redo\", \"skip_noarchive\"}:\n msgs.append(Error(f'OCR output mode \"{settings.OCR_MODE}\" is not valid'))\n \n- if settings.OCR_CLEAN not in {\"clean\", \"clean-final\"}:\n+ if settings.OCR_CLEAN not in {\"clean\", \"clean-final\", \"none\"}:\n msgs.append(Error(f'OCR clean mode \"{settings.OCR_CLEAN}\" is not valid'))\n return msgs\n", "issue": "[BUG] 1.9.1 OCR output mode \"none\" is not valid\n### Description\n\nVery similar to #1643 \r\n\r\nUpdating via portainer, and with the following ENV:\r\n\r\nPAPERLESS_OCR_CLEAN=none\r\n\r\nthe startup of the web server goes into a loop (see logs below).\r\n\r\n\n\n### Steps to reproduce\n\n1. Set PAPERLESS_OCR_CLEAN=none (from 1.8.0 install)\r\n2. Update docker via portainer to latest paperless-ngx\r\n3. Start container\n\n### Webserver logs\n\n```bash\nPaperless-ngx docker container starting...\r\nCreating directory /tmp/paperless\r\nAdjusting permissions of paperless files. This may take a while.\r\nWaiting for PostgreSQL to start...\r\nWaiting for Redis...\r\nConnected to Redis broker.\r\nApply database migrations...\r\nSystemCheckError: System check identified some issues:\r\nERRORS:\r\n?: OCR clean mode \"none\" is not valid\n```\n\n\n### Paperless-ngx version\n\n1.9.1\n\n### Host OS\n\nRaspberry Pi 4B\n\n### Installation method\n\nDocker - official image\n\n### Browser\n\nFirefox\n\n### Configuration changes\n\n_No response_\n\n### Other\n\n_No response_\n", "before_files": [{"content": "import grp\nimport os\nimport pwd\nimport shutil\nimport stat\n\nfrom django.conf import settings\nfrom django.core.checks import Error\nfrom django.core.checks import register\nfrom django.core.checks import Warning\n\nexists_message = \"{} is set but doesn't exist.\"\nexists_hint = \"Create a directory at {}\"\nwriteable_message = \"{} is not writeable\"\nwriteable_hint = (\n \"Set the permissions of {} to be writeable by the user running the \"\n \"Paperless services\"\n)\n\n\ndef path_check(var, directory):\n messages = []\n if directory:\n if not os.path.isdir(directory):\n messages.append(\n Error(exists_message.format(var), exists_hint.format(directory)),\n )\n else:\n test_file = os.path.join(\n directory,\n f\"__paperless_write_test_{os.getpid()}__\",\n )\n try:\n with open(test_file, \"w\"):\n pass\n except PermissionError:\n dir_stat = os.stat(directory)\n dir_mode = stat.filemode(dir_stat.st_mode)\n dir_owner = pwd.getpwuid(dir_stat.st_uid).pw_name\n dir_group = grp.getgrgid(dir_stat.st_gid).gr_name\n messages.append(\n Error(\n writeable_message.format(var),\n writeable_hint.format(\n f\"\\n{dir_mode} {dir_owner} {dir_group} \" f\"{directory}\\n\",\n ),\n ),\n )\n finally:\n if os.path.isfile(test_file):\n os.remove(test_file)\n\n return messages\n\n\n@register()\ndef paths_check(app_configs, **kwargs):\n \"\"\"\n Check the various paths for existence, readability and writeability\n \"\"\"\n\n return (\n path_check(\"PAPERLESS_DATA_DIR\", settings.DATA_DIR)\n + path_check(\"PAPERLESS_TRASH_DIR\", settings.TRASH_DIR)\n + path_check(\"PAPERLESS_MEDIA_ROOT\", settings.MEDIA_ROOT)\n + path_check(\"PAPERLESS_CONSUMPTION_DIR\", settings.CONSUMPTION_DIR)\n )\n\n\n@register()\ndef binaries_check(app_configs, **kwargs):\n \"\"\"\n Paperless requires the existence of a few binaries, so we do some checks\n for those here.\n \"\"\"\n\n 
error = \"Paperless can't find {}. Without it, consumption is impossible.\"\n hint = \"Either it's not in your ${PATH} or it's not installed.\"\n\n binaries = (settings.CONVERT_BINARY, \"tesseract\")\n\n check_messages = []\n for binary in binaries:\n if shutil.which(binary) is None:\n check_messages.append(Warning(error.format(binary), hint))\n\n return check_messages\n\n\n@register()\ndef debug_mode_check(app_configs, **kwargs):\n if settings.DEBUG:\n return [\n Warning(\n \"DEBUG mode is enabled. Disable Debug mode. This is a serious \"\n \"security issue, since it puts security overides in place which \"\n \"are meant to be only used during development. This \"\n \"also means that paperless will tell anyone various \"\n \"debugging information when something goes wrong.\",\n ),\n ]\n else:\n return []\n\n\n@register()\ndef settings_values_check(app_configs, **kwargs):\n \"\"\"\n Validates at least some of the user provided settings\n \"\"\"\n\n def _ocrmypdf_settings_check():\n \"\"\"\n Validates some of the arguments which will be provided to ocrmypdf\n against the valid options. Use \"ocrmypdf --help\" to see the valid\n inputs\n \"\"\"\n msgs = []\n if settings.OCR_OUTPUT_TYPE not in {\n \"pdfa\",\n \"pdf\",\n \"pdfa-1\",\n \"pdfa-2\",\n \"pdfa-3\",\n }:\n msgs.append(\n Error(f'OCR output type \"{settings.OCR_OUTPUT_TYPE}\" is not valid'),\n )\n\n if settings.OCR_MODE not in {\"force\", \"skip\", \"redo\", \"skip_noarchive\"}:\n msgs.append(Error(f'OCR output mode \"{settings.OCR_MODE}\" is not valid'))\n\n if settings.OCR_CLEAN not in {\"clean\", \"clean-final\"}:\n msgs.append(Error(f'OCR clean mode \"{settings.OCR_CLEAN}\" is not valid'))\n return msgs\n\n def _timezone_validate():\n \"\"\"\n Validates the user provided timezone is a valid timezone\n \"\"\"\n try:\n import zoneinfo\n except ImportError: # pragma: nocover\n import backports.zoneinfo as zoneinfo\n msgs = []\n if settings.TIME_ZONE not in zoneinfo.available_timezones():\n msgs.append(\n Error(f'Timezone \"{settings.TIME_ZONE}\" is not a valid timezone'),\n )\n return msgs\n\n return _ocrmypdf_settings_check() + _timezone_validate()\n", "path": "src/paperless/checks.py"}], "after_files": [{"content": "import grp\nimport os\nimport pwd\nimport shutil\nimport stat\n\nfrom django.conf import settings\nfrom django.core.checks import Error\nfrom django.core.checks import register\nfrom django.core.checks import Warning\n\nexists_message = \"{} is set but doesn't exist.\"\nexists_hint = \"Create a directory at {}\"\nwriteable_message = \"{} is not writeable\"\nwriteable_hint = (\n \"Set the permissions of {} to be writeable by the user running the \"\n \"Paperless services\"\n)\n\n\ndef path_check(var, directory):\n messages = []\n if directory:\n if not os.path.isdir(directory):\n messages.append(\n Error(exists_message.format(var), exists_hint.format(directory)),\n )\n else:\n test_file = os.path.join(\n directory,\n f\"__paperless_write_test_{os.getpid()}__\",\n )\n try:\n with open(test_file, \"w\"):\n pass\n except PermissionError:\n dir_stat = os.stat(directory)\n dir_mode = stat.filemode(dir_stat.st_mode)\n dir_owner = pwd.getpwuid(dir_stat.st_uid).pw_name\n dir_group = grp.getgrgid(dir_stat.st_gid).gr_name\n messages.append(\n Error(\n writeable_message.format(var),\n writeable_hint.format(\n f\"\\n{dir_mode} {dir_owner} {dir_group} \" f\"{directory}\\n\",\n ),\n ),\n )\n finally:\n if os.path.isfile(test_file):\n os.remove(test_file)\n\n return messages\n\n\n@register()\ndef paths_check(app_configs, **kwargs):\n 
\"\"\"\n Check the various paths for existence, readability and writeability\n \"\"\"\n\n return (\n path_check(\"PAPERLESS_DATA_DIR\", settings.DATA_DIR)\n + path_check(\"PAPERLESS_TRASH_DIR\", settings.TRASH_DIR)\n + path_check(\"PAPERLESS_MEDIA_ROOT\", settings.MEDIA_ROOT)\n + path_check(\"PAPERLESS_CONSUMPTION_DIR\", settings.CONSUMPTION_DIR)\n )\n\n\n@register()\ndef binaries_check(app_configs, **kwargs):\n \"\"\"\n Paperless requires the existence of a few binaries, so we do some checks\n for those here.\n \"\"\"\n\n error = \"Paperless can't find {}. Without it, consumption is impossible.\"\n hint = \"Either it's not in your ${PATH} or it's not installed.\"\n\n binaries = (settings.CONVERT_BINARY, \"tesseract\")\n\n check_messages = []\n for binary in binaries:\n if shutil.which(binary) is None:\n check_messages.append(Warning(error.format(binary), hint))\n\n return check_messages\n\n\n@register()\ndef debug_mode_check(app_configs, **kwargs):\n if settings.DEBUG:\n return [\n Warning(\n \"DEBUG mode is enabled. Disable Debug mode. This is a serious \"\n \"security issue, since it puts security overides in place which \"\n \"are meant to be only used during development. This \"\n \"also means that paperless will tell anyone various \"\n \"debugging information when something goes wrong.\",\n ),\n ]\n else:\n return []\n\n\n@register()\ndef settings_values_check(app_configs, **kwargs):\n \"\"\"\n Validates at least some of the user provided settings\n \"\"\"\n\n def _ocrmypdf_settings_check():\n \"\"\"\n Validates some of the arguments which will be provided to ocrmypdf\n against the valid options. Use \"ocrmypdf --help\" to see the valid\n inputs\n \"\"\"\n msgs = []\n if settings.OCR_OUTPUT_TYPE not in {\n \"pdfa\",\n \"pdf\",\n \"pdfa-1\",\n \"pdfa-2\",\n \"pdfa-3\",\n }:\n msgs.append(\n Error(f'OCR output type \"{settings.OCR_OUTPUT_TYPE}\" is not valid'),\n )\n\n if settings.OCR_MODE not in {\"force\", \"skip\", \"redo\", \"skip_noarchive\"}:\n msgs.append(Error(f'OCR output mode \"{settings.OCR_MODE}\" is not valid'))\n\n if settings.OCR_CLEAN not in {\"clean\", \"clean-final\", \"none\"}:\n msgs.append(Error(f'OCR clean mode \"{settings.OCR_CLEAN}\" is not valid'))\n return msgs\n\n def _timezone_validate():\n \"\"\"\n Validates the user provided timezone is a valid timezone\n \"\"\"\n try:\n import zoneinfo\n except ImportError: # pragma: nocover\n import backports.zoneinfo as zoneinfo\n msgs = []\n if settings.TIME_ZONE not in zoneinfo.available_timezones():\n msgs.append(\n Error(f'Timezone \"{settings.TIME_ZONE}\" is not a valid timezone'),\n )\n return msgs\n\n return _ocrmypdf_settings_check() + _timezone_validate()\n", "path": "src/paperless/checks.py"}]} | 1,866 | 157 |
gh_patches_debug_39038 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3314 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider kum_and_go is broken
During the global build at 2021-08-25-14-42-15, spider **kum_and_go** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/logs/kum_and_go.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/kum_and_go.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/kum_and_go.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/kum_and_go.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8
9 class KumAndGoSpider(scrapy.Spider):
10 name = "kum_and_go"
11 item_attributes = {'brand': "Kum & Go", 'brand_wikidata': "Q6443340"}
12 allowed_domains = ["kumandgo.com"]
13
14 def start_requests(self):
15 yield scrapy.FormRequest(
16 'https://www.kumandgo.com/wordpress/wp-admin/admin-ajax.php',
17 method='POST',
18 formdata={
19 'coords[latitude]': '39.74581290359507',
20 'coords[longitude]': '-104.96756559990148',
21 'radius': '3000',
22 'action': 'stores_coords'
23 })
24
25 def parse(self, response):
26 result = json.loads(response.body_as_unicode())
27 for store in result['data']['stores']:
28 opening_hours = OpeningHours()
29
30 for hours_key in [
31 'monday_hours',
32 'tuesday_hours',
33 'wednesday_hours',
34 'thursday_hours',
35 'friday_hours',
36 'saturday_hours',
37 'sunday_hours'
38 ]:
39 (open_time, close_time) = store[hours_key].split(' - ')
40
41 opening_hours.add_range(day=hours_key[:2].capitalize(),
42 open_time=open_time,
43 close_time=close_time,
44 time_format='%I:%M %p')
45
46 yield GeojsonPointItem(
47 ref=store['store_id'],
48 lon=store['longitude'],
49 lat=store['latitude'],
50 name=store['name'],
51 addr_full=store['address1'],
52 city=store['city'],
53 state=store['state'],
54 postcode=store['postalcode'],
55 country=store['country'],
56 phone=store['phone'],
57 website=store['url'],
58 opening_hours=opening_hours.as_opening_hours(),
59 extras={
60 'amenity:fuel': True,
61 'atm': int(store['atm'] or 0) == 1,
62 'car_wash': int(store['car_wash'] or 0) == 1,
63 'fuel:diesel': int(store['diesel'] or 0) == 1,
64 'fuel:e85': int(store['e85'] or 0) == 1,
65 'hgv': int(store['semi_truck_fuel_island'] or 0) == 1,
66 }
67 )
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/kum_and_go.py b/locations/spiders/kum_and_go.py
--- a/locations/spiders/kum_and_go.py
+++ b/locations/spiders/kum_and_go.py
@@ -1,4 +1,5 @@
# -*- coding: utf-8 -*-
+import csv
import scrapy
import json
@@ -12,56 +13,27 @@
allowed_domains = ["kumandgo.com"]
def start_requests(self):
- yield scrapy.FormRequest(
- 'https://www.kumandgo.com/wordpress/wp-admin/admin-ajax.php',
- method='POST',
- formdata={
- 'coords[latitude]': '39.74581290359507',
- 'coords[longitude]': '-104.96756559990148',
- 'radius': '3000',
- 'action': 'stores_coords'
- })
+ with open('./locations/searchable_points/us_centroids_100mile_radius_state.csv') as points:
+ reader = csv.DictReader(points)
+ for point in reader:
+ if point['state'] in ('IA', 'AR', 'CO', 'MN', 'MO', 'MT', 'NE', 'ND', 'OK', 'SD', 'WY'):
+ yield scrapy.Request(
+ f'https://www.kumandgo.com/wordpress/wp-admin/admin-ajax.php?action=store_search&lat={point["latitude"]}&lng={point["longitude"]}&max_results=100&search_radius=100',
+ )
def parse(self, response):
- result = json.loads(response.body_as_unicode())
- for store in result['data']['stores']:
- opening_hours = OpeningHours()
-
- for hours_key in [
- 'monday_hours',
- 'tuesday_hours',
- 'wednesday_hours',
- 'thursday_hours',
- 'friday_hours',
- 'saturday_hours',
- 'sunday_hours'
- ]:
- (open_time, close_time) = store[hours_key].split(' - ')
-
- opening_hours.add_range(day=hours_key[:2].capitalize(),
- open_time=open_time,
- close_time=close_time,
- time_format='%I:%M %p')
+ result = json.loads(response.text)
+ for store in result:
yield GeojsonPointItem(
- ref=store['store_id'],
- lon=store['longitude'],
- lat=store['latitude'],
- name=store['name'],
- addr_full=store['address1'],
+ ref=store['id'],
+ lon=store['lng'],
+ lat=store['lat'],
+ addr_full=store['address'],
city=store['city'],
state=store['state'],
- postcode=store['postalcode'],
+ postcode=store['zip'],
country=store['country'],
phone=store['phone'],
- website=store['url'],
- opening_hours=opening_hours.as_opening_hours(),
- extras={
- 'amenity:fuel': True,
- 'atm': int(store['atm'] or 0) == 1,
- 'car_wash': int(store['car_wash'] or 0) == 1,
- 'fuel:diesel': int(store['diesel'] or 0) == 1,
- 'fuel:e85': int(store['e85'] or 0) == 1,
- 'hgv': int(store['semi_truck_fuel_island'] or 0) == 1,
- }
+ website=store['permalink'],
)
| {"golden_diff": "diff --git a/locations/spiders/kum_and_go.py b/locations/spiders/kum_and_go.py\n--- a/locations/spiders/kum_and_go.py\n+++ b/locations/spiders/kum_and_go.py\n@@ -1,4 +1,5 @@\n # -*- coding: utf-8 -*-\n+import csv\n import scrapy\n import json\n \n@@ -12,56 +13,27 @@\n allowed_domains = [\"kumandgo.com\"]\n \n def start_requests(self):\n- yield scrapy.FormRequest(\n- 'https://www.kumandgo.com/wordpress/wp-admin/admin-ajax.php',\n- method='POST',\n- formdata={\n- 'coords[latitude]': '39.74581290359507',\n- 'coords[longitude]': '-104.96756559990148',\n- 'radius': '3000',\n- 'action': 'stores_coords'\n- })\n+ with open('./locations/searchable_points/us_centroids_100mile_radius_state.csv') as points:\n+ reader = csv.DictReader(points)\n+ for point in reader:\n+ if point['state'] in ('IA', 'AR', 'CO', 'MN', 'MO', 'MT', 'NE', 'ND', 'OK', 'SD', 'WY'):\n+ yield scrapy.Request(\n+ f'https://www.kumandgo.com/wordpress/wp-admin/admin-ajax.php?action=store_search&lat={point[\"latitude\"]}&lng={point[\"longitude\"]}&max_results=100&search_radius=100',\n+ )\n \n def parse(self, response):\n- result = json.loads(response.body_as_unicode())\n- for store in result['data']['stores']:\n- opening_hours = OpeningHours()\n-\n- for hours_key in [\n- 'monday_hours',\n- 'tuesday_hours',\n- 'wednesday_hours',\n- 'thursday_hours',\n- 'friday_hours',\n- 'saturday_hours',\n- 'sunday_hours'\n- ]:\n- (open_time, close_time) = store[hours_key].split(' - ')\n-\n- opening_hours.add_range(day=hours_key[:2].capitalize(),\n- open_time=open_time,\n- close_time=close_time,\n- time_format='%I:%M %p')\n+ result = json.loads(response.text)\n+ for store in result:\n \n yield GeojsonPointItem(\n- ref=store['store_id'],\n- lon=store['longitude'],\n- lat=store['latitude'],\n- name=store['name'],\n- addr_full=store['address1'],\n+ ref=store['id'],\n+ lon=store['lng'],\n+ lat=store['lat'],\n+ addr_full=store['address'],\n city=store['city'],\n state=store['state'],\n- postcode=store['postalcode'],\n+ postcode=store['zip'],\n country=store['country'],\n phone=store['phone'],\n- website=store['url'],\n- opening_hours=opening_hours.as_opening_hours(),\n- extras={\n- 'amenity:fuel': True,\n- 'atm': int(store['atm'] or 0) == 1,\n- 'car_wash': int(store['car_wash'] or 0) == 1,\n- 'fuel:diesel': int(store['diesel'] or 0) == 1,\n- 'fuel:e85': int(store['e85'] or 0) == 1,\n- 'hgv': int(store['semi_truck_fuel_island'] or 0) == 1,\n- }\n+ website=store['permalink'],\n )\n", "issue": "Spider kum_and_go is broken\nDuring the global build at 2021-08-25-14-42-15, spider **kum_and_go** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/logs/kum_and_go.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/kum_and_go.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/kum_and_go.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass KumAndGoSpider(scrapy.Spider):\n name = \"kum_and_go\"\n item_attributes = {'brand': \"Kum & Go\", 'brand_wikidata': \"Q6443340\"}\n allowed_domains = [\"kumandgo.com\"]\n\n def start_requests(self):\n yield scrapy.FormRequest(\n 'https://www.kumandgo.com/wordpress/wp-admin/admin-ajax.php',\n method='POST',\n formdata={\n 'coords[latitude]': '39.74581290359507',\n 'coords[longitude]': 
'-104.96756559990148',\n 'radius': '3000',\n 'action': 'stores_coords'\n })\n\n def parse(self, response):\n result = json.loads(response.body_as_unicode())\n for store in result['data']['stores']:\n opening_hours = OpeningHours()\n\n for hours_key in [\n 'monday_hours',\n 'tuesday_hours',\n 'wednesday_hours',\n 'thursday_hours',\n 'friday_hours',\n 'saturday_hours',\n 'sunday_hours'\n ]:\n (open_time, close_time) = store[hours_key].split(' - ')\n\n opening_hours.add_range(day=hours_key[:2].capitalize(),\n open_time=open_time,\n close_time=close_time,\n time_format='%I:%M %p')\n\n yield GeojsonPointItem(\n ref=store['store_id'],\n lon=store['longitude'],\n lat=store['latitude'],\n name=store['name'],\n addr_full=store['address1'],\n city=store['city'],\n state=store['state'],\n postcode=store['postalcode'],\n country=store['country'],\n phone=store['phone'],\n website=store['url'],\n opening_hours=opening_hours.as_opening_hours(),\n extras={\n 'amenity:fuel': True,\n 'atm': int(store['atm'] or 0) == 1,\n 'car_wash': int(store['car_wash'] or 0) == 1,\n 'fuel:diesel': int(store['diesel'] or 0) == 1,\n 'fuel:e85': int(store['e85'] or 0) == 1,\n 'hgv': int(store['semi_truck_fuel_island'] or 0) == 1,\n }\n )\n", "path": "locations/spiders/kum_and_go.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport csv\nimport scrapy\nimport json\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass KumAndGoSpider(scrapy.Spider):\n name = \"kum_and_go\"\n item_attributes = {'brand': \"Kum & Go\", 'brand_wikidata': \"Q6443340\"}\n allowed_domains = [\"kumandgo.com\"]\n\n def start_requests(self):\n with open('./locations/searchable_points/us_centroids_100mile_radius_state.csv') as points:\n reader = csv.DictReader(points)\n for point in reader:\n if point['state'] in ('IA', 'AR', 'CO', 'MN', 'MO', 'MT', 'NE', 'ND', 'OK', 'SD', 'WY'):\n yield scrapy.Request(\n f'https://www.kumandgo.com/wordpress/wp-admin/admin-ajax.php?action=store_search&lat={point[\"latitude\"]}&lng={point[\"longitude\"]}&max_results=100&search_radius=100',\n )\n\n def parse(self, response):\n result = json.loads(response.text)\n for store in result:\n\n yield GeojsonPointItem(\n ref=store['id'],\n lon=store['lng'],\n lat=store['lat'],\n addr_full=store['address'],\n city=store['city'],\n state=store['state'],\n postcode=store['zip'],\n country=store['country'],\n phone=store['phone'],\n website=store['permalink'],\n )\n", "path": "locations/spiders/kum_and_go.py"}]} | 1,148 | 827 |
gh_patches_debug_16932 | rasdani/github-patches | git_diff | Mailu__Mailu-812 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Password field(s) of type "text" instead of "password"
In 1.5 [this line](https://github.com/Mailu/Mailu/blob/master/core/admin/mailu/ui/templates/fetch/create.html#L22) creates a field of type _text_, not _password_.
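
For illustration, a minimal WTForms sketch of the distinction (hypothetical form and field names, not the actual Mailu code): `StringField` renders `<input type="text">`, while `PasswordField` renders `<input type="password">`.

```python
from wtforms import Form, fields, validators

class FetchFormSketch(Form):
    username = fields.StringField('Username', [validators.DataRequired()])
    # PasswordField uses the PasswordInput widget, i.e. <input type="password">
    password = fields.PasswordField('Password', [validators.DataRequired()])
```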
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/ui/forms.py`
Content:
```
1 from wtforms import validators, fields, widgets
2 from wtforms_components import fields as fields_
3 from flask_babel import lazy_gettext as _
4
5 import flask_login
6 import flask_wtf
7 import re
8
9 LOCALPART_REGEX = "^[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*$"
10
11 class DestinationField(fields.SelectMultipleField):
12 """ Allow for multiple emails selection from current user choices and
13 additional email addresses.
14 """
15
16 validator = re.compile(r'^.+@([^.@][^@]+)$', re.IGNORECASE)
17
18 def iter_choices(self):
19 managed = [
20 str(email)
21 for email in flask_login.current_user.get_managed_emails()
22 ]
23 for email in managed:
24 selected = self.data is not None and self.coerce(email) in self.data
25 yield (email, email, selected)
26 for email in self.data or ():
27 if email not in managed:
28 yield (email, email, True)
29
30 def pre_validate(self, form):
31 for item in self.data:
32 if not self.validator.match(item):
33 raise validators.ValidationError(_('Invalid email address.'))
34
35 class MultipleEmailAddressesVerify(object):
36 def __init__(self,message=_('Invalid email address.')):
37 self.message = message
38
39 def __call__(self, form, field):
40 pattern = re.compile(r'^([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,4})(,([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,4}))*$')
41 if not pattern.match(field.data.replace(" ", "")):
42 raise validators.ValidationError(self.message)
43
44 class ConfirmationForm(flask_wtf.FlaskForm):
45 submit = fields.SubmitField(_('Confirm'))
46
47
48 class LoginForm(flask_wtf.FlaskForm):
49 email = fields.StringField(_('E-mail'), [validators.Email()])
50 pw = fields.PasswordField(_('Password'), [validators.DataRequired()])
51 submit = fields.SubmitField(_('Sign in'))
52
53
54 class DomainForm(flask_wtf.FlaskForm):
55 name = fields.StringField(_('Domain name'), [validators.DataRequired()])
56 max_users = fields_.IntegerField(_('Maximum user count'), default=10)
57 max_aliases = fields_.IntegerField(_('Maximum alias count'), default=10)
58 max_quota_bytes = fields_.IntegerSliderField(_('Maximum user quota'), default=0)
59 signup_enabled = fields.BooleanField(_('Enable sign-up'), default=False)
60 comment = fields.StringField(_('Comment'))
61 submit = fields.SubmitField(_('Save'))
62
63
64 class DomainSignupForm(flask_wtf.FlaskForm):
65 name = fields.StringField(_('Domain name'), [validators.DataRequired()])
66 localpart = fields.StringField(_('Initial admin'), [validators.DataRequired()])
67 pw = fields.PasswordField(_('Admin password'), [validators.DataRequired()])
68 pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])
69 captcha = flask_wtf.RecaptchaField()
70 submit = fields.SubmitField(_('Create'))
71
72
73 class AlternativeForm(flask_wtf.FlaskForm):
74 name = fields.StringField(_('Alternative name'), [validators.DataRequired()])
75 submit = fields.SubmitField(_('Save'))
76
77
78 class RelayForm(flask_wtf.FlaskForm):
79 name = fields.StringField(_('Relayed domain name'), [validators.DataRequired()])
80 smtp = fields.StringField(_('Remote host'))
81 comment = fields.StringField(_('Comment'))
82 submit = fields.SubmitField(_('Save'))
83
84
85 class UserForm(flask_wtf.FlaskForm):
86 localpart = fields.StringField(_('E-mail'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])
87 pw = fields.PasswordField(_('Password'), [validators.DataRequired()])
88 pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])
89 quota_bytes = fields_.IntegerSliderField(_('Quota'), default=1000000000)
90 enable_imap = fields.BooleanField(_('Allow IMAP access'), default=True)
91 enable_pop = fields.BooleanField(_('Allow POP3 access'), default=True)
92 displayed_name = fields.StringField(_('Displayed name'))
93 comment = fields.StringField(_('Comment'))
94 enabled = fields.BooleanField(_('Enabled'), default=True)
95 submit = fields.SubmitField(_('Save'))
96
97
98 class UserSignupForm(flask_wtf.FlaskForm):
99 localpart = fields.StringField(_('Email address'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])
100 pw = fields.PasswordField(_('Password'), [validators.DataRequired()])
101 pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])
102 submit = fields.SubmitField(_('Sign up'))
103
104 class UserSignupFormCaptcha(UserSignupForm):
105 captcha = flask_wtf.RecaptchaField()
106
107 class UserSettingsForm(flask_wtf.FlaskForm):
108 displayed_name = fields.StringField(_('Displayed name'))
109 spam_enabled = fields.BooleanField(_('Enable spam filter'))
110 spam_threshold = fields_.IntegerSliderField(_('Spam filter tolerance'))
111 forward_enabled = fields.BooleanField(_('Enable forwarding'))
112 forward_keep = fields.BooleanField(_('Keep a copy of the emails'))
113 forward_destination = fields.StringField(_('Destination'), [validators.Optional(), MultipleEmailAddressesVerify()])
114 submit = fields.SubmitField(_('Save settings'))
115
116
117 class UserPasswordForm(flask_wtf.FlaskForm):
118 pw = fields.PasswordField(_('Password'), [validators.DataRequired()])
119 pw2 = fields.PasswordField(_('Password check'), [validators.DataRequired()])
120 submit = fields.SubmitField(_('Update password'))
121
122
123 class UserReplyForm(flask_wtf.FlaskForm):
124 reply_enabled = fields.BooleanField(_('Enable automatic reply'))
125 reply_subject = fields.StringField(_('Reply subject'))
126 reply_body = fields.StringField(_('Reply body'),
127 widget=widgets.TextArea())
128 reply_startdate = fields.html5.DateField(_('Start of vacation'))
129 reply_enddate = fields.html5.DateField(_('End of vacation'))
130 submit = fields.SubmitField(_('Update'))
131
132
133 class TokenForm(flask_wtf.FlaskForm):
134 displayed_password = fields.StringField(
135 _('Your token (write it down, as it will never be displayed again)')
136 )
137 raw_password = fields.HiddenField([validators.DataRequired()])
138 comment = fields.StringField(_('Comment'))
139 ip = fields.StringField(
140 _('Authorized IP'), [validators.Optional(), validators.IPAddress()]
141 )
142 submit = fields.SubmitField(_('Save'))
143
144
145 class AliasForm(flask_wtf.FlaskForm):
146 localpart = fields.StringField(_('Alias'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])
147 wildcard = fields.BooleanField(
148 _('Use SQL LIKE Syntax (e.g. for catch-all aliases)'))
149 destination = DestinationField(_('Destination'))
150 comment = fields.StringField(_('Comment'))
151 submit = fields.SubmitField(_('Save'))
152
153
154 class AdminForm(flask_wtf.FlaskForm):
155 admin = fields.SelectField(_('Admin email'), choices=[])
156 submit = fields.SubmitField(_('Submit'))
157
158
159 class ManagerForm(flask_wtf.FlaskForm):
160 manager = fields.SelectField(_('Manager email'))
161 submit = fields.SubmitField(_('Submit'))
162
163
164 class FetchForm(flask_wtf.FlaskForm):
165 protocol = fields.SelectField(_('Protocol'), choices=[
166 ('imap', 'IMAP'), ('pop3', 'POP3')
167 ])
168 host = fields.StringField(_('Hostname or IP'))
169 port = fields.IntegerField(_('TCP port'))
170 tls = fields.BooleanField(_('Enable TLS'))
171 username = fields.StringField(_('Username'))
172 password = fields.StringField(_('Password'))
173 keep = fields.BooleanField(_('Keep emails on the server'))
174 submit = fields.SubmitField(_('Submit'))
175
176
177 class AnnouncementForm(flask_wtf.FlaskForm):
178 announcement_subject = fields.StringField(_('Announcement subject'),
179 [validators.DataRequired()])
180 announcement_body = fields.StringField(_('Announcement body'),
181 [validators.DataRequired()], widget=widgets.TextArea())
182 submit = fields.SubmitField(_('Send'))
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/ui/forms.py b/core/admin/mailu/ui/forms.py
--- a/core/admin/mailu/ui/forms.py
+++ b/core/admin/mailu/ui/forms.py
@@ -165,11 +165,11 @@
protocol = fields.SelectField(_('Protocol'), choices=[
('imap', 'IMAP'), ('pop3', 'POP3')
])
- host = fields.StringField(_('Hostname or IP'))
- port = fields.IntegerField(_('TCP port'))
+ host = fields.StringField(_('Hostname or IP'), [validators.DataRequired()])
+ port = fields.IntegerField(_('TCP port'), [validators.DataRequired(), validators.NumberRange(min=0, max=65535)])
tls = fields.BooleanField(_('Enable TLS'))
- username = fields.StringField(_('Username'))
- password = fields.StringField(_('Password'))
+ username = fields.StringField(_('Username'), [validators.DataRequired()])
+ password = fields.PasswordField(_('Password'), [validators.DataRequired()])
keep = fields.BooleanField(_('Keep emails on the server'))
submit = fields.SubmitField(_('Submit'))
| {"golden_diff": "diff --git a/core/admin/mailu/ui/forms.py b/core/admin/mailu/ui/forms.py\n--- a/core/admin/mailu/ui/forms.py\n+++ b/core/admin/mailu/ui/forms.py\n@@ -165,11 +165,11 @@\n protocol = fields.SelectField(_('Protocol'), choices=[\n ('imap', 'IMAP'), ('pop3', 'POP3')\n ])\n- host = fields.StringField(_('Hostname or IP'))\n- port = fields.IntegerField(_('TCP port'))\n+ host = fields.StringField(_('Hostname or IP'), [validators.DataRequired()])\n+ port = fields.IntegerField(_('TCP port'), [validators.DataRequired(), validators.NumberRange(min=0, max=65535)])\n tls = fields.BooleanField(_('Enable TLS'))\n- username = fields.StringField(_('Username'))\n- password = fields.StringField(_('Password'))\n+ username = fields.StringField(_('Username'), [validators.DataRequired()])\n+ password = fields.PasswordField(_('Password'), [validators.DataRequired()])\n keep = fields.BooleanField(_('Keep emails on the server'))\n submit = fields.SubmitField(_('Submit'))\n", "issue": "Password field(s) of type \"text\" instead of \"password\"\nIn 1.5 [this line](https://github.com/Mailu/Mailu/blob/master/core/admin/mailu/ui/templates/fetch/create.html#L22) creates a field of type _text_ not _password_.\n", "before_files": [{"content": "from wtforms import validators, fields, widgets\nfrom wtforms_components import fields as fields_\nfrom flask_babel import lazy_gettext as _\n\nimport flask_login\nimport flask_wtf\nimport re\n\nLOCALPART_REGEX = \"^[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*$\"\n\nclass DestinationField(fields.SelectMultipleField):\n \"\"\" Allow for multiple emails selection from current user choices and\n additional email addresses.\n \"\"\"\n\n validator = re.compile(r'^.+@([^.@][^@]+)$', re.IGNORECASE)\n\n def iter_choices(self):\n managed = [\n str(email)\n for email in flask_login.current_user.get_managed_emails()\n ]\n for email in managed:\n selected = self.data is not None and self.coerce(email) in self.data\n yield (email, email, selected)\n for email in self.data or ():\n if email not in managed:\n yield (email, email, True)\n\n def pre_validate(self, form):\n for item in self.data:\n if not self.validator.match(item):\n raise validators.ValidationError(_('Invalid email address.'))\n\nclass MultipleEmailAddressesVerify(object):\n def __init__(self,message=_('Invalid email address.')):\n self.message = message\n\n def __call__(self, form, field):\n pattern = re.compile(r'^([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,4})(,([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,4}))*$')\n if not pattern.match(field.data.replace(\" \", \"\")):\n raise validators.ValidationError(self.message)\n\nclass ConfirmationForm(flask_wtf.FlaskForm):\n submit = fields.SubmitField(_('Confirm'))\n\n\nclass LoginForm(flask_wtf.FlaskForm):\n email = fields.StringField(_('E-mail'), [validators.Email()])\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Sign in'))\n\n\nclass DomainForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Domain name'), [validators.DataRequired()])\n max_users = fields_.IntegerField(_('Maximum user count'), default=10)\n max_aliases = fields_.IntegerField(_('Maximum alias count'), default=10)\n max_quota_bytes = fields_.IntegerSliderField(_('Maximum user quota'), default=0)\n signup_enabled = fields.BooleanField(_('Enable sign-up'), default=False)\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass 
DomainSignupForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Domain name'), [validators.DataRequired()])\n localpart = fields.StringField(_('Initial admin'), [validators.DataRequired()])\n pw = fields.PasswordField(_('Admin password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n captcha = flask_wtf.RecaptchaField()\n submit = fields.SubmitField(_('Create'))\n\n\nclass AlternativeForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Alternative name'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Save'))\n\n\nclass RelayForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Relayed domain name'), [validators.DataRequired()])\n smtp = fields.StringField(_('Remote host'))\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass UserForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('E-mail'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n quota_bytes = fields_.IntegerSliderField(_('Quota'), default=1000000000)\n enable_imap = fields.BooleanField(_('Allow IMAP access'), default=True)\n enable_pop = fields.BooleanField(_('Allow POP3 access'), default=True)\n displayed_name = fields.StringField(_('Displayed name'))\n comment = fields.StringField(_('Comment'))\n enabled = fields.BooleanField(_('Enabled'), default=True)\n submit = fields.SubmitField(_('Save'))\n\n\nclass UserSignupForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('Email address'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n submit = fields.SubmitField(_('Sign up'))\n\nclass UserSignupFormCaptcha(UserSignupForm):\n captcha = flask_wtf.RecaptchaField()\n\nclass UserSettingsForm(flask_wtf.FlaskForm):\n displayed_name = fields.StringField(_('Displayed name'))\n spam_enabled = fields.BooleanField(_('Enable spam filter'))\n spam_threshold = fields_.IntegerSliderField(_('Spam filter tolerance'))\n forward_enabled = fields.BooleanField(_('Enable forwarding'))\n forward_keep = fields.BooleanField(_('Keep a copy of the emails'))\n forward_destination = fields.StringField(_('Destination'), [validators.Optional(), MultipleEmailAddressesVerify()])\n submit = fields.SubmitField(_('Save settings'))\n\n\nclass UserPasswordForm(flask_wtf.FlaskForm):\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Password check'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Update password'))\n\n\nclass UserReplyForm(flask_wtf.FlaskForm):\n reply_enabled = fields.BooleanField(_('Enable automatic reply'))\n reply_subject = fields.StringField(_('Reply subject'))\n reply_body = fields.StringField(_('Reply body'),\n widget=widgets.TextArea())\n reply_startdate = fields.html5.DateField(_('Start of vacation'))\n reply_enddate = fields.html5.DateField(_('End of vacation'))\n submit = fields.SubmitField(_('Update'))\n\n\nclass TokenForm(flask_wtf.FlaskForm):\n displayed_password = fields.StringField(\n _('Your token (write it down, as it will never be displayed again)')\n )\n raw_password = fields.HiddenField([validators.DataRequired()])\n comment = fields.StringField(_('Comment'))\n ip = 
fields.StringField(\n _('Authorized IP'), [validators.Optional(), validators.IPAddress()]\n )\n submit = fields.SubmitField(_('Save'))\n\n\nclass AliasForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('Alias'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n wildcard = fields.BooleanField(\n _('Use SQL LIKE Syntax (e.g. for catch-all aliases)'))\n destination = DestinationField(_('Destination'))\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass AdminForm(flask_wtf.FlaskForm):\n admin = fields.SelectField(_('Admin email'), choices=[])\n submit = fields.SubmitField(_('Submit'))\n\n\nclass ManagerForm(flask_wtf.FlaskForm):\n manager = fields.SelectField(_('Manager email'))\n submit = fields.SubmitField(_('Submit'))\n\n\nclass FetchForm(flask_wtf.FlaskForm):\n protocol = fields.SelectField(_('Protocol'), choices=[\n ('imap', 'IMAP'), ('pop3', 'POP3')\n ])\n host = fields.StringField(_('Hostname or IP'))\n port = fields.IntegerField(_('TCP port'))\n tls = fields.BooleanField(_('Enable TLS'))\n username = fields.StringField(_('Username'))\n password = fields.StringField(_('Password'))\n keep = fields.BooleanField(_('Keep emails on the server'))\n submit = fields.SubmitField(_('Submit'))\n\n\nclass AnnouncementForm(flask_wtf.FlaskForm):\n announcement_subject = fields.StringField(_('Announcement subject'),\n [validators.DataRequired()])\n announcement_body = fields.StringField(_('Announcement body'),\n [validators.DataRequired()], widget=widgets.TextArea())\n submit = fields.SubmitField(_('Send'))\n", "path": "core/admin/mailu/ui/forms.py"}], "after_files": [{"content": "from wtforms import validators, fields, widgets\nfrom wtforms_components import fields as fields_\nfrom flask_babel import lazy_gettext as _\n\nimport flask_login\nimport flask_wtf\nimport re\n\nLOCALPART_REGEX = \"^[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*$\"\n\nclass DestinationField(fields.SelectMultipleField):\n \"\"\" Allow for multiple emails selection from current user choices and\n additional email addresses.\n \"\"\"\n\n validator = re.compile(r'^.+@([^.@][^@]+)$', re.IGNORECASE)\n\n def iter_choices(self):\n managed = [\n str(email)\n for email in flask_login.current_user.get_managed_emails()\n ]\n for email in managed:\n selected = self.data is not None and self.coerce(email) in self.data\n yield (email, email, selected)\n for email in self.data or ():\n if email not in managed:\n yield (email, email, True)\n\n def pre_validate(self, form):\n for item in self.data:\n if not self.validator.match(item):\n raise validators.ValidationError(_('Invalid email address.'))\n\nclass MultipleEmailAddressesVerify(object):\n def __init__(self,message=_('Invalid email address.')):\n self.message = message\n\n def __call__(self, form, field):\n pattern = re.compile(r'^([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,4})(,([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,4}))*$')\n if not pattern.match(field.data.replace(\" \", \"\")):\n raise validators.ValidationError(self.message)\n\nclass ConfirmationForm(flask_wtf.FlaskForm):\n submit = fields.SubmitField(_('Confirm'))\n\n\nclass LoginForm(flask_wtf.FlaskForm):\n email = fields.StringField(_('E-mail'), [validators.Email()])\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Sign in'))\n\n\nclass DomainForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Domain name'), 
[validators.DataRequired()])\n max_users = fields_.IntegerField(_('Maximum user count'), default=10)\n max_aliases = fields_.IntegerField(_('Maximum alias count'), default=10)\n max_quota_bytes = fields_.IntegerSliderField(_('Maximum user quota'), default=0)\n signup_enabled = fields.BooleanField(_('Enable sign-up'), default=False)\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass DomainSignupForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Domain name'), [validators.DataRequired()])\n localpart = fields.StringField(_('Initial admin'), [validators.DataRequired()])\n pw = fields.PasswordField(_('Admin password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n captcha = flask_wtf.RecaptchaField()\n submit = fields.SubmitField(_('Create'))\n\n\nclass AlternativeForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Alternative name'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Save'))\n\n\nclass RelayForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Relayed domain name'), [validators.DataRequired()])\n smtp = fields.StringField(_('Remote host'))\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass UserForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('E-mail'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n quota_bytes = fields_.IntegerSliderField(_('Quota'), default=1000000000)\n enable_imap = fields.BooleanField(_('Allow IMAP access'), default=True)\n enable_pop = fields.BooleanField(_('Allow POP3 access'), default=True)\n displayed_name = fields.StringField(_('Displayed name'))\n comment = fields.StringField(_('Comment'))\n enabled = fields.BooleanField(_('Enabled'), default=True)\n submit = fields.SubmitField(_('Save'))\n\n\nclass UserSignupForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('Email address'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n submit = fields.SubmitField(_('Sign up'))\n\nclass UserSignupFormCaptcha(UserSignupForm):\n captcha = flask_wtf.RecaptchaField()\n\nclass UserSettingsForm(flask_wtf.FlaskForm):\n displayed_name = fields.StringField(_('Displayed name'))\n spam_enabled = fields.BooleanField(_('Enable spam filter'))\n spam_threshold = fields_.IntegerSliderField(_('Spam filter tolerance'))\n forward_enabled = fields.BooleanField(_('Enable forwarding'))\n forward_keep = fields.BooleanField(_('Keep a copy of the emails'))\n forward_destination = fields.StringField(_('Destination'), [validators.Optional(), MultipleEmailAddressesVerify()])\n submit = fields.SubmitField(_('Save settings'))\n\n\nclass UserPasswordForm(flask_wtf.FlaskForm):\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Password check'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Update password'))\n\n\nclass UserReplyForm(flask_wtf.FlaskForm):\n reply_enabled = fields.BooleanField(_('Enable automatic reply'))\n reply_subject = fields.StringField(_('Reply subject'))\n reply_body = fields.StringField(_('Reply body'),\n widget=widgets.TextArea())\n reply_startdate = 
fields.html5.DateField(_('Start of vacation'))\n reply_enddate = fields.html5.DateField(_('End of vacation'))\n submit = fields.SubmitField(_('Update'))\n\n\nclass TokenForm(flask_wtf.FlaskForm):\n displayed_password = fields.StringField(\n _('Your token (write it down, as it will never be displayed again)')\n )\n raw_password = fields.HiddenField([validators.DataRequired()])\n comment = fields.StringField(_('Comment'))\n ip = fields.StringField(\n _('Authorized IP'), [validators.Optional(), validators.IPAddress()]\n )\n submit = fields.SubmitField(_('Save'))\n\n\nclass AliasForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('Alias'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n wildcard = fields.BooleanField(\n _('Use SQL LIKE Syntax (e.g. for catch-all aliases)'))\n destination = DestinationField(_('Destination'))\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass AdminForm(flask_wtf.FlaskForm):\n admin = fields.SelectField(_('Admin email'), choices=[])\n submit = fields.SubmitField(_('Submit'))\n\n\nclass ManagerForm(flask_wtf.FlaskForm):\n manager = fields.SelectField(_('Manager email'))\n submit = fields.SubmitField(_('Submit'))\n\n\nclass FetchForm(flask_wtf.FlaskForm):\n protocol = fields.SelectField(_('Protocol'), choices=[\n ('imap', 'IMAP'), ('pop3', 'POP3')\n ])\n host = fields.StringField(_('Hostname or IP'), [validators.DataRequired()])\n port = fields.IntegerField(_('TCP port'), [validators.DataRequired(), validators.NumberRange(min=0, max=65535)])\n tls = fields.BooleanField(_('Enable TLS'))\n username = fields.StringField(_('Username'), [validators.DataRequired()])\n password = fields.PasswordField(_('Password'), [validators.DataRequired()])\n keep = fields.BooleanField(_('Keep emails on the server'))\n submit = fields.SubmitField(_('Submit'))\n\n\nclass AnnouncementForm(flask_wtf.FlaskForm):\n announcement_subject = fields.StringField(_('Announcement subject'),\n [validators.DataRequired()])\n announcement_body = fields.StringField(_('Announcement body'),\n [validators.DataRequired()], widget=widgets.TextArea())\n submit = fields.SubmitField(_('Send'))\n", "path": "core/admin/mailu/ui/forms.py"}]} | 2,493 | 234 |
gh_patches_debug_13116 | rasdani/github-patches | git_diff | nipy__nipype-2767 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Req to deal with SLURM socket errors more patiently
### Summary
At the end of issue #2693 @effigies noted that the error that @dalejn was experiencing was due to the SLURM master throwing an error when it was polled with squeue, possibly because it was busy. After some further testing, we now believe that the NIH HPC SLURM master will throw this error at least once a day even with a modest polling interval. 

We would like to request a patch such that if NiPype receives any kind of timeout error (we've seen a few different kinds) from squeue, it politely waits and tries again.
### Actual behavior
```
RuntimeError: Command:
squeue -j 9448406
Standard output:
Standard error:
slurm_load_jobs error: Socket timed out on send/recv operation
Return code: 1
```
or
```
The batch system is not available at the moment.
```
and NiPype exits
### Requested behavior
```
squeue is busy, will try again
```
And NiPype does _not_ exit
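
For illustration only, a minimal sketch of the kind of tolerant polling being requested; this is not Nipype's actual implementation, and the error markers, function name, and timings are assumptions:

```python
import subprocess
import time

# assumed substrings of the transient squeue errors shown above
TRANSIENT_ERRORS = ("Socket timed out", "not available at the moment")

def squeue_job_is_pending(jobid, retries=3, wait=30):
    """Poll squeue, treating transient scheduler errors as 'still pending'."""
    for _ in range(retries):
        proc = subprocess.run(["squeue", "-j", str(jobid)],
                              capture_output=True, text=True)
        if proc.returncode == 0:
            return str(jobid) in proc.stdout
        if any(marker in proc.stderr for marker in TRANSIENT_ERRORS):
            time.sleep(wait)  # squeue is busy, will try again
            continue
        raise RuntimeError(proc.stderr)
    return True  # still unsure after retries: assume the job is pending
```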
### Platform details:
```
(NiPypeUpdate) [zhoud4@felix ETPB]$ python -c "import nipype; from pprint import pprint; pprint(nipype.get_info())"
{'commit_hash': 'ec7457c23',
'commit_source': 'installation',
'networkx_version': '2.2',
'nibabel_version': '2.3.1',
'nipype_version': '1.1.3',
'numpy_version': '1.15.3',
'pkg_path': '/data/zhoud4/python/envs/NiPypeUpdate/lib/python3.5/site-packages/nipype',
'scipy_version': '1.1.0',
'sys_executable': '/data/zhoud4/python/envs/NiPypeUpdate/bin/python',
'sys_platform': 'linux',
'sys_version': '3.5.4 | packaged by conda-forge | (default, Aug 10 2017, '
'01:38:41) \n'
'[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)]',
'traits_version': '4.6.0'}
(NiPypeUpdate) [zhoud4@felix ETPB]$
(NiPypeUpdate) [zhoud4@biowulf ETPB]$ sinfo -V
slurm 17.02.9
(NiPypeUpdate) [zhoud4@biowulf ETPB]$
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nipype/pipeline/plugins/slurm.py`
Content:
```
1 '''
2 Created on Aug 2, 2013
3
4 @author: chadcumba
5
6 Parallel workflow execution with SLURM
7 '''
8 from __future__ import (print_function, division, unicode_literals,
9 absolute_import)
10 from builtins import open
11
12 import os
13 import re
14 from time import sleep
15
16 from ... import logging
17 from ...interfaces.base import CommandLine
18 from .base import SGELikeBatchManagerBase, logger
19
20 iflogger = logging.getLogger('nipype.interface')
21
22
23 class SLURMPlugin(SGELikeBatchManagerBase):
24 '''
25 Execute using SLURM
26
27 The plugin_args input to run can be used to control the SLURM execution.
28 Currently supported options are:
29
30 - template : template to use for batch job submission
31
32 - sbatch_args: arguments to pass prepend to the sbatch call
33
34
35 '''
36
37 def __init__(self, **kwargs):
38
39 template = "#!/bin/bash"
40
41 self._retry_timeout = 2
42 self._max_tries = 2
43 self._template = template
44 self._sbatch_args = None
45 self._jobid_re = "Submitted batch job ([0-9]*)"
46
47 if 'plugin_args' in kwargs and kwargs['plugin_args']:
48 if 'retry_timeout' in kwargs['plugin_args']:
49 self._retry_timeout = kwargs['plugin_args']['retry_timeout']
50 if 'max_tries' in kwargs['plugin_args']:
51 self._max_tries = kwargs['plugin_args']['max_tries']
52 if 'jobid_re' in kwargs['plugin_args']:
53 self._jobid_re = kwargs['plugin_args']['jobid_re']
54 if 'template' in kwargs['plugin_args']:
55 self._template = kwargs['plugin_args']['template']
56 if os.path.isfile(self._template):
57 with open(self._template) as f:
58 self._template = f.read()
59 if 'sbatch_args' in kwargs['plugin_args']:
60 self._sbatch_args = kwargs['plugin_args']['sbatch_args']
61 self._pending = {}
62 super(SLURMPlugin, self).__init__(self._template, **kwargs)
63
64 def _is_pending(self, taskid):
65 try:
66 res = CommandLine(
67 'squeue',
68 args=' '.join(['-j', '%s' % taskid]),
69 resource_monitor=False,
70 terminal_output='allatonce').run()
71 return res.runtime.stdout.find(str(taskid)) > -1
72 except RuntimeError as e:
73 if 'Invalid job id' not in str(e):
74 raise(e)
75 return False
76
77 def _submit_batchtask(self, scriptfile, node):
78 """
79 This is more or less the _submit_batchtask from sge.py with flipped
80 variable names, different command line switches, and different output
81 formatting/processing
82 """
83 cmd = CommandLine(
84 'sbatch',
85 environ=dict(os.environ),
86 resource_monitor=False,
87 terminal_output='allatonce')
88 path = os.path.dirname(scriptfile)
89
90 sbatch_args = ''
91 if self._sbatch_args:
92 sbatch_args = self._sbatch_args
93 if 'sbatch_args' in node.plugin_args:
94 if 'overwrite' in node.plugin_args and\
95 node.plugin_args['overwrite']:
96 sbatch_args = node.plugin_args['sbatch_args']
97 else:
98 sbatch_args += (" " + node.plugin_args['sbatch_args'])
99 if '-o' not in sbatch_args:
100 sbatch_args = '%s -o %s' % (sbatch_args,
101 os.path.join(path, 'slurm-%j.out'))
102 if '-e' not in sbatch_args:
103 sbatch_args = '%s -e %s' % (sbatch_args,
104 os.path.join(path, 'slurm-%j.out'))
105 if node._hierarchy:
106 jobname = '.'.join((dict(os.environ)['LOGNAME'], node._hierarchy,
107 node._id))
108 else:
109 jobname = '.'.join((dict(os.environ)['LOGNAME'], node._id))
110 jobnameitems = jobname.split('.')
111 jobnameitems.reverse()
112 jobname = '.'.join(jobnameitems)
113 cmd.inputs.args = '%s -J %s %s' % (sbatch_args, jobname, scriptfile)
114 oldlevel = iflogger.level
115 iflogger.setLevel(logging.getLevelName('CRITICAL'))
116 tries = 0
117 while True:
118 try:
119 result = cmd.run()
120 except Exception as e:
121 if tries < self._max_tries:
122 tries += 1
123 # sleep 2 seconds and try again.
124 sleep(self._retry_timeout)
125 else:
126 iflogger.setLevel(oldlevel)
127 raise RuntimeError('\n'.join(
128 (('Could not submit sbatch task'
129 ' for node %s') % node._id, str(e))))
130 else:
131 break
132 logger.debug('Ran command ({0})'.format(cmd.cmdline))
133 iflogger.setLevel(oldlevel)
134 # retrieve taskid
135 lines = [line for line in result.runtime.stdout.split('\n') if line]
136 taskid = int(re.match(self._jobid_re, lines[-1]).groups()[0])
137 self._pending[taskid] = node.output_dir()
138 logger.debug('submitted sbatch task: %d for node %s' % (taskid,
139 node._id))
140 return taskid
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nipype/pipeline/plugins/slurm.py b/nipype/pipeline/plugins/slurm.py
--- a/nipype/pipeline/plugins/slurm.py
+++ b/nipype/pipeline/plugins/slurm.py
@@ -70,6 +70,14 @@
terminal_output='allatonce').run()
return res.runtime.stdout.find(str(taskid)) > -1
except RuntimeError as e:
+ if any(ss in str(e) for ss
+ in ['Socket timed out', 'not available at the moment']):
+ # do not raise error and allow recheck
+ logger.warning(
+ "SLURM timeout encountered while checking job status,"
+ " treating job %d as pending", taskid
+ )
+ return True
if 'Invalid job id' not in str(e):
raise(e)
return False
| {"golden_diff": "diff --git a/nipype/pipeline/plugins/slurm.py b/nipype/pipeline/plugins/slurm.py\n--- a/nipype/pipeline/plugins/slurm.py\n+++ b/nipype/pipeline/plugins/slurm.py\n@@ -70,6 +70,14 @@\n terminal_output='allatonce').run()\n return res.runtime.stdout.find(str(taskid)) > -1\n except RuntimeError as e:\n+ if any(ss in str(e) for ss\n+ in ['Socket timed out', 'not available at the moment']):\n+ # do not raise error and allow recheck\n+ logger.warning(\n+ \"SLURM timeout encountered while checking job status,\"\n+ \" treating job %d as pending\", taskid\n+ )\n+ return True\n if 'Invalid job id' not in str(e):\n raise(e)\n return False\n", "issue": "Req to deal with SLURM socket errors more patiently\n### Summary\r\n\r\nAt end of issue #2693 @effigies noted that the error that @dalejn was experiencing was due to the SLURM master throwing an error when it was polled with squeue, possibly because it was busy. After some further testing, we now believe that the NIH HPC SLURM master will throw this error at least once a day even with a modest polling interval. \r\n\r\nWe would like to request a patch such that if NiPype receives any kind of timeout error (we've seen a few different kinds) from squeue, that it politely waits and tries again.\r\n\r\n### Actual behavior\r\n```\r\nRuntimeError: Command:\r\nsqueue -j 9448406\r\nStandard output:\r\n\r\nStandard error:\r\nslurm_load_jobs error: Socket timed out on send/recv operation\r\nReturn code: 1\r\n```\r\nor\r\n```\r\nThe batch system is not available at the moment.\r\n```\r\nand NiPype exits\r\n### Requested behavior\r\n```\r\nsqueue is busy, will try again\r\n```\r\nAnd NiPype does _not_exit\r\n\r\n### Platform details:\r\n```\r\n(NiPypeUpdate) [zhoud4@felix ETPB]$ python -c \"import nipype; from pprint import pprint; pprint(nipype.get_info())\"\r\n{'commit_hash': 'ec7457c23',\r\n 'commit_source': 'installation',\r\n 'networkx_version': '2.2',\r\n 'nibabel_version': '2.3.1',\r\n 'nipype_version': '1.1.3',\r\n 'numpy_version': '1.15.3',\r\n 'pkg_path': '/data/zhoud4/python/envs/NiPypeUpdate/lib/python3.5/site-packages/nipype',\r\n 'scipy_version': '1.1.0',\r\n 'sys_executable': '/data/zhoud4/python/envs/NiPypeUpdate/bin/python',\r\n 'sys_platform': 'linux',\r\n 'sys_version': '3.5.4 | packaged by conda-forge | (default, Aug 10 2017, '\r\n '01:38:41) \\n'\r\n '[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)]',\r\n 'traits_version': '4.6.0'}\r\n(NiPypeUpdate) [zhoud4@felix ETPB]$\r\n(NiPypeUpdate) [zhoud4@biowulf ETPB]$ sinfo -V\r\nslurm 17.02.9\r\n(NiPypeUpdate) [zhoud4@biowulf ETPB]$ \r\n```\r\n\n", "before_files": [{"content": "'''\nCreated on Aug 2, 2013\n\n@author: chadcumba\n\nParallel workflow execution with SLURM\n'''\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\nfrom builtins import open\n\nimport os\nimport re\nfrom time import sleep\n\nfrom ... 
import logging\nfrom ...interfaces.base import CommandLine\nfrom .base import SGELikeBatchManagerBase, logger\n\niflogger = logging.getLogger('nipype.interface')\n\n\nclass SLURMPlugin(SGELikeBatchManagerBase):\n '''\n Execute using SLURM\n\n The plugin_args input to run can be used to control the SLURM execution.\n Currently supported options are:\n\n - template : template to use for batch job submission\n\n - sbatch_args: arguments to pass prepend to the sbatch call\n\n\n '''\n\n def __init__(self, **kwargs):\n\n template = \"#!/bin/bash\"\n\n self._retry_timeout = 2\n self._max_tries = 2\n self._template = template\n self._sbatch_args = None\n self._jobid_re = \"Submitted batch job ([0-9]*)\"\n\n if 'plugin_args' in kwargs and kwargs['plugin_args']:\n if 'retry_timeout' in kwargs['plugin_args']:\n self._retry_timeout = kwargs['plugin_args']['retry_timeout']\n if 'max_tries' in kwargs['plugin_args']:\n self._max_tries = kwargs['plugin_args']['max_tries']\n if 'jobid_re' in kwargs['plugin_args']:\n self._jobid_re = kwargs['plugin_args']['jobid_re']\n if 'template' in kwargs['plugin_args']:\n self._template = kwargs['plugin_args']['template']\n if os.path.isfile(self._template):\n with open(self._template) as f:\n self._template = f.read()\n if 'sbatch_args' in kwargs['plugin_args']:\n self._sbatch_args = kwargs['plugin_args']['sbatch_args']\n self._pending = {}\n super(SLURMPlugin, self).__init__(self._template, **kwargs)\n\n def _is_pending(self, taskid):\n try:\n res = CommandLine(\n 'squeue',\n args=' '.join(['-j', '%s' % taskid]),\n resource_monitor=False,\n terminal_output='allatonce').run()\n return res.runtime.stdout.find(str(taskid)) > -1\n except RuntimeError as e:\n if 'Invalid job id' not in str(e):\n raise(e)\n return False\n\n def _submit_batchtask(self, scriptfile, node):\n \"\"\"\n This is more or less the _submit_batchtask from sge.py with flipped\n variable names, different command line switches, and different output\n formatting/processing\n \"\"\"\n cmd = CommandLine(\n 'sbatch',\n environ=dict(os.environ),\n resource_monitor=False,\n terminal_output='allatonce')\n path = os.path.dirname(scriptfile)\n\n sbatch_args = ''\n if self._sbatch_args:\n sbatch_args = self._sbatch_args\n if 'sbatch_args' in node.plugin_args:\n if 'overwrite' in node.plugin_args and\\\n node.plugin_args['overwrite']:\n sbatch_args = node.plugin_args['sbatch_args']\n else:\n sbatch_args += (\" \" + node.plugin_args['sbatch_args'])\n if '-o' not in sbatch_args:\n sbatch_args = '%s -o %s' % (sbatch_args,\n os.path.join(path, 'slurm-%j.out'))\n if '-e' not in sbatch_args:\n sbatch_args = '%s -e %s' % (sbatch_args,\n os.path.join(path, 'slurm-%j.out'))\n if node._hierarchy:\n jobname = '.'.join((dict(os.environ)['LOGNAME'], node._hierarchy,\n node._id))\n else:\n jobname = '.'.join((dict(os.environ)['LOGNAME'], node._id))\n jobnameitems = jobname.split('.')\n jobnameitems.reverse()\n jobname = '.'.join(jobnameitems)\n cmd.inputs.args = '%s -J %s %s' % (sbatch_args, jobname, scriptfile)\n oldlevel = iflogger.level\n iflogger.setLevel(logging.getLevelName('CRITICAL'))\n tries = 0\n while True:\n try:\n result = cmd.run()\n except Exception as e:\n if tries < self._max_tries:\n tries += 1\n # sleep 2 seconds and try again.\n sleep(self._retry_timeout)\n else:\n iflogger.setLevel(oldlevel)\n raise RuntimeError('\\n'.join(\n (('Could not submit sbatch task'\n ' for node %s') % node._id, str(e))))\n else:\n break\n logger.debug('Ran command ({0})'.format(cmd.cmdline))\n iflogger.setLevel(oldlevel)\n # 
retrieve taskid\n lines = [line for line in result.runtime.stdout.split('\\n') if line]\n taskid = int(re.match(self._jobid_re, lines[-1]).groups()[0])\n self._pending[taskid] = node.output_dir()\n logger.debug('submitted sbatch task: %d for node %s' % (taskid,\n node._id))\n return taskid\n", "path": "nipype/pipeline/plugins/slurm.py"}], "after_files": [{"content": "'''\nCreated on Aug 2, 2013\n\n@author: chadcumba\n\nParallel workflow execution with SLURM\n'''\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\nfrom builtins import open\n\nimport os\nimport re\nfrom time import sleep\n\nfrom ... import logging\nfrom ...interfaces.base import CommandLine\nfrom .base import SGELikeBatchManagerBase, logger\n\niflogger = logging.getLogger('nipype.interface')\n\n\nclass SLURMPlugin(SGELikeBatchManagerBase):\n '''\n Execute using SLURM\n\n The plugin_args input to run can be used to control the SLURM execution.\n Currently supported options are:\n\n - template : template to use for batch job submission\n\n - sbatch_args: arguments to pass prepend to the sbatch call\n\n\n '''\n\n def __init__(self, **kwargs):\n\n template = \"#!/bin/bash\"\n\n self._retry_timeout = 2\n self._max_tries = 2\n self._template = template\n self._sbatch_args = None\n self._jobid_re = \"Submitted batch job ([0-9]*)\"\n\n if 'plugin_args' in kwargs and kwargs['plugin_args']:\n if 'retry_timeout' in kwargs['plugin_args']:\n self._retry_timeout = kwargs['plugin_args']['retry_timeout']\n if 'max_tries' in kwargs['plugin_args']:\n self._max_tries = kwargs['plugin_args']['max_tries']\n if 'jobid_re' in kwargs['plugin_args']:\n self._jobid_re = kwargs['plugin_args']['jobid_re']\n if 'template' in kwargs['plugin_args']:\n self._template = kwargs['plugin_args']['template']\n if os.path.isfile(self._template):\n with open(self._template) as f:\n self._template = f.read()\n if 'sbatch_args' in kwargs['plugin_args']:\n self._sbatch_args = kwargs['plugin_args']['sbatch_args']\n self._pending = {}\n super(SLURMPlugin, self).__init__(self._template, **kwargs)\n\n def _is_pending(self, taskid):\n try:\n res = CommandLine(\n 'squeue',\n args=' '.join(['-j', '%s' % taskid]),\n resource_monitor=False,\n terminal_output='allatonce').run()\n return res.runtime.stdout.find(str(taskid)) > -1\n except RuntimeError as e:\n if any(ss in str(e) for ss\n in ['Socket timed out', 'not available at the moment']):\n # do not raise error and allow recheck\n logger.warning(\n \"SLURM timeout encountered while checking job status,\"\n \" treating job %d as pending\", taskid\n )\n return True\n if 'Invalid job id' not in str(e):\n raise(e)\n return False\n\n def _submit_batchtask(self, scriptfile, node):\n \"\"\"\n This is more or less the _submit_batchtask from sge.py with flipped\n variable names, different command line switches, and different output\n formatting/processing\n \"\"\"\n cmd = CommandLine(\n 'sbatch',\n environ=dict(os.environ),\n resource_monitor=False,\n terminal_output='allatonce')\n path = os.path.dirname(scriptfile)\n\n sbatch_args = ''\n if self._sbatch_args:\n sbatch_args = self._sbatch_args\n if 'sbatch_args' in node.plugin_args:\n if 'overwrite' in node.plugin_args and\\\n node.plugin_args['overwrite']:\n sbatch_args = node.plugin_args['sbatch_args']\n else:\n sbatch_args += (\" \" + node.plugin_args['sbatch_args'])\n if '-o' not in sbatch_args:\n sbatch_args = '%s -o %s' % (sbatch_args,\n os.path.join(path, 'slurm-%j.out'))\n if '-e' not in sbatch_args:\n sbatch_args = '%s -e %s' % 
(sbatch_args,\n os.path.join(path, 'slurm-%j.out'))\n if node._hierarchy:\n jobname = '.'.join((dict(os.environ)['LOGNAME'], node._hierarchy,\n node._id))\n else:\n jobname = '.'.join((dict(os.environ)['LOGNAME'], node._id))\n jobnameitems = jobname.split('.')\n jobnameitems.reverse()\n jobname = '.'.join(jobnameitems)\n cmd.inputs.args = '%s -J %s %s' % (sbatch_args, jobname, scriptfile)\n oldlevel = iflogger.level\n iflogger.setLevel(logging.getLevelName('CRITICAL'))\n tries = 0\n while True:\n try:\n result = cmd.run()\n except Exception as e:\n if tries < self._max_tries:\n tries += 1\n # sleep 2 seconds and try again.\n sleep(self._retry_timeout)\n else:\n iflogger.setLevel(oldlevel)\n raise RuntimeError('\\n'.join(\n (('Could not submit sbatch task'\n ' for node %s') % node._id, str(e))))\n else:\n break\n logger.debug('Ran command ({0})'.format(cmd.cmdline))\n iflogger.setLevel(oldlevel)\n # retrieve taskid\n lines = [line for line in result.runtime.stdout.split('\\n') if line]\n taskid = int(re.match(self._jobid_re, lines[-1]).groups()[0])\n self._pending[taskid] = node.output_dir()\n logger.debug('submitted sbatch task: %d for node %s' % (taskid,\n node._id))\n return taskid\n", "path": "nipype/pipeline/plugins/slurm.py"}]} | 2,328 | 189 |
gh_patches_debug_27197 | rasdani/github-patches | git_diff | ansible__ansible-modules-core-3886 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
junos_facts - config parameter should have a default value
##### COMPONENT NAME
module : junos_facts
```
network/junos_facts.py
```
##### ANSIBLE VERSION
2.1.0.0
##### OS / ENVIRONMENT
Mac OS 10.11.5
##### SUMMARY
Currently the parameter `config` for junos_facts is mandatory and does not have a default value.
I think it would be better to set the default value to `false` and not make this parameter mandatory.
Playbooks would be simplified.
I'm happy to send a pull request for it, but I wanted to discuss this change first.
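
For illustration, a hedged sketch of what an optional boolean with a default could look like in the module's argument spec (the change the maintainers actually adopt may differ, e.g. they might leave the default as null):

```python
spec = dict(
    # no longer required; defaults to False so existing playbooks keep working
    config=dict(type='bool', default=False),
    config_format=dict(default='text', choices=['xml', 'set', 'text']),
    transport=dict(default='netconf', choices=['netconf'])
)
```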
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `network/junos/junos_facts.py`
Content:
```
1 #!/usr/bin/python
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17 #
18
19 DOCUMENTATION = """
20 ---
21 module: junos_facts
22 version_added: "2.1"
23 author: "Peter Sprygada (@privateip)"
24 short_description: Collect facts from remote device running Junos
25 description:
26 - Collects fact information from a remote device running the Junos
27 operating system. By default, the module will collect basic fact
28 information from the device to be included with the hostvars.
29 Additional fact information can be collected based on the
30 configured set of arguments.
31 extends_documentation_fragment: junos
32 options:
33 config:
34 description:
35 - The C(config) argument instructs the fact module to collect
36 the configuration from the remote device. The configuration
37 is then included in return facts. By default, the configuration
38 is returned as text. The C(config_format) can be used to return
39 different Junos configuration formats.
40 required: true
41 default: false
42 config_format:
43 description:
44 - The C(config_format) argument is used to specify the desired
45 format of the configuration file. Devices support three
46 configuration file formats. By default, the configuration
47 from the device is returned as text. The other options include
48 set and xml. If the xml option is choosen, the configuration file
49 is returned as both xml and json.
50 required: false
51 default: text
52 choices: ['xml', 'text', 'set']
53 requirements:
54 - junos-eznc
55 notes:
56 - This module requires the netconf system service be enabled on
57 the remote device being managed
58 """
59
60 EXAMPLES = """
61 # the required set of connection arguments have been purposely left off
62 # the examples for brevity
63
64 - name: collect default set of facts
65 junos_facts:
66
67 - name: collect default set of facts and configuration
68 junos_facts:
69 config: yes
70
71 - name: collect default set of facts and configuration in set format
72 junos_facts:
73 config: yes
74 config_format: set
75
76 - name: collect default set of facts and configuration in XML and JSON format
77 junos_facts:
78 config: yes
79 config_format: xml
80 """
81
82 RETURN = """
83 ansible_facts:
84 descrption: Returns the facts collect from the device
85 returned: always
86 type: dict
87 """
88
89 def main():
90 """ Main entry point for AnsibleModule
91 """
92 spec = dict(
93 config=dict(required=True, type='bool'),
94 config_format=dict(default='text', choices=['xml', 'set', 'text']),
95 transport=dict(default='netconf', choices=['netconf'])
96 )
97
98 module = get_module(argument_spec=spec,
99 supports_check_mode=True)
100
101 result = dict(changed=False)
102
103 facts = module.get_facts()
104
105 if '2RE' in facts:
106 facts['has_2RE'] = facts['2RE']
107 del facts['2RE']
108
109 facts['version_info'] = dict(facts['version_info'])
110
111 if module.params['config']:
112 config_format = module.params['config_format']
113 resp_config = module.get_config( config_format=config_format)
114
115 if config_format in ['text', 'set']:
116 facts['config'] = resp_config
117 elif config_format == "xml":
118 facts['config'] = xml_to_string(resp_config)
119 facts['config_json'] = xml_to_json(resp_config)
120
121 result['ansible_facts'] = facts
122 module.exit_json(**result)
123
124 from ansible.module_utils.basic import *
125 from ansible.module_utils.junos import *
126
127 if __name__ == '__main__':
128 main()
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/network/junos/junos_facts.py b/network/junos/junos_facts.py
--- a/network/junos/junos_facts.py
+++ b/network/junos/junos_facts.py
@@ -37,8 +37,8 @@
is then included in return facts. By default, the configuration
is returned as text. The C(config_format) can be used to return
different Junos configuration formats.
- required: true
- default: false
+ required: false
+ default: null
config_format:
description:
- The C(config_format) argument is used to specify the desired
@@ -90,7 +90,7 @@
""" Main entry point for AnsibleModule
"""
spec = dict(
- config=dict(required=True, type='bool'),
+ config=dict(type='bool'),
config_format=dict(default='text', choices=['xml', 'set', 'text']),
transport=dict(default='netconf', choices=['netconf'])
)
@@ -108,7 +108,7 @@
facts['version_info'] = dict(facts['version_info'])
- if module.params['config']:
+ if module.params['config'] is True:
config_format = module.params['config_format']
resp_config = module.get_config( config_format=config_format)
| {"golden_diff": "diff --git a/network/junos/junos_facts.py b/network/junos/junos_facts.py\n--- a/network/junos/junos_facts.py\n+++ b/network/junos/junos_facts.py\n@@ -37,8 +37,8 @@\n is then included in return facts. By default, the configuration\n is returned as text. The C(config_format) can be used to return\n different Junos configuration formats.\n- required: true\n- default: false\n+ required: false\n+ default: null\n config_format:\n description:\n - The C(config_format) argument is used to specify the desired\n@@ -90,7 +90,7 @@\n \"\"\" Main entry point for AnsibleModule\n \"\"\"\n spec = dict(\n- config=dict(required=True, type='bool'),\n+ config=dict(type='bool'),\n config_format=dict(default='text', choices=['xml', 'set', 'text']),\n transport=dict(default='netconf', choices=['netconf'])\n )\n@@ -108,7 +108,7 @@\n \n facts['version_info'] = dict(facts['version_info'])\n \n- if module.params['config']:\n+ if module.params['config'] is True:\n config_format = module.params['config_format']\n resp_config = module.get_config( config_format=config_format)\n", "issue": "junos_facts - config parameter should have a default value\n##### COMPONENT NAME\n\nmodule : junos_facts\n\n```\nnetwork/junos_facts.py\n```\n##### ANSIBLE VERSION\n\n2.1.0.0\n##### OS / ENVIRONMENT\n\nMac OS 10.11.5\n##### SUMMARY\n\nCurrently the parameter `config` for junos_facts is mandatory and do not have a default value\nI think it would be better to set a default value to `false` and not make this parameter mandatory.\n\nPlaybooks will be simplified\n\nI'm happy to send a pull request for it, but I wanted to discuss this change before\n\n", "before_files": [{"content": "#!/usr/bin/python\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nDOCUMENTATION = \"\"\"\n---\nmodule: junos_facts\nversion_added: \"2.1\"\nauthor: \"Peter Sprygada (@privateip)\"\nshort_description: Collect facts from remote device running Junos\ndescription:\n - Collects fact information from a remote device running the Junos\n operating system. By default, the module will collect basic fact\n information from the device to be included with the hostvars.\n Additional fact information can be collected based on the\n configured set of arguments.\nextends_documentation_fragment: junos\noptions:\n config:\n description:\n - The C(config) argument instructs the fact module to collect\n the configuration from the remote device. The configuration\n is then included in return facts. By default, the configuration\n is returned as text. The C(config_format) can be used to return\n different Junos configuration formats.\n required: true\n default: false\n config_format:\n description:\n - The C(config_format) argument is used to specify the desired\n format of the configuration file. Devices support three\n configuration file formats. By default, the configuration\n from the device is returned as text. 
The other options include\n set and xml. If the xml option is choosen, the configuration file\n is returned as both xml and json.\n required: false\n default: text\n choices: ['xml', 'text', 'set']\nrequirements:\n - junos-eznc\nnotes:\n - This module requires the netconf system service be enabled on\n the remote device being managed\n\"\"\"\n\nEXAMPLES = \"\"\"\n# the required set of connection arguments have been purposely left off\n# the examples for brevity\n\n- name: collect default set of facts\n junos_facts:\n\n- name: collect default set of facts and configuration\n junos_facts:\n config: yes\n\n- name: collect default set of facts and configuration in set format\n junos_facts:\n config: yes\n config_format: set\n\n- name: collect default set of facts and configuration in XML and JSON format\n junos_facts:\n config: yes\n config_format: xml\n\"\"\"\n\nRETURN = \"\"\"\nansible_facts:\n descrption: Returns the facts collect from the device\n returned: always\n type: dict\n\"\"\"\n\ndef main():\n \"\"\" Main entry point for AnsibleModule\n \"\"\"\n spec = dict(\n config=dict(required=True, type='bool'),\n config_format=dict(default='text', choices=['xml', 'set', 'text']),\n transport=dict(default='netconf', choices=['netconf'])\n )\n\n module = get_module(argument_spec=spec,\n supports_check_mode=True)\n\n result = dict(changed=False)\n\n facts = module.get_facts()\n\n if '2RE' in facts:\n facts['has_2RE'] = facts['2RE']\n del facts['2RE']\n\n facts['version_info'] = dict(facts['version_info'])\n\n if module.params['config']:\n config_format = module.params['config_format']\n resp_config = module.get_config( config_format=config_format)\n\n if config_format in ['text', 'set']:\n facts['config'] = resp_config\n elif config_format == \"xml\":\n facts['config'] = xml_to_string(resp_config)\n facts['config_json'] = xml_to_json(resp_config)\n\n result['ansible_facts'] = facts\n module.exit_json(**result)\n\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.junos import *\n\nif __name__ == '__main__':\n main()\n", "path": "network/junos/junos_facts.py"}], "after_files": [{"content": "#!/usr/bin/python\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nDOCUMENTATION = \"\"\"\n---\nmodule: junos_facts\nversion_added: \"2.1\"\nauthor: \"Peter Sprygada (@privateip)\"\nshort_description: Collect facts from remote device running Junos\ndescription:\n - Collects fact information from a remote device running the Junos\n operating system. By default, the module will collect basic fact\n information from the device to be included with the hostvars.\n Additional fact information can be collected based on the\n configured set of arguments.\nextends_documentation_fragment: junos\noptions:\n config:\n description:\n - The C(config) argument instructs the fact module to collect\n the configuration from the remote device. The configuration\n is then included in return facts. 
By default, the configuration\n is returned as text. The C(config_format) can be used to return\n different Junos configuration formats.\n required: false\n default: null\n config_format:\n description:\n - The C(config_format) argument is used to specify the desired\n format of the configuration file. Devices support three\n configuration file formats. By default, the configuration\n from the device is returned as text. The other options include\n set and xml. If the xml option is choosen, the configuration file\n is returned as both xml and json.\n required: false\n default: text\n choices: ['xml', 'text', 'set']\nrequirements:\n - junos-eznc\nnotes:\n - This module requires the netconf system service be enabled on\n the remote device being managed\n\"\"\"\n\nEXAMPLES = \"\"\"\n# the required set of connection arguments have been purposely left off\n# the examples for brevity\n\n- name: collect default set of facts\n junos_facts:\n\n- name: collect default set of facts and configuration\n junos_facts:\n config: yes\n\n- name: collect default set of facts and configuration in set format\n junos_facts:\n config: yes\n config_format: set\n\n- name: collect default set of facts and configuration in XML and JSON format\n junos_facts:\n config: yes\n config_format: xml\n\"\"\"\n\nRETURN = \"\"\"\nansible_facts:\n descrption: Returns the facts collect from the device\n returned: always\n type: dict\n\"\"\"\n\ndef main():\n \"\"\" Main entry point for AnsibleModule\n \"\"\"\n spec = dict(\n config=dict(type='bool'),\n config_format=dict(default='text', choices=['xml', 'set', 'text']),\n transport=dict(default='netconf', choices=['netconf'])\n )\n\n module = get_module(argument_spec=spec,\n supports_check_mode=True)\n\n result = dict(changed=False)\n\n facts = module.get_facts()\n\n if '2RE' in facts:\n facts['has_2RE'] = facts['2RE']\n del facts['2RE']\n\n facts['version_info'] = dict(facts['version_info'])\n\n if module.params['config'] is True:\n config_format = module.params['config_format']\n resp_config = module.get_config( config_format=config_format)\n\n if config_format in ['text', 'set']:\n facts['config'] = resp_config\n elif config_format == \"xml\":\n facts['config'] = xml_to_string(resp_config)\n facts['config_json'] = xml_to_json(resp_config)\n\n result['ansible_facts'] = facts\n module.exit_json(**result)\n\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.junos import *\n\nif __name__ == '__main__':\n main()\n", "path": "network/junos/junos_facts.py"}]} | 1,619 | 295 |
gh_patches_debug_5836 | rasdani/github-patches | git_diff | sanic-org__sanic-961 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cookie secure option not encoded properly
When `Cookies.encode` encounters `response.cookies["<cookie>"]["secure"] = False` then it outputs:
`b'Domain=xad.com; Path=/; Secure=False'`
where it should output:
`b'Domain=xad.com; Path=/;'` when `response.cookies["<cookie>"]["secure"] = False`
and
`b'Domain=xad.com; Path=/; Secure;'` when `response.cookies["<cookie>"]["secure"] = True`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sanic/cookies.py`
Content:
```
1 import re
2 import string
3
4 # ------------------------------------------------------------ #
5 # SimpleCookie
6 # ------------------------------------------------------------ #
7
8 # Straight up copied this section of dark magic from SimpleCookie
9
10 _LegalChars = string.ascii_letters + string.digits + "!#$%&'*+-.^_`|~:"
11 _UnescapedChars = _LegalChars + ' ()/<=>?@[]{}'
12
13 _Translator = {n: '\\%03o' % n
14 for n in set(range(256)) - set(map(ord, _UnescapedChars))}
15 _Translator.update({
16 ord('"'): '\\"',
17 ord('\\'): '\\\\',
18 })
19
20
21 def _quote(str):
22 """Quote a string for use in a cookie header.
23 If the string does not need to be double-quoted, then just return the
24 string. Otherwise, surround the string in doublequotes and quote
25 (with a \) special characters.
26 """
27 if str is None or _is_legal_key(str):
28 return str
29 else:
30 return '"' + str.translate(_Translator) + '"'
31
32
33 _is_legal_key = re.compile('[%s]+' % re.escape(_LegalChars)).fullmatch
34
35 # ------------------------------------------------------------ #
36 # Custom SimpleCookie
37 # ------------------------------------------------------------ #
38
39
40 class CookieJar(dict):
41 """CookieJar dynamically writes headers as cookies are added and removed
42 It gets around the limitation of one header per name by using the
43 MultiHeader class to provide a unique key that encodes to Set-Cookie.
44 """
45
46 def __init__(self, headers):
47 super().__init__()
48 self.headers = headers
49 self.cookie_headers = {}
50
51 def __setitem__(self, key, value):
52 # If this cookie doesn't exist, add it to the header keys
53 cookie_header = self.cookie_headers.get(key)
54 if not cookie_header:
55 cookie = Cookie(key, value)
56 cookie['path'] = '/'
57 cookie_header = MultiHeader("Set-Cookie")
58 self.cookie_headers[key] = cookie_header
59 self.headers[cookie_header] = cookie
60 return super().__setitem__(key, cookie)
61 else:
62 self[key].value = value
63
64 def __delitem__(self, key):
65 if key not in self.cookie_headers:
66 self[key] = ''
67 self[key]['max-age'] = 0
68 else:
69 cookie_header = self.cookie_headers[key]
70 del self.headers[cookie_header]
71 del self.cookie_headers[key]
72 return super().__delitem__(key)
73
74
75 class Cookie(dict):
76 """A stripped down version of Morsel from SimpleCookie #gottagofast"""
77 _keys = {
78 "expires": "expires",
79 "path": "Path",
80 "comment": "Comment",
81 "domain": "Domain",
82 "max-age": "Max-Age",
83 "secure": "Secure",
84 "httponly": "HttpOnly",
85 "version": "Version",
86 }
87 _flags = {'secure', 'httponly'}
88
89 def __init__(self, key, value):
90 if key in self._keys:
91 raise KeyError("Cookie name is a reserved word")
92 if not _is_legal_key(key):
93 raise KeyError("Cookie key contains illegal characters")
94 self.key = key
95 self.value = value
96 super().__init__()
97
98 def __setitem__(self, key, value):
99 if key not in self._keys:
100 raise KeyError("Unknown cookie property")
101 return super().__setitem__(key, value)
102
103 def encode(self, encoding):
104 output = ['%s=%s' % (self.key, _quote(self.value))]
105 for key, value in self.items():
106 if key == 'max-age':
107 try:
108 output.append('%s=%d' % (self._keys[key], value))
109 except TypeError:
110 output.append('%s=%s' % (self._keys[key], value))
111 elif key == 'expires':
112 try:
113 output.append('%s=%s' % (
114 self._keys[key],
115 value.strftime("%a, %d-%b-%Y %T GMT")
116 ))
117 except AttributeError:
118 output.append('%s=%s' % (self._keys[key], value))
119 elif key in self._flags and self[key]:
120 output.append(self._keys[key])
121 else:
122 output.append('%s=%s' % (self._keys[key], value))
123
124 return "; ".join(output).encode(encoding)
125
126 # ------------------------------------------------------------ #
127 # Header Trickery
128 # ------------------------------------------------------------ #
129
130
131 class MultiHeader:
132 """String-holding object which allow us to set a header within response
133 that has a unique key, but may contain duplicate header names
134 """
135 def __init__(self, name):
136 self.name = name
137
138 def encode(self):
139 return self.name.encode()
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sanic/cookies.py b/sanic/cookies.py
--- a/sanic/cookies.py
+++ b/sanic/cookies.py
@@ -98,7 +98,8 @@
def __setitem__(self, key, value):
if key not in self._keys:
raise KeyError("Unknown cookie property")
- return super().__setitem__(key, value)
+ if value is not False:
+ return super().__setitem__(key, value)
def encode(self, encoding):
output = ['%s=%s' % (self.key, _quote(self.value))]
| {"golden_diff": "diff --git a/sanic/cookies.py b/sanic/cookies.py\n--- a/sanic/cookies.py\n+++ b/sanic/cookies.py\n@@ -98,7 +98,8 @@\n def __setitem__(self, key, value):\n if key not in self._keys:\n raise KeyError(\"Unknown cookie property\")\n- return super().__setitem__(key, value)\n+ if value is not False:\n+ return super().__setitem__(key, value)\n \n def encode(self, encoding):\n output = ['%s=%s' % (self.key, _quote(self.value))]\n", "issue": "Cookie secure option not encoded properly\nWhen `Cookies.encode` encounters `response.cookies[\"<cookie>\"][\"secure\"] = False` then it outputs:\r\n\r\n`b'Domain=xad.com; Path=/; Secure=False'`\r\n\r\nwhere it should output:\r\n\r\n`b'Domain=xad.com; Path=/;'` when `response.cookies[\"<cookie>\"][\"secure\"] = False`\r\n\r\nand\r\n\r\n`b'Domain=xad.com; Path=/; Secure;'` when `response.cookies[\"<cookie>\"][\"secure\"] = True`\n", "before_files": [{"content": "import re\nimport string\n\n# ------------------------------------------------------------ #\n# SimpleCookie\n# ------------------------------------------------------------ #\n\n# Straight up copied this section of dark magic from SimpleCookie\n\n_LegalChars = string.ascii_letters + string.digits + \"!#$%&'*+-.^_`|~:\"\n_UnescapedChars = _LegalChars + ' ()/<=>?@[]{}'\n\n_Translator = {n: '\\\\%03o' % n\n for n in set(range(256)) - set(map(ord, _UnescapedChars))}\n_Translator.update({\n ord('\"'): '\\\\\"',\n ord('\\\\'): '\\\\\\\\',\n})\n\n\ndef _quote(str):\n \"\"\"Quote a string for use in a cookie header.\n If the string does not need to be double-quoted, then just return the\n string. Otherwise, surround the string in doublequotes and quote\n (with a \\) special characters.\n \"\"\"\n if str is None or _is_legal_key(str):\n return str\n else:\n return '\"' + str.translate(_Translator) + '\"'\n\n\n_is_legal_key = re.compile('[%s]+' % re.escape(_LegalChars)).fullmatch\n\n# ------------------------------------------------------------ #\n# Custom SimpleCookie\n# ------------------------------------------------------------ #\n\n\nclass CookieJar(dict):\n \"\"\"CookieJar dynamically writes headers as cookies are added and removed\n It gets around the limitation of one header per name by using the\n MultiHeader class to provide a unique key that encodes to Set-Cookie.\n \"\"\"\n\n def __init__(self, headers):\n super().__init__()\n self.headers = headers\n self.cookie_headers = {}\n\n def __setitem__(self, key, value):\n # If this cookie doesn't exist, add it to the header keys\n cookie_header = self.cookie_headers.get(key)\n if not cookie_header:\n cookie = Cookie(key, value)\n cookie['path'] = '/'\n cookie_header = MultiHeader(\"Set-Cookie\")\n self.cookie_headers[key] = cookie_header\n self.headers[cookie_header] = cookie\n return super().__setitem__(key, cookie)\n else:\n self[key].value = value\n\n def __delitem__(self, key):\n if key not in self.cookie_headers:\n self[key] = ''\n self[key]['max-age'] = 0\n else:\n cookie_header = self.cookie_headers[key]\n del self.headers[cookie_header]\n del self.cookie_headers[key]\n return super().__delitem__(key)\n\n\nclass Cookie(dict):\n \"\"\"A stripped down version of Morsel from SimpleCookie #gottagofast\"\"\"\n _keys = {\n \"expires\": \"expires\",\n \"path\": \"Path\",\n \"comment\": \"Comment\",\n \"domain\": \"Domain\",\n \"max-age\": \"Max-Age\",\n \"secure\": \"Secure\",\n \"httponly\": \"HttpOnly\",\n \"version\": \"Version\",\n }\n _flags = {'secure', 'httponly'}\n\n def __init__(self, key, value):\n if key in self._keys:\n raise 
KeyError(\"Cookie name is a reserved word\")\n if not _is_legal_key(key):\n raise KeyError(\"Cookie key contains illegal characters\")\n self.key = key\n self.value = value\n super().__init__()\n\n def __setitem__(self, key, value):\n if key not in self._keys:\n raise KeyError(\"Unknown cookie property\")\n return super().__setitem__(key, value)\n\n def encode(self, encoding):\n output = ['%s=%s' % (self.key, _quote(self.value))]\n for key, value in self.items():\n if key == 'max-age':\n try:\n output.append('%s=%d' % (self._keys[key], value))\n except TypeError:\n output.append('%s=%s' % (self._keys[key], value))\n elif key == 'expires':\n try:\n output.append('%s=%s' % (\n self._keys[key],\n value.strftime(\"%a, %d-%b-%Y %T GMT\")\n ))\n except AttributeError:\n output.append('%s=%s' % (self._keys[key], value))\n elif key in self._flags and self[key]:\n output.append(self._keys[key])\n else:\n output.append('%s=%s' % (self._keys[key], value))\n\n return \"; \".join(output).encode(encoding)\n\n# ------------------------------------------------------------ #\n# Header Trickery\n# ------------------------------------------------------------ #\n\n\nclass MultiHeader:\n \"\"\"String-holding object which allow us to set a header within response\n that has a unique key, but may contain duplicate header names\n \"\"\"\n def __init__(self, name):\n self.name = name\n\n def encode(self):\n return self.name.encode()\n", "path": "sanic/cookies.py"}], "after_files": [{"content": "import re\nimport string\n\n# ------------------------------------------------------------ #\n# SimpleCookie\n# ------------------------------------------------------------ #\n\n# Straight up copied this section of dark magic from SimpleCookie\n\n_LegalChars = string.ascii_letters + string.digits + \"!#$%&'*+-.^_`|~:\"\n_UnescapedChars = _LegalChars + ' ()/<=>?@[]{}'\n\n_Translator = {n: '\\\\%03o' % n\n for n in set(range(256)) - set(map(ord, _UnescapedChars))}\n_Translator.update({\n ord('\"'): '\\\\\"',\n ord('\\\\'): '\\\\\\\\',\n})\n\n\ndef _quote(str):\n \"\"\"Quote a string for use in a cookie header.\n If the string does not need to be double-quoted, then just return the\n string. 
Otherwise, surround the string in doublequotes and quote\n (with a \\) special characters.\n \"\"\"\n if str is None or _is_legal_key(str):\n return str\n else:\n return '\"' + str.translate(_Translator) + '\"'\n\n\n_is_legal_key = re.compile('[%s]+' % re.escape(_LegalChars)).fullmatch\n\n# ------------------------------------------------------------ #\n# Custom SimpleCookie\n# ------------------------------------------------------------ #\n\n\nclass CookieJar(dict):\n \"\"\"CookieJar dynamically writes headers as cookies are added and removed\n It gets around the limitation of one header per name by using the\n MultiHeader class to provide a unique key that encodes to Set-Cookie.\n \"\"\"\n\n def __init__(self, headers):\n super().__init__()\n self.headers = headers\n self.cookie_headers = {}\n\n def __setitem__(self, key, value):\n # If this cookie doesn't exist, add it to the header keys\n cookie_header = self.cookie_headers.get(key)\n if not cookie_header:\n cookie = Cookie(key, value)\n cookie['path'] = '/'\n cookie_header = MultiHeader(\"Set-Cookie\")\n self.cookie_headers[key] = cookie_header\n self.headers[cookie_header] = cookie\n return super().__setitem__(key, cookie)\n else:\n self[key].value = value\n\n def __delitem__(self, key):\n if key not in self.cookie_headers:\n self[key] = ''\n self[key]['max-age'] = 0\n else:\n cookie_header = self.cookie_headers[key]\n del self.headers[cookie_header]\n del self.cookie_headers[key]\n return super().__delitem__(key)\n\n\nclass Cookie(dict):\n \"\"\"A stripped down version of Morsel from SimpleCookie #gottagofast\"\"\"\n _keys = {\n \"expires\": \"expires\",\n \"path\": \"Path\",\n \"comment\": \"Comment\",\n \"domain\": \"Domain\",\n \"max-age\": \"Max-Age\",\n \"secure\": \"Secure\",\n \"httponly\": \"HttpOnly\",\n \"version\": \"Version\",\n }\n _flags = {'secure', 'httponly'}\n\n def __init__(self, key, value):\n if key in self._keys:\n raise KeyError(\"Cookie name is a reserved word\")\n if not _is_legal_key(key):\n raise KeyError(\"Cookie key contains illegal characters\")\n self.key = key\n self.value = value\n super().__init__()\n\n def __setitem__(self, key, value):\n if key not in self._keys:\n raise KeyError(\"Unknown cookie property\")\n if value is not False:\n return super().__setitem__(key, value)\n\n def encode(self, encoding):\n output = ['%s=%s' % (self.key, _quote(self.value))]\n for key, value in self.items():\n if key == 'max-age':\n try:\n output.append('%s=%d' % (self._keys[key], value))\n except TypeError:\n output.append('%s=%s' % (self._keys[key], value))\n elif key == 'expires':\n try:\n output.append('%s=%s' % (\n self._keys[key],\n value.strftime(\"%a, %d-%b-%Y %T GMT\")\n ))\n except AttributeError:\n output.append('%s=%s' % (self._keys[key], value))\n elif key in self._flags and self[key]:\n output.append(self._keys[key])\n else:\n output.append('%s=%s' % (self._keys[key], value))\n\n return \"; \".join(output).encode(encoding)\n\n# ------------------------------------------------------------ #\n# Header Trickery\n# ------------------------------------------------------------ #\n\n\nclass MultiHeader:\n \"\"\"String-holding object which allow us to set a header within response\n that has a unique key, but may contain duplicate header names\n \"\"\"\n def __init__(self, name):\n self.name = name\n\n def encode(self):\n return self.name.encode()\n", "path": "sanic/cookies.py"}]} | 1,702 | 132 |
gh_patches_debug_20089 | rasdani/github-patches | git_diff | translate__translate-3400 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
json2po output strings in wrong order
Running `json2po` outputs a valid PO file, but the strings are in the wrong order. I’m not sure exactly _which_ order it is, but it is _neither_ the original order, the reverse order, nor alphabetical order. Here’s an example JSON file:
``` json
{
"foo": "foo",
"bar": "bar",
"baz": "baz"
}
```
Running `json2po` on this file results in the following (with a PO file header):
``` po
#: .baz
msgid "baz"
msgstr ""

#: .foo
msgid "foo"
msgstr ""

#: .bar
msgid "bar"
msgstr ""
```
Having the strings in a different order than in the original file makes it much harder to correctly translate the file, as one loses context.
I observe this bug both in version 1.13.0 and 1.14.0-rc1.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `translate/storage/jsonl10n.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2007,2009-2011 Zuza Software Foundation
4 #
5 # This file is part of the Translate Toolkit.
6 #
7 # This program is free software; you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation; either version 2 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with this program; if not, see <http://www.gnu.org/licenses/>.
19
20 r"""Class that manages JSON data files for translation
21
22 JSON is an acronym for JavaScript Object Notation, it is an open standard
23 designed for human-readable data interchange.
24
25 JSON basic types:
26
27 - Number (integer or real)
28 - String (double-quoted Unicode with backslash escaping)
29 - Boolean (true or false)
30 - Array (an ordered sequence of values, comma-separated and enclosed in square
31 brackets)
32 - Object (a collection of key:value pairs, comma-separated and enclosed in
33 curly braces)
34 - null
35
36 Example:
37
38 .. code-block:: json
39
40 {
41 "firstName": "John",
42 "lastName": "Smith",
43 "age": 25,
44 "address": {
45 "streetAddress": "21 2nd Street",
46 "city": "New York",
47 "state": "NY",
48 "postalCode": "10021"
49 },
50 "phoneNumber": [
51 {
52 "type": "home",
53 "number": "212 555-1234"
54 },
55 {
56 "type": "fax",
57 "number": "646 555-4567"
58 }
59 ]
60 }
61
62
63 TODO:
64
65 - Handle ``\u`` and other escapes in Unicode
66 - Manage data type storage and conversion. True --> "True" --> True
67 - Sort the extracted data to the order of the JSON file
68
69 """
70
71 import json
72 import os
73 try:
74 from collections import OrderedDict
75 except ImportError:
76 # Python 2.6 does not have OrderedDict and also can't use it in
77 # json.loads()
78 OrderedDict = None
79 import six
80
81 from translate.storage import base
82
83
84 class JsonUnit(base.TranslationUnit):
85 """A JSON entry"""
86
87 def __init__(self, source=None, ref=None, item=None, **kwargs):
88 self._id = None
89 self._item = str(os.urandom(30))
90 if item is not None:
91 self._item = item
92 self._ref = {}
93 if ref is not None:
94 self._ref = ref
95 if ref is None and item is None:
96 self._ref[self._item] = ""
97 if source:
98 self.source = source
99 super(JsonUnit, self).__init__(source)
100
101 def getsource(self):
102 return self.target
103
104 def setsource(self, source):
105 self.target = source
106 source = property(getsource, setsource)
107
108 def gettarget(self):
109
110 def change_type(value):
111 if isinstance(value, bool):
112 return str(value)
113 return value
114
115 if isinstance(self._ref, list):
116 return change_type(self._ref[self._item])
117 elif isinstance(self._ref, dict):
118 return change_type(self._ref[self._item])
119
120 def settarget(self, target):
121
122 def change_type(oldvalue, newvalue):
123 if isinstance(oldvalue, bool):
124 newvalue = bool(newvalue)
125 return newvalue
126
127 if isinstance(self._ref, list):
128 self._ref[int(self._item)] = change_type(self._ref[int(self._item)],
129 target)
130 elif isinstance(self._ref, dict):
131 self._ref[self._item] = change_type(self._ref[self._item], target)
132 else:
133 raise ValueError("We don't know how to handle:\n"
134 "Type: %s\n"
135 "Value: %s" % (type(self._ref), target))
136 target = property(gettarget, settarget)
137
138 def setid(self, value):
139 self._id = value
140
141 def getid(self):
142 return self._id
143
144 def getlocations(self):
145 return [self.getid()]
146
147
148 class JsonFile(base.TranslationStore):
149 """A JSON file"""
150 UnitClass = JsonUnit
151
152 def __init__(self, inputfile=None, filter=None, **kwargs):
153 """construct a JSON file, optionally reading in from inputfile."""
154 super(JsonFile, self).__init__(**kwargs)
155 self._filter = filter
156 self.filename = ''
157 self._file = u''
158 if inputfile is not None:
159 self.parse(inputfile)
160
161 def serialize(self, out):
162 units = {}
163 for unit in self.unit_iter():
164 path = unit.getid().lstrip('.')
165 units[path] = unit.target
166 out.write(json.dumps(units, sort_keys=True, separators=(',', ': '),
167 indent=4, ensure_ascii=False).encode(self.encoding))
168 out.write(b'\n')
169
170 def _extract_translatables(self, data, stop=None, prev="", name_node=None,
171 name_last_node=None, last_node=None):
172 """Recursive function to extract items from the data files
173
174 :param data: the current branch to walk down
175 :param stop: a list of leaves to extract or None to extract everything
176 :param prev: the heirarchy of the tree at this iteration
177 :param name_node:
178 :param name_last_node: the name of the last node
179 :param last_node: the last list or dict
180 """
181 if isinstance(data, dict):
182 for k, v in six.iteritems(data):
183 for x in self._extract_translatables(v, stop,
184 "%s.%s" % (prev, k),
185 k, None, data):
186 yield x
187 elif isinstance(data, list):
188 for i, item in enumerate(data):
189 for x in self._extract_translatables(item, stop,
190 "%s[%s]" % (prev, i),
191 i, name_node, data):
192 yield x
193 # apply filter
194 elif (stop is None or
195 (isinstance(last_node, dict) and name_node in stop) or
196 (isinstance(last_node, list) and name_last_node in stop)):
197
198 if isinstance(data, six.string_types):
199 yield (prev, data, last_node, name_node)
200 elif isinstance(data, bool):
201 yield (prev, str(data), last_node, name_node)
202 elif data is None:
203 pass
204 else:
205 raise ValueError("We don't handle these values:\n"
206 "Type: %s\n"
207 "Data: %s\n"
208 "Previous: %s" % (type(data), data, prev))
209
210 def parse(self, input):
211 """parse the given file or file source string"""
212 if hasattr(input, 'name'):
213 self.filename = input.name
214 elif not getattr(self, 'filename', ''):
215 self.filename = ''
216 if hasattr(input, "read"):
217 src = input.read()
218 input.close()
219 input = src
220 if isinstance(input, bytes):
221 input = input.decode('utf-8')
222 try:
223 if OrderedDict is not None:
224 self._file = json.loads(input, object_pairs_hook=OrderedDict)
225 else:
226 # object_pairs_hook is not present in Python 2.6
227 self._file = json.loads(input)
228 except ValueError as e:
229 raise base.ParseError(e.message)
230
231 for k, data, ref, item in self._extract_translatables(self._file,
232 stop=self._filter):
233 unit = self.UnitClass(data, ref, item)
234 unit.setid(k)
235 self.addunit(unit)
236
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/translate/storage/jsonl10n.py b/translate/storage/jsonl10n.py
--- a/translate/storage/jsonl10n.py
+++ b/translate/storage/jsonl10n.py
@@ -64,7 +64,6 @@
- Handle ``\u`` and other escapes in Unicode
- Manage data type storage and conversion. True --> "True" --> True
-- Sort the extracted data to the order of the JSON file
"""
@@ -159,11 +158,14 @@
self.parse(inputfile)
def serialize(self, out):
- units = {}
+ if OrderedDict is not None:
+ units = OrderedDict()
+ else:
+ units = {}
for unit in self.unit_iter():
path = unit.getid().lstrip('.')
units[path] = unit.target
- out.write(json.dumps(units, sort_keys=True, separators=(',', ': '),
+ out.write(json.dumps(units, separators=(',', ': '),
indent=4, ensure_ascii=False).encode(self.encoding))
out.write(b'\n')
| {"golden_diff": "diff --git a/translate/storage/jsonl10n.py b/translate/storage/jsonl10n.py\n--- a/translate/storage/jsonl10n.py\n+++ b/translate/storage/jsonl10n.py\n@@ -64,7 +64,6 @@\n \n - Handle ``\\u`` and other escapes in Unicode\n - Manage data type storage and conversion. True --> \"True\" --> True\n-- Sort the extracted data to the order of the JSON file\n \n \"\"\"\n \n@@ -159,11 +158,14 @@\n self.parse(inputfile)\n \n def serialize(self, out):\n- units = {}\n+ if OrderedDict is not None:\n+ units = OrderedDict()\n+ else:\n+ units = {}\n for unit in self.unit_iter():\n path = unit.getid().lstrip('.')\n units[path] = unit.target\n- out.write(json.dumps(units, sort_keys=True, separators=(',', ': '),\n+ out.write(json.dumps(units, separators=(',', ': '),\n indent=4, ensure_ascii=False).encode(self.encoding))\n out.write(b'\\n')\n", "issue": "json2po output strings in wrong order\nRunning `json2po` outputs a valid PO files, but the strings are in the wrong order. I\u2019m not sure exactly _which_ order it is, but is _neither_ the original order, the reverse order or alphabetically. Here\u2019s an example JSON file:\n\n``` json\n{\n \"foo\": \"foo\",\n \"bar\": \"bar\",\n \"baz\": \"baz\"\n}\n```\n\nRunning `json2po` on this file results in the following (with a PO file header):\n\n``` po\n#: .baz\nmsgid \"baz\"\nmsgstr \"\"\n\n#: .foo\nmsgid \"foo\"\nmsgstr \"\"\n\n#: .bar\nmsgid \"bar\"\nmsgstr \"\"\n```\n\nHaving the strings in a different order than in the original file makes it much harder to correctly translate the file, as one loses context.\n\nI observe this bug both in version 1.13.0 and 1.14.0-rc1.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2007,2009-2011 Zuza Software Foundation\n#\n# This file is part of the Translate Toolkit.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\nr\"\"\"Class that manages JSON data files for translation\n\nJSON is an acronym for JavaScript Object Notation, it is an open standard\ndesigned for human-readable data interchange.\n\nJSON basic types:\n\n- Number (integer or real)\n- String (double-quoted Unicode with backslash escaping)\n- Boolean (true or false)\n- Array (an ordered sequence of values, comma-separated and enclosed in square\n brackets)\n- Object (a collection of key:value pairs, comma-separated and enclosed in\n curly braces)\n- null\n\nExample:\n\n.. code-block:: json\n\n {\n \"firstName\": \"John\",\n \"lastName\": \"Smith\",\n \"age\": 25,\n \"address\": {\n \"streetAddress\": \"21 2nd Street\",\n \"city\": \"New York\",\n \"state\": \"NY\",\n \"postalCode\": \"10021\"\n },\n \"phoneNumber\": [\n {\n \"type\": \"home\",\n \"number\": \"212 555-1234\"\n },\n {\n \"type\": \"fax\",\n \"number\": \"646 555-4567\"\n }\n ]\n }\n\n\nTODO:\n\n- Handle ``\\u`` and other escapes in Unicode\n- Manage data type storage and conversion. 
True --> \"True\" --> True\n- Sort the extracted data to the order of the JSON file\n\n\"\"\"\n\nimport json\nimport os\ntry:\n from collections import OrderedDict\nexcept ImportError:\n # Python 2.6 does not have OrderedDict and also can't use it in\n # json.loads()\n OrderedDict = None\nimport six\n\nfrom translate.storage import base\n\n\nclass JsonUnit(base.TranslationUnit):\n \"\"\"A JSON entry\"\"\"\n\n def __init__(self, source=None, ref=None, item=None, **kwargs):\n self._id = None\n self._item = str(os.urandom(30))\n if item is not None:\n self._item = item\n self._ref = {}\n if ref is not None:\n self._ref = ref\n if ref is None and item is None:\n self._ref[self._item] = \"\"\n if source:\n self.source = source\n super(JsonUnit, self).__init__(source)\n\n def getsource(self):\n return self.target\n\n def setsource(self, source):\n self.target = source\n source = property(getsource, setsource)\n\n def gettarget(self):\n\n def change_type(value):\n if isinstance(value, bool):\n return str(value)\n return value\n\n if isinstance(self._ref, list):\n return change_type(self._ref[self._item])\n elif isinstance(self._ref, dict):\n return change_type(self._ref[self._item])\n\n def settarget(self, target):\n\n def change_type(oldvalue, newvalue):\n if isinstance(oldvalue, bool):\n newvalue = bool(newvalue)\n return newvalue\n\n if isinstance(self._ref, list):\n self._ref[int(self._item)] = change_type(self._ref[int(self._item)],\n target)\n elif isinstance(self._ref, dict):\n self._ref[self._item] = change_type(self._ref[self._item], target)\n else:\n raise ValueError(\"We don't know how to handle:\\n\"\n \"Type: %s\\n\"\n \"Value: %s\" % (type(self._ref), target))\n target = property(gettarget, settarget)\n\n def setid(self, value):\n self._id = value\n\n def getid(self):\n return self._id\n\n def getlocations(self):\n return [self.getid()]\n\n\nclass JsonFile(base.TranslationStore):\n \"\"\"A JSON file\"\"\"\n UnitClass = JsonUnit\n\n def __init__(self, inputfile=None, filter=None, **kwargs):\n \"\"\"construct a JSON file, optionally reading in from inputfile.\"\"\"\n super(JsonFile, self).__init__(**kwargs)\n self._filter = filter\n self.filename = ''\n self._file = u''\n if inputfile is not None:\n self.parse(inputfile)\n\n def serialize(self, out):\n units = {}\n for unit in self.unit_iter():\n path = unit.getid().lstrip('.')\n units[path] = unit.target\n out.write(json.dumps(units, sort_keys=True, separators=(',', ': '),\n indent=4, ensure_ascii=False).encode(self.encoding))\n out.write(b'\\n')\n\n def _extract_translatables(self, data, stop=None, prev=\"\", name_node=None,\n name_last_node=None, last_node=None):\n \"\"\"Recursive function to extract items from the data files\n\n :param data: the current branch to walk down\n :param stop: a list of leaves to extract or None to extract everything\n :param prev: the heirarchy of the tree at this iteration\n :param name_node:\n :param name_last_node: the name of the last node\n :param last_node: the last list or dict\n \"\"\"\n if isinstance(data, dict):\n for k, v in six.iteritems(data):\n for x in self._extract_translatables(v, stop,\n \"%s.%s\" % (prev, k),\n k, None, data):\n yield x\n elif isinstance(data, list):\n for i, item in enumerate(data):\n for x in self._extract_translatables(item, stop,\n \"%s[%s]\" % (prev, i),\n i, name_node, data):\n yield x\n # apply filter\n elif (stop is None or\n (isinstance(last_node, dict) and name_node in stop) or\n (isinstance(last_node, list) and name_last_node in stop)):\n\n if 
isinstance(data, six.string_types):\n yield (prev, data, last_node, name_node)\n elif isinstance(data, bool):\n yield (prev, str(data), last_node, name_node)\n elif data is None:\n pass\n else:\n raise ValueError(\"We don't handle these values:\\n\"\n \"Type: %s\\n\"\n \"Data: %s\\n\"\n \"Previous: %s\" % (type(data), data, prev))\n\n def parse(self, input):\n \"\"\"parse the given file or file source string\"\"\"\n if hasattr(input, 'name'):\n self.filename = input.name\n elif not getattr(self, 'filename', ''):\n self.filename = ''\n if hasattr(input, \"read\"):\n src = input.read()\n input.close()\n input = src\n if isinstance(input, bytes):\n input = input.decode('utf-8')\n try:\n if OrderedDict is not None:\n self._file = json.loads(input, object_pairs_hook=OrderedDict)\n else:\n # object_pairs_hook is not present in Python 2.6\n self._file = json.loads(input)\n except ValueError as e:\n raise base.ParseError(e.message)\n\n for k, data, ref, item in self._extract_translatables(self._file,\n stop=self._filter):\n unit = self.UnitClass(data, ref, item)\n unit.setid(k)\n self.addunit(unit)\n", "path": "translate/storage/jsonl10n.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2007,2009-2011 Zuza Software Foundation\n#\n# This file is part of the Translate Toolkit.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\nr\"\"\"Class that manages JSON data files for translation\n\nJSON is an acronym for JavaScript Object Notation, it is an open standard\ndesigned for human-readable data interchange.\n\nJSON basic types:\n\n- Number (integer or real)\n- String (double-quoted Unicode with backslash escaping)\n- Boolean (true or false)\n- Array (an ordered sequence of values, comma-separated and enclosed in square\n brackets)\n- Object (a collection of key:value pairs, comma-separated and enclosed in\n curly braces)\n- null\n\nExample:\n\n.. code-block:: json\n\n {\n \"firstName\": \"John\",\n \"lastName\": \"Smith\",\n \"age\": 25,\n \"address\": {\n \"streetAddress\": \"21 2nd Street\",\n \"city\": \"New York\",\n \"state\": \"NY\",\n \"postalCode\": \"10021\"\n },\n \"phoneNumber\": [\n {\n \"type\": \"home\",\n \"number\": \"212 555-1234\"\n },\n {\n \"type\": \"fax\",\n \"number\": \"646 555-4567\"\n }\n ]\n }\n\n\nTODO:\n\n- Handle ``\\u`` and other escapes in Unicode\n- Manage data type storage and conversion. 
True --> \"True\" --> True\n\n\"\"\"\n\nimport json\nimport os\ntry:\n from collections import OrderedDict\nexcept ImportError:\n # Python 2.6 does not have OrderedDict and also can't use it in\n # json.loads()\n OrderedDict = None\nimport six\n\nfrom translate.storage import base\n\n\nclass JsonUnit(base.TranslationUnit):\n \"\"\"A JSON entry\"\"\"\n\n def __init__(self, source=None, ref=None, item=None, **kwargs):\n self._id = None\n self._item = str(os.urandom(30))\n if item is not None:\n self._item = item\n self._ref = {}\n if ref is not None:\n self._ref = ref\n if ref is None and item is None:\n self._ref[self._item] = \"\"\n if source:\n self.source = source\n super(JsonUnit, self).__init__(source)\n\n def getsource(self):\n return self.target\n\n def setsource(self, source):\n self.target = source\n source = property(getsource, setsource)\n\n def gettarget(self):\n\n def change_type(value):\n if isinstance(value, bool):\n return str(value)\n return value\n\n if isinstance(self._ref, list):\n return change_type(self._ref[self._item])\n elif isinstance(self._ref, dict):\n return change_type(self._ref[self._item])\n\n def settarget(self, target):\n\n def change_type(oldvalue, newvalue):\n if isinstance(oldvalue, bool):\n newvalue = bool(newvalue)\n return newvalue\n\n if isinstance(self._ref, list):\n self._ref[int(self._item)] = change_type(self._ref[int(self._item)],\n target)\n elif isinstance(self._ref, dict):\n self._ref[self._item] = change_type(self._ref[self._item], target)\n else:\n raise ValueError(\"We don't know how to handle:\\n\"\n \"Type: %s\\n\"\n \"Value: %s\" % (type(self._ref), target))\n target = property(gettarget, settarget)\n\n def setid(self, value):\n self._id = value\n\n def getid(self):\n return self._id\n\n def getlocations(self):\n return [self.getid()]\n\n\nclass JsonFile(base.TranslationStore):\n \"\"\"A JSON file\"\"\"\n UnitClass = JsonUnit\n\n def __init__(self, inputfile=None, filter=None, **kwargs):\n \"\"\"construct a JSON file, optionally reading in from inputfile.\"\"\"\n super(JsonFile, self).__init__(**kwargs)\n self._filter = filter\n self.filename = ''\n self._file = u''\n if inputfile is not None:\n self.parse(inputfile)\n\n def serialize(self, out):\n if OrderedDict is not None:\n units = OrderedDict()\n else:\n units = {}\n for unit in self.unit_iter():\n path = unit.getid().lstrip('.')\n units[path] = unit.target\n out.write(json.dumps(units, separators=(',', ': '),\n indent=4, ensure_ascii=False).encode(self.encoding))\n out.write(b'\\n')\n\n def _extract_translatables(self, data, stop=None, prev=\"\", name_node=None,\n name_last_node=None, last_node=None):\n \"\"\"Recursive function to extract items from the data files\n\n :param data: the current branch to walk down\n :param stop: a list of leaves to extract or None to extract everything\n :param prev: the heirarchy of the tree at this iteration\n :param name_node:\n :param name_last_node: the name of the last node\n :param last_node: the last list or dict\n \"\"\"\n if isinstance(data, dict):\n for k, v in six.iteritems(data):\n for x in self._extract_translatables(v, stop,\n \"%s.%s\" % (prev, k),\n k, None, data):\n yield x\n elif isinstance(data, list):\n for i, item in enumerate(data):\n for x in self._extract_translatables(item, stop,\n \"%s[%s]\" % (prev, i),\n i, name_node, data):\n yield x\n # apply filter\n elif (stop is None or\n (isinstance(last_node, dict) and name_node in stop) or\n (isinstance(last_node, list) and name_last_node in stop)):\n\n if isinstance(data, 
six.string_types):\n yield (prev, data, last_node, name_node)\n elif isinstance(data, bool):\n yield (prev, str(data), last_node, name_node)\n elif data is None:\n pass\n else:\n raise ValueError(\"We don't handle these values:\\n\"\n \"Type: %s\\n\"\n \"Data: %s\\n\"\n \"Previous: %s\" % (type(data), data, prev))\n\n def parse(self, input):\n \"\"\"parse the given file or file source string\"\"\"\n if hasattr(input, 'name'):\n self.filename = input.name\n elif not getattr(self, 'filename', ''):\n self.filename = ''\n if hasattr(input, \"read\"):\n src = input.read()\n input.close()\n input = src\n if isinstance(input, bytes):\n input = input.decode('utf-8')\n try:\n if OrderedDict is not None:\n self._file = json.loads(input, object_pairs_hook=OrderedDict)\n else:\n # object_pairs_hook is not present in Python 2.6\n self._file = json.loads(input)\n except ValueError as e:\n raise base.ParseError(e.message)\n\n for k, data, ref, item in self._extract_translatables(self._file,\n stop=self._filter):\n unit = self.UnitClass(data, ref, item)\n unit.setid(k)\n self.addunit(unit)\n", "path": "translate/storage/jsonl10n.py"}]} | 2,809 | 240 |
gh_patches_debug_39301 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-114 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Handle routing takeover at client
Currently, routing is handled by Django. When the user visits the application initially, the routing should be handled by Django, and after the application loads, the routing should be taken over by the client.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/urls.py`
Content:
```
1 from django.urls import include, path
2 from rest_framework_nested import routers
3
4 from mathesar.views import api, frontend
5
6
7 router = routers.DefaultRouter()
8 router.register(r'tables', api.TableViewSet)
9 router.register(r'schemas', api.SchemaViewSet)
10 router.register(r'database_keys', api.DatabaseKeyViewSet, basename='database_keys')
11
12 records_router = routers.NestedSimpleRouter(router, r'tables', lookup='table')
13 records_router.register(r'records', api.RecordViewSet, basename='table-records')
14
15 urlpatterns = [
16 path('', frontend.index, name="index"),
17 path(
18 'tables/<int:pk>/',
19 frontend.TableDetail.as_view(),
20 name='frontend-table-detail',
21 ),
22 path('api/v0/', include(router.urls)),
23 path('api/v0/', include(records_router.urls)),
24 ]
25
```
Path: `mathesar/views/frontend.py`
Content:
```
1 from django.http import HttpResponseRedirect
2 from django.shortcuts import render
3 from django.urls import reverse
4 from django.views.generic import DetailView
5
6 from mathesar.forms.forms import UploadFileForm
7 from mathesar.imports.csv import create_table_from_csv
8 from mathesar.models import Table, Schema
9 from mathesar.serializers import SchemaSerializer
10
11
12 def index(request):
13 tables = Table.objects.all()
14 if request.method == "POST":
15 form = UploadFileForm(request.POST, request.FILES)
16 if form.is_valid():
17 table = create_table_from_csv(
18 name=form.cleaned_data["table_name"],
19 schema=form.cleaned_data["schema_name"],
20 database_key=form.cleaned_data["database_key"],
21 csv_file=request.FILES["file"]
22 )
23 return HttpResponseRedirect(
24 reverse("frontend-table-detail", kwargs={"pk": table.id})
25 )
26 else:
27 form = UploadFileForm()
28 schema_serializer = SchemaSerializer(Schema.objects.all(), many=True, context={'request': request})
29 return render(
30 request,
31 "mathesar/index.html",
32 {
33 "form": form,
34 "tables": sorted(tables, key=lambda x: x.schema.name),
35 "schema_data": schema_serializer.data
36 },
37 )
38
39
40 class TableDetail(DetailView):
41 context_object_name = "table"
42 queryset = Table.objects.all()
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mathesar/urls.py b/mathesar/urls.py
--- a/mathesar/urls.py
+++ b/mathesar/urls.py
@@ -14,11 +14,7 @@
urlpatterns = [
path('', frontend.index, name="index"),
- path(
- 'tables/<int:pk>/',
- frontend.TableDetail.as_view(),
- name='frontend-table-detail',
- ),
+ path('tables/<int:pk>', frontend.table, name="table"),
path('api/v0/', include(router.urls)),
path('api/v0/', include(records_router.urls)),
]
diff --git a/mathesar/views/frontend.py b/mathesar/views/frontend.py
--- a/mathesar/views/frontend.py
+++ b/mathesar/views/frontend.py
@@ -1,16 +1,22 @@
-from django.http import HttpResponseRedirect
+from django.http import JsonResponse
from django.shortcuts import render
-from django.urls import reverse
-from django.views.generic import DetailView
from mathesar.forms.forms import UploadFileForm
from mathesar.imports.csv import create_table_from_csv
from mathesar.models import Table, Schema
-from mathesar.serializers import SchemaSerializer
+from mathesar.serializers import SchemaSerializer, TableSerializer, RecordSerializer
+from mathesar.database.utils import get_non_default_database_keys
+
+
+def get_common_data(request):
+ schema_serializer = SchemaSerializer(Schema.objects.all(), many=True, context={'request': request})
+ return {
+ "schemas": schema_serializer.data,
+ "databases": get_non_default_database_keys(),
+ }
def index(request):
- tables = Table.objects.all()
if request.method == "POST":
form = UploadFileForm(request.POST, request.FILES)
if form.is_valid():
@@ -20,23 +26,32 @@
database_key=form.cleaned_data["database_key"],
csv_file=request.FILES["file"]
)
- return HttpResponseRedirect(
- reverse("frontend-table-detail", kwargs={"pk": table.id})
- )
- else:
- form = UploadFileForm()
- schema_serializer = SchemaSerializer(Schema.objects.all(), many=True, context={'request': request})
+ return JsonResponse({"pk": table.id}, status=200)
return render(
request,
"mathesar/index.html",
{
- "form": form,
- "tables": sorted(tables, key=lambda x: x.schema.name),
- "schema_data": schema_serializer.data
- },
+ "common_data": get_common_data(request),
+ }
)
-class TableDetail(DetailView):
- context_object_name = "table"
- queryset = Table.objects.all()
+def table(request, pk):
+ try:
+ table_data = Table.objects.get(pk=pk)
+ table_serialized = TableSerializer(table_data, context={'request': request}).data
+ records_serialized = RecordSerializer(table_data.get_records(limit=50, offset=0), many=True, context={'request': request}).data
+ except Table.DoesNotExist:
+ table_serialized = {}
+ records_serialized = []
+ return render(
+ request,
+ "mathesar/index.html",
+ {
+ "common_data": get_common_data(request),
+ "route_specific_data": {
+ "table-detail": table_serialized,
+ "table-records": records_serialized
+ }
+ }
+ )
| {"golden_diff": "diff --git a/mathesar/urls.py b/mathesar/urls.py\n--- a/mathesar/urls.py\n+++ b/mathesar/urls.py\n@@ -14,11 +14,7 @@\n \n urlpatterns = [\n path('', frontend.index, name=\"index\"),\n- path(\n- 'tables/<int:pk>/',\n- frontend.TableDetail.as_view(),\n- name='frontend-table-detail',\n- ),\n+ path('tables/<int:pk>', frontend.table, name=\"table\"),\n path('api/v0/', include(router.urls)),\n path('api/v0/', include(records_router.urls)),\n ]\ndiff --git a/mathesar/views/frontend.py b/mathesar/views/frontend.py\n--- a/mathesar/views/frontend.py\n+++ b/mathesar/views/frontend.py\n@@ -1,16 +1,22 @@\n-from django.http import HttpResponseRedirect\n+from django.http import JsonResponse\n from django.shortcuts import render\n-from django.urls import reverse\n-from django.views.generic import DetailView\n \n from mathesar.forms.forms import UploadFileForm\n from mathesar.imports.csv import create_table_from_csv\n from mathesar.models import Table, Schema\n-from mathesar.serializers import SchemaSerializer\n+from mathesar.serializers import SchemaSerializer, TableSerializer, RecordSerializer\n+from mathesar.database.utils import get_non_default_database_keys\n+\n+\n+def get_common_data(request):\n+ schema_serializer = SchemaSerializer(Schema.objects.all(), many=True, context={'request': request})\n+ return {\n+ \"schemas\": schema_serializer.data,\n+ \"databases\": get_non_default_database_keys(),\n+ }\n \n \n def index(request):\n- tables = Table.objects.all()\n if request.method == \"POST\":\n form = UploadFileForm(request.POST, request.FILES)\n if form.is_valid():\n@@ -20,23 +26,32 @@\n database_key=form.cleaned_data[\"database_key\"],\n csv_file=request.FILES[\"file\"]\n )\n- return HttpResponseRedirect(\n- reverse(\"frontend-table-detail\", kwargs={\"pk\": table.id})\n- )\n- else:\n- form = UploadFileForm()\n- schema_serializer = SchemaSerializer(Schema.objects.all(), many=True, context={'request': request})\n+ return JsonResponse({\"pk\": table.id}, status=200)\n return render(\n request,\n \"mathesar/index.html\",\n {\n- \"form\": form,\n- \"tables\": sorted(tables, key=lambda x: x.schema.name),\n- \"schema_data\": schema_serializer.data\n- },\n+ \"common_data\": get_common_data(request),\n+ }\n )\n \n \n-class TableDetail(DetailView):\n- context_object_name = \"table\"\n- queryset = Table.objects.all()\n+def table(request, pk):\n+ try:\n+ table_data = Table.objects.get(pk=pk)\n+ table_serialized = TableSerializer(table_data, context={'request': request}).data\n+ records_serialized = RecordSerializer(table_data.get_records(limit=50, offset=0), many=True, context={'request': request}).data\n+ except Table.DoesNotExist:\n+ table_serialized = {}\n+ records_serialized = []\n+ return render(\n+ request,\n+ \"mathesar/index.html\",\n+ {\n+ \"common_data\": get_common_data(request),\n+ \"route_specific_data\": {\n+ \"table-detail\": table_serialized,\n+ \"table-records\": records_serialized\n+ }\n+ }\n+ )\n", "issue": "Handle routing takeover at client\nCurrently, routing is handled by Django. 
When the user visits the application initially, the routing should be handled by Django, and after the application loads, the routing should be taken over by client.\n", "before_files": [{"content": "from django.urls import include, path\nfrom rest_framework_nested import routers\n\nfrom mathesar.views import api, frontend\n\n\nrouter = routers.DefaultRouter()\nrouter.register(r'tables', api.TableViewSet)\nrouter.register(r'schemas', api.SchemaViewSet)\nrouter.register(r'database_keys', api.DatabaseKeyViewSet, basename='database_keys')\n\nrecords_router = routers.NestedSimpleRouter(router, r'tables', lookup='table')\nrecords_router.register(r'records', api.RecordViewSet, basename='table-records')\n\nurlpatterns = [\n path('', frontend.index, name=\"index\"),\n path(\n 'tables/<int:pk>/',\n frontend.TableDetail.as_view(),\n name='frontend-table-detail',\n ),\n path('api/v0/', include(router.urls)),\n path('api/v0/', include(records_router.urls)),\n]\n", "path": "mathesar/urls.py"}, {"content": "from django.http import HttpResponseRedirect\nfrom django.shortcuts import render\nfrom django.urls import reverse\nfrom django.views.generic import DetailView\n\nfrom mathesar.forms.forms import UploadFileForm\nfrom mathesar.imports.csv import create_table_from_csv\nfrom mathesar.models import Table, Schema\nfrom mathesar.serializers import SchemaSerializer\n\n\ndef index(request):\n tables = Table.objects.all()\n if request.method == \"POST\":\n form = UploadFileForm(request.POST, request.FILES)\n if form.is_valid():\n table = create_table_from_csv(\n name=form.cleaned_data[\"table_name\"],\n schema=form.cleaned_data[\"schema_name\"],\n database_key=form.cleaned_data[\"database_key\"],\n csv_file=request.FILES[\"file\"]\n )\n return HttpResponseRedirect(\n reverse(\"frontend-table-detail\", kwargs={\"pk\": table.id})\n )\n else:\n form = UploadFileForm()\n schema_serializer = SchemaSerializer(Schema.objects.all(), many=True, context={'request': request})\n return render(\n request,\n \"mathesar/index.html\",\n {\n \"form\": form,\n \"tables\": sorted(tables, key=lambda x: x.schema.name),\n \"schema_data\": schema_serializer.data\n },\n )\n\n\nclass TableDetail(DetailView):\n context_object_name = \"table\"\n queryset = Table.objects.all()\n", "path": "mathesar/views/frontend.py"}], "after_files": [{"content": "from django.urls import include, path\nfrom rest_framework_nested import routers\n\nfrom mathesar.views import api, frontend\n\n\nrouter = routers.DefaultRouter()\nrouter.register(r'tables', api.TableViewSet)\nrouter.register(r'schemas', api.SchemaViewSet)\nrouter.register(r'database_keys', api.DatabaseKeyViewSet, basename='database_keys')\n\nrecords_router = routers.NestedSimpleRouter(router, r'tables', lookup='table')\nrecords_router.register(r'records', api.RecordViewSet, basename='table-records')\n\nurlpatterns = [\n path('', frontend.index, name=\"index\"),\n path('tables/<int:pk>', frontend.table, name=\"table\"),\n path('api/v0/', include(router.urls)),\n path('api/v0/', include(records_router.urls)),\n]\n", "path": "mathesar/urls.py"}, {"content": "from django.http import JsonResponse\nfrom django.shortcuts import render\n\nfrom mathesar.forms.forms import UploadFileForm\nfrom mathesar.imports.csv import create_table_from_csv\nfrom mathesar.models import Table, Schema\nfrom mathesar.serializers import SchemaSerializer, TableSerializer, RecordSerializer\nfrom mathesar.database.utils import get_non_default_database_keys\n\n\ndef get_common_data(request):\n schema_serializer = 
SchemaSerializer(Schema.objects.all(), many=True, context={'request': request})\n return {\n \"schemas\": schema_serializer.data,\n \"databases\": get_non_default_database_keys(),\n }\n\n\ndef index(request):\n if request.method == \"POST\":\n form = UploadFileForm(request.POST, request.FILES)\n if form.is_valid():\n table = create_table_from_csv(\n name=form.cleaned_data[\"table_name\"],\n schema=form.cleaned_data[\"schema_name\"],\n database_key=form.cleaned_data[\"database_key\"],\n csv_file=request.FILES[\"file\"]\n )\n return JsonResponse({\"pk\": table.id}, status=200)\n return render(\n request,\n \"mathesar/index.html\",\n {\n \"common_data\": get_common_data(request),\n }\n )\n\n\ndef table(request, pk):\n try:\n table_data = Table.objects.get(pk=pk)\n table_serialized = TableSerializer(table_data, context={'request': request}).data\n records_serialized = RecordSerializer(table_data.get_records(limit=50, offset=0), many=True, context={'request': request}).data\n except Table.DoesNotExist:\n table_serialized = {}\n records_serialized = []\n return render(\n request,\n \"mathesar/index.html\",\n {\n \"common_data\": get_common_data(request),\n \"route_specific_data\": {\n \"table-detail\": table_serialized,\n \"table-records\": records_serialized\n }\n }\n )\n", "path": "mathesar/views/frontend.py"}]} | 888 | 736 |
gh_patches_debug_20660 | rasdani/github-patches | git_diff | jazzband__pip-tools-975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip-compile bumps setuptools within commented-out spec
If you have a `requirements.in` which contains a package which depends on `setuptools`, for example:
```
$ cat requirements.in
protobuf # or google-api-core, ipython, pandas-gbq, pydata-google-auth
```
Then after running `pip-compile`, you get:
```
$ cat requirements.txt
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile
#
protobuf==3.10.0
six==1.12.0 # via protobuf
# The following packages are considered to be unsafe in a requirements file:
# setuptools==41.4.0 # via protobuf
```
As you can see, `pip-compile` refuses to include `setuptools` directly in a generated `requirements.txt`, instead adding a section with the comment "The following packages are considered to be unsafe in a requirements file:".
This behaviour is fine in general (and I don't disagree with the comment or behaviour).
However, because the commented-out `setuptools` line includes a version number, the `requirements.txt` file ends up looking out of date whenever `setuptools` releases an update.
Say you had committed a `requirements.txt` which looked like:
```
# requirements...
# The following packages are considered to be unsafe in a requirements file:
# setuptools==41.4.0 # via protobuf
```
Then `setuptools` releases `41.5.0` and now when you run `pip-compile` you get:
```
# requirements...
# The following packages are considered to be unsafe in a requirements file:
# setuptools==41.5.0 # via protobuf
```
This is inconvenient if you happen to regenerate your `requirements.txt` at some point after an upstream release. However, if you're also using `pip-compile` in CI to check that your `requirements.txt` is in step with your `requirements.in` (see #882), then your build now fails for what is essentially a non-change.
I'm not sure what the right solution is here, though perhaps if there was a way for pip-compile to understand its own commented out dependency that might help?
--- END ISSUE ---
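To make the suggestion at the end of the issue concrete, here is a minimal sketch of one possible approach (a hypothetical helper, not pip-tools' actual code and separate from the repository files quoted below; `ireq` stands in for pip's `InstallRequirement`): emit only the project name for unsafe requirements unless `--allow-unsafe` is set, so the commented line no longer carries a version that churns with upstream releases.

```python
# Hypothetical sketch only -- not the actual pip-tools implementation.
def format_unsafe_line(ireq, allow_unsafe=False):
    if allow_unsafe:
        # Full pinned specifier, e.g. "setuptools==41.4.0"
        return str(ireq.req)
    # Just the name, e.g. "# setuptools" -- stable across upstream releases
    return "# {}".format(ireq.name)
```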
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `piptools/writer.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import os
4 from itertools import chain
5
6 from .click import unstyle
7 from .logging import log
8 from .utils import (
9 UNSAFE_PACKAGES,
10 comment,
11 dedup,
12 format_requirement,
13 get_compile_command,
14 key_from_ireq,
15 )
16
17 MESSAGE_UNHASHED_PACKAGE = comment(
18 "# WARNING: pip install will require the following package to be hashed."
19 "\n# Consider using a hashable URL like "
20 "https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip"
21 )
22
23 MESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(
24 "# WARNING: The following packages were not pinned, but pip requires them to be"
25 "\n# pinned when the requirements file includes hashes. "
26 "Consider using the --allow-unsafe flag."
27 )
28
29 MESSAGE_UNSAFE_PACKAGES = comment(
30 "# The following packages are considered to be unsafe in a requirements file:"
31 )
32
33 MESSAGE_UNINSTALLABLE = (
34 "The generated requirements file may be rejected by pip install. "
35 "See # WARNING lines for details."
36 )
37
38
39 class OutputWriter(object):
40 def __init__(
41 self,
42 src_files,
43 dst_file,
44 click_ctx,
45 dry_run,
46 emit_header,
47 emit_index,
48 emit_trusted_host,
49 annotate,
50 generate_hashes,
51 default_index_url,
52 index_urls,
53 trusted_hosts,
54 format_control,
55 allow_unsafe,
56 find_links,
57 emit_find_links,
58 ):
59 self.src_files = src_files
60 self.dst_file = dst_file
61 self.click_ctx = click_ctx
62 self.dry_run = dry_run
63 self.emit_header = emit_header
64 self.emit_index = emit_index
65 self.emit_trusted_host = emit_trusted_host
66 self.annotate = annotate
67 self.generate_hashes = generate_hashes
68 self.default_index_url = default_index_url
69 self.index_urls = index_urls
70 self.trusted_hosts = trusted_hosts
71 self.format_control = format_control
72 self.allow_unsafe = allow_unsafe
73 self.find_links = find_links
74 self.emit_find_links = emit_find_links
75
76 def _sort_key(self, ireq):
77 return (not ireq.editable, str(ireq.req).lower())
78
79 def write_header(self):
80 if self.emit_header:
81 yield comment("#")
82 yield comment("# This file is autogenerated by pip-compile")
83 yield comment("# To update, run:")
84 yield comment("#")
85 compile_command = os.environ.get(
86 "CUSTOM_COMPILE_COMMAND"
87 ) or get_compile_command(self.click_ctx)
88 yield comment("# {}".format(compile_command))
89 yield comment("#")
90
91 def write_index_options(self):
92 if self.emit_index:
93 for index, index_url in enumerate(dedup(self.index_urls)):
94 if index_url.rstrip("/") == self.default_index_url:
95 continue
96 flag = "--index-url" if index == 0 else "--extra-index-url"
97 yield "{} {}".format(flag, index_url)
98
99 def write_trusted_hosts(self):
100 if self.emit_trusted_host:
101 for trusted_host in dedup(self.trusted_hosts):
102 yield "--trusted-host {}".format(trusted_host)
103
104 def write_format_controls(self):
105 for nb in dedup(self.format_control.no_binary):
106 yield "--no-binary {}".format(nb)
107 for ob in dedup(self.format_control.only_binary):
108 yield "--only-binary {}".format(ob)
109
110 def write_find_links(self):
111 if self.emit_find_links:
112 for find_link in dedup(self.find_links):
113 yield "--find-links {}".format(find_link)
114
115 def write_flags(self):
116 emitted = False
117 for line in chain(
118 self.write_index_options(),
119 self.write_find_links(),
120 self.write_trusted_hosts(),
121 self.write_format_controls(),
122 ):
123 emitted = True
124 yield line
125 if emitted:
126 yield ""
127
128 def _iter_lines(
129 self,
130 results,
131 unsafe_requirements=None,
132 reverse_dependencies=None,
133 primary_packages=None,
134 markers=None,
135 hashes=None,
136 ):
137 # default values
138 unsafe_requirements = unsafe_requirements or []
139 reverse_dependencies = reverse_dependencies or {}
140 primary_packages = primary_packages or []
141 markers = markers or {}
142 hashes = hashes or {}
143
144 # Check for unhashed or unpinned packages if at least one package does have
145 # hashes, which will trigger pip install's --require-hashes mode.
146 warn_uninstallable = False
147 has_hashes = hashes and any(hash for hash in hashes.values())
148
149 yielded = False
150
151 for line in self.write_header():
152 yield line
153 yielded = True
154 for line in self.write_flags():
155 yield line
156 yielded = True
157
158 unsafe_requirements = (
159 {r for r in results if r.name in UNSAFE_PACKAGES}
160 if not unsafe_requirements
161 else unsafe_requirements
162 )
163 packages = {r for r in results if r.name not in UNSAFE_PACKAGES}
164
165 if packages:
166 packages = sorted(packages, key=self._sort_key)
167 for ireq in packages:
168 if has_hashes and not hashes.get(ireq):
169 yield MESSAGE_UNHASHED_PACKAGE
170 warn_uninstallable = True
171 line = self._format_requirement(
172 ireq,
173 reverse_dependencies,
174 primary_packages,
175 markers.get(key_from_ireq(ireq)),
176 hashes=hashes,
177 )
178 yield line
179 yielded = True
180
181 if unsafe_requirements:
182 unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)
183 yield ""
184 yielded = True
185 if has_hashes and not self.allow_unsafe:
186 yield MESSAGE_UNSAFE_PACKAGES_UNPINNED
187 warn_uninstallable = True
188 else:
189 yield MESSAGE_UNSAFE_PACKAGES
190
191 for ireq in unsafe_requirements:
192 req = self._format_requirement(
193 ireq,
194 reverse_dependencies,
195 primary_packages,
196 marker=markers.get(key_from_ireq(ireq)),
197 hashes=hashes,
198 )
199 if not self.allow_unsafe:
200 yield comment("# {}".format(req))
201 else:
202 yield req
203
204 # Yield even when there's no real content, so that blank files are written
205 if not yielded:
206 yield ""
207
208 if warn_uninstallable:
209 log.warning(MESSAGE_UNINSTALLABLE)
210
211 def write(
212 self,
213 results,
214 unsafe_requirements,
215 reverse_dependencies,
216 primary_packages,
217 markers,
218 hashes,
219 ):
220
221 for line in self._iter_lines(
222 results,
223 unsafe_requirements,
224 reverse_dependencies,
225 primary_packages,
226 markers,
227 hashes,
228 ):
229 log.info(line)
230 if not self.dry_run:
231 self.dst_file.write(unstyle(line).encode("utf-8"))
232 self.dst_file.write(os.linesep.encode("utf-8"))
233
234 def _format_requirement(
235 self, ireq, reverse_dependencies, primary_packages, marker=None, hashes=None
236 ):
237 ireq_hashes = (hashes if hashes is not None else {}).get(ireq)
238
239 line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)
240
241 if not self.annotate or key_from_ireq(ireq) in primary_packages:
242 return line
243
244 # Annotate what packages this package is required by
245 required_by = reverse_dependencies.get(ireq.name.lower(), [])
246 if required_by:
247 annotation = ", ".join(sorted(required_by))
248 line = "{:24}{}{}".format(
249 line,
250 " \\\n " if ireq_hashes else " ",
251 comment("# via " + annotation),
252 )
253 return line
254
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/piptools/writer.py b/piptools/writer.py
--- a/piptools/writer.py
+++ b/piptools/writer.py
@@ -189,17 +189,18 @@
yield MESSAGE_UNSAFE_PACKAGES
for ireq in unsafe_requirements:
- req = self._format_requirement(
- ireq,
- reverse_dependencies,
- primary_packages,
- marker=markers.get(key_from_ireq(ireq)),
- hashes=hashes,
- )
+ ireq_key = key_from_ireq(ireq)
if not self.allow_unsafe:
- yield comment("# {}".format(req))
+ yield comment("# {}".format(ireq_key))
else:
- yield req
+ line = self._format_requirement(
+ ireq,
+ reverse_dependencies,
+ primary_packages,
+ marker=markers.get(ireq_key),
+ hashes=hashes,
+ )
+ yield line
# Yield even when there's no real content, so that blank files are written
if not yielded:
| {"golden_diff": "diff --git a/piptools/writer.py b/piptools/writer.py\n--- a/piptools/writer.py\n+++ b/piptools/writer.py\n@@ -189,17 +189,18 @@\n yield MESSAGE_UNSAFE_PACKAGES\n \n for ireq in unsafe_requirements:\n- req = self._format_requirement(\n- ireq,\n- reverse_dependencies,\n- primary_packages,\n- marker=markers.get(key_from_ireq(ireq)),\n- hashes=hashes,\n- )\n+ ireq_key = key_from_ireq(ireq)\n if not self.allow_unsafe:\n- yield comment(\"# {}\".format(req))\n+ yield comment(\"# {}\".format(ireq_key))\n else:\n- yield req\n+ line = self._format_requirement(\n+ ireq,\n+ reverse_dependencies,\n+ primary_packages,\n+ marker=markers.get(ireq_key),\n+ hashes=hashes,\n+ )\n+ yield line\n \n # Yield even when there's no real content, so that blank files are written\n if not yielded:\n", "issue": "pip-compile bumps setuptools within commented-out spec\nIf you have a `requirements.in` which contains a package which depends on `setuptools`, for example:\r\n```\r\n$ cat requirements.in \r\nprotobuf # or google-api-core, ipython, pandas-gbq, pydata-google-auth\r\n```\r\nThen after running `pip-compile`, you get:\r\n```\r\n$ cat requirements.txt\r\n#\r\n# This file is autogenerated by pip-compile\r\n# To update, run:\r\n#\r\n# pip-compile\r\n#\r\nprotobuf==3.10.0\r\nsix==1.12.0 # via protobuf\r\n\r\n# The following packages are considered to be unsafe in a requirements file:\r\n# setuptools==41.4.0 # via protobuf\r\n```\r\n\r\nAs you can see, `pip-compile` refuses to include `setuptools` directly in a generated `requirements.txt`, instead adding a section with the comment \"The following packages are considered to be unsafe in a requirements file:\".\r\nThis behaviour is fine in the general (and I don't disagree with the comment or behaviour).\r\n\r\nHowever because the commented out `setuptools` line includes a version number, when `setuptools` releases an update then the `requirements.txt` file ends up looking out of date.\r\n\r\nSay you had committed a `requirements.txt` which looked like:\r\n```\r\n# requirements...\r\n\r\n# The following packages are considered to be unsafe in a requirements file:\r\n# setuptools==41.4.0 # via protobuf\r\n```\r\n\r\nThen `setuptools` releases `41.5.0` and now when you run `pip-compile` you get:\r\n```\r\n# requirements...\r\n\r\n# The following packages are considered to be unsafe in a requirements file:\r\n# setuptools==41.5.0 # via protobuf\r\n```\r\n\r\nThis is inconvenient if you happen to regenerate your `requirements.txt` at some point after an upstream releases, however if you're also using `pip-compile` to check that your `requirements.txt` is in step with your `requirements.in` (see #882) in CI then it means that your build now fails for what is essentially a non-change.\r\n\r\nI'm not sure what the right solution is here, though perhaps if there was a way for pip-compile to understand its own commented out dependency that might help?\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport os\nfrom itertools import chain\n\nfrom .click import unstyle\nfrom .logging import log\nfrom .utils import (\n UNSAFE_PACKAGES,\n comment,\n dedup,\n format_requirement,\n get_compile_command,\n key_from_ireq,\n)\n\nMESSAGE_UNHASHED_PACKAGE = comment(\n \"# WARNING: pip install will require the following package to be hashed.\"\n \"\\n# Consider using a hashable URL like \"\n \"https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip\"\n)\n\nMESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages 
were not pinned, but pip requires them to be\"\n \"\\n# pinned when the requirements file includes hashes. \"\n \"Consider using the --allow-unsafe flag.\"\n)\n\nMESSAGE_UNSAFE_PACKAGES = comment(\n \"# The following packages are considered to be unsafe in a requirements file:\"\n)\n\nMESSAGE_UNINSTALLABLE = (\n \"The generated requirements file may be rejected by pip install. \"\n \"See # WARNING lines for details.\"\n)\n\n\nclass OutputWriter(object):\n def __init__(\n self,\n src_files,\n dst_file,\n click_ctx,\n dry_run,\n emit_header,\n emit_index,\n emit_trusted_host,\n annotate,\n generate_hashes,\n default_index_url,\n index_urls,\n trusted_hosts,\n format_control,\n allow_unsafe,\n find_links,\n emit_find_links,\n ):\n self.src_files = src_files\n self.dst_file = dst_file\n self.click_ctx = click_ctx\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index = emit_index\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n self.find_links = find_links\n self.emit_find_links = emit_find_links\n\n def _sort_key(self, ireq):\n return (not ireq.editable, str(ireq.req).lower())\n\n def write_header(self):\n if self.emit_header:\n yield comment(\"#\")\n yield comment(\"# This file is autogenerated by pip-compile\")\n yield comment(\"# To update, run:\")\n yield comment(\"#\")\n compile_command = os.environ.get(\n \"CUSTOM_COMPILE_COMMAND\"\n ) or get_compile_command(self.click_ctx)\n yield comment(\"# {}\".format(compile_command))\n yield comment(\"#\")\n\n def write_index_options(self):\n if self.emit_index:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index_url.rstrip(\"/\") == self.default_index_url:\n continue\n flag = \"--index-url\" if index == 0 else \"--extra-index-url\"\n yield \"{} {}\".format(flag, index_url)\n\n def write_trusted_hosts(self):\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield \"--trusted-host {}\".format(trusted_host)\n\n def write_format_controls(self):\n for nb in dedup(self.format_control.no_binary):\n yield \"--no-binary {}\".format(nb)\n for ob in dedup(self.format_control.only_binary):\n yield \"--only-binary {}\".format(ob)\n\n def write_find_links(self):\n if self.emit_find_links:\n for find_link in dedup(self.find_links):\n yield \"--find-links {}\".format(find_link)\n\n def write_flags(self):\n emitted = False\n for line in chain(\n self.write_index_options(),\n self.write_find_links(),\n self.write_trusted_hosts(),\n self.write_format_controls(),\n ):\n emitted = True\n yield line\n if emitted:\n yield \"\"\n\n def _iter_lines(\n self,\n results,\n unsafe_requirements=None,\n reverse_dependencies=None,\n primary_packages=None,\n markers=None,\n hashes=None,\n ):\n # default values\n unsafe_requirements = unsafe_requirements or []\n reverse_dependencies = reverse_dependencies or {}\n primary_packages = primary_packages or []\n markers = markers or {}\n hashes = hashes or {}\n\n # Check for unhashed or unpinned packages if at least one package does have\n # hashes, which will trigger pip install's --require-hashes mode.\n warn_uninstallable = False\n has_hashes = hashes and any(hash for hash in hashes.values())\n\n yielded = False\n\n for line in self.write_header():\n yield line\n yielded = True\n for line in self.write_flags():\n 
yield line\n yielded = True\n\n unsafe_requirements = (\n {r for r in results if r.name in UNSAFE_PACKAGES}\n if not unsafe_requirements\n else unsafe_requirements\n )\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n if packages:\n packages = sorted(packages, key=self._sort_key)\n for ireq in packages:\n if has_hashes and not hashes.get(ireq):\n yield MESSAGE_UNHASHED_PACKAGE\n warn_uninstallable = True\n line = self._format_requirement(\n ireq,\n reverse_dependencies,\n primary_packages,\n markers.get(key_from_ireq(ireq)),\n hashes=hashes,\n )\n yield line\n yielded = True\n\n if unsafe_requirements:\n unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)\n yield \"\"\n yielded = True\n if has_hashes and not self.allow_unsafe:\n yield MESSAGE_UNSAFE_PACKAGES_UNPINNED\n warn_uninstallable = True\n else:\n yield MESSAGE_UNSAFE_PACKAGES\n\n for ireq in unsafe_requirements:\n req = self._format_requirement(\n ireq,\n reverse_dependencies,\n primary_packages,\n marker=markers.get(key_from_ireq(ireq)),\n hashes=hashes,\n )\n if not self.allow_unsafe:\n yield comment(\"# {}\".format(req))\n else:\n yield req\n\n # Yield even when there's no real content, so that blank files are written\n if not yielded:\n yield \"\"\n\n if warn_uninstallable:\n log.warning(MESSAGE_UNINSTALLABLE)\n\n def write(\n self,\n results,\n unsafe_requirements,\n reverse_dependencies,\n primary_packages,\n markers,\n hashes,\n ):\n\n for line in self._iter_lines(\n results,\n unsafe_requirements,\n reverse_dependencies,\n primary_packages,\n markers,\n hashes,\n ):\n log.info(line)\n if not self.dry_run:\n self.dst_file.write(unstyle(line).encode(\"utf-8\"))\n self.dst_file.write(os.linesep.encode(\"utf-8\"))\n\n def _format_requirement(\n self, ireq, reverse_dependencies, primary_packages, marker=None, hashes=None\n ):\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n\n if not self.annotate or key_from_ireq(ireq) in primary_packages:\n return line\n\n # Annotate what packages this package is required by\n required_by = reverse_dependencies.get(ireq.name.lower(), [])\n if required_by:\n annotation = \", \".join(sorted(required_by))\n line = \"{:24}{}{}\".format(\n line,\n \" \\\\\\n \" if ireq_hashes else \" \",\n comment(\"# via \" + annotation),\n )\n return line\n", "path": "piptools/writer.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport os\nfrom itertools import chain\n\nfrom .click import unstyle\nfrom .logging import log\nfrom .utils import (\n UNSAFE_PACKAGES,\n comment,\n dedup,\n format_requirement,\n get_compile_command,\n key_from_ireq,\n)\n\nMESSAGE_UNHASHED_PACKAGE = comment(\n \"# WARNING: pip install will require the following package to be hashed.\"\n \"\\n# Consider using a hashable URL like \"\n \"https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip\"\n)\n\nMESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages were not pinned, but pip requires them to be\"\n \"\\n# pinned when the requirements file includes hashes. \"\n \"Consider using the --allow-unsafe flag.\"\n)\n\nMESSAGE_UNSAFE_PACKAGES = comment(\n \"# The following packages are considered to be unsafe in a requirements file:\"\n)\n\nMESSAGE_UNINSTALLABLE = (\n \"The generated requirements file may be rejected by pip install. 
\"\n \"See # WARNING lines for details.\"\n)\n\n\nclass OutputWriter(object):\n def __init__(\n self,\n src_files,\n dst_file,\n click_ctx,\n dry_run,\n emit_header,\n emit_index,\n emit_trusted_host,\n annotate,\n generate_hashes,\n default_index_url,\n index_urls,\n trusted_hosts,\n format_control,\n allow_unsafe,\n find_links,\n emit_find_links,\n ):\n self.src_files = src_files\n self.dst_file = dst_file\n self.click_ctx = click_ctx\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index = emit_index\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n self.find_links = find_links\n self.emit_find_links = emit_find_links\n\n def _sort_key(self, ireq):\n return (not ireq.editable, str(ireq.req).lower())\n\n def write_header(self):\n if self.emit_header:\n yield comment(\"#\")\n yield comment(\"# This file is autogenerated by pip-compile\")\n yield comment(\"# To update, run:\")\n yield comment(\"#\")\n compile_command = os.environ.get(\n \"CUSTOM_COMPILE_COMMAND\"\n ) or get_compile_command(self.click_ctx)\n yield comment(\"# {}\".format(compile_command))\n yield comment(\"#\")\n\n def write_index_options(self):\n if self.emit_index:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index_url.rstrip(\"/\") == self.default_index_url:\n continue\n flag = \"--index-url\" if index == 0 else \"--extra-index-url\"\n yield \"{} {}\".format(flag, index_url)\n\n def write_trusted_hosts(self):\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield \"--trusted-host {}\".format(trusted_host)\n\n def write_format_controls(self):\n for nb in dedup(self.format_control.no_binary):\n yield \"--no-binary {}\".format(nb)\n for ob in dedup(self.format_control.only_binary):\n yield \"--only-binary {}\".format(ob)\n\n def write_find_links(self):\n if self.emit_find_links:\n for find_link in dedup(self.find_links):\n yield \"--find-links {}\".format(find_link)\n\n def write_flags(self):\n emitted = False\n for line in chain(\n self.write_index_options(),\n self.write_find_links(),\n self.write_trusted_hosts(),\n self.write_format_controls(),\n ):\n emitted = True\n yield line\n if emitted:\n yield \"\"\n\n def _iter_lines(\n self,\n results,\n unsafe_requirements=None,\n reverse_dependencies=None,\n primary_packages=None,\n markers=None,\n hashes=None,\n ):\n # default values\n unsafe_requirements = unsafe_requirements or []\n reverse_dependencies = reverse_dependencies or {}\n primary_packages = primary_packages or []\n markers = markers or {}\n hashes = hashes or {}\n\n # Check for unhashed or unpinned packages if at least one package does have\n # hashes, which will trigger pip install's --require-hashes mode.\n warn_uninstallable = False\n has_hashes = hashes and any(hash for hash in hashes.values())\n\n yielded = False\n\n for line in self.write_header():\n yield line\n yielded = True\n for line in self.write_flags():\n yield line\n yielded = True\n\n unsafe_requirements = (\n {r for r in results if r.name in UNSAFE_PACKAGES}\n if not unsafe_requirements\n else unsafe_requirements\n )\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n if packages:\n packages = sorted(packages, key=self._sort_key)\n for ireq in packages:\n if has_hashes and not hashes.get(ireq):\n yield 
MESSAGE_UNHASHED_PACKAGE\n warn_uninstallable = True\n line = self._format_requirement(\n ireq,\n reverse_dependencies,\n primary_packages,\n markers.get(key_from_ireq(ireq)),\n hashes=hashes,\n )\n yield line\n yielded = True\n\n if unsafe_requirements:\n unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)\n yield \"\"\n yielded = True\n if has_hashes and not self.allow_unsafe:\n yield MESSAGE_UNSAFE_PACKAGES_UNPINNED\n warn_uninstallable = True\n else:\n yield MESSAGE_UNSAFE_PACKAGES\n\n for ireq in unsafe_requirements:\n ireq_key = key_from_ireq(ireq)\n if not self.allow_unsafe:\n yield comment(\"# {}\".format(ireq_key))\n else:\n line = self._format_requirement(\n ireq,\n reverse_dependencies,\n primary_packages,\n marker=markers.get(ireq_key),\n hashes=hashes,\n )\n yield line\n\n # Yield even when there's no real content, so that blank files are written\n if not yielded:\n yield \"\"\n\n if warn_uninstallable:\n log.warning(MESSAGE_UNINSTALLABLE)\n\n def write(\n self,\n results,\n unsafe_requirements,\n reverse_dependencies,\n primary_packages,\n markers,\n hashes,\n ):\n\n for line in self._iter_lines(\n results,\n unsafe_requirements,\n reverse_dependencies,\n primary_packages,\n markers,\n hashes,\n ):\n log.info(line)\n if not self.dry_run:\n self.dst_file.write(unstyle(line).encode(\"utf-8\"))\n self.dst_file.write(os.linesep.encode(\"utf-8\"))\n\n def _format_requirement(\n self, ireq, reverse_dependencies, primary_packages, marker=None, hashes=None\n ):\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n\n if not self.annotate or key_from_ireq(ireq) in primary_packages:\n return line\n\n # Annotate what packages this package is required by\n required_by = reverse_dependencies.get(ireq.name.lower(), [])\n if required_by:\n annotation = \", \".join(sorted(required_by))\n line = \"{:24}{}{}\".format(\n line,\n \" \\\\\\n \" if ireq_hashes else \" \",\n comment(\"# via \" + annotation),\n )\n return line\n", "path": "piptools/writer.py"}]} | 3,035 | 244 |
gh_patches_debug_19303 | rasdani/github-patches | git_diff | netbox-community__netbox-5286 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Plugins must define app_name in api/urls.py to be included in API root view
### Environment
* Python version: 3.6.9
* NetBox version: 2.9.7
### Steps to Reproduce
I'm actually not certain whether this is better classified as a bug or a lapse in the documentation, but I'm opting to propose it as a bug because I think it can be addressed by a code change.
1. Create/install a plugin which provides at least one REST API endpoint. (Assume the plugin's name is `myplugin`.)
2. Within `api/urls.py`, set `app_name = myplugin-api`
3. Visit `/api/plugins/` in the browser. The plugin should be included.
4. Remove the `app_name` definition from `api/urls.py`.
5. Visit `/api/plugins/` in the browser again. The plugin no longer appears.
### Expected Behavior
Plugin API endpoints should be detected automatically. AFAICT there's no reason a plugin should need to declare `app_name`. (The core apps do this, but only because they're being included dynamically.)
### Observed Behavior
Plugin API endpoints disappear from the list when `app_name` is not declared.
It seems like the need for the `app_name` variable could be obviated within `PluginsAPIRootView._get_plugin_entry()` by changing
```
api_app_name = import_object(f"{plugin}.api.urls.app_name")
```
to
```
api_app_name = f'{app_config.name}-api'
```
This actually raises another point: the full URL name for a plugin API endpoint is currently in the format `plugins-api:myplugin-api:url-name`, which seems a bit unwieldy: `plugins-api:myplugin:url-name` should suffice. However, this would be a breaking change for any plugins which rely on reverse URL resolution to their REST API endpoints.
--- END ISSUE ---
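As a rough illustration of the change proposed in the issue (a sketch under the assumption that Django's `AppConfig` API is available, not the actual NetBox code and separate from the repository files quoted below), the API namespace could be derived from the plugin's app config instead of being imported from the plugin's `api/urls.py`:

```python
# Sketch: derive the plugin's API namespace from its AppConfig.
from django.apps import apps

def plugin_api_app_name(plugin: str) -> str:
    app_config = apps.get_app_config(plugin)
    # No need for the plugin to declare app_name itself
    return f"{app_config.name}-api"  # e.g. "myplugin-api"
```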
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/extras/plugins/views.py`
Content:
```
1 from collections import OrderedDict
2
3 from django.apps import apps
4 from django.conf import settings
5 from django.shortcuts import render
6 from django.urls.exceptions import NoReverseMatch
7 from django.views.generic import View
8 from rest_framework import permissions
9 from rest_framework.response import Response
10 from rest_framework.reverse import reverse
11 from rest_framework.views import APIView
12
13 from extras.plugins.utils import import_object
14
15
16 class InstalledPluginsAdminView(View):
17 """
18 Admin view for listing all installed plugins
19 """
20 def get(self, request):
21 plugins = [apps.get_app_config(plugin) for plugin in settings.PLUGINS]
22 return render(request, 'extras/admin/plugins_list.html', {
23 'plugins': plugins,
24 })
25
26
27 class InstalledPluginsAPIView(APIView):
28 """
29 API view for listing all installed plugins
30 """
31 permission_classes = [permissions.IsAdminUser]
32 _ignore_model_permissions = True
33 exclude_from_schema = True
34 swagger_schema = None
35
36 def get_view_name(self):
37 return "Installed Plugins"
38
39 @staticmethod
40 def _get_plugin_data(plugin_app_config):
41 return {
42 'name': plugin_app_config.verbose_name,
43 'package': plugin_app_config.name,
44 'author': plugin_app_config.author,
45 'author_email': plugin_app_config.author_email,
46 'description': plugin_app_config.description,
47 'verison': plugin_app_config.version
48 }
49
50 def get(self, request, format=None):
51 return Response([self._get_plugin_data(apps.get_app_config(plugin)) for plugin in settings.PLUGINS])
52
53
54 class PluginsAPIRootView(APIView):
55 _ignore_model_permissions = True
56 exclude_from_schema = True
57 swagger_schema = None
58
59 def get_view_name(self):
60 return "Plugins"
61
62 @staticmethod
63 def _get_plugin_entry(plugin, app_config, request, format):
64 # Check if the plugin specifies any API URLs
65 api_app_name = import_object(f"{plugin}.api.urls.app_name")
66 if api_app_name is None:
67 # Plugin does not expose an API
68 return None
69
70 try:
71 entry = (getattr(app_config, 'base_url', app_config.label), reverse(
72 f"plugins-api:{api_app_name}:api-root",
73 request=request,
74 format=format
75 ))
76 except NoReverseMatch:
77 # The plugin does not include an api-root url
78 entry = None
79
80 return entry
81
82 def get(self, request, format=None):
83
84 entries = []
85 for plugin in settings.PLUGINS:
86 app_config = apps.get_app_config(plugin)
87 entry = self._get_plugin_entry(plugin, app_config, request, format)
88 if entry is not None:
89 entries.append(entry)
90
91 return Response(OrderedDict((
92 ('installed-plugins', reverse('plugins-api:plugins-list', request=request, format=format)),
93 *entries
94 )))
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/extras/plugins/views.py b/netbox/extras/plugins/views.py
--- a/netbox/extras/plugins/views.py
+++ b/netbox/extras/plugins/views.py
@@ -10,8 +10,6 @@
from rest_framework.reverse import reverse
from rest_framework.views import APIView
-from extras.plugins.utils import import_object
-
class InstalledPluginsAdminView(View):
"""
@@ -62,11 +60,7 @@
@staticmethod
def _get_plugin_entry(plugin, app_config, request, format):
# Check if the plugin specifies any API URLs
- api_app_name = import_object(f"{plugin}.api.urls.app_name")
- if api_app_name is None:
- # Plugin does not expose an API
- return None
-
+ api_app_name = f'{app_config.name}-api'
try:
entry = (getattr(app_config, 'base_url', app_config.label), reverse(
f"plugins-api:{api_app_name}:api-root",
| {"golden_diff": "diff --git a/netbox/extras/plugins/views.py b/netbox/extras/plugins/views.py\n--- a/netbox/extras/plugins/views.py\n+++ b/netbox/extras/plugins/views.py\n@@ -10,8 +10,6 @@\n from rest_framework.reverse import reverse\n from rest_framework.views import APIView\n \n-from extras.plugins.utils import import_object\n-\n \n class InstalledPluginsAdminView(View):\n \"\"\"\n@@ -62,11 +60,7 @@\n @staticmethod\n def _get_plugin_entry(plugin, app_config, request, format):\n # Check if the plugin specifies any API URLs\n- api_app_name = import_object(f\"{plugin}.api.urls.app_name\")\n- if api_app_name is None:\n- # Plugin does not expose an API\n- return None\n-\n+ api_app_name = f'{app_config.name}-api'\n try:\n entry = (getattr(app_config, 'base_url', app_config.label), reverse(\n f\"plugins-api:{api_app_name}:api-root\",\n", "issue": "Plugins must define app_name in api/urls.py to be included in API root view\n### Environment\r\n* Python version: 3.6.9\r\n* NetBox version: 2.9.7\r\n\r\n### Steps to Reproduce\r\nI'm actually not certain whether this is better classified as a bug or a lapse in the documentation, but I'm opting to propose it as a bug because I think it can be addressed by a code change.\r\n\r\n1. Create/install a plugin which provides at least one REST API endpoint. (Assume the plugin's name is `myplugin`.)\r\n2. Within `api/urls.py`, set `app_name = myplugin-api`\r\n3. Visit `/api/plugins/` in the browser. The plugin should be included.\r\n4. Remove the `app_name` definition from `api/urls.py`.\r\n5. Visit `/api/plugins/` in the browser again. The plugin no longer appears.\r\n\r\n### Expected Behavior\r\nPlugin API endpoints should be detected automatically. AFAICT there's no reason a plugin should need to declare `app_name`. (The core apps do this, but only because they're being included dynamically.)\r\n\r\n### Observed Behavior\r\nPlugin API endpoints disappear from the list when `app_name` is not declared.\r\n\r\nIt seems like the need for the `app_name` variable could be obviated within `PluginsAPIRootView._get_plugin_entry()` by changing\r\n\r\n```\r\napi_app_name = import_object(f\"{plugin}.api.urls.app_name\")\r\n```\r\n\r\nto\r\n\r\n```\r\napi_app_name = f'{app_config.name}-api'\r\n```\r\n\r\nThis actually raises another point: the full URL name for a plugin API endpoint is currently in the format `plugins-api:myplugin-api:url-name`, which seems a bit unwieldy: `plugins-api:myplugin:url-name` should suffice. 
However, this would be a breaking change for any plugins which rely on reverse URL resolution to their REST API endpoints.\n", "before_files": [{"content": "from collections import OrderedDict\n\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.shortcuts import render\nfrom django.urls.exceptions import NoReverseMatch\nfrom django.views.generic import View\nfrom rest_framework import permissions\nfrom rest_framework.response import Response\nfrom rest_framework.reverse import reverse\nfrom rest_framework.views import APIView\n\nfrom extras.plugins.utils import import_object\n\n\nclass InstalledPluginsAdminView(View):\n \"\"\"\n Admin view for listing all installed plugins\n \"\"\"\n def get(self, request):\n plugins = [apps.get_app_config(plugin) for plugin in settings.PLUGINS]\n return render(request, 'extras/admin/plugins_list.html', {\n 'plugins': plugins,\n })\n\n\nclass InstalledPluginsAPIView(APIView):\n \"\"\"\n API view for listing all installed plugins\n \"\"\"\n permission_classes = [permissions.IsAdminUser]\n _ignore_model_permissions = True\n exclude_from_schema = True\n swagger_schema = None\n\n def get_view_name(self):\n return \"Installed Plugins\"\n\n @staticmethod\n def _get_plugin_data(plugin_app_config):\n return {\n 'name': plugin_app_config.verbose_name,\n 'package': plugin_app_config.name,\n 'author': plugin_app_config.author,\n 'author_email': plugin_app_config.author_email,\n 'description': plugin_app_config.description,\n 'verison': plugin_app_config.version\n }\n\n def get(self, request, format=None):\n return Response([self._get_plugin_data(apps.get_app_config(plugin)) for plugin in settings.PLUGINS])\n\n\nclass PluginsAPIRootView(APIView):\n _ignore_model_permissions = True\n exclude_from_schema = True\n swagger_schema = None\n\n def get_view_name(self):\n return \"Plugins\"\n\n @staticmethod\n def _get_plugin_entry(plugin, app_config, request, format):\n # Check if the plugin specifies any API URLs\n api_app_name = import_object(f\"{plugin}.api.urls.app_name\")\n if api_app_name is None:\n # Plugin does not expose an API\n return None\n\n try:\n entry = (getattr(app_config, 'base_url', app_config.label), reverse(\n f\"plugins-api:{api_app_name}:api-root\",\n request=request,\n format=format\n ))\n except NoReverseMatch:\n # The plugin does not include an api-root url\n entry = None\n\n return entry\n\n def get(self, request, format=None):\n\n entries = []\n for plugin in settings.PLUGINS:\n app_config = apps.get_app_config(plugin)\n entry = self._get_plugin_entry(plugin, app_config, request, format)\n if entry is not None:\n entries.append(entry)\n\n return Response(OrderedDict((\n ('installed-plugins', reverse('plugins-api:plugins-list', request=request, format=format)),\n *entries\n )))\n", "path": "netbox/extras/plugins/views.py"}], "after_files": [{"content": "from collections import OrderedDict\n\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.shortcuts import render\nfrom django.urls.exceptions import NoReverseMatch\nfrom django.views.generic import View\nfrom rest_framework import permissions\nfrom rest_framework.response import Response\nfrom rest_framework.reverse import reverse\nfrom rest_framework.views import APIView\n\n\nclass InstalledPluginsAdminView(View):\n \"\"\"\n Admin view for listing all installed plugins\n \"\"\"\n def get(self, request):\n plugins = [apps.get_app_config(plugin) for plugin in settings.PLUGINS]\n return render(request, 'extras/admin/plugins_list.html', {\n 'plugins': 
plugins,\n })\n\n\nclass InstalledPluginsAPIView(APIView):\n \"\"\"\n API view for listing all installed plugins\n \"\"\"\n permission_classes = [permissions.IsAdminUser]\n _ignore_model_permissions = True\n exclude_from_schema = True\n swagger_schema = None\n\n def get_view_name(self):\n return \"Installed Plugins\"\n\n @staticmethod\n def _get_plugin_data(plugin_app_config):\n return {\n 'name': plugin_app_config.verbose_name,\n 'package': plugin_app_config.name,\n 'author': plugin_app_config.author,\n 'author_email': plugin_app_config.author_email,\n 'description': plugin_app_config.description,\n 'verison': plugin_app_config.version\n }\n\n def get(self, request, format=None):\n return Response([self._get_plugin_data(apps.get_app_config(plugin)) for plugin in settings.PLUGINS])\n\n\nclass PluginsAPIRootView(APIView):\n _ignore_model_permissions = True\n exclude_from_schema = True\n swagger_schema = None\n\n def get_view_name(self):\n return \"Plugins\"\n\n @staticmethod\n def _get_plugin_entry(plugin, app_config, request, format):\n # Check if the plugin specifies any API URLs\n api_app_name = f'{app_config.name}-api'\n try:\n entry = (getattr(app_config, 'base_url', app_config.label), reverse(\n f\"plugins-api:{api_app_name}:api-root\",\n request=request,\n format=format\n ))\n except NoReverseMatch:\n # The plugin does not include an api-root url\n entry = None\n\n return entry\n\n def get(self, request, format=None):\n\n entries = []\n for plugin in settings.PLUGINS:\n app_config = apps.get_app_config(plugin)\n entry = self._get_plugin_entry(plugin, app_config, request, format)\n if entry is not None:\n entries.append(entry)\n\n return Response(OrderedDict((\n ('installed-plugins', reverse('plugins-api:plugins-list', request=request, format=format)),\n *entries\n )))\n", "path": "netbox/extras/plugins/views.py"}]} | 1,447 | 219 |
gh_patches_debug_65231 | rasdani/github-patches | git_diff | streamlink__streamlink-5911 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.showroom: streamlink unable to download any live streams from showroom.com
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
streamlink 6.7.2
### Description
On 2024.03.29, showroom.com made some changes to their site.
When I try to use streamlink to record a showroom URL that is online, e.g. https://www.showroom-live.com/r/48_KOJIMA_AIKO
> streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o testing.ts
the expected behavior is that it should return this:
> [cli][info] Found matching plugin showroom for URL https://www.showroom-live.com/r/48_KOJIMA_AIKO
[utils.l10n][debug] Language code: en_US
[cli][info] Available streams: 144p (worst), 360p (best)
[cli][info] Opening stream: 360p (hls)
[cli][info] Writing output to D:\testing.ts
[cli][debug] Checking file output
[stream.hls][debug] Reloading playlist
[cli][debug] Pre-buffering 8192 bytes
[stream.hls][debug] First Sequence: 1; Last Sequence: 4
[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 3; End Sequence: None
[stream.hls][debug] Adding segment 3 to queue
[stream.hls][debug] Adding segment 4 to queue
However, when I tried recording a showroom stream on 2024.03.29, I got an error stating that the stream is restricted.
> L:\>streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o sample.ts
[session][debug] Loading plugin: showroom
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.10.6
[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022
[cli][debug] Streamlink: 6.7.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2022.6.15
[cli][debug] exceptiongroup: 1.2.0
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.16.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.2
[cli][debug] trio: 0.25.0
[cli][debug] trio-websocket: 0.11.1
[cli][debug] typing-extensions: 4.5.0
[cli][debug] urllib3: 1.26.12
[cli][debug] websocket-client: 1.4.0
[cli][debug] Arguments:
[cli][debug] url=https://www.showroom-live.com/r/48_KOJIMA_AIKO
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --output=sample.ts
[cli][info] Found matching plugin showroom for URL https://www.showroom-live.com/r/48_KOJIMA_AIKO
[plugins.showroom][debug] Room ID: 270117
[plugins.showroom][error] This stream is restricted
error: No playable streams found on this URL: https://www.showroom-live.com/r/48_KOJIMA_AIKO
- I tried downloading 12 different showroom live streams, but received the same error for all of them.
- I tried changing my IP address using a VPN to a Japan/Hong Kong/Singapore/Germany/USA IP, but the same problem persists.
- Next, I tried to locate the m3u8 address of the showroom stream using the stream detector addon (Firefox) and used the .m3u8 address directly in streamlink:
> streamlink --loglevel debug https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8 best -o testing.ts
Streamlink was able to work as normal and download successfully:
> D:\>streamlink --loglevel debug https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8 best -o testing.ts
> [session][debug] Loading plugin: hls
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.10.6
[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022
[cli][debug] Streamlink: 6.7.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2022.6.15
[cli][debug] exceptiongroup: 1.2.0
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.16.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.2
[cli][debug] trio: 0.25.0
[cli][debug] trio-websocket: 0.11.1
[cli][debug] typing-extensions: 4.5.0
[cli][debug] urllib3: 1.26.12
[cli][debug] websocket-client: 1.4.0
[cli][debug] Arguments:
[cli][debug] url=https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --output=testing.ts
[cli][info] Found matching plugin hls for URL https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8
[plugins.hls][debug] URL=https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8; params={}
[utils.l10n][debug] Language code: en_US
[cli][info] Available streams: live (worst, best)
[cli][info] Opening stream: live (hls)
[cli][info] Writing output to
D:\testing.ts
[cli][debug] Checking file output
[stream.hls][debug] Reloading playlist
[cli][debug] Pre-buffering 8192 bytes
[stream.hls][debug] First Sequence: 8904; Last Sequence: 8906
[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 8904; End Sequence: None
[stream.hls][debug] Adding segment 8904 to queue
[stream.hls][debug] Adding segment 8905 to queue
[stream.hls][debug] Adding segment 8906 to queue
[stream.hls][debug] Writing segment 8904 to output
[stream.hls][debug] Segment 8904 complete
[cli][debug] Writing stream to output
[download] Written 538.66 KiB to L:\testing.ts (0s) [stream.hls][debug] Writing segment 8905 to output
[stream.hls][debug] Segment 8905 complete
[download] Written 1.17 MiB to L:\testing.ts (0s) [stream.hls][debug] Writing segment 8906 to output
[stream.hls][debug] Segment 8906 complete
[download] Written 1.73 MiB to L:\testing.ts (1s) [stream.hls][debug] Reloading playlist
I was thinking that this might be a streamlink plugin issue rather than Showroom disabling their API, because I tried testing with a Japanese ffmpeg-based GUI Showroom downloader called ショールーム録画っち (https://www.skypower.xyz/showroom_rokugatch.html), and I was able to download streams successfully by just entering the Showroom URL.
### Debug log
```text
L:\>streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o sample.ts
[session][debug] Loading plugin: showroom
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.10.6
[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022
[cli][debug] Streamlink: 6.7.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2022.6.15
[cli][debug] exceptiongroup: 1.2.0
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.16.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.2
[cli][debug] trio: 0.25.0
[cli][debug] trio-websocket: 0.11.1
[cli][debug] typing-extensions: 4.5.0
[cli][debug] urllib3: 1.26.12
[cli][debug] websocket-client: 1.4.0
[cli][debug] Arguments:
[cli][debug] url=https://www.showroom-live.com/r/48_KOJIMA_AIKO
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --output=sample.ts
[cli][info] Found matching plugin showroom for URL https://www.showroom-live.com/r/48_KOJIMA_AIKO
[plugins.showroom][debug] Room ID: 270117
[plugins.showroom][error] This stream is restricted
error: No playable streams found on this URL: https://www.showroom-live.com/r/48_KOJIMA_AIKO
```
--- END ISSUE ---
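Since the manual `.m3u8` URL above downloads fine while the plugin reports the stream as restricted, one hedged reading of the logs is that the playlist is being served with a different but equally valid HLS MIME type than the one the plugin checks for. Purely as an illustration (not the plugin's actual code and separate from the repository files quoted below), a Content-Type test that tolerates both common HLS values might look like:

```python
# Illustrative check only: accept either of the usual HLS playlist MIME types.
HLS_CONTENT_TYPES = ("application/x-mpegURL", "application/vnd.apple.mpegurl")

def looks_like_hls_playlist(response) -> bool:
    content_type = response.headers.get("Content-Type", "")
    return content_type in HLS_CONTENT_TYPES
```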
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/showroom.py`
Content:
```
1 """
2 $description Japanese live-streaming service used primarily by Japanese idols & voice actors and their fans.
3 $url showroom-live.com
4 $type live
5 $metadata title
6 """
7
8 import logging
9 import re
10 from urllib.parse import parse_qsl, urlparse
11
12 from streamlink.plugin import Plugin, pluginmatcher
13 from streamlink.plugin.api import validate
14 from streamlink.stream.hls import HLSStream
15
16
17 log = logging.getLogger(__name__)
18
19
20 @pluginmatcher(re.compile(
21 r"https?://(?:\w+\.)?showroom-live\.com/",
22 ))
23 class Showroom(Plugin):
24 LIVE_STATUS = 2
25
26 def __init__(self, *args, **kwargs):
27 super().__init__(*args, **kwargs)
28 self.session.set_option("hls-playlist-reload-time", "segment")
29
30 def _get_streams(self):
31 room_id = self.session.http.get(
32 self.url,
33 schema=validate.Schema(
34 validate.parse_html(),
35 validate.xml_xpath_string(".//nav//a[contains(@href,'/room/profile?')]/@href"),
36 validate.none_or_all(
37 validate.transform(lambda _url_profile: dict(parse_qsl(urlparse(_url_profile).query))),
38 validate.get("room_id"),
39 ),
40 ),
41 )
42 if not room_id:
43 return
44
45 log.debug(f"Room ID: {room_id}")
46
47 live_status, self.title = self.session.http.get(
48 "https://www.showroom-live.com/api/live/live_info",
49 params={
50 "room_id": room_id,
51 },
52 schema=validate.Schema(
53 validate.parse_json(),
54 {
55 "live_status": int,
56 "room_name": str,
57 },
58 validate.union_get(
59 "live_status",
60 "room_name",
61 ),
62 ),
63 )
64 if live_status != self.LIVE_STATUS:
65 log.info("This stream is currently offline")
66 return
67
68 url = self.session.http.get(
69 "https://www.showroom-live.com/api/live/streaming_url",
70 params={
71 "room_id": room_id,
72 "abr_available": 1,
73 },
74 schema=validate.Schema(
75 validate.parse_json(),
76 {"streaming_url_list": [{
77 "type": str,
78 "url": validate.url(),
79 }]},
80 validate.get("streaming_url_list"),
81 validate.filter(lambda p: p["type"] == "hls_all"),
82 validate.get((0, "url")),
83 ),
84 )
85
86 res = self.session.http.get(url, acceptable_status=(200, 403, 404))
87 if res.headers["Content-Type"] != "application/x-mpegURL":
88 log.error("This stream is restricted")
89 return
90
91 return HLSStream.parse_variant_playlist(self.session, url)
92
93
94 __plugin__ = Showroom
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/showroom.py b/src/streamlink/plugins/showroom.py
--- a/src/streamlink/plugins/showroom.py
+++ b/src/streamlink/plugins/showroom.py
@@ -84,7 +84,7 @@
)
res = self.session.http.get(url, acceptable_status=(200, 403, 404))
- if res.headers["Content-Type"] != "application/x-mpegURL":
+ if res.headers["Content-Type"] not in ("application/x-mpegURL", "application/vnd.apple.mpegurl"):
log.error("This stream is restricted")
return
| {"golden_diff": "diff --git a/src/streamlink/plugins/showroom.py b/src/streamlink/plugins/showroom.py\n--- a/src/streamlink/plugins/showroom.py\n+++ b/src/streamlink/plugins/showroom.py\n@@ -84,7 +84,7 @@\n )\n \n res = self.session.http.get(url, acceptable_status=(200, 403, 404))\n- if res.headers[\"Content-Type\"] != \"application/x-mpegURL\":\n+ if res.headers[\"Content-Type\"] not in (\"application/x-mpegURL\", \"application/vnd.apple.mpegurl\"):\n log.error(\"This stream is restricted\")\n return\n", "issue": "plugins.showroom: streamlink unable to download any live streams from showroom.com\n### Checklist\r\n\r\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\r\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\r\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\r\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\r\n\r\n### Streamlink version\r\n\r\nstreamlink 6.7.2\r\n\r\n### Description\r\n\r\nOn 2024.03.29, showroom.com made some changes to their site.\r\n\r\nWhen I try to use streamlink to record a showroom url that is online, for eg. https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n\r\n> streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o testing.ts\r\n\r\nthe expected behavior is that it should return this:\r\n\r\n> [cli][info] Found matching plugin showroom for URL https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n [utils.l10n][debug] Language code: en_US\r\n [cli][info] Available streams: 144p (worst), 360p (best)\r\n [cli][info] Opening stream: 360p (hls)\r\n [cli][info] Writing output to D:\\testing.ts\r\n [cli][debug] Checking file output\r\n [stream.hls][debug] Reloading playlist\r\n [cli][debug] Pre-buffering 8192 bytes\r\n [stream.hls][debug] First Sequence: 1; Last Sequence: 4\r\n [stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 3; End Sequence: None\r\n [stream.hls][debug] Adding segment 3 to queue\r\n [stream.hls][debug] Adding segment 4 to queue\r\n\r\nHowever, when I tried recording a showroom stream on 2024.03.29, I got an error stating that the stream is restricted.\r\n\r\n> L:\\>streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o sample.ts\r\n[session][debug] Loading plugin: showroom\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.10.6\r\n[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022\r\n[cli][debug] Streamlink: 6.7.2\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2022.6.15\r\n[cli][debug] exceptiongroup: 1.2.0\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.1\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.16.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.2\r\n[cli][debug] trio: 0.25.0\r\n[cli][debug] trio-websocket: 0.11.1\r\n[cli][debug] typing-extensions: 4.5.0\r\n[cli][debug] urllib3: 1.26.12\r\n[cli][debug] websocket-client: 1.4.0\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --output=sample.ts\r\n[cli][info] Found matching plugin showroom for URL 
https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n[plugins.showroom][debug] Room ID: 270117\r\n[plugins.showroom][error] This stream is restricted\r\nerror: No playable streams found on this URL: https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n\r\n- I tried downloading 12 different showroom live streams, but received the same error for all of them.\r\n- I tried changing my IP address using a VPN to a Japan/Hong Kong/Singapore/Germany/USA IP, but the same problem persist.\r\n- Next, I tried to locate the m3u8 address of the showroom stream using stream detector addon (Firefox) and use the .m3u8 address directly in streamlink:\r\n\r\n> streamlink --loglevel debug https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8 best -o testing.ts\r\n\r\nStreamlink was able to work as normal and download successfully:\r\n\r\n> D:\\>streamlink --loglevel debug https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8 best -o testing.ts\r\n\r\n> [session][debug] Loading plugin: hls\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.10.6\r\n[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022\r\n[cli][debug] Streamlink: 6.7.2\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2022.6.15\r\n[cli][debug] exceptiongroup: 1.2.0\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.1\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.16.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.2\r\n[cli][debug] trio: 0.25.0\r\n[cli][debug] trio-websocket: 0.11.1\r\n[cli][debug] typing-extensions: 4.5.0\r\n[cli][debug] urllib3: 1.26.12\r\n[cli][debug] websocket-client: 1.4.0\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --output=testing.ts\r\n[cli][info] Found matching plugin hls for URL https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8\r\n[plugins.hls][debug] URL=https://hls-css.live.showroom-live.com/live/66f871b4d33f747c738e608bfab98e9fef1aa2d402df70ee64ae4c3fcc1357b1_sourceabr.m3u8; params={}\r\n[utils.l10n][debug] Language code: en_US\r\n[cli][info] Available streams: live (worst, best)\r\n[cli][info] Opening stream: live (hls)\r\n[cli][info] Writing output to\r\nD:\\testing.ts\r\n[cli][debug] Checking file output\r\n[stream.hls][debug] Reloading playlist\r\n[cli][debug] Pre-buffering 8192 bytes\r\n[stream.hls][debug] First Sequence: 8904; Last Sequence: 8906\r\n[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 8904; End Sequence: None\r\n[stream.hls][debug] Adding segment 8904 to queue\r\n[stream.hls][debug] Adding segment 8905 to queue\r\n[stream.hls][debug] Adding segment 8906 to queue\r\n[stream.hls][debug] Writing segment 8904 to output\r\n[stream.hls][debug] Segment 8904 complete\r\n[cli][debug] Writing stream to output\r\n[download] Written 538.66 KiB to L:\\testing.ts (0s) [stream.hls][debug] Writing segment 8905 to output\r\n[stream.hls][debug] Segment 8905 complete\r\n[download] Written 1.17 MiB to L:\\testing.ts (0s) [stream.hls][debug] Writing segment 8906 to output\r\n[stream.hls][debug] Segment 8906 complete\r\n[download] Written 1.73 MiB to L:\\testing.ts (1s) [stream.hls][debug] Reloading playlist\r\n\r\nI was thinking that this might 
be a streamlink plugin issue and not Showroom disabling their API, because I tried testing with a Japanese GUI ffmpeg based showroom downloader, called \u30b7\u30e7\u30fc\u30eb\u30fc\u30e0\u9332\u753b\u3063\u3061 (https://www.skypower.xyz/showroom_rokugatch.html). I was able to download streams successfully by just entering the showroom url.\r\n\r\n\r\n\r\n\r\n\r\n### Debug log\r\n\r\n```text\r\nL:\\>streamlink --loglevel debug https://www.showroom-live.com/r/48_KOJIMA_AIKO best -o sample.ts\r\n[session][debug] Loading plugin: showroom\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.10.6\r\n[cli][debug] OpenSSL: OpenSSL 1.1.1n 15 Mar 2022\r\n[cli][debug] Streamlink: 6.7.2\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2022.6.15\r\n[cli][debug] exceptiongroup: 1.2.0\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.1\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.16.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.2\r\n[cli][debug] trio: 0.25.0\r\n[cli][debug] trio-websocket: 0.11.1\r\n[cli][debug] typing-extensions: 4.5.0\r\n[cli][debug] urllib3: 1.26.12\r\n[cli][debug] websocket-client: 1.4.0\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --output=sample.ts\r\n[cli][info] Found matching plugin showroom for URL https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n[plugins.showroom][debug] Room ID: 270117\r\n[plugins.showroom][error] This stream is restricted\r\nerror: No playable streams found on this URL: https://www.showroom-live.com/r/48_KOJIMA_AIKO\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\n$description Japanese live-streaming service used primarily by Japanese idols & voice actors and their fans.\n$url showroom-live.com\n$type live\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import parse_qsl, urlparse\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:\\w+\\.)?showroom-live\\.com/\",\n))\nclass Showroom(Plugin):\n LIVE_STATUS = 2\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.session.set_option(\"hls-playlist-reload-time\", \"segment\")\n\n def _get_streams(self):\n room_id = self.session.http.get(\n self.url,\n schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//nav//a[contains(@href,'/room/profile?')]/@href\"),\n validate.none_or_all(\n validate.transform(lambda _url_profile: dict(parse_qsl(urlparse(_url_profile).query))),\n validate.get(\"room_id\"),\n ),\n ),\n )\n if not room_id:\n return\n\n log.debug(f\"Room ID: {room_id}\")\n\n live_status, self.title = self.session.http.get(\n \"https://www.showroom-live.com/api/live/live_info\",\n params={\n \"room_id\": room_id,\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"live_status\": int,\n \"room_name\": str,\n },\n validate.union_get(\n \"live_status\",\n \"room_name\",\n ),\n ),\n )\n if live_status != self.LIVE_STATUS:\n log.info(\"This stream is currently offline\")\n return\n\n url = self.session.http.get(\n \"https://www.showroom-live.com/api/live/streaming_url\",\n params={\n \"room_id\": room_id,\n \"abr_available\": 1,\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\"streaming_url_list\": [{\n \"type\": str,\n \"url\": 
validate.url(),\n }]},\n validate.get(\"streaming_url_list\"),\n validate.filter(lambda p: p[\"type\"] == \"hls_all\"),\n validate.get((0, \"url\")),\n ),\n )\n\n res = self.session.http.get(url, acceptable_status=(200, 403, 404))\n if res.headers[\"Content-Type\"] != \"application/x-mpegURL\":\n log.error(\"This stream is restricted\")\n return\n\n return HLSStream.parse_variant_playlist(self.session, url)\n\n\n__plugin__ = Showroom\n", "path": "src/streamlink/plugins/showroom.py"}], "after_files": [{"content": "\"\"\"\n$description Japanese live-streaming service used primarily by Japanese idols & voice actors and their fans.\n$url showroom-live.com\n$type live\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import parse_qsl, urlparse\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:\\w+\\.)?showroom-live\\.com/\",\n))\nclass Showroom(Plugin):\n LIVE_STATUS = 2\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.session.set_option(\"hls-playlist-reload-time\", \"segment\")\n\n def _get_streams(self):\n room_id = self.session.http.get(\n self.url,\n schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//nav//a[contains(@href,'/room/profile?')]/@href\"),\n validate.none_or_all(\n validate.transform(lambda _url_profile: dict(parse_qsl(urlparse(_url_profile).query))),\n validate.get(\"room_id\"),\n ),\n ),\n )\n if not room_id:\n return\n\n log.debug(f\"Room ID: {room_id}\")\n\n live_status, self.title = self.session.http.get(\n \"https://www.showroom-live.com/api/live/live_info\",\n params={\n \"room_id\": room_id,\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"live_status\": int,\n \"room_name\": str,\n },\n validate.union_get(\n \"live_status\",\n \"room_name\",\n ),\n ),\n )\n if live_status != self.LIVE_STATUS:\n log.info(\"This stream is currently offline\")\n return\n\n url = self.session.http.get(\n \"https://www.showroom-live.com/api/live/streaming_url\",\n params={\n \"room_id\": room_id,\n \"abr_available\": 1,\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\"streaming_url_list\": [{\n \"type\": str,\n \"url\": validate.url(),\n }]},\n validate.get(\"streaming_url_list\"),\n validate.filter(lambda p: p[\"type\"] == \"hls_all\"),\n validate.get((0, \"url\")),\n ),\n )\n\n res = self.session.http.get(url, acceptable_status=(200, 403, 404))\n if res.headers[\"Content-Type\"] not in (\"application/x-mpegURL\", \"application/vnd.apple.mpegurl\"):\n log.error(\"This stream is restricted\")\n return\n\n return HLSStream.parse_variant_playlist(self.session, url)\n\n\n__plugin__ = Showroom\n", "path": "src/streamlink/plugins/showroom.py"}]} | 3,924 | 136 |
gh_patches_debug_8842 | rasdani/github-patches | git_diff | deepset-ai__haystack-1620 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Connection Error. Is Haystack running? JSON Issue with Docker Compose On Fresh Repo and Image Pulls
### Discussed in https://github.com/deepset-ai/haystack/discussions/1617
<sup>Originally posted by **sieu-tran**, October 19, 2021</sup>

Between October 18, 2021 and October 19, 2021, something changed and Docker now returns the error: "Connection Error. Is Haystack running?" when we start running it.
--- END ISSUE ---
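For illustration only, a minimal sketch of the kind of request the UI sends to the REST API, assuming the default pipeline names its retriever node `Retriever` (the query text and `top_k` values are placeholders). If the `params` keys do not match the pipeline's node names, the API can reject the request, which is one way the UI ends up showing the generic connection error.

```python
import requests

API_ENDPOINT = "http://localhost:8000"  # default from ui/utils.py

req = {
    "query": "What changed between the two image pulls?",  # placeholder query
    "params": {"Retriever": {"top_k": 5}, "Reader": {"top_k": 5}},
}
response_raw = requests.post(f"{API_ENDPOINT}/query", json=req).json()
print(response_raw)
```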
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ui/utils.py`
Content:
```
1 import os
2
3 import logging
4 import requests
5 import streamlit as st
6
7 API_ENDPOINT = os.getenv("API_ENDPOINT", "http://localhost:8000")
8 STATUS = "initialized"
9 DOC_REQUEST = "query"
10 DOC_FEEDBACK = "feedback"
11 DOC_UPLOAD = "file-upload"
12
13
14 def haystack_is_ready():
15 url = f"{API_ENDPOINT}/{STATUS}"
16 try:
17 if requests.get(url).json():
18 return True
19 except Exception as e:
20 logging.exception(e)
21 return False
22
23
24 @st.cache(show_spinner=False)
25 def retrieve_doc(query, filters=None, top_k_reader=5, top_k_retriever=5):
26 # Query Haystack API
27 url = f"{API_ENDPOINT}/{DOC_REQUEST}"
28 params = {"filters": filters, "ESRetriever": {"top_k": top_k_retriever}, "Reader": {"top_k": top_k_reader}}
29 req = {"query": query, "params": params}
30 response_raw = requests.post(url, json=req).json()
31
32 # Format response
33 result = []
34 answers = response_raw["answers"]
35 for i in range(len(answers)):
36 answer = answers[i]
37 answer_text = answer["answer"]
38 if answer_text:
39 result.append(
40 {
41 "context": "..." + answer["context"] + "...",
42 "answer": answer_text,
43 "source": answer["meta"]["name"],
44 "relevance": round(answer["score"] * 100, 2),
45 "document_id": answer["document_id"],
46 "offset_start_in_doc": answer["offsets_in_document"][0]["start"],
47 }
48 )
49 return result, response_raw
50
51
52 def feedback_doc(question, is_correct_answer, document_id, model_id, is_correct_document, answer, offset_start_in_doc):
53 # Feedback Haystack API
54 url = f"{API_ENDPOINT}/{DOC_FEEDBACK}"
55 #TODO adjust after Label refactoring
56 req = {
57 "question": question,
58 "is_correct_answer": is_correct_answer,
59 "document_id": document_id,
60 "model_id": model_id,
61 "is_correct_document": is_correct_document,
62 "answer": answer,
63 "offset_start_in_doc": offset_start_in_doc,
64 }
65 response_raw = requests.post(url, json=req).json()
66 return response_raw
67
68
69 def upload_doc(file):
70 url = f"{API_ENDPOINT}/{DOC_UPLOAD}"
71 files = [("files", file)]
72 response_raw = requests.post(url, files=files).json()
73 return response_raw
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ui/utils.py b/ui/utils.py
--- a/ui/utils.py
+++ b/ui/utils.py
@@ -25,7 +25,7 @@
def retrieve_doc(query, filters=None, top_k_reader=5, top_k_retriever=5):
# Query Haystack API
url = f"{API_ENDPOINT}/{DOC_REQUEST}"
- params = {"filters": filters, "ESRetriever": {"top_k": top_k_retriever}, "Reader": {"top_k": top_k_reader}}
+ params = {"filters": filters, "Retriever": {"top_k": top_k_retriever}, "Reader": {"top_k": top_k_reader}}
req = {"query": query, "params": params}
response_raw = requests.post(url, json=req).json()
| {"golden_diff": "diff --git a/ui/utils.py b/ui/utils.py\n--- a/ui/utils.py\n+++ b/ui/utils.py\n@@ -25,7 +25,7 @@\n def retrieve_doc(query, filters=None, top_k_reader=5, top_k_retriever=5):\n # Query Haystack API\n url = f\"{API_ENDPOINT}/{DOC_REQUEST}\"\n- params = {\"filters\": filters, \"ESRetriever\": {\"top_k\": top_k_retriever}, \"Reader\": {\"top_k\": top_k_reader}}\n+ params = {\"filters\": filters, \"Retriever\": {\"top_k\": top_k_retriever}, \"Reader\": {\"top_k\": top_k_reader}}\n req = {\"query\": query, \"params\": params}\n response_raw = requests.post(url, json=req).json()\n", "issue": "Connection Error. Is Haystack running? JSON Issue with Docker Compose On Fresh Repo and Image Pulls\n### Discussed in https://github.com/deepset-ai/haystack/discussions/1617\r\n\r\n<div type='discussions-op-text'>\r\n\r\n<sup>Originally posted by **sieu-tran** October 19, 2021</sup>\r\nBetween October 18, 2021 and October 19, 2021, something has changed and the docker returns the error: \"Connection Error. Is Haystack running?\" when we start running it.</div>\n", "before_files": [{"content": "import os\n\nimport logging\nimport requests\nimport streamlit as st\n\nAPI_ENDPOINT = os.getenv(\"API_ENDPOINT\", \"http://localhost:8000\")\nSTATUS = \"initialized\"\nDOC_REQUEST = \"query\"\nDOC_FEEDBACK = \"feedback\"\nDOC_UPLOAD = \"file-upload\"\n\n\ndef haystack_is_ready():\n url = f\"{API_ENDPOINT}/{STATUS}\"\n try:\n if requests.get(url).json():\n return True\n except Exception as e:\n logging.exception(e)\n return False\n\n\[email protected](show_spinner=False)\ndef retrieve_doc(query, filters=None, top_k_reader=5, top_k_retriever=5):\n # Query Haystack API\n url = f\"{API_ENDPOINT}/{DOC_REQUEST}\"\n params = {\"filters\": filters, \"ESRetriever\": {\"top_k\": top_k_retriever}, \"Reader\": {\"top_k\": top_k_reader}}\n req = {\"query\": query, \"params\": params}\n response_raw = requests.post(url, json=req).json()\n\n # Format response\n result = []\n answers = response_raw[\"answers\"]\n for i in range(len(answers)):\n answer = answers[i]\n answer_text = answer[\"answer\"]\n if answer_text:\n result.append(\n {\n \"context\": \"...\" + answer[\"context\"] + \"...\",\n \"answer\": answer_text,\n \"source\": answer[\"meta\"][\"name\"],\n \"relevance\": round(answer[\"score\"] * 100, 2),\n \"document_id\": answer[\"document_id\"],\n \"offset_start_in_doc\": answer[\"offsets_in_document\"][0][\"start\"],\n }\n )\n return result, response_raw\n\n\ndef feedback_doc(question, is_correct_answer, document_id, model_id, is_correct_document, answer, offset_start_in_doc):\n # Feedback Haystack API\n url = f\"{API_ENDPOINT}/{DOC_FEEDBACK}\"\n #TODO adjust after Label refactoring\n req = {\n \"question\": question,\n \"is_correct_answer\": is_correct_answer,\n \"document_id\": document_id,\n \"model_id\": model_id,\n \"is_correct_document\": is_correct_document,\n \"answer\": answer,\n \"offset_start_in_doc\": offset_start_in_doc,\n }\n response_raw = requests.post(url, json=req).json()\n return response_raw\n\n\ndef upload_doc(file):\n url = f\"{API_ENDPOINT}/{DOC_UPLOAD}\"\n files = [(\"files\", file)]\n response_raw = requests.post(url, files=files).json()\n return response_raw\n", "path": "ui/utils.py"}], "after_files": [{"content": "import os\n\nimport logging\nimport requests\nimport streamlit as st\n\nAPI_ENDPOINT = os.getenv(\"API_ENDPOINT\", \"http://localhost:8000\")\nSTATUS = \"initialized\"\nDOC_REQUEST = \"query\"\nDOC_FEEDBACK = \"feedback\"\nDOC_UPLOAD = \"file-upload\"\n\n\ndef 
haystack_is_ready():\n url = f\"{API_ENDPOINT}/{STATUS}\"\n try:\n if requests.get(url).json():\n return True\n except Exception as e:\n logging.exception(e)\n return False\n\n\[email protected](show_spinner=False)\ndef retrieve_doc(query, filters=None, top_k_reader=5, top_k_retriever=5):\n # Query Haystack API\n url = f\"{API_ENDPOINT}/{DOC_REQUEST}\"\n params = {\"filters\": filters, \"Retriever\": {\"top_k\": top_k_retriever}, \"Reader\": {\"top_k\": top_k_reader}}\n req = {\"query\": query, \"params\": params}\n response_raw = requests.post(url, json=req).json()\n\n # Format response\n result = []\n answers = response_raw[\"answers\"]\n for i in range(len(answers)):\n answer = answers[i]\n answer_text = answer[\"answer\"]\n if answer_text:\n result.append(\n {\n \"context\": \"...\" + answer[\"context\"] + \"...\",\n \"answer\": answer_text,\n \"source\": answer[\"meta\"][\"name\"],\n \"relevance\": round(answer[\"score\"] * 100, 2),\n \"document_id\": answer[\"document_id\"],\n \"offset_start_in_doc\": answer[\"offsets_in_document\"][0][\"start\"],\n }\n )\n return result, response_raw\n\n\ndef feedback_doc(question, is_correct_answer, document_id, model_id, is_correct_document, answer, offset_start_in_doc):\n # Feedback Haystack API\n url = f\"{API_ENDPOINT}/{DOC_FEEDBACK}\"\n #TODO adjust after Label refactoring\n req = {\n \"question\": question,\n \"is_correct_answer\": is_correct_answer,\n \"document_id\": document_id,\n \"model_id\": model_id,\n \"is_correct_document\": is_correct_document,\n \"answer\": answer,\n \"offset_start_in_doc\": offset_start_in_doc,\n }\n response_raw = requests.post(url, json=req).json()\n return response_raw\n\n\ndef upload_doc(file):\n url = f\"{API_ENDPOINT}/{DOC_UPLOAD}\"\n files = [(\"files\", file)]\n response_raw = requests.post(url, files=files).json()\n return response_raw\n", "path": "ui/utils.py"}]} | 1,070 | 173 |
gh_patches_debug_21955 | rasdani/github-patches | git_diff | netbox-community__netbox-15135 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tunnel Group is not selectable as column in VPN tunnels list
### Deployment Type
Self-hosted
### NetBox Version
v3.7.2
### Python Version
3.11
### Steps to Reproduce
1. Go to VPN - Tunnels
2. Click Configure Table
### Expected Behavior
There is the Tunnel Group column
### Observed Behavior
There is no Tunnel Group column that could be shown
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/vpn/tables/tunnels.py`
Content:
```
1 import django_tables2 as tables
2 from django.utils.translation import gettext_lazy as _
3 from django_tables2.utils import Accessor
4
5 from netbox.tables import NetBoxTable, columns
6 from tenancy.tables import TenancyColumnsMixin
7 from vpn.models import *
8
9 __all__ = (
10 'TunnelTable',
11 'TunnelGroupTable',
12 'TunnelTerminationTable',
13 )
14
15
16 class TunnelGroupTable(NetBoxTable):
17 name = tables.Column(
18 verbose_name=_('Name'),
19 linkify=True
20 )
21 tunnel_count = columns.LinkedCountColumn(
22 viewname='vpn:tunnel_list',
23 url_params={'group_id': 'pk'},
24 verbose_name=_('Tunnels')
25 )
26 tags = columns.TagColumn(
27 url_name='vpn:tunnelgroup_list'
28 )
29
30 class Meta(NetBoxTable.Meta):
31 model = TunnelGroup
32 fields = (
33 'pk', 'id', 'name', 'tunnel_count', 'description', 'slug', 'tags', 'actions', 'created', 'last_updated',
34 )
35 default_columns = ('pk', 'name', 'tunnel_count', 'description')
36
37
38 class TunnelTable(TenancyColumnsMixin, NetBoxTable):
39 name = tables.Column(
40 verbose_name=_('Name'),
41 linkify=True
42 )
43 status = columns.ChoiceFieldColumn(
44 verbose_name=_('Status')
45 )
46 ipsec_profile = tables.Column(
47 verbose_name=_('IPSec profile'),
48 linkify=True
49 )
50 terminations_count = columns.LinkedCountColumn(
51 accessor=Accessor('count_terminations'),
52 viewname='vpn:tunneltermination_list',
53 url_params={'tunnel_id': 'pk'},
54 verbose_name=_('Terminations')
55 )
56 comments = columns.MarkdownColumn(
57 verbose_name=_('Comments'),
58 )
59 tags = columns.TagColumn(
60 url_name='vpn:tunnel_list'
61 )
62
63 class Meta(NetBoxTable.Meta):
64 model = Tunnel
65 fields = (
66 'pk', 'id', 'name', 'status', 'encapsulation', 'ipsec_profile', 'tenant', 'tenant_group', 'tunnel_id',
67 'termination_count', 'description', 'comments', 'tags', 'created', 'last_updated',
68 )
69 default_columns = ('pk', 'name', 'status', 'encapsulation', 'tenant', 'terminations_count')
70
71
72 class TunnelTerminationTable(TenancyColumnsMixin, NetBoxTable):
73 tunnel = tables.Column(
74 verbose_name=_('Tunnel'),
75 linkify=True
76 )
77 role = columns.ChoiceFieldColumn(
78 verbose_name=_('Role')
79 )
80 termination_parent = tables.Column(
81 accessor='termination__parent_object',
82 linkify=True,
83 orderable=False,
84 verbose_name=_('Host')
85 )
86 termination = tables.Column(
87 verbose_name=_('Interface'),
88 linkify=True
89 )
90 ip_addresses = tables.ManyToManyColumn(
91 accessor=tables.A('termination__ip_addresses'),
92 orderable=False,
93 linkify_item=True,
94 verbose_name=_('IP Addresses')
95 )
96 outside_ip = tables.Column(
97 verbose_name=_('Outside IP'),
98 linkify=True
99 )
100 tags = columns.TagColumn(
101 url_name='vpn:tunneltermination_list'
102 )
103
104 class Meta(NetBoxTable.Meta):
105 model = TunnelTermination
106 fields = (
107 'pk', 'id', 'tunnel', 'role', 'termination_parent', 'termination', 'ip_addresses', 'outside_ip', 'tags',
108 'created', 'last_updated',
109 )
110 default_columns = (
111 'pk', 'id', 'tunnel', 'role', 'termination_parent', 'termination', 'ip_addresses', 'outside_ip',
112 )
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/vpn/tables/tunnels.py b/netbox/vpn/tables/tunnels.py
--- a/netbox/vpn/tables/tunnels.py
+++ b/netbox/vpn/tables/tunnels.py
@@ -40,6 +40,10 @@
verbose_name=_('Name'),
linkify=True
)
+ group = tables.Column(
+ verbose_name=_('Group'),
+ linkify=True
+ )
status = columns.ChoiceFieldColumn(
verbose_name=_('Status')
)
@@ -63,10 +67,10 @@
class Meta(NetBoxTable.Meta):
model = Tunnel
fields = (
- 'pk', 'id', 'name', 'status', 'encapsulation', 'ipsec_profile', 'tenant', 'tenant_group', 'tunnel_id',
- 'termination_count', 'description', 'comments', 'tags', 'created', 'last_updated',
+ 'pk', 'id', 'name', 'group', 'status', 'encapsulation', 'ipsec_profile', 'tenant', 'tenant_group',
+ 'tunnel_id', 'termination_count', 'description', 'comments', 'tags', 'created', 'last_updated',
)
- default_columns = ('pk', 'name', 'status', 'encapsulation', 'tenant', 'terminations_count')
+ default_columns = ('pk', 'name', 'group', 'status', 'encapsulation', 'tenant', 'terminations_count')
class TunnelTerminationTable(TenancyColumnsMixin, NetBoxTable):
| {"golden_diff": "diff --git a/netbox/vpn/tables/tunnels.py b/netbox/vpn/tables/tunnels.py\n--- a/netbox/vpn/tables/tunnels.py\n+++ b/netbox/vpn/tables/tunnels.py\n@@ -40,6 +40,10 @@\n verbose_name=_('Name'),\n linkify=True\n )\n+ group = tables.Column(\n+ verbose_name=_('Group'),\n+ linkify=True\n+ )\n status = columns.ChoiceFieldColumn(\n verbose_name=_('Status')\n )\n@@ -63,10 +67,10 @@\n class Meta(NetBoxTable.Meta):\n model = Tunnel\n fields = (\n- 'pk', 'id', 'name', 'status', 'encapsulation', 'ipsec_profile', 'tenant', 'tenant_group', 'tunnel_id',\n- 'termination_count', 'description', 'comments', 'tags', 'created', 'last_updated',\n+ 'pk', 'id', 'name', 'group', 'status', 'encapsulation', 'ipsec_profile', 'tenant', 'tenant_group',\n+ 'tunnel_id', 'termination_count', 'description', 'comments', 'tags', 'created', 'last_updated',\n )\n- default_columns = ('pk', 'name', 'status', 'encapsulation', 'tenant', 'terminations_count')\n+ default_columns = ('pk', 'name', 'group', 'status', 'encapsulation', 'tenant', 'terminations_count')\n \n \n class TunnelTerminationTable(TenancyColumnsMixin, NetBoxTable):\n", "issue": "Tunnel Group is not selectable as column in VPN tunnels list\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv3.7.2\n\n### Python Version\n\n3.11\n\n### Steps to Reproduce\n\n1. Go to VPN - Tunnels\r\n2. Click Configure Table\r\n\n\n### Expected Behavior\n\nThere is the Tunnel Group column\n\n### Observed Behavior\n\nThere is no Tunnel Group column that could be shown\n", "before_files": [{"content": "import django_tables2 as tables\nfrom django.utils.translation import gettext_lazy as _\nfrom django_tables2.utils import Accessor\n\nfrom netbox.tables import NetBoxTable, columns\nfrom tenancy.tables import TenancyColumnsMixin\nfrom vpn.models import *\n\n__all__ = (\n 'TunnelTable',\n 'TunnelGroupTable',\n 'TunnelTerminationTable',\n)\n\n\nclass TunnelGroupTable(NetBoxTable):\n name = tables.Column(\n verbose_name=_('Name'),\n linkify=True\n )\n tunnel_count = columns.LinkedCountColumn(\n viewname='vpn:tunnel_list',\n url_params={'group_id': 'pk'},\n verbose_name=_('Tunnels')\n )\n tags = columns.TagColumn(\n url_name='vpn:tunnelgroup_list'\n )\n\n class Meta(NetBoxTable.Meta):\n model = TunnelGroup\n fields = (\n 'pk', 'id', 'name', 'tunnel_count', 'description', 'slug', 'tags', 'actions', 'created', 'last_updated',\n )\n default_columns = ('pk', 'name', 'tunnel_count', 'description')\n\n\nclass TunnelTable(TenancyColumnsMixin, NetBoxTable):\n name = tables.Column(\n verbose_name=_('Name'),\n linkify=True\n )\n status = columns.ChoiceFieldColumn(\n verbose_name=_('Status')\n )\n ipsec_profile = tables.Column(\n verbose_name=_('IPSec profile'),\n linkify=True\n )\n terminations_count = columns.LinkedCountColumn(\n accessor=Accessor('count_terminations'),\n viewname='vpn:tunneltermination_list',\n url_params={'tunnel_id': 'pk'},\n verbose_name=_('Terminations')\n )\n comments = columns.MarkdownColumn(\n verbose_name=_('Comments'),\n )\n tags = columns.TagColumn(\n url_name='vpn:tunnel_list'\n )\n\n class Meta(NetBoxTable.Meta):\n model = Tunnel\n fields = (\n 'pk', 'id', 'name', 'status', 'encapsulation', 'ipsec_profile', 'tenant', 'tenant_group', 'tunnel_id',\n 'termination_count', 'description', 'comments', 'tags', 'created', 'last_updated',\n )\n default_columns = ('pk', 'name', 'status', 'encapsulation', 'tenant', 'terminations_count')\n\n\nclass TunnelTerminationTable(TenancyColumnsMixin, NetBoxTable):\n tunnel = tables.Column(\n verbose_name=_('Tunnel'),\n 
linkify=True\n )\n role = columns.ChoiceFieldColumn(\n verbose_name=_('Role')\n )\n termination_parent = tables.Column(\n accessor='termination__parent_object',\n linkify=True,\n orderable=False,\n verbose_name=_('Host')\n )\n termination = tables.Column(\n verbose_name=_('Interface'),\n linkify=True\n )\n ip_addresses = tables.ManyToManyColumn(\n accessor=tables.A('termination__ip_addresses'),\n orderable=False,\n linkify_item=True,\n verbose_name=_('IP Addresses')\n )\n outside_ip = tables.Column(\n verbose_name=_('Outside IP'),\n linkify=True\n )\n tags = columns.TagColumn(\n url_name='vpn:tunneltermination_list'\n )\n\n class Meta(NetBoxTable.Meta):\n model = TunnelTermination\n fields = (\n 'pk', 'id', 'tunnel', 'role', 'termination_parent', 'termination', 'ip_addresses', 'outside_ip', 'tags',\n 'created', 'last_updated',\n )\n default_columns = (\n 'pk', 'id', 'tunnel', 'role', 'termination_parent', 'termination', 'ip_addresses', 'outside_ip',\n )\n", "path": "netbox/vpn/tables/tunnels.py"}], "after_files": [{"content": "import django_tables2 as tables\nfrom django.utils.translation import gettext_lazy as _\nfrom django_tables2.utils import Accessor\n\nfrom netbox.tables import NetBoxTable, columns\nfrom tenancy.tables import TenancyColumnsMixin\nfrom vpn.models import *\n\n__all__ = (\n 'TunnelTable',\n 'TunnelGroupTable',\n 'TunnelTerminationTable',\n)\n\n\nclass TunnelGroupTable(NetBoxTable):\n name = tables.Column(\n verbose_name=_('Name'),\n linkify=True\n )\n tunnel_count = columns.LinkedCountColumn(\n viewname='vpn:tunnel_list',\n url_params={'group_id': 'pk'},\n verbose_name=_('Tunnels')\n )\n tags = columns.TagColumn(\n url_name='vpn:tunnelgroup_list'\n )\n\n class Meta(NetBoxTable.Meta):\n model = TunnelGroup\n fields = (\n 'pk', 'id', 'name', 'tunnel_count', 'description', 'slug', 'tags', 'actions', 'created', 'last_updated',\n )\n default_columns = ('pk', 'name', 'tunnel_count', 'description')\n\n\nclass TunnelTable(TenancyColumnsMixin, NetBoxTable):\n name = tables.Column(\n verbose_name=_('Name'),\n linkify=True\n )\n group = tables.Column(\n verbose_name=_('Group'),\n linkify=True\n )\n status = columns.ChoiceFieldColumn(\n verbose_name=_('Status')\n )\n ipsec_profile = tables.Column(\n verbose_name=_('IPSec profile'),\n linkify=True\n )\n terminations_count = columns.LinkedCountColumn(\n accessor=Accessor('count_terminations'),\n viewname='vpn:tunneltermination_list',\n url_params={'tunnel_id': 'pk'},\n verbose_name=_('Terminations')\n )\n comments = columns.MarkdownColumn(\n verbose_name=_('Comments'),\n )\n tags = columns.TagColumn(\n url_name='vpn:tunnel_list'\n )\n\n class Meta(NetBoxTable.Meta):\n model = Tunnel\n fields = (\n 'pk', 'id', 'name', 'group', 'status', 'encapsulation', 'ipsec_profile', 'tenant', 'tenant_group',\n 'tunnel_id', 'termination_count', 'description', 'comments', 'tags', 'created', 'last_updated',\n )\n default_columns = ('pk', 'name', 'group', 'status', 'encapsulation', 'tenant', 'terminations_count')\n\n\nclass TunnelTerminationTable(TenancyColumnsMixin, NetBoxTable):\n tunnel = tables.Column(\n verbose_name=_('Tunnel'),\n linkify=True\n )\n role = columns.ChoiceFieldColumn(\n verbose_name=_('Role')\n )\n termination_parent = tables.Column(\n accessor='termination__parent_object',\n linkify=True,\n orderable=False,\n verbose_name=_('Host')\n )\n termination = tables.Column(\n verbose_name=_('Interface'),\n linkify=True\n )\n ip_addresses = tables.ManyToManyColumn(\n accessor=tables.A('termination__ip_addresses'),\n orderable=False,\n 
linkify_item=True,\n verbose_name=_('IP Addresses')\n )\n outside_ip = tables.Column(\n verbose_name=_('Outside IP'),\n linkify=True\n )\n tags = columns.TagColumn(\n url_name='vpn:tunneltermination_list'\n )\n\n class Meta(NetBoxTable.Meta):\n model = TunnelTermination\n fields = (\n 'pk', 'id', 'tunnel', 'role', 'termination_parent', 'termination', 'ip_addresses', 'outside_ip', 'tags',\n 'created', 'last_updated',\n )\n default_columns = (\n 'pk', 'id', 'tunnel', 'role', 'termination_parent', 'termination', 'ip_addresses', 'outside_ip',\n )\n", "path": "netbox/vpn/tables/tunnels.py"}]} | 1,372 | 340 |
gh_patches_debug_2126 | rasdani/github-patches | git_diff | Mailu__Mailu-1196 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
refining the default password scheme
In https://github.com/Mailu/Mailu/pull/647 the default scheme was switched to BCRYPT. The argument given was:
> bcrypt is fast¹ and secure[...]
While it's actually secure, bcrypt is not fast. It is _designed_ to be slow - to consume a very high amount of CPU resources to make brute-force attacks expensive.
Unfortunately, such a password scheme is not a good choice for a stateless system where many auth checks are done in a very short time. The current bcrypt strength of 12 consumes ~0.7s of CPU time for each authentication.
This might work for low-volume sites but can easily max out multiple CPUs if there's some user traffic.
We should default to an alternative like PBKDF2 that ensures fast response times for auth requests.
--- END ISSUE ---
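To make the CPU-cost argument concrete, here is a rough benchmark sketch (not from the original report). It assumes a hashing library such as passlib with a bcrypt backend installed, and the absolute numbers vary by machine; it only demonstrates that bcrypt at strength 12 costs far more CPU per verification than PBKDF2 at its default rounds.

```python
import timeit
from passlib.hash import bcrypt, pbkdf2_sha256

bc_hash = bcrypt.using(rounds=12).hash("secret")  # BLF-CRYPT at the current strength
pb_hash = pbkdf2_sha256.hash("secret")            # PBKDF2 at passlib's default rounds

bc_t = timeit.timeit(lambda: bcrypt.verify("secret", bc_hash), number=3) / 3
pb_t = timeit.timeit(lambda: pbkdf2_sha256.verify("secret", pb_hash), number=3) / 3
print(f"bcrypt(12): {bc_t:.3f}s per verify, pbkdf2: {pb_t:.3f}s per verify")
```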
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/configuration.py`
Content:
```
1 import os
2
3 from socrate import system
4
5 DEFAULT_CONFIG = {
6 # Specific to the admin UI
7 'DOCKER_SOCKET': 'unix:///var/run/docker.sock',
8 'BABEL_DEFAULT_LOCALE': 'en',
9 'BABEL_DEFAULT_TIMEZONE': 'UTC',
10 'BOOTSTRAP_SERVE_LOCAL': True,
11 'RATELIMIT_STORAGE_URL': '',
12 'QUOTA_STORAGE_URL': '',
13 'DEBUG': False,
14 'DOMAIN_REGISTRATION': False,
15 'TEMPLATES_AUTO_RELOAD': True,
16 # Database settings
17 'DB_FLAVOR': None,
18 'DB_USER': 'mailu',
19 'DB_PW': None,
20 'DB_HOST': 'database',
21 'DB_NAME': 'mailu',
22 'SQLITE_DATABASE_FILE':'data/main.db',
23 'SQLALCHEMY_DATABASE_URI': 'sqlite:////data/main.db',
24 'SQLALCHEMY_TRACK_MODIFICATIONS': False,
25 # Statistics management
26 'INSTANCE_ID_PATH': '/data/instance',
27 'STATS_ENDPOINT': '0.{}.stats.mailu.io',
28 # Common configuration variables
29 'SECRET_KEY': 'changeMe',
30 'DOMAIN': 'mailu.io',
31 'HOSTNAMES': 'mail.mailu.io,alternative.mailu.io,yetanother.mailu.io',
32 'POSTMASTER': 'postmaster',
33 'TLS_FLAVOR': 'cert',
34 'AUTH_RATELIMIT': '10/minute;1000/hour',
35 'DISABLE_STATISTICS': False,
36 # Mail settings
37 'DMARC_RUA': None,
38 'DMARC_RUF': None,
39 'WELCOME': False,
40 'WELCOME_SUBJECT': 'Dummy welcome topic',
41 'WELCOME_BODY': 'Dummy welcome body',
42 'DKIM_SELECTOR': 'dkim',
43 'DKIM_PATH': '/dkim/{domain}.{selector}.key',
44 'DEFAULT_QUOTA': 1000000000,
45 # Web settings
46 'SITENAME': 'Mailu',
47 'WEBSITE': 'https://mailu.io',
48 'WEB_ADMIN': '/admin',
49 'WEB_WEBMAIL': '/webmail',
50 'WEBMAIL': 'none',
51 'RECAPTCHA_PUBLIC_KEY': '',
52 'RECAPTCHA_PRIVATE_KEY': '',
53 # Advanced settings
54 'PASSWORD_SCHEME': 'BLF-CRYPT',
55 'LOG_LEVEL': 'WARNING',
56 # Host settings
57 'HOST_IMAP': 'imap',
58 'HOST_LMTP': 'imap:2525',
59 'HOST_POP3': 'imap',
60 'HOST_SMTP': 'smtp',
61 'HOST_AUTHSMTP': 'smtp',
62 'HOST_ADMIN': 'admin',
63 'ANTISPAM': 'none',
64 'HOST_ANTISPAM': 'antispam:11334',
65 'WEBMAIL': 'none',
66 'HOST_WEBMAIL': 'webmail',
67 'HOST_WEBDAV': 'webdav:5232',
68 'HOST_REDIS': 'redis',
69 'HOST_FRONT': 'front',
70 'SUBNET': '192.168.203.0/24',
71 'POD_ADDRESS_RANGE': None
72 }
73
74 class ConfigManager(dict):
75 """ Naive configuration manager that uses environment only
76 """
77
78 DB_TEMPLATES = {
79 'sqlite': 'sqlite:////{SQLITE_DATABASE_FILE}',
80 'postgresql': 'postgresql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}',
81 'mysql': 'mysql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}'
82 }
83
84 def __init__(self):
85 self.config = dict()
86
87 def get_host_address(self, name):
88 # if MYSERVICE_ADDRESS is defined, use this
89 if '{}_ADDRESS'.format(name) in os.environ:
90 return os.environ.get('{}_ADDRESS'.format(name))
91 # otherwise use the host name and resolve it
92 return system.resolve_address(self.config['HOST_{}'.format(name)])
93
94 def resolve_hosts(self):
95 self.config["IMAP_ADDRESS"] = self.get_host_address("IMAP")
96 self.config["POP3_ADDRESS"] = self.get_host_address("POP3")
97 self.config["AUTHSMTP_ADDRESS"] = self.get_host_address("AUTHSMTP")
98 self.config["SMTP_ADDRESS"] = self.get_host_address("SMTP")
99 self.config["REDIS_ADDRESS"] = self.get_host_address("REDIS")
100 if self.config["WEBMAIL"] != "none":
101 self.config["WEBMAIL_ADDRESS"] = self.get_host_address("WEBMAIL")
102 if self.config["ANTISPAM"] != "none":
103 self.config["ANTISPAM_ADDRESS"] = self.get_host_address("ANTISPAM")
104
105 def __coerce_value(self, value):
106 if isinstance(value, str) and value.lower() in ('true','yes'):
107 return True
108 elif isinstance(value, str) and value.lower() in ('false', 'no'):
109 return False
110 return value
111
112 def init_app(self, app):
113 self.config.update(app.config)
114 # get environment variables
115 self.config.update({
116 key: self.__coerce_value(os.environ.get(key, value))
117 for key, value in DEFAULT_CONFIG.items()
118 })
119 self.resolve_hosts()
120
121 # automatically set the sqlalchemy string
122 if self.config['DB_FLAVOR']:
123 template = self.DB_TEMPLATES[self.config['DB_FLAVOR']]
124 self.config['SQLALCHEMY_DATABASE_URI'] = template.format(**self.config)
125
126 self.config['RATELIMIT_STORAGE_URL'] = 'redis://{0}/2'.format(self.config['REDIS_ADDRESS'])
127 self.config['QUOTA_STORAGE_URL'] = 'redis://{0}/1'.format(self.config['REDIS_ADDRESS'])
128 # update the app config itself
129 app.config = self
130
131 def setdefault(self, key, value):
132 if key not in self.config:
133 self.config[key] = value
134 return self.config[key]
135
136 def get(self, *args):
137 return self.config.get(*args)
138
139 def keys(self):
140 return self.config.keys()
141
142 def __getitem__(self, key):
143 return self.config.get(key)
144
145 def __setitem__(self, key, value):
146 self.config[key] = value
147
148 def __contains__(self, key):
149 return key in self.config
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/configuration.py b/core/admin/mailu/configuration.py
--- a/core/admin/mailu/configuration.py
+++ b/core/admin/mailu/configuration.py
@@ -51,7 +51,7 @@
'RECAPTCHA_PUBLIC_KEY': '',
'RECAPTCHA_PRIVATE_KEY': '',
# Advanced settings
- 'PASSWORD_SCHEME': 'BLF-CRYPT',
+ 'PASSWORD_SCHEME': 'PBKDF2',
'LOG_LEVEL': 'WARNING',
# Host settings
'HOST_IMAP': 'imap',
| {"golden_diff": "diff --git a/core/admin/mailu/configuration.py b/core/admin/mailu/configuration.py\n--- a/core/admin/mailu/configuration.py\n+++ b/core/admin/mailu/configuration.py\n@@ -51,7 +51,7 @@\n 'RECAPTCHA_PUBLIC_KEY': '',\n 'RECAPTCHA_PRIVATE_KEY': '',\n # Advanced settings\n- 'PASSWORD_SCHEME': 'BLF-CRYPT',\n+ 'PASSWORD_SCHEME': 'PBKDF2',\n 'LOG_LEVEL': 'WARNING',\n # Host settings\n 'HOST_IMAP': 'imap',\n", "issue": "refining the default password scheme\nIn https://github.com/Mailu/Mailu/pull/647 the default scheme was switched to BCRYPT. The argumentation was:\r\n\r\n> bcrypt is fast\u00b9 and secure[...]\r\n\r\nWhile it's actually secure, bcypt is not fast. It is _designed_ to be slow - to consume a very high amount of CPU resources to make brute-force attacs expensive.\r\n\r\nUnfortunately, such a password scheme is not a good choice for a stateless system where many auth checks are done in a very short time. The current bcrypt strength of 12 consumes ~0.7s of CPU time for each authentication.\r\n\r\nThis might work for low-volume sites but can easily max out multiple cpus if there's some user traffic.\r\n\r\nWe should default to something alternative like PBKDF2 that ensures fast response times for auth requests.\n", "before_files": [{"content": "import os\n\nfrom socrate import system\n\nDEFAULT_CONFIG = {\n # Specific to the admin UI\n 'DOCKER_SOCKET': 'unix:///var/run/docker.sock',\n 'BABEL_DEFAULT_LOCALE': 'en',\n 'BABEL_DEFAULT_TIMEZONE': 'UTC',\n 'BOOTSTRAP_SERVE_LOCAL': True,\n 'RATELIMIT_STORAGE_URL': '',\n 'QUOTA_STORAGE_URL': '',\n 'DEBUG': False,\n 'DOMAIN_REGISTRATION': False,\n 'TEMPLATES_AUTO_RELOAD': True,\n # Database settings\n 'DB_FLAVOR': None,\n 'DB_USER': 'mailu',\n 'DB_PW': None,\n 'DB_HOST': 'database',\n 'DB_NAME': 'mailu',\n 'SQLITE_DATABASE_FILE':'data/main.db',\n 'SQLALCHEMY_DATABASE_URI': 'sqlite:////data/main.db',\n 'SQLALCHEMY_TRACK_MODIFICATIONS': False,\n # Statistics management\n 'INSTANCE_ID_PATH': '/data/instance',\n 'STATS_ENDPOINT': '0.{}.stats.mailu.io',\n # Common configuration variables\n 'SECRET_KEY': 'changeMe',\n 'DOMAIN': 'mailu.io',\n 'HOSTNAMES': 'mail.mailu.io,alternative.mailu.io,yetanother.mailu.io',\n 'POSTMASTER': 'postmaster',\n 'TLS_FLAVOR': 'cert',\n 'AUTH_RATELIMIT': '10/minute;1000/hour',\n 'DISABLE_STATISTICS': False,\n # Mail settings\n 'DMARC_RUA': None,\n 'DMARC_RUF': None,\n 'WELCOME': False,\n 'WELCOME_SUBJECT': 'Dummy welcome topic',\n 'WELCOME_BODY': 'Dummy welcome body',\n 'DKIM_SELECTOR': 'dkim',\n 'DKIM_PATH': '/dkim/{domain}.{selector}.key',\n 'DEFAULT_QUOTA': 1000000000,\n # Web settings\n 'SITENAME': 'Mailu',\n 'WEBSITE': 'https://mailu.io',\n 'WEB_ADMIN': '/admin',\n 'WEB_WEBMAIL': '/webmail',\n 'WEBMAIL': 'none',\n 'RECAPTCHA_PUBLIC_KEY': '',\n 'RECAPTCHA_PRIVATE_KEY': '',\n # Advanced settings\n 'PASSWORD_SCHEME': 'BLF-CRYPT',\n 'LOG_LEVEL': 'WARNING',\n # Host settings\n 'HOST_IMAP': 'imap',\n 'HOST_LMTP': 'imap:2525',\n 'HOST_POP3': 'imap',\n 'HOST_SMTP': 'smtp',\n 'HOST_AUTHSMTP': 'smtp',\n 'HOST_ADMIN': 'admin',\n 'ANTISPAM': 'none',\n 'HOST_ANTISPAM': 'antispam:11334',\n 'WEBMAIL': 'none',\n 'HOST_WEBMAIL': 'webmail',\n 'HOST_WEBDAV': 'webdav:5232',\n 'HOST_REDIS': 'redis',\n 'HOST_FRONT': 'front',\n 'SUBNET': '192.168.203.0/24',\n 'POD_ADDRESS_RANGE': None\n}\n\nclass ConfigManager(dict):\n \"\"\" Naive configuration manager that uses environment only\n \"\"\"\n\n DB_TEMPLATES = {\n 'sqlite': 'sqlite:////{SQLITE_DATABASE_FILE}',\n 'postgresql': 
'postgresql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}',\n 'mysql': 'mysql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}'\n }\n\n def __init__(self):\n self.config = dict()\n\n def get_host_address(self, name):\n # if MYSERVICE_ADDRESS is defined, use this\n if '{}_ADDRESS'.format(name) in os.environ:\n return os.environ.get('{}_ADDRESS'.format(name))\n # otherwise use the host name and resolve it\n return system.resolve_address(self.config['HOST_{}'.format(name)])\n\n def resolve_hosts(self):\n self.config[\"IMAP_ADDRESS\"] = self.get_host_address(\"IMAP\")\n self.config[\"POP3_ADDRESS\"] = self.get_host_address(\"POP3\")\n self.config[\"AUTHSMTP_ADDRESS\"] = self.get_host_address(\"AUTHSMTP\")\n self.config[\"SMTP_ADDRESS\"] = self.get_host_address(\"SMTP\")\n self.config[\"REDIS_ADDRESS\"] = self.get_host_address(\"REDIS\")\n if self.config[\"WEBMAIL\"] != \"none\":\n self.config[\"WEBMAIL_ADDRESS\"] = self.get_host_address(\"WEBMAIL\")\n if self.config[\"ANTISPAM\"] != \"none\":\n self.config[\"ANTISPAM_ADDRESS\"] = self.get_host_address(\"ANTISPAM\")\n\n def __coerce_value(self, value):\n if isinstance(value, str) and value.lower() in ('true','yes'):\n return True\n elif isinstance(value, str) and value.lower() in ('false', 'no'):\n return False\n return value\n\n def init_app(self, app):\n self.config.update(app.config)\n # get environment variables\n self.config.update({\n key: self.__coerce_value(os.environ.get(key, value))\n for key, value in DEFAULT_CONFIG.items()\n })\n self.resolve_hosts()\n\n # automatically set the sqlalchemy string\n if self.config['DB_FLAVOR']:\n template = self.DB_TEMPLATES[self.config['DB_FLAVOR']]\n self.config['SQLALCHEMY_DATABASE_URI'] = template.format(**self.config)\n\n self.config['RATELIMIT_STORAGE_URL'] = 'redis://{0}/2'.format(self.config['REDIS_ADDRESS'])\n self.config['QUOTA_STORAGE_URL'] = 'redis://{0}/1'.format(self.config['REDIS_ADDRESS'])\n # update the app config itself\n app.config = self\n\n def setdefault(self, key, value):\n if key not in self.config:\n self.config[key] = value\n return self.config[key]\n\n def get(self, *args):\n return self.config.get(*args)\n\n def keys(self):\n return self.config.keys()\n\n def __getitem__(self, key):\n return self.config.get(key)\n\n def __setitem__(self, key, value):\n self.config[key] = value\n\n def __contains__(self, key):\n return key in self.config\n", "path": "core/admin/mailu/configuration.py"}], "after_files": [{"content": "import os\n\nfrom socrate import system\n\nDEFAULT_CONFIG = {\n # Specific to the admin UI\n 'DOCKER_SOCKET': 'unix:///var/run/docker.sock',\n 'BABEL_DEFAULT_LOCALE': 'en',\n 'BABEL_DEFAULT_TIMEZONE': 'UTC',\n 'BOOTSTRAP_SERVE_LOCAL': True,\n 'RATELIMIT_STORAGE_URL': '',\n 'QUOTA_STORAGE_URL': '',\n 'DEBUG': False,\n 'DOMAIN_REGISTRATION': False,\n 'TEMPLATES_AUTO_RELOAD': True,\n # Database settings\n 'DB_FLAVOR': None,\n 'DB_USER': 'mailu',\n 'DB_PW': None,\n 'DB_HOST': 'database',\n 'DB_NAME': 'mailu',\n 'SQLITE_DATABASE_FILE':'data/main.db',\n 'SQLALCHEMY_DATABASE_URI': 'sqlite:////data/main.db',\n 'SQLALCHEMY_TRACK_MODIFICATIONS': False,\n # Statistics management\n 'INSTANCE_ID_PATH': '/data/instance',\n 'STATS_ENDPOINT': '0.{}.stats.mailu.io',\n # Common configuration variables\n 'SECRET_KEY': 'changeMe',\n 'DOMAIN': 'mailu.io',\n 'HOSTNAMES': 'mail.mailu.io,alternative.mailu.io,yetanother.mailu.io',\n 'POSTMASTER': 'postmaster',\n 'TLS_FLAVOR': 'cert',\n 'AUTH_RATELIMIT': '10/minute;1000/hour',\n 'DISABLE_STATISTICS': False,\n # Mail settings\n 'DMARC_RUA': None,\n 
'DMARC_RUF': None,\n 'WELCOME': False,\n 'WELCOME_SUBJECT': 'Dummy welcome topic',\n 'WELCOME_BODY': 'Dummy welcome body',\n 'DKIM_SELECTOR': 'dkim',\n 'DKIM_PATH': '/dkim/{domain}.{selector}.key',\n 'DEFAULT_QUOTA': 1000000000,\n # Web settings\n 'SITENAME': 'Mailu',\n 'WEBSITE': 'https://mailu.io',\n 'WEB_ADMIN': '/admin',\n 'WEB_WEBMAIL': '/webmail',\n 'WEBMAIL': 'none',\n 'RECAPTCHA_PUBLIC_KEY': '',\n 'RECAPTCHA_PRIVATE_KEY': '',\n # Advanced settings\n 'PASSWORD_SCHEME': 'PBKDF2',\n 'LOG_LEVEL': 'WARNING',\n # Host settings\n 'HOST_IMAP': 'imap',\n 'HOST_LMTP': 'imap:2525',\n 'HOST_POP3': 'imap',\n 'HOST_SMTP': 'smtp',\n 'HOST_AUTHSMTP': 'smtp',\n 'HOST_ADMIN': 'admin',\n 'ANTISPAM': 'none',\n 'HOST_ANTISPAM': 'antispam:11334',\n 'WEBMAIL': 'none',\n 'HOST_WEBMAIL': 'webmail',\n 'HOST_WEBDAV': 'webdav:5232',\n 'HOST_REDIS': 'redis',\n 'HOST_FRONT': 'front',\n 'SUBNET': '192.168.203.0/24',\n 'POD_ADDRESS_RANGE': None\n}\n\nclass ConfigManager(dict):\n \"\"\" Naive configuration manager that uses environment only\n \"\"\"\n\n DB_TEMPLATES = {\n 'sqlite': 'sqlite:////{SQLITE_DATABASE_FILE}',\n 'postgresql': 'postgresql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}',\n 'mysql': 'mysql://{DB_USER}:{DB_PW}@{DB_HOST}/{DB_NAME}'\n }\n\n def __init__(self):\n self.config = dict()\n\n def get_host_address(self, name):\n # if MYSERVICE_ADDRESS is defined, use this\n if '{}_ADDRESS'.format(name) in os.environ:\n return os.environ.get('{}_ADDRESS'.format(name))\n # otherwise use the host name and resolve it\n return system.resolve_address(self.config['HOST_{}'.format(name)])\n\n def resolve_hosts(self):\n self.config[\"IMAP_ADDRESS\"] = self.get_host_address(\"IMAP\")\n self.config[\"POP3_ADDRESS\"] = self.get_host_address(\"POP3\")\n self.config[\"AUTHSMTP_ADDRESS\"] = self.get_host_address(\"AUTHSMTP\")\n self.config[\"SMTP_ADDRESS\"] = self.get_host_address(\"SMTP\")\n self.config[\"REDIS_ADDRESS\"] = self.get_host_address(\"REDIS\")\n if self.config[\"WEBMAIL\"] != \"none\":\n self.config[\"WEBMAIL_ADDRESS\"] = self.get_host_address(\"WEBMAIL\")\n if self.config[\"ANTISPAM\"] != \"none\":\n self.config[\"ANTISPAM_ADDRESS\"] = self.get_host_address(\"ANTISPAM\")\n\n def __coerce_value(self, value):\n if isinstance(value, str) and value.lower() in ('true','yes'):\n return True\n elif isinstance(value, str) and value.lower() in ('false', 'no'):\n return False\n return value\n\n def init_app(self, app):\n self.config.update(app.config)\n # get environment variables\n self.config.update({\n key: self.__coerce_value(os.environ.get(key, value))\n for key, value in DEFAULT_CONFIG.items()\n })\n self.resolve_hosts()\n\n # automatically set the sqlalchemy string\n if self.config['DB_FLAVOR']:\n template = self.DB_TEMPLATES[self.config['DB_FLAVOR']]\n self.config['SQLALCHEMY_DATABASE_URI'] = template.format(**self.config)\n\n self.config['RATELIMIT_STORAGE_URL'] = 'redis://{0}/2'.format(self.config['REDIS_ADDRESS'])\n self.config['QUOTA_STORAGE_URL'] = 'redis://{0}/1'.format(self.config['REDIS_ADDRESS'])\n # update the app config itself\n app.config = self\n\n def setdefault(self, key, value):\n if key not in self.config:\n self.config[key] = value\n return self.config[key]\n\n def get(self, *args):\n return self.config.get(*args)\n\n def keys(self):\n return self.config.keys()\n\n def __getitem__(self, key):\n return self.config.get(key)\n\n def __setitem__(self, key, value):\n self.config[key] = value\n\n def __contains__(self, key):\n return key in self.config\n", "path": 
"core/admin/mailu/configuration.py"}]} | 2,129 | 119 |
gh_patches_debug_31423 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-8374 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wendy's incorrect hours
ATP populates incorrect hours for Wendy's. It's populating 10:30 when [the Wendy's website](https://locations.wendys.com/united-states/oh/loveland/10601-loveland-madeira-rd) says 12:00 am. Reference [OSM Changeset 151311478](https://www.openstreetmap.org/changeset/151311478#map=6/41.675/-77.593). I assume it has to do with parsing multiple blocks of hours on the same page. In this case, the web page lists restaurant hours and drive-through hours, plus an extra blurb about breakfast ending at 10:30 am, which I assume is where the 10:30 in the OSM import comes from.
--- END ISSUE ---
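For reference, a hedged sketch of the hours payload involved: each hours block on the location page (restaurant, drive-through, breakfast) carries its own JSON in a `data-days` attribute, so a scraper has to keep the blocks separate rather than letting the 10:30 breakfast boundary leak into the main opening hours. The concrete values below are invented for illustration.

```python
import json

# Shape of one block's data-days attribute (values are made up)
data_days = '[{"day": "MONDAY", "intervals": [{"start": 630, "end": 2400}]}]'

for day in json.loads(data_days):
    for interval in day["intervals"]:
        open_time = str(interval["start"]).zfill(4)   # "0630"
        close_time = str(interval["end"]).zfill(4)    # "2400"
        print(day["day"].title()[:2], open_time, "-", close_time)
```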
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/wendys.py`
Content:
```
1 from scrapy.spiders import SitemapSpider
2
3 from locations.structured_data_spider import StructuredDataSpider
4
5
6 class WendysSpider(SitemapSpider, StructuredDataSpider):
7 name = "wendys"
8 item_attributes = {"brand": "Wendy's", "brand_wikidata": "Q550258"}
9 sitemap_urls = ["https://locations.wendys.com/sitemap.xml"]
10 sitemap_rules = [(r"https://locations.wendys.com/.+/\w\w/.+/.+", "parse_sd")]
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/wendys.py b/locations/spiders/wendys.py
--- a/locations/spiders/wendys.py
+++ b/locations/spiders/wendys.py
@@ -1,10 +1,48 @@
+import json
+
from scrapy.spiders import SitemapSpider
+from locations.hours import OpeningHours
from locations.structured_data_spider import StructuredDataSpider
class WendysSpider(SitemapSpider, StructuredDataSpider):
name = "wendys"
item_attributes = {"brand": "Wendy's", "brand_wikidata": "Q550258"}
+ wanted_types = ["FastFoodRestaurant"]
sitemap_urls = ["https://locations.wendys.com/sitemap.xml"]
sitemap_rules = [(r"https://locations.wendys.com/.+/\w\w/.+/.+", "parse_sd")]
+
+ def post_process_item(self, item, response, ld_data, **kwargs):
+ item["website"] = ld_data.get("url")
+
+ # Opening hours for the drive-through seem to get included with regular hours, so clean that up
+ opening_hours_divs = response.xpath('//div[@class="c-location-hours-details-wrapper js-location-hours"]')
+ item["opening_hours"] = self.clean_hours(opening_hours_divs[0])
+
+ if len(opening_hours_divs) > 1:
+ item["extras"]["opening_hours:drive_through"] = self.clean_hours(opening_hours_divs[1])
+
+ if breakfast_hours_divs := response.xpath(
+ '//div[@class="LocationInfo-breakfastInfo js-breakfastInfo"]/span[@class="c-location-hours-today js-location-hours"]'
+ ):
+ item["extras"]["breakfast"] = self.clean_hours(breakfast_hours_divs[0])
+
+ yield item
+
+ @staticmethod
+ def clean_hours(hours_div):
+ days = hours_div.xpath(".//@data-days").extract_first()
+ days = json.loads(days)
+
+ oh = OpeningHours()
+
+ for day in days:
+ for interval in day["intervals"]:
+ # These interval ranges are 24 hour times represented as integers, so they need to be converted to strings
+ open_time = str(interval["start"]).zfill(4)
+ close_time = str(interval["end"]).zfill(4)
+
+ oh.add_range(day=day["day"].title()[:2], open_time=open_time, close_time=close_time, time_format="%H%M")
+
+ return oh.as_opening_hours()
| {"golden_diff": "diff --git a/locations/spiders/wendys.py b/locations/spiders/wendys.py\n--- a/locations/spiders/wendys.py\n+++ b/locations/spiders/wendys.py\n@@ -1,10 +1,48 @@\n+import json\n+\n from scrapy.spiders import SitemapSpider\n \n+from locations.hours import OpeningHours\n from locations.structured_data_spider import StructuredDataSpider\n \n \n class WendysSpider(SitemapSpider, StructuredDataSpider):\n name = \"wendys\"\n item_attributes = {\"brand\": \"Wendy's\", \"brand_wikidata\": \"Q550258\"}\n+ wanted_types = [\"FastFoodRestaurant\"]\n sitemap_urls = [\"https://locations.wendys.com/sitemap.xml\"]\n sitemap_rules = [(r\"https://locations.wendys.com/.+/\\w\\w/.+/.+\", \"parse_sd\")]\n+\n+ def post_process_item(self, item, response, ld_data, **kwargs):\n+ item[\"website\"] = ld_data.get(\"url\")\n+\n+ # Opening hours for the drive-through seem to get included with regular hours, so clean that up\n+ opening_hours_divs = response.xpath('//div[@class=\"c-location-hours-details-wrapper js-location-hours\"]')\n+ item[\"opening_hours\"] = self.clean_hours(opening_hours_divs[0])\n+\n+ if len(opening_hours_divs) > 1:\n+ item[\"extras\"][\"opening_hours:drive_through\"] = self.clean_hours(opening_hours_divs[1])\n+\n+ if breakfast_hours_divs := response.xpath(\n+ '//div[@class=\"LocationInfo-breakfastInfo js-breakfastInfo\"]/span[@class=\"c-location-hours-today js-location-hours\"]'\n+ ):\n+ item[\"extras\"][\"breakfast\"] = self.clean_hours(breakfast_hours_divs[0])\n+\n+ yield item\n+\n+ @staticmethod\n+ def clean_hours(hours_div):\n+ days = hours_div.xpath(\".//@data-days\").extract_first()\n+ days = json.loads(days)\n+\n+ oh = OpeningHours()\n+\n+ for day in days:\n+ for interval in day[\"intervals\"]:\n+ # These interval ranges are 24 hour times represented as integers, so they need to be converted to strings\n+ open_time = str(interval[\"start\"]).zfill(4)\n+ close_time = str(interval[\"end\"]).zfill(4)\n+\n+ oh.add_range(day=day[\"day\"].title()[:2], open_time=open_time, close_time=close_time, time_format=\"%H%M\")\n+\n+ return oh.as_opening_hours()\n", "issue": "Wendy's incorrect hours\nATP populates incorrect hours for Wendy's. It's populating 10:30 when [the Wendy's website](https://locations.wendys.com/united-states/oh/loveland/10601-loveland-madeira-rd) says 12:00am. Reference [OSM Changeset 151311478](https://www.openstreetmap.org/changeset/151311478#map=6/41.675/-77.593). I assume it has to do with parsing multiple blocks of hours on the same page. 
In this case, the web page lists restaurant hours and drive through hours, with an extra blurb about breakfast ending at 10:30 am, which I assume is where the 10:30 is coming from in the OSM import.\n", "before_files": [{"content": "from scrapy.spiders import SitemapSpider\n\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass WendysSpider(SitemapSpider, StructuredDataSpider):\n name = \"wendys\"\n item_attributes = {\"brand\": \"Wendy's\", \"brand_wikidata\": \"Q550258\"}\n sitemap_urls = [\"https://locations.wendys.com/sitemap.xml\"]\n sitemap_rules = [(r\"https://locations.wendys.com/.+/\\w\\w/.+/.+\", \"parse_sd\")]\n", "path": "locations/spiders/wendys.py"}], "after_files": [{"content": "import json\n\nfrom scrapy.spiders import SitemapSpider\n\nfrom locations.hours import OpeningHours\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass WendysSpider(SitemapSpider, StructuredDataSpider):\n name = \"wendys\"\n item_attributes = {\"brand\": \"Wendy's\", \"brand_wikidata\": \"Q550258\"}\n wanted_types = [\"FastFoodRestaurant\"]\n sitemap_urls = [\"https://locations.wendys.com/sitemap.xml\"]\n sitemap_rules = [(r\"https://locations.wendys.com/.+/\\w\\w/.+/.+\", \"parse_sd\")]\n\n def post_process_item(self, item, response, ld_data, **kwargs):\n item[\"website\"] = ld_data.get(\"url\")\n\n # Opening hours for the drive-through seem to get included with regular hours, so clean that up\n opening_hours_divs = response.xpath('//div[@class=\"c-location-hours-details-wrapper js-location-hours\"]')\n item[\"opening_hours\"] = self.clean_hours(opening_hours_divs[0])\n\n if len(opening_hours_divs) > 1:\n item[\"extras\"][\"opening_hours:drive_through\"] = self.clean_hours(opening_hours_divs[1])\n\n if breakfast_hours_divs := response.xpath(\n '//div[@class=\"LocationInfo-breakfastInfo js-breakfastInfo\"]/span[@class=\"c-location-hours-today js-location-hours\"]'\n ):\n item[\"extras\"][\"breakfast\"] = self.clean_hours(breakfast_hours_divs[0])\n\n yield item\n\n @staticmethod\n def clean_hours(hours_div):\n days = hours_div.xpath(\".//@data-days\").extract_first()\n days = json.loads(days)\n\n oh = OpeningHours()\n\n for day in days:\n for interval in day[\"intervals\"]:\n # These interval ranges are 24 hour times represented as integers, so they need to be converted to strings\n open_time = str(interval[\"start\"]).zfill(4)\n close_time = str(interval[\"end\"]).zfill(4)\n\n oh.add_range(day=day[\"day\"].title()[:2], open_time=open_time, close_time=close_time, time_format=\"%H%M\")\n\n return oh.as_opening_hours()\n", "path": "locations/spiders/wendys.py"}]} | 591 | 568 |
gh_patches_debug_23365 | rasdani/github-patches | git_diff | biopython__biopython-4029 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add option to silence/capture stdout/stderr from Bio.PDB.PSEA
Spun from #3980.
Running `p-sea` through `Bio.PDB.PSEA` will always print output to standard out. We should refactor the code to use `subprocess.run` and add a keyword to the top-level `run_psea` function to control whether to print or capture the tool's output.
A mock implementation that adds some better error handling would be:
```import subprocess
def run_psea(pname, verbose=True):
cmd = [...]
try:
p = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True)
except subprocess.CalledProcessError as err:
print(f"Command {cmd} failed")
raise
if verbose:
print(p.stdout)
...
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Bio/PDB/PSEA.py`
Content:
```
1 # Copyright (C) 2006, Thomas Hamelryck ([email protected])
2 #
3 # This file is part of the Biopython distribution and governed by your
4 # choice of the "Biopython License Agreement" or the "BSD 3-Clause License".
5 # Please see the LICENSE file that should have been included as part of this
6 # package.
7
8 """Wrappers for PSEA, a program for secondary structure assignment.
9
10 See this citation for P-SEA, PMID: 9183534
11
12 Labesse G, Colloc'h N, Pothier J, Mornon J-P: P-SEA: a new efficient
13 assignment of secondary structure from C_alpha.
14 Comput Appl Biosci 1997 , 13:291-295
15
16 ftp://ftp.lmcp.jussieu.fr/pub/sincris/software/protein/p-sea/
17 """
18
19 import subprocess
20
21 from Bio.PDB.Polypeptide import is_aa
22
23
24 def run_psea(fname):
25 """Run PSEA and return output filename.
26
27 Note that this assumes the P-SEA binary is called "psea" and that it is
28 on the path.
29
30 Note that P-SEA will write an output file in the current directory using
31 the input filename with extension ".sea".
32
33 Note that P-SEA will write output to the terminal while run.
34 """
35 subprocess.call(["psea", fname])
36 last = fname.split("/")[-1]
37 base = last.split(".")[0]
38 return base + ".sea"
39
40
41 def psea(pname):
42 """Parse PSEA output file."""
43 fname = run_psea(pname)
44 start = 0
45 ss = ""
46 with open(fname) as fp:
47 for l in fp:
48 if l[0:6] == ">p-sea":
49 start = 1
50 continue
51 if not start:
52 continue
53 if l[0] == "\n":
54 break
55 ss = ss + l[0:-1]
56 return ss
57
58
59 def psea2HEC(pseq):
60 """Translate PSEA secondary structure string into HEC."""
61 seq = []
62 for ss in pseq:
63 if ss == "a":
64 n = "H"
65 elif ss == "b":
66 n = "E"
67 elif ss == "c":
68 n = "C"
69 seq.append(n)
70 return seq
71
72
73 def annotate(m, ss_seq):
74 """Apply secondary structure information to residues in model."""
75 c = m.get_list()[0]
76 all = c.get_list()
77 residues = []
78 # Now remove HOH etc.
79 for res in all:
80 if is_aa(res):
81 residues.append(res)
82 L = len(residues)
83 if not L == len(ss_seq):
84 raise ValueError("Length mismatch %i %i" % (L, len(ss_seq)))
85 for i in range(0, L):
86 residues[i].xtra["SS_PSEA"] = ss_seq[i]
87 # subprocess.call(["rm", fname])
88
89
90 class PSEA:
91 """Define PSEA class.
92
93 PSEA object is a wrapper to PSEA program for secondary structure assignment.
94 """
95
96 def __init__(self, model, filename):
97 """Initialize the class."""
98 ss_seq = psea(filename)
99 ss_seq = psea2HEC(ss_seq)
100 annotate(model, ss_seq)
101 self.ss_seq = ss_seq
102
103 def get_seq(self):
104 """Return secondary structure string."""
105 return self.ss_seq
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Bio/PDB/PSEA.py b/Bio/PDB/PSEA.py
--- a/Bio/PDB/PSEA.py
+++ b/Bio/PDB/PSEA.py
@@ -17,11 +17,12 @@
"""
import subprocess
+import os
from Bio.PDB.Polypeptide import is_aa
-def run_psea(fname):
+def run_psea(fname, verbose=False):
"""Run PSEA and return output filename.
Note that this assumes the P-SEA binary is called "psea" and that it is
@@ -30,12 +31,22 @@
Note that P-SEA will write an output file in the current directory using
the input filename with extension ".sea".
- Note that P-SEA will write output to the terminal while run.
+ Note that P-SEA will not write output to the terminal while run unless
+ verbose is set to True.
"""
- subprocess.call(["psea", fname])
last = fname.split("/")[-1]
base = last.split(".")[0]
- return base + ".sea"
+ cmd = ["psea", fname]
+
+ p = subprocess.run(cmd, capture_output=True, universal_newlines=True)
+
+ if verbose:
+ print(p.stdout)
+
+ if not p.stderr.strip() and os.path.exists(base + ".sea"):
+ return base + ".sea"
+ else:
+ raise RuntimeError(f"Error running p-sea: {p.stderr}")
def psea(pname):
| {"golden_diff": "diff --git a/Bio/PDB/PSEA.py b/Bio/PDB/PSEA.py\n--- a/Bio/PDB/PSEA.py\n+++ b/Bio/PDB/PSEA.py\n@@ -17,11 +17,12 @@\n \"\"\"\n \n import subprocess\n+import os\n \n from Bio.PDB.Polypeptide import is_aa\n \n \n-def run_psea(fname):\n+def run_psea(fname, verbose=False):\n \"\"\"Run PSEA and return output filename.\n \n Note that this assumes the P-SEA binary is called \"psea\" and that it is\n@@ -30,12 +31,22 @@\n Note that P-SEA will write an output file in the current directory using\n the input filename with extension \".sea\".\n \n- Note that P-SEA will write output to the terminal while run.\n+ Note that P-SEA will not write output to the terminal while run unless\n+ verbose is set to True.\n \"\"\"\n- subprocess.call([\"psea\", fname])\n last = fname.split(\"/\")[-1]\n base = last.split(\".\")[0]\n- return base + \".sea\"\n+ cmd = [\"psea\", fname]\n+\n+ p = subprocess.run(cmd, capture_output=True, universal_newlines=True)\n+\n+ if verbose:\n+ print(p.stdout)\n+\n+ if not p.stderr.strip() and os.path.exists(base + \".sea\"):\n+ return base + \".sea\"\n+ else:\n+ raise RuntimeError(f\"Error running p-sea: {p.stderr}\")\n \n \n def psea(pname):\n", "issue": "Add option to silence/capture stdout/stderr from Bio.PDB.PSEA\nSpun from #3980.\r\n\r\nRunning `p-sea` through `Bio.PDB.PSEA` will always print output to standard out. We should refactor the code to use `subprocess.run` and add a keyword to the top-level `run_psea` function to control whether to print or capture the tool's output.\r\n\r\nA mock implementation that adds some better error handling would be:\r\n\r\n```import subprocess\r\n\r\ndef run_psea(pname, verbose=True):\r\n cmd = [...]\r\n try:\r\n p = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True)\r\n except subprocess.CalledProcessError as err:\r\n print(f\"Command {cmd} failed\")\r\n raise\r\n\r\n if verbose:\r\n print(p.stdout)\r\n ...\r\n```\n", "before_files": [{"content": "# Copyright (C) 2006, Thomas Hamelryck ([email protected])\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\n\"\"\"Wrappers for PSEA, a program for secondary structure assignment.\n\nSee this citation for P-SEA, PMID: 9183534\n\nLabesse G, Colloc'h N, Pothier J, Mornon J-P: P-SEA: a new efficient\nassignment of secondary structure from C_alpha.\nComput Appl Biosci 1997 , 13:291-295\n\nftp://ftp.lmcp.jussieu.fr/pub/sincris/software/protein/p-sea/\n\"\"\"\n\nimport subprocess\n\nfrom Bio.PDB.Polypeptide import is_aa\n\n\ndef run_psea(fname):\n \"\"\"Run PSEA and return output filename.\n\n Note that this assumes the P-SEA binary is called \"psea\" and that it is\n on the path.\n\n Note that P-SEA will write an output file in the current directory using\n the input filename with extension \".sea\".\n\n Note that P-SEA will write output to the terminal while run.\n \"\"\"\n subprocess.call([\"psea\", fname])\n last = fname.split(\"/\")[-1]\n base = last.split(\".\")[0]\n return base + \".sea\"\n\n\ndef psea(pname):\n \"\"\"Parse PSEA output file.\"\"\"\n fname = run_psea(pname)\n start = 0\n ss = \"\"\n with open(fname) as fp:\n for l in fp:\n if l[0:6] == \">p-sea\":\n start = 1\n continue\n if not start:\n continue\n if l[0] == \"\\n\":\n break\n ss = ss + l[0:-1]\n return ss\n\n\ndef psea2HEC(pseq):\n \"\"\"Translate PSEA secondary structure string into 
HEC.\"\"\"\n seq = []\n for ss in pseq:\n if ss == \"a\":\n n = \"H\"\n elif ss == \"b\":\n n = \"E\"\n elif ss == \"c\":\n n = \"C\"\n seq.append(n)\n return seq\n\n\ndef annotate(m, ss_seq):\n \"\"\"Apply secondary structure information to residues in model.\"\"\"\n c = m.get_list()[0]\n all = c.get_list()\n residues = []\n # Now remove HOH etc.\n for res in all:\n if is_aa(res):\n residues.append(res)\n L = len(residues)\n if not L == len(ss_seq):\n raise ValueError(\"Length mismatch %i %i\" % (L, len(ss_seq)))\n for i in range(0, L):\n residues[i].xtra[\"SS_PSEA\"] = ss_seq[i]\n # subprocess.call([\"rm\", fname])\n\n\nclass PSEA:\n \"\"\"Define PSEA class.\n\n PSEA object is a wrapper to PSEA program for secondary structure assignment.\n \"\"\"\n\n def __init__(self, model, filename):\n \"\"\"Initialize the class.\"\"\"\n ss_seq = psea(filename)\n ss_seq = psea2HEC(ss_seq)\n annotate(model, ss_seq)\n self.ss_seq = ss_seq\n\n def get_seq(self):\n \"\"\"Return secondary structure string.\"\"\"\n return self.ss_seq\n", "path": "Bio/PDB/PSEA.py"}], "after_files": [{"content": "# Copyright (C) 2006, Thomas Hamelryck ([email protected])\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\n\"\"\"Wrappers for PSEA, a program for secondary structure assignment.\n\nSee this citation for P-SEA, PMID: 9183534\n\nLabesse G, Colloc'h N, Pothier J, Mornon J-P: P-SEA: a new efficient\nassignment of secondary structure from C_alpha.\nComput Appl Biosci 1997 , 13:291-295\n\nftp://ftp.lmcp.jussieu.fr/pub/sincris/software/protein/p-sea/\n\"\"\"\n\nimport subprocess\nimport os\n\nfrom Bio.PDB.Polypeptide import is_aa\n\n\ndef run_psea(fname, verbose=False):\n \"\"\"Run PSEA and return output filename.\n\n Note that this assumes the P-SEA binary is called \"psea\" and that it is\n on the path.\n\n Note that P-SEA will write an output file in the current directory using\n the input filename with extension \".sea\".\n\n Note that P-SEA will not write output to the terminal while run unless\n verbose is set to True.\n \"\"\"\n last = fname.split(\"/\")[-1]\n base = last.split(\".\")[0]\n cmd = [\"psea\", fname]\n\n p = subprocess.run(cmd, capture_output=True, universal_newlines=True)\n\n if verbose:\n print(p.stdout)\n\n if not p.stderr.strip() and os.path.exists(base + \".sea\"):\n return base + \".sea\"\n else:\n raise RuntimeError(f\"Error running p-sea: {p.stderr}\")\n\n\ndef psea(pname):\n \"\"\"Parse PSEA output file.\"\"\"\n fname = run_psea(pname)\n start = 0\n ss = \"\"\n with open(fname) as fp:\n for l in fp:\n if l[0:6] == \">p-sea\":\n start = 1\n continue\n if not start:\n continue\n if l[0] == \"\\n\":\n break\n ss = ss + l[0:-1]\n return ss\n\n\ndef psea2HEC(pseq):\n \"\"\"Translate PSEA secondary structure string into HEC.\"\"\"\n seq = []\n for ss in pseq:\n if ss == \"a\":\n n = \"H\"\n elif ss == \"b\":\n n = \"E\"\n elif ss == \"c\":\n n = \"C\"\n seq.append(n)\n return seq\n\n\ndef annotate(m, ss_seq):\n \"\"\"Apply secondary structure information to residues in model.\"\"\"\n c = m.get_list()[0]\n all = c.get_list()\n residues = []\n # Now remove HOH etc.\n for res in all:\n if is_aa(res):\n residues.append(res)\n L = len(residues)\n if not L == len(ss_seq):\n raise ValueError(\"Length mismatch %i %i\" % (L, len(ss_seq)))\n for i in range(0, L):\n residues[i].xtra[\"SS_PSEA\"] = ss_seq[i]\n # 
subprocess.call([\"rm\", fname])\n\n\nclass PSEA:\n \"\"\"Define PSEA class.\n\n PSEA object is a wrapper to PSEA program for secondary structure assignment.\n \"\"\"\n\n def __init__(self, model, filename):\n \"\"\"Initialize the class.\"\"\"\n ss_seq = psea(filename)\n ss_seq = psea2HEC(ss_seq)\n annotate(model, ss_seq)\n self.ss_seq = ss_seq\n\n def get_seq(self):\n \"\"\"Return secondary structure string.\"\"\"\n return self.ss_seq\n", "path": "Bio/PDB/PSEA.py"}]} | 1,415 | 338 |
gh_patches_debug_11339 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5679 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Terraform azurerm_windows_function_app resource - CKV_SECRET_6 false positive for active_directory_v2 setting
**Describe the issue**
When adding azure function app with identity provider active directory v2, checkov reports false positive
secret is not exposed but checkov reports :
| 0 | CKV_SECRET_6 | xxxxxxxxx| 0a69b97b56bf2230f607111888418a59f1891c92 | Base64 High Entropy String | https://docs.paloaltonetworks.com/content/techdocs/en_US/prisma/prisma-cloud/prisma-cloud-code-security-policy-reference/secrets-policies/secrets-policy-index/git-secrets-6.html |
with resource 0a69b97b56bf2230f607111888418a59f1891c92
Because from resource id is not clear which resource is the culprit the only possible one is client_secret_setting_name which is set to
"MICROSOFT_PROVIDER_AUTHENTICATION_SECRET" and not exposed secret.
**Examples**
snippet of resource azurerm_windows_function_app :
active_directory_v2 {
client_id = local.managed_identity_client_id
client_secret_setting_name = "MICROSOFT_PROVIDER_AUTHENTICATION_SECRET"
tenant_auth_endpoint = local.identity_provider.tenant_auth_endpoint
www_authentication_disabled = false
}
**Version (please complete the following information):**
- Checkov Version 2.4.50
**Additional context**
Add any other context about the problem here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/secrets/plugins/detector_utils.py`
Content:
```
1 from __future__ import annotations
2
3 import json
4 import re
5 from re import Pattern
6 from typing import Any, TYPE_CHECKING, Optional
7
8 from detect_secrets.util.filetype import FileType
9 from detect_secrets.plugins.keyword import DENYLIST
10 from detect_secrets.plugins.keyword import AFFIX_REGEX
11 from detect_secrets.plugins.keyword import CLOSING
12 from detect_secrets.plugins.keyword import OPTIONAL_WHITESPACE
13 from detect_secrets.plugins.keyword import QUOTE
14 from detect_secrets.plugins.keyword import SECRET
15
16 from checkov.secrets.parsers.terraform.multiline_parser import terraform_multiline_parser
17 from checkov.secrets.parsers.terraform.single_line_parser import terraform_single_line_parser
18 from checkov.secrets.parsers.yaml.multiline_parser import yml_multiline_parser
19 from checkov.secrets.parsers.json.multiline_parser import json_multiline_parser
20
21 if TYPE_CHECKING:
22 from checkov.secrets.parsers.multiline_parser import BaseMultiLineParser
23 from detect_secrets.core.potential_secret import PotentialSecret
24 from detect_secrets.util.code_snippet import CodeSnippet
25 from detect_secrets.plugins.base import BasePlugin
26
27 MAX_KEYWORD_LIMIT = 500
28
29 DENY_LIST_REGEX = r'|'.join(DENYLIST)
30 # Support for suffix after keyword i.e. password_secure = "value"
31 DENY_LIST_REGEX2 = r'({denylist}){suffix}'.format(
32 denylist=DENY_LIST_REGEX,
33 suffix=AFFIX_REGEX,
34 )
35
36 KEY = r'{words}({closing})?'.format(
37 words=AFFIX_REGEX,
38 closing=CLOSING,
39 )
40
41 FOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX = re.compile(
42 # e.g. var: MY_PASSWORD_123
43 r'{whitespace}({key})?:{whitespace}({quote}?){words}{denylist}({closing})?(\3)'.format(
44 key=KEY,
45 whitespace=OPTIONAL_WHITESPACE,
46 quote=QUOTE,
47 words=AFFIX_REGEX,
48 denylist=DENY_LIST_REGEX2,
49 closing=CLOSING,
50 ),
51 flags=re.IGNORECASE,
52 )
53
54 QUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX = re.compile(
55 # e.g. var: MY_PASSWORD_123
56 r'{whitespace}"({key})?":{whitespace}("?){words}{denylist}({closing})?(\3)'.format(
57 key=KEY,
58 whitespace=OPTIONAL_WHITESPACE,
59 words=AFFIX_REGEX,
60 denylist=DENY_LIST_REGEX2,
61 closing=CLOSING,
62 ),
63 flags=re.IGNORECASE,
64 )
65
66 FOLLOWED_BY_COLON_VALUE_SECRET_REGEX = re.compile(
67 # e.g. var: Zmlyc3Rfc2VjcmV0X2hlcmVfd2hvYV9tdWx0aWxsaW5lX3Nob3VsZF93b3JrXzE==
68 r'{whitespace}({key})?:{whitespace}({quote}?)({secret})(\3)'.format(
69 key=KEY,
70 whitespace=OPTIONAL_WHITESPACE,
71 quote=QUOTE,
72 secret=SECRET,
73 ),
74 flags=re.IGNORECASE,
75 )
76
77 QUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_SECRET_REGEX = re.compile(
78 # e.g. var: Zmlyc3Rfc2VjcmV0X2hlcmVfd2hvYV9tdWx0aWxsaW5lX3Nob3VsZF93b3JrXzE==
79 r'{whitespace}"({key})?":{whitespace}("?)({secret})(\3)'.format(
80 key=KEY,
81 whitespace=OPTIONAL_WHITESPACE,
82 secret=SECRET,
83 ),
84 flags=re.IGNORECASE,
85 )
86
87 FOLLOWED_BY_EQUAL_VALUE_KEYWORD_REGEX = re.compile(
88 # e.g. var = MY_PASSWORD_123
89 r'{whitespace}({key})?={whitespace}({quote}?){words}{denylist}({closing})?(\3)'.format(
90 key=KEY,
91 whitespace=OPTIONAL_WHITESPACE,
92 quote=QUOTE,
93 words=AFFIX_REGEX,
94 denylist=DENY_LIST_REGEX2,
95 closing=CLOSING,
96 ),
97 flags=re.IGNORECASE,
98 )
99
100 FOLLOWED_BY_EQUAL_VALUE_SECRET_REGEX = re.compile(
101 # e.g. var = Zmlyc3Rfc2VjcmV0X2hlcmVfd2hvYV9tdWx0aWxsaW5lX3Nob3VsZF93b3JrXzE==
102 r'{whitespace}({key})?={whitespace}({quote}?)({secret})(\3)'.format(
103 key=KEY,
104 whitespace=OPTIONAL_WHITESPACE,
105 quote=QUOTE,
106 secret=SECRET,
107 ),
108 flags=re.IGNORECASE,
109 )
110
111 ALLOW_LIST = ('secretsmanager', "secretName", "secret_name", "creation_token") # can add more keys like that
112 ALLOW_LIST_REGEX = r'|'.join(ALLOW_LIST)
113 # Support for suffix of function name i.e "secretsmanager:GetSecretValue"
114 CAMEL_CASE_NAMES = r'[A-Z]([A-Z0-9]*[a-z][a-z0-9]*[A-Z]|[a-z0-9]*[A-Z][A-Z0-9]*[a-z])[A-Za-z0-9]*'
115 FUNCTION_CALL_AFTER_KEYWORD_REGEX = re.compile(r'({allowlist})\s*(:|=)\s*{suffix}'.format(
116 allowlist=ALLOW_LIST_REGEX,
117 suffix=AFFIX_REGEX,
118 ))
119
120 # if the current regex is not enough, can add more regexes to check
121
122 YML_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP = {
123 FOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX: 4,
124 }
125
126 YML_PAIR_VALUE_SECRET_REGEX_TO_GROUP = {
127 FOLLOWED_BY_COLON_VALUE_SECRET_REGEX: 4,
128 }
129
130 JSON_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP = {
131 QUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX: 4,
132 }
133
134 JSON_PAIR_VALUE_SECRET_REGEX_TO_GROUP = {
135 QUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_SECRET_REGEX: 4,
136 }
137
138 TERRAFORM_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP = {
139 FOLLOWED_BY_EQUAL_VALUE_KEYWORD_REGEX: 4,
140 }
141
142 TERRAFORM_PAIR_VALUE_SECRET_REGEX_TO_GROUP = {
143 FOLLOWED_BY_EQUAL_VALUE_SECRET_REGEX: 4,
144 }
145
146 REGEX_VALUE_KEYWORD_BY_FILETYPE = {
147 FileType.YAML: YML_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP,
148 FileType.JSON: JSON_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP,
149 FileType.TERRAFORM: TERRAFORM_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP,
150 }
151
152 REGEX_VALUE_SECRET_BY_FILETYPE = {
153 FileType.YAML: YML_PAIR_VALUE_SECRET_REGEX_TO_GROUP,
154 FileType.JSON: JSON_PAIR_VALUE_SECRET_REGEX_TO_GROUP,
155 FileType.TERRAFORM: TERRAFORM_PAIR_VALUE_SECRET_REGEX_TO_GROUP,
156 }
157
158 SINGLE_LINE_PARSER = {
159 FileType.TERRAFORM: terraform_single_line_parser,
160 }
161
162 MULTILINE_PARSERS = {
163 FileType.YAML: (
164 (FileType.YAML, yml_multiline_parser),
165 ),
166 FileType.JSON: (
167 (FileType.JSON, json_multiline_parser),
168 ),
169 FileType.TERRAFORM: (
170 (FileType.TERRAFORM, terraform_multiline_parser),
171 (FileType.JSON, json_multiline_parser),
172 (FileType.YAML, yml_multiline_parser),
173 ),
174 }
175
176
177 def remove_fp_secrets_in_keys(detected_secrets: set[PotentialSecret], line: str, is_code_file: bool = False) -> None:
178 formatted_line = line.replace('"', '').replace("'", '')
179 secrets_to_remove = set()
180 for detected_secret in detected_secrets:
181 if not detected_secret.secret_value:
182 continue
183 # Found keyword prefix as potential secret
184 if formatted_line.startswith(detected_secret.secret_value):
185 secrets_to_remove.add(detected_secret)
186 # found a function name at the end of the line
187 if formatted_line and FUNCTION_CALL_AFTER_KEYWORD_REGEX.search(formatted_line):
188 secrets_to_remove.add(detected_secret)
189 # secret value is substring of keywork
190 if is_code_file and FOLLOWED_BY_EQUAL_VALUE_KEYWORD_REGEX.search(formatted_line):
191 key, value = line.split("=", 1)
192 if detected_secret.secret_value in key and detected_secret.secret_value in value:
193 secrets_to_remove.add(detected_secret)
194 detected_secrets -= secrets_to_remove
195
196
197 def format_reducing_noise_secret(string: str) -> str:
198 return json.dumps(string)
199
200
201 def extract_from_string(pattern: dict[Pattern[str], int] | None, string: str) -> set[str]:
202 matches: set[str] = set()
203 if not pattern:
204 return matches
205 for value_regex, group_number in pattern.items():
206 match = value_regex.search(string)
207 if match:
208 matches |= {match.group(group_number).rstrip('\n')}
209 return matches
210
211
212 def detect_secret(
213 scanners: tuple[BasePlugin, ...],
214 filename: str,
215 line: str,
216 line_number: int = 0,
217 is_multiline: Optional[bool] = None,
218 **kwargs: Any,
219 ) -> set[PotentialSecret]:
220 for scanner in scanners:
221 matches = scanner.analyze_line(filename, line, line_number, **kwargs)
222 if matches:
223 if is_multiline:
224 mark_set_multiline(matches)
225 return matches
226 return set()
227
228
229 def analyze_multiline_keyword_combinator(
230 filename: str,
231 scanners: tuple[BasePlugin, ...],
232 multiline_parser: BaseMultiLineParser,
233 line_number: int,
234 context: CodeSnippet | None = None,
235 raw_context: CodeSnippet | None = None,
236 value_pattern: dict[Pattern[str], int] | None = None,
237 secret_pattern: dict[Pattern[str], int] | None = None,
238 is_added: bool = False,
239 is_removed: bool = False,
240 **kwargs: Any,
241 ) -> set[PotentialSecret]:
242 secrets: set[PotentialSecret] = set()
243 if context is None or raw_context is None:
244 return secrets
245 value_secrets = extract_from_string(pattern=secret_pattern, string=context.target_line)
246 for possible_secret in value_secrets:
247 secret_adjust = format_reducing_noise_secret(possible_secret)
248
249 potential_secrets = detect_secret(
250 scanners=scanners,
251 filename=filename,
252 line=secret_adjust,
253 line_number=line_number,
254 is_added=is_added,
255 is_removed=is_removed,
256 is_multiline=True, # always true because we check here for multiline
257 kwargs=kwargs
258 )
259
260 if potential_secrets:
261 possible_keywords: set[str] = set()
262 backwards_range = range(context.target_index - 1, -1, -1)
263 forward_range = range(context.target_index + 1, len(context.lines))
264
265 possible_keywords |= multiline_parser.get_lines_from_same_object(
266 search_range=forward_range,
267 context=context,
268 raw_context=raw_context,
269 line_length_limit=MAX_KEYWORD_LIMIT)
270 possible_keywords |= multiline_parser.get_lines_from_same_object(
271 search_range=backwards_range,
272 context=context,
273 raw_context=raw_context,
274 line_length_limit=MAX_KEYWORD_LIMIT)
275
276 for other_value in possible_keywords:
277 if extract_from_string(pattern=value_pattern, string=other_value):
278 secrets |= potential_secrets
279 break
280 return secrets
281
282
283 def mark_set_multiline(secrets: set[PotentialSecret]) -> None:
284 for sec in secrets:
285 sec.is_multiline = True
286
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/secrets/plugins/detector_utils.py b/checkov/secrets/plugins/detector_utils.py
--- a/checkov/secrets/plugins/detector_utils.py
+++ b/checkov/secrets/plugins/detector_utils.py
@@ -108,7 +108,13 @@
flags=re.IGNORECASE,
)
-ALLOW_LIST = ('secretsmanager', "secretName", "secret_name", "creation_token") # can add more keys like that
+ALLOW_LIST = ( # can add more keys like that
+ 'secretsmanager',
+ "secretName",
+ "secret_name",
+ "creation_token",
+ "client_secret_setting_name",
+)
ALLOW_LIST_REGEX = r'|'.join(ALLOW_LIST)
# Support for suffix of function name i.e "secretsmanager:GetSecretValue"
CAMEL_CASE_NAMES = r'[A-Z]([A-Z0-9]*[a-z][a-z0-9]*[A-Z]|[a-z0-9]*[A-Z][A-Z0-9]*[a-z])[A-Za-z0-9]*'
| {"golden_diff": "diff --git a/checkov/secrets/plugins/detector_utils.py b/checkov/secrets/plugins/detector_utils.py\n--- a/checkov/secrets/plugins/detector_utils.py\n+++ b/checkov/secrets/plugins/detector_utils.py\n@@ -108,7 +108,13 @@\n flags=re.IGNORECASE,\n )\n \n-ALLOW_LIST = ('secretsmanager', \"secretName\", \"secret_name\", \"creation_token\") # can add more keys like that\n+ALLOW_LIST = ( # can add more keys like that\n+ 'secretsmanager',\n+ \"secretName\",\n+ \"secret_name\",\n+ \"creation_token\",\n+ \"client_secret_setting_name\",\n+)\n ALLOW_LIST_REGEX = r'|'.join(ALLOW_LIST)\n # Support for suffix of function name i.e \"secretsmanager:GetSecretValue\"\n CAMEL_CASE_NAMES = r'[A-Z]([A-Z0-9]*[a-z][a-z0-9]*[A-Z]|[a-z0-9]*[A-Z][A-Z0-9]*[a-z])[A-Za-z0-9]*'\n", "issue": "Terraform azurerm_windows_function_app resource - CKV_SECRET_6 false positive for active_directory_v2 setting\n**Describe the issue**\r\nWhen adding azure function app with identity provider active directory v2, checkov reports false positive\r\nsecret is not exposed but checkov reports :\r\n\r\n| 0 | CKV_SECRET_6 | xxxxxxxxx| 0a69b97b56bf2230f607111888418a59f1891c92 | Base64 High Entropy String | https://docs.paloaltonetworks.com/content/techdocs/en_US/prisma/prisma-cloud/prisma-cloud-code-security-policy-reference/secrets-policies/secrets-policy-index/git-secrets-6.html |\r\n\r\nwith resource 0a69b97b56bf2230f607111888418a59f1891c92\r\n\r\nBecause from resource id is not clear which resource is the culprit the only possible one is client_secret_setting_name which is set to \r\n\"MICROSOFT_PROVIDER_AUTHENTICATION_SECRET\" and not exposed secret. \r\n**Examples**\r\n\r\nsnippet of resource azurerm_windows_function_app :\r\n\r\n active_directory_v2 {\r\n client_id = local.managed_identity_client_id\r\n client_secret_setting_name = \"MICROSOFT_PROVIDER_AUTHENTICATION_SECRET\"\r\n tenant_auth_endpoint = local.identity_provider.tenant_auth_endpoint\r\n www_authentication_disabled = false\r\n }\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.4.50\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport json\nimport re\nfrom re import Pattern\nfrom typing import Any, TYPE_CHECKING, Optional\n\nfrom detect_secrets.util.filetype import FileType\nfrom detect_secrets.plugins.keyword import DENYLIST\nfrom detect_secrets.plugins.keyword import AFFIX_REGEX\nfrom detect_secrets.plugins.keyword import CLOSING\nfrom detect_secrets.plugins.keyword import OPTIONAL_WHITESPACE\nfrom detect_secrets.plugins.keyword import QUOTE\nfrom detect_secrets.plugins.keyword import SECRET\n\nfrom checkov.secrets.parsers.terraform.multiline_parser import terraform_multiline_parser\nfrom checkov.secrets.parsers.terraform.single_line_parser import terraform_single_line_parser\nfrom checkov.secrets.parsers.yaml.multiline_parser import yml_multiline_parser\nfrom checkov.secrets.parsers.json.multiline_parser import json_multiline_parser\n\nif TYPE_CHECKING:\n from checkov.secrets.parsers.multiline_parser import BaseMultiLineParser\n from detect_secrets.core.potential_secret import PotentialSecret\n from detect_secrets.util.code_snippet import CodeSnippet\n from detect_secrets.plugins.base import BasePlugin\n\nMAX_KEYWORD_LIMIT = 500\n\nDENY_LIST_REGEX = r'|'.join(DENYLIST)\n# Support for suffix after keyword i.e. 
password_secure = \"value\"\nDENY_LIST_REGEX2 = r'({denylist}){suffix}'.format(\n denylist=DENY_LIST_REGEX,\n suffix=AFFIX_REGEX,\n)\n\nKEY = r'{words}({closing})?'.format(\n words=AFFIX_REGEX,\n closing=CLOSING,\n)\n\nFOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX = re.compile(\n # e.g. var: MY_PASSWORD_123\n r'{whitespace}({key})?:{whitespace}({quote}?){words}{denylist}({closing})?(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n quote=QUOTE,\n words=AFFIX_REGEX,\n denylist=DENY_LIST_REGEX2,\n closing=CLOSING,\n ),\n flags=re.IGNORECASE,\n)\n\nQUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX = re.compile(\n # e.g. var: MY_PASSWORD_123\n r'{whitespace}\"({key})?\":{whitespace}(\"?){words}{denylist}({closing})?(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n words=AFFIX_REGEX,\n denylist=DENY_LIST_REGEX2,\n closing=CLOSING,\n ),\n flags=re.IGNORECASE,\n)\n\nFOLLOWED_BY_COLON_VALUE_SECRET_REGEX = re.compile(\n # e.g. var: Zmlyc3Rfc2VjcmV0X2hlcmVfd2hvYV9tdWx0aWxsaW5lX3Nob3VsZF93b3JrXzE==\n r'{whitespace}({key})?:{whitespace}({quote}?)({secret})(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n quote=QUOTE,\n secret=SECRET,\n ),\n flags=re.IGNORECASE,\n)\n\nQUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_SECRET_REGEX = re.compile(\n # e.g. var: Zmlyc3Rfc2VjcmV0X2hlcmVfd2hvYV9tdWx0aWxsaW5lX3Nob3VsZF93b3JrXzE==\n r'{whitespace}\"({key})?\":{whitespace}(\"?)({secret})(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n secret=SECRET,\n ),\n flags=re.IGNORECASE,\n)\n\nFOLLOWED_BY_EQUAL_VALUE_KEYWORD_REGEX = re.compile(\n # e.g. var = MY_PASSWORD_123\n r'{whitespace}({key})?={whitespace}({quote}?){words}{denylist}({closing})?(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n quote=QUOTE,\n words=AFFIX_REGEX,\n denylist=DENY_LIST_REGEX2,\n closing=CLOSING,\n ),\n flags=re.IGNORECASE,\n)\n\nFOLLOWED_BY_EQUAL_VALUE_SECRET_REGEX = re.compile(\n # e.g. 
var = Zmlyc3Rfc2VjcmV0X2hlcmVfd2hvYV9tdWx0aWxsaW5lX3Nob3VsZF93b3JrXzE==\n r'{whitespace}({key})?={whitespace}({quote}?)({secret})(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n quote=QUOTE,\n secret=SECRET,\n ),\n flags=re.IGNORECASE,\n)\n\nALLOW_LIST = ('secretsmanager', \"secretName\", \"secret_name\", \"creation_token\") # can add more keys like that\nALLOW_LIST_REGEX = r'|'.join(ALLOW_LIST)\n# Support for suffix of function name i.e \"secretsmanager:GetSecretValue\"\nCAMEL_CASE_NAMES = r'[A-Z]([A-Z0-9]*[a-z][a-z0-9]*[A-Z]|[a-z0-9]*[A-Z][A-Z0-9]*[a-z])[A-Za-z0-9]*'\nFUNCTION_CALL_AFTER_KEYWORD_REGEX = re.compile(r'({allowlist})\\s*(:|=)\\s*{suffix}'.format(\n allowlist=ALLOW_LIST_REGEX,\n suffix=AFFIX_REGEX,\n))\n\n# if the current regex is not enough, can add more regexes to check\n\nYML_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP = {\n FOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX: 4,\n}\n\nYML_PAIR_VALUE_SECRET_REGEX_TO_GROUP = {\n FOLLOWED_BY_COLON_VALUE_SECRET_REGEX: 4,\n}\n\nJSON_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP = {\n QUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX: 4,\n}\n\nJSON_PAIR_VALUE_SECRET_REGEX_TO_GROUP = {\n QUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_SECRET_REGEX: 4,\n}\n\nTERRAFORM_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP = {\n FOLLOWED_BY_EQUAL_VALUE_KEYWORD_REGEX: 4,\n}\n\nTERRAFORM_PAIR_VALUE_SECRET_REGEX_TO_GROUP = {\n FOLLOWED_BY_EQUAL_VALUE_SECRET_REGEX: 4,\n}\n\nREGEX_VALUE_KEYWORD_BY_FILETYPE = {\n FileType.YAML: YML_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP,\n FileType.JSON: JSON_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP,\n FileType.TERRAFORM: TERRAFORM_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP,\n}\n\nREGEX_VALUE_SECRET_BY_FILETYPE = {\n FileType.YAML: YML_PAIR_VALUE_SECRET_REGEX_TO_GROUP,\n FileType.JSON: JSON_PAIR_VALUE_SECRET_REGEX_TO_GROUP,\n FileType.TERRAFORM: TERRAFORM_PAIR_VALUE_SECRET_REGEX_TO_GROUP,\n}\n\nSINGLE_LINE_PARSER = {\n FileType.TERRAFORM: terraform_single_line_parser,\n}\n\nMULTILINE_PARSERS = {\n FileType.YAML: (\n (FileType.YAML, yml_multiline_parser),\n ),\n FileType.JSON: (\n (FileType.JSON, json_multiline_parser),\n ),\n FileType.TERRAFORM: (\n (FileType.TERRAFORM, terraform_multiline_parser),\n (FileType.JSON, json_multiline_parser),\n (FileType.YAML, yml_multiline_parser),\n ),\n}\n\n\ndef remove_fp_secrets_in_keys(detected_secrets: set[PotentialSecret], line: str, is_code_file: bool = False) -> None:\n formatted_line = line.replace('\"', '').replace(\"'\", '')\n secrets_to_remove = set()\n for detected_secret in detected_secrets:\n if not detected_secret.secret_value:\n continue\n # Found keyword prefix as potential secret\n if formatted_line.startswith(detected_secret.secret_value):\n secrets_to_remove.add(detected_secret)\n # found a function name at the end of the line\n if formatted_line and FUNCTION_CALL_AFTER_KEYWORD_REGEX.search(formatted_line):\n secrets_to_remove.add(detected_secret)\n # secret value is substring of keywork\n if is_code_file and FOLLOWED_BY_EQUAL_VALUE_KEYWORD_REGEX.search(formatted_line):\n key, value = line.split(\"=\", 1)\n if detected_secret.secret_value in key and detected_secret.secret_value in value:\n secrets_to_remove.add(detected_secret)\n detected_secrets -= secrets_to_remove\n\n\ndef format_reducing_noise_secret(string: str) -> str:\n return json.dumps(string)\n\n\ndef extract_from_string(pattern: dict[Pattern[str], int] | None, string: str) -> set[str]:\n matches: set[str] = set()\n if not pattern:\n return matches\n for value_regex, group_number in pattern.items():\n match = value_regex.search(string)\n if match:\n matches 
|= {match.group(group_number).rstrip('\\n')}\n return matches\n\n\ndef detect_secret(\n scanners: tuple[BasePlugin, ...],\n filename: str,\n line: str,\n line_number: int = 0,\n is_multiline: Optional[bool] = None,\n **kwargs: Any,\n) -> set[PotentialSecret]:\n for scanner in scanners:\n matches = scanner.analyze_line(filename, line, line_number, **kwargs)\n if matches:\n if is_multiline:\n mark_set_multiline(matches)\n return matches\n return set()\n\n\ndef analyze_multiline_keyword_combinator(\n filename: str,\n scanners: tuple[BasePlugin, ...],\n multiline_parser: BaseMultiLineParser,\n line_number: int,\n context: CodeSnippet | None = None,\n raw_context: CodeSnippet | None = None,\n value_pattern: dict[Pattern[str], int] | None = None,\n secret_pattern: dict[Pattern[str], int] | None = None,\n is_added: bool = False,\n is_removed: bool = False,\n **kwargs: Any,\n) -> set[PotentialSecret]:\n secrets: set[PotentialSecret] = set()\n if context is None or raw_context is None:\n return secrets\n value_secrets = extract_from_string(pattern=secret_pattern, string=context.target_line)\n for possible_secret in value_secrets:\n secret_adjust = format_reducing_noise_secret(possible_secret)\n\n potential_secrets = detect_secret(\n scanners=scanners,\n filename=filename,\n line=secret_adjust,\n line_number=line_number,\n is_added=is_added,\n is_removed=is_removed,\n is_multiline=True, # always true because we check here for multiline\n kwargs=kwargs\n )\n\n if potential_secrets:\n possible_keywords: set[str] = set()\n backwards_range = range(context.target_index - 1, -1, -1)\n forward_range = range(context.target_index + 1, len(context.lines))\n\n possible_keywords |= multiline_parser.get_lines_from_same_object(\n search_range=forward_range,\n context=context,\n raw_context=raw_context,\n line_length_limit=MAX_KEYWORD_LIMIT)\n possible_keywords |= multiline_parser.get_lines_from_same_object(\n search_range=backwards_range,\n context=context,\n raw_context=raw_context,\n line_length_limit=MAX_KEYWORD_LIMIT)\n\n for other_value in possible_keywords:\n if extract_from_string(pattern=value_pattern, string=other_value):\n secrets |= potential_secrets\n break\n return secrets\n\n\ndef mark_set_multiline(secrets: set[PotentialSecret]) -> None:\n for sec in secrets:\n sec.is_multiline = True\n", "path": "checkov/secrets/plugins/detector_utils.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport json\nimport re\nfrom re import Pattern\nfrom typing import Any, TYPE_CHECKING, Optional\n\nfrom detect_secrets.util.filetype import FileType\nfrom detect_secrets.plugins.keyword import DENYLIST\nfrom detect_secrets.plugins.keyword import AFFIX_REGEX\nfrom detect_secrets.plugins.keyword import CLOSING\nfrom detect_secrets.plugins.keyword import OPTIONAL_WHITESPACE\nfrom detect_secrets.plugins.keyword import QUOTE\nfrom detect_secrets.plugins.keyword import SECRET\n\nfrom checkov.secrets.parsers.terraform.multiline_parser import terraform_multiline_parser\nfrom checkov.secrets.parsers.terraform.single_line_parser import terraform_single_line_parser\nfrom checkov.secrets.parsers.yaml.multiline_parser import yml_multiline_parser\nfrom checkov.secrets.parsers.json.multiline_parser import json_multiline_parser\n\nif TYPE_CHECKING:\n from checkov.secrets.parsers.multiline_parser import BaseMultiLineParser\n from detect_secrets.core.potential_secret import PotentialSecret\n from detect_secrets.util.code_snippet import CodeSnippet\n from detect_secrets.plugins.base import 
BasePlugin\n\nMAX_KEYWORD_LIMIT = 500\n\nDENY_LIST_REGEX = r'|'.join(DENYLIST)\n# Support for suffix after keyword i.e. password_secure = \"value\"\nDENY_LIST_REGEX2 = r'({denylist}){suffix}'.format(\n denylist=DENY_LIST_REGEX,\n suffix=AFFIX_REGEX,\n)\n\nKEY = r'{words}({closing})?'.format(\n words=AFFIX_REGEX,\n closing=CLOSING,\n)\n\nFOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX = re.compile(\n # e.g. var: MY_PASSWORD_123\n r'{whitespace}({key})?:{whitespace}({quote}?){words}{denylist}({closing})?(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n quote=QUOTE,\n words=AFFIX_REGEX,\n denylist=DENY_LIST_REGEX2,\n closing=CLOSING,\n ),\n flags=re.IGNORECASE,\n)\n\nQUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX = re.compile(\n # e.g. var: MY_PASSWORD_123\n r'{whitespace}\"({key})?\":{whitespace}(\"?){words}{denylist}({closing})?(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n words=AFFIX_REGEX,\n denylist=DENY_LIST_REGEX2,\n closing=CLOSING,\n ),\n flags=re.IGNORECASE,\n)\n\nFOLLOWED_BY_COLON_VALUE_SECRET_REGEX = re.compile(\n # e.g. var: Zmlyc3Rfc2VjcmV0X2hlcmVfd2hvYV9tdWx0aWxsaW5lX3Nob3VsZF93b3JrXzE==\n r'{whitespace}({key})?:{whitespace}({quote}?)({secret})(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n quote=QUOTE,\n secret=SECRET,\n ),\n flags=re.IGNORECASE,\n)\n\nQUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_SECRET_REGEX = re.compile(\n # e.g. var: Zmlyc3Rfc2VjcmV0X2hlcmVfd2hvYV9tdWx0aWxsaW5lX3Nob3VsZF93b3JrXzE==\n r'{whitespace}\"({key})?\":{whitespace}(\"?)({secret})(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n secret=SECRET,\n ),\n flags=re.IGNORECASE,\n)\n\nFOLLOWED_BY_EQUAL_VALUE_KEYWORD_REGEX = re.compile(\n # e.g. var = MY_PASSWORD_123\n r'{whitespace}({key})?={whitespace}({quote}?){words}{denylist}({closing})?(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n quote=QUOTE,\n words=AFFIX_REGEX,\n denylist=DENY_LIST_REGEX2,\n closing=CLOSING,\n ),\n flags=re.IGNORECASE,\n)\n\nFOLLOWED_BY_EQUAL_VALUE_SECRET_REGEX = re.compile(\n # e.g. 
var = Zmlyc3Rfc2VjcmV0X2hlcmVfd2hvYV9tdWx0aWxsaW5lX3Nob3VsZF93b3JrXzE==\n r'{whitespace}({key})?={whitespace}({quote}?)({secret})(\\3)'.format(\n key=KEY,\n whitespace=OPTIONAL_WHITESPACE,\n quote=QUOTE,\n secret=SECRET,\n ),\n flags=re.IGNORECASE,\n)\n\nALLOW_LIST = ( # can add more keys like that\n 'secretsmanager',\n \"secretName\",\n \"secret_name\",\n \"creation_token\",\n \"client_secret_setting_name\",\n)\nALLOW_LIST_REGEX = r'|'.join(ALLOW_LIST)\n# Support for suffix of function name i.e \"secretsmanager:GetSecretValue\"\nCAMEL_CASE_NAMES = r'[A-Z]([A-Z0-9]*[a-z][a-z0-9]*[A-Z]|[a-z0-9]*[A-Z][A-Z0-9]*[a-z])[A-Za-z0-9]*'\nFUNCTION_CALL_AFTER_KEYWORD_REGEX = re.compile(r'({allowlist})\\s*(:|=)\\s*{suffix}'.format(\n allowlist=ALLOW_LIST_REGEX,\n suffix=AFFIX_REGEX,\n))\n\n# if the current regex is not enough, can add more regexes to check\n\nYML_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP = {\n FOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX: 4,\n}\n\nYML_PAIR_VALUE_SECRET_REGEX_TO_GROUP = {\n FOLLOWED_BY_COLON_VALUE_SECRET_REGEX: 4,\n}\n\nJSON_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP = {\n QUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_KEYWORD_REGEX: 4,\n}\n\nJSON_PAIR_VALUE_SECRET_REGEX_TO_GROUP = {\n QUOTES_REQUIRED_FOLLOWED_BY_COLON_VALUE_SECRET_REGEX: 4,\n}\n\nTERRAFORM_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP = {\n FOLLOWED_BY_EQUAL_VALUE_KEYWORD_REGEX: 4,\n}\n\nTERRAFORM_PAIR_VALUE_SECRET_REGEX_TO_GROUP = {\n FOLLOWED_BY_EQUAL_VALUE_SECRET_REGEX: 4,\n}\n\nREGEX_VALUE_KEYWORD_BY_FILETYPE = {\n FileType.YAML: YML_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP,\n FileType.JSON: JSON_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP,\n FileType.TERRAFORM: TERRAFORM_PAIR_VALUE_KEYWORD_REGEX_TO_GROUP,\n}\n\nREGEX_VALUE_SECRET_BY_FILETYPE = {\n FileType.YAML: YML_PAIR_VALUE_SECRET_REGEX_TO_GROUP,\n FileType.JSON: JSON_PAIR_VALUE_SECRET_REGEX_TO_GROUP,\n FileType.TERRAFORM: TERRAFORM_PAIR_VALUE_SECRET_REGEX_TO_GROUP,\n}\n\nSINGLE_LINE_PARSER = {\n FileType.TERRAFORM: terraform_single_line_parser,\n}\n\nMULTILINE_PARSERS = {\n FileType.YAML: (\n (FileType.YAML, yml_multiline_parser),\n ),\n FileType.JSON: (\n (FileType.JSON, json_multiline_parser),\n ),\n FileType.TERRAFORM: (\n (FileType.TERRAFORM, terraform_multiline_parser),\n (FileType.JSON, json_multiline_parser),\n (FileType.YAML, yml_multiline_parser),\n ),\n}\n\n\ndef remove_fp_secrets_in_keys(detected_secrets: set[PotentialSecret], line: str, is_code_file: bool = False) -> None:\n formatted_line = line.replace('\"', '').replace(\"'\", '')\n secrets_to_remove = set()\n for detected_secret in detected_secrets:\n if not detected_secret.secret_value:\n continue\n # Found keyword prefix as potential secret\n if formatted_line.startswith(detected_secret.secret_value):\n secrets_to_remove.add(detected_secret)\n # found a function name at the end of the line\n if formatted_line and FUNCTION_CALL_AFTER_KEYWORD_REGEX.search(formatted_line):\n secrets_to_remove.add(detected_secret)\n # secret value is substring of keywork\n if is_code_file and FOLLOWED_BY_EQUAL_VALUE_KEYWORD_REGEX.search(formatted_line):\n key, value = line.split(\"=\", 1)\n if detected_secret.secret_value in key and detected_secret.secret_value in value:\n secrets_to_remove.add(detected_secret)\n detected_secrets -= secrets_to_remove\n\n\ndef format_reducing_noise_secret(string: str) -> str:\n return json.dumps(string)\n\n\ndef extract_from_string(pattern: dict[Pattern[str], int] | None, string: str) -> set[str]:\n matches: set[str] = set()\n if not pattern:\n return matches\n for value_regex, group_number in pattern.items():\n match = 
value_regex.search(string)\n if match:\n matches |= {match.group(group_number).rstrip('\\n')}\n return matches\n\n\ndef detect_secret(\n scanners: tuple[BasePlugin, ...],\n filename: str,\n line: str,\n line_number: int = 0,\n is_multiline: Optional[bool] = None,\n **kwargs: Any,\n) -> set[PotentialSecret]:\n for scanner in scanners:\n matches = scanner.analyze_line(filename, line, line_number, **kwargs)\n if matches:\n if is_multiline:\n mark_set_multiline(matches)\n return matches\n return set()\n\n\ndef analyze_multiline_keyword_combinator(\n filename: str,\n scanners: tuple[BasePlugin, ...],\n multiline_parser: BaseMultiLineParser,\n line_number: int,\n context: CodeSnippet | None = None,\n raw_context: CodeSnippet | None = None,\n value_pattern: dict[Pattern[str], int] | None = None,\n secret_pattern: dict[Pattern[str], int] | None = None,\n is_added: bool = False,\n is_removed: bool = False,\n **kwargs: Any,\n) -> set[PotentialSecret]:\n secrets: set[PotentialSecret] = set()\n if context is None or raw_context is None:\n return secrets\n value_secrets = extract_from_string(pattern=secret_pattern, string=context.target_line)\n for possible_secret in value_secrets:\n secret_adjust = format_reducing_noise_secret(possible_secret)\n\n potential_secrets = detect_secret(\n scanners=scanners,\n filename=filename,\n line=secret_adjust,\n line_number=line_number,\n is_added=is_added,\n is_removed=is_removed,\n is_multiline=True, # always true because we check here for multiline\n kwargs=kwargs\n )\n\n if potential_secrets:\n possible_keywords: set[str] = set()\n backwards_range = range(context.target_index - 1, -1, -1)\n forward_range = range(context.target_index + 1, len(context.lines))\n\n possible_keywords |= multiline_parser.get_lines_from_same_object(\n search_range=forward_range,\n context=context,\n raw_context=raw_context,\n line_length_limit=MAX_KEYWORD_LIMIT)\n possible_keywords |= multiline_parser.get_lines_from_same_object(\n search_range=backwards_range,\n context=context,\n raw_context=raw_context,\n line_length_limit=MAX_KEYWORD_LIMIT)\n\n for other_value in possible_keywords:\n if extract_from_string(pattern=value_pattern, string=other_value):\n secrets |= potential_secrets\n break\n return secrets\n\n\ndef mark_set_multiline(secrets: set[PotentialSecret]) -> None:\n for sec in secrets:\n sec.is_multiline = True\n", "path": "checkov/secrets/plugins/detector_utils.py"}]} | 3,881 | 243 |
gh_patches_debug_2944 | rasdani/github-patches | git_diff | ivy-llc__ivy-14663 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix generating_index_arrays.test_numpy_diag_indices
| | |
|---|---|
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6413197943/job/17411744582"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6413197943/job/17411744582"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6413197943/job/17411744582"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6413197943/job/17411744582"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6413197943/job/17411744582"><img src=https://img.shields.io/badge/-failure-red></a>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/numpy/indexing_routines/generating_index_arrays.py`
Content:
```
1 import ivy
2 from ivy.functional.frontends.numpy.func_wrapper import (
3 to_ivy_arrays_and_back,
4 outputs_to_numpy_arrays,
5 )
6
7
8 @to_ivy_arrays_and_back
9 def indices(dimensions, dtype=int, sparse=False):
10 dimensions = tuple(dimensions)
11 N = len(dimensions)
12 shape = (1,) * N
13 if sparse:
14 res = tuple()
15 else:
16 res = ivy.empty((N,) + dimensions, dtype=dtype)
17 for i, dim in enumerate(dimensions):
18 idx = ivy.arange(dim, dtype=dtype).reshape(shape[:i] + (dim,) + shape[i + 1 :])
19 if sparse:
20 res = res + (idx,)
21 else:
22 res[i] = idx
23 return res
24
25
26 # unravel_index
27 @to_ivy_arrays_and_back
28 def unravel_index(indices, shape, order="C"):
29 ret = [x.astype("int64") for x in ivy.unravel_index(indices, shape)]
30 return tuple(ret)
31
32
33 @outputs_to_numpy_arrays
34 def diag_indices(n, ndim=2):
35 idx = ivy.arange(n, dtype=int)
36 return (idx,) * ndim
37
38
39 @to_ivy_arrays_and_back
40 def tril_indices(n, k=0, m=None):
41 return ivy.tril_indices(n, m, k)
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/numpy/indexing_routines/generating_index_arrays.py b/ivy/functional/frontends/numpy/indexing_routines/generating_index_arrays.py
--- a/ivy/functional/frontends/numpy/indexing_routines/generating_index_arrays.py
+++ b/ivy/functional/frontends/numpy/indexing_routines/generating_index_arrays.py
@@ -30,10 +30,12 @@
return tuple(ret)
-@outputs_to_numpy_arrays
+@to_ivy_arrays_and_back
def diag_indices(n, ndim=2):
- idx = ivy.arange(n, dtype=int)
- return (idx,) * ndim
+ idx = ivy.arange(n)
+ res = ivy.array((idx,) * ndim)
+ res = tuple(res.astype("int64"))
+ return res
@to_ivy_arrays_and_back
| {"golden_diff": "diff --git a/ivy/functional/frontends/numpy/indexing_routines/generating_index_arrays.py b/ivy/functional/frontends/numpy/indexing_routines/generating_index_arrays.py\n--- a/ivy/functional/frontends/numpy/indexing_routines/generating_index_arrays.py\n+++ b/ivy/functional/frontends/numpy/indexing_routines/generating_index_arrays.py\n@@ -30,10 +30,12 @@\n return tuple(ret)\n \n \n-@outputs_to_numpy_arrays\n+@to_ivy_arrays_and_back\n def diag_indices(n, ndim=2):\n- idx = ivy.arange(n, dtype=int)\n- return (idx,) * ndim\n+ idx = ivy.arange(n)\n+ res = ivy.array((idx,) * ndim)\n+ res = tuple(res.astype(\"int64\"))\n+ return res\n \n \n @to_ivy_arrays_and_back\n", "issue": "Fix generating_index_arrays.test_numpy_diag_indices\n| | |\r\n|---|---|\r\n|paddle|<a href=\"https://github.com/unifyai/ivy/actions/runs/6413197943/job/17411744582\"><img src=https://img.shields.io/badge/-failure-red></a>\r\n|tensorflow|<a href=\"https://github.com/unifyai/ivy/actions/runs/6413197943/job/17411744582\"><img src=https://img.shields.io/badge/-failure-red></a>\r\n|torch|<a href=\"https://github.com/unifyai/ivy/actions/runs/6413197943/job/17411744582\"><img src=https://img.shields.io/badge/-failure-red></a>\r\n|numpy|<a href=\"https://github.com/unifyai/ivy/actions/runs/6413197943/job/17411744582\"><img src=https://img.shields.io/badge/-failure-red></a>\r\n|jax|<a href=\"https://github.com/unifyai/ivy/actions/runs/6413197943/job/17411744582\"><img src=https://img.shields.io/badge/-failure-red></a>\r\n\n", "before_files": [{"content": "import ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n outputs_to_numpy_arrays,\n)\n\n\n@to_ivy_arrays_and_back\ndef indices(dimensions, dtype=int, sparse=False):\n dimensions = tuple(dimensions)\n N = len(dimensions)\n shape = (1,) * N\n if sparse:\n res = tuple()\n else:\n res = ivy.empty((N,) + dimensions, dtype=dtype)\n for i, dim in enumerate(dimensions):\n idx = ivy.arange(dim, dtype=dtype).reshape(shape[:i] + (dim,) + shape[i + 1 :])\n if sparse:\n res = res + (idx,)\n else:\n res[i] = idx\n return res\n\n\n# unravel_index\n@to_ivy_arrays_and_back\ndef unravel_index(indices, shape, order=\"C\"):\n ret = [x.astype(\"int64\") for x in ivy.unravel_index(indices, shape)]\n return tuple(ret)\n\n\n@outputs_to_numpy_arrays\ndef diag_indices(n, ndim=2):\n idx = ivy.arange(n, dtype=int)\n return (idx,) * ndim\n\n\n@to_ivy_arrays_and_back\ndef tril_indices(n, k=0, m=None):\n return ivy.tril_indices(n, m, k)\n", "path": "ivy/functional/frontends/numpy/indexing_routines/generating_index_arrays.py"}], "after_files": [{"content": "import ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n outputs_to_numpy_arrays,\n)\n\n\n@to_ivy_arrays_and_back\ndef indices(dimensions, dtype=int, sparse=False):\n dimensions = tuple(dimensions)\n N = len(dimensions)\n shape = (1,) * N\n if sparse:\n res = tuple()\n else:\n res = ivy.empty((N,) + dimensions, dtype=dtype)\n for i, dim in enumerate(dimensions):\n idx = ivy.arange(dim, dtype=dtype).reshape(shape[:i] + (dim,) + shape[i + 1 :])\n if sparse:\n res = res + (idx,)\n else:\n res[i] = idx\n return res\n\n\n# unravel_index\n@to_ivy_arrays_and_back\ndef unravel_index(indices, shape, order=\"C\"):\n ret = [x.astype(\"int64\") for x in ivy.unravel_index(indices, shape)]\n return tuple(ret)\n\n\n@to_ivy_arrays_and_back\ndef diag_indices(n, ndim=2):\n idx = ivy.arange(n)\n res = ivy.array((idx,) * ndim)\n res = tuple(res.astype(\"int64\"))\n return 
res\n\n\n@to_ivy_arrays_and_back\ndef tril_indices(n, k=0, m=None):\n return ivy.tril_indices(n, m, k)\n", "path": "ivy/functional/frontends/numpy/indexing_routines/generating_index_arrays.py"}]} | 962 | 197 |
gh_patches_debug_38336 | rasdani/github-patches | git_diff | tough-dev-school__education-backend-167 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove the mailjet integration (it hasn't been used in a long time)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/app/settings.py`
Content:
```
1 import environ
2 import os
3 from celery.schedules import crontab
4
5 root = environ.Path(__file__) - 2 # three folder back (/a/b/c/ - 3 = /)
6 env = environ.Env(DEBUG=(bool, False)) # set default values and casting
7 environ.Env.read_env() # reading .env file
8 SITE_ROOT = root()
9
10 USE_L10N = True
11 USE_i18N = True
12
13 LANGUAGE_CODE = 'ru'
14 LOCALE_PATHS = ['locale']
15
16 INTERNAL_IPS = [
17 '127.0.0.1',
18 ]
19 FRONTEND_URL = 'https://education.borshev.com'
20
21 USE_TZ = False
22 TIME_ZONE = env('TIME_ZONE', cast=str, default='Europe/Moscow')
23
24 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
25 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
26 TEST_RUNNER = 'app.test.disable_test_command_runner.DisableTestCommandRunner'
27
28
29 # Quick-start development settings - unsuitable for production
30 # See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
31
32 # SECURITY WARNING: keep the secret key used in production secret!
33 SECRET_KEY = env('SECRET_KEY', cast=str, default='s3cr3t')
34
35 # SECURITY WARNING: don't run with debug turned on in production!
36 DEBUG = env('DEBUG', cast=bool, default=False)
37 CI = env('CI', cast=bool, default=False)
38 ANONYMIZE_ENABLED = DEBUG
39
40 ABSOLUTE_HOST = env('ABSOLUTE_HOST', cast=str, default='https://edu-app.borshev.com')
41 ALLOWED_HOSTS = [
42 'edu-app.borshev.com',
43 'localhost',
44 'localhost:8000',
45 'education.borshev.com',
46 ABSOLUTE_HOST.replace('https://', ''),
47 ]
48
49 CORS_ORIGIN_WHITELIST = [
50 'https://pmdaily.ru',
51 'https://education.borshev.com',
52 ]
53
54 CSRF_TRUSTED_ORIGINS = [
55 'pmdaily.ru',
56 'education.borshev.com',
57 'borshev.com',
58 ]
59
60
61 # Application definition
62
63 INSTALLED_APPS = [
64 'app',
65 'users',
66 'orders',
67 'products',
68 'shipping',
69 'tinkoff',
70 'triggers',
71 'magnets',
72
73 'corsheaders',
74 'hattori',
75 'anymail',
76 'rest_framework',
77 'rest_framework.authtoken',
78 'django_filters',
79
80 'axes',
81 'django.contrib.admin',
82 'django.contrib.auth',
83 'django.contrib.contenttypes',
84 'django.contrib.sessions',
85 'django.contrib.messages',
86 'django.contrib.staticfiles',
87
88 'debug_toolbar',
89 ]
90
91 MIDDLEWARE = [
92 'django.middleware.security.SecurityMiddleware',
93
94 'django.contrib.sessions.middleware.SessionMiddleware',
95 'corsheaders.middleware.CorsMiddleware',
96 'django.middleware.common.CommonMiddleware',
97 # 'django.middleware.csrf.CsrfViewMiddleware',
98 'django.contrib.auth.middleware.AuthenticationMiddleware',
99 'django.contrib.auth.middleware.RemoteUserMiddleware',
100 'django.contrib.messages.middleware.MessageMiddleware',
101 'django.middleware.clickjacking.XFrameOptionsMiddleware',
102 'axes.middleware.AxesMiddleware',
103 'debug_toolbar.middleware.DebugToolbarMiddleware',
104 ]
105
106 if not DEBUG and not CI:
107 MIDDLEWARE.insert(1, 'whitenoise.middleware.WhiteNoiseMiddleware')
108
109
110 REST_FRAMEWORK = {
111 'DEFAULT_PERMISSION_CLASSES': (
112 'rest_framework.permissions.IsAuthenticated',
113 ),
114 'DEFAULT_AUTHENTICATION_CLASSES': (
115 'rest_framework.authentication.TokenAuthentication',
116 ),
117 'DEFAULT_RENDERER_CLASSES': [
118 'app.renderers.AppJSONRenderer',
119 ],
120 'DEFAULT_VERSIONING_CLASS': 'rest_framework.versioning.URLPathVersioning',
121 'DEFAULT_PAGINATION_CLASS': 'app.pagination.AppPagination',
122 'PAGE_SIZE': 20,
123 }
124
125 ROOT_URLCONF = 'app.urls'
126
127 TEMPLATES = [
128 {
129 'BACKEND': 'django.template.backends.django.DjangoTemplates',
130 'DIRS': [],
131 'APP_DIRS': True,
132 'OPTIONS': {
133 'context_processors': [
134 'django.template.context_processors.debug',
135 'django.template.context_processors.request',
136 'django.contrib.auth.context_processors.auth',
137 'django.contrib.messages.context_processors.messages',
138 ],
139 },
140 },
141 ]
142
143 WSGI_APPLICATION = 'app.wsgi.application'
144
145
146 # Database
147 # https://docs.djangoproject.com/en/2.2/ref/settings/#databases
148 DATABASES = {
149 'default': env.db(), # Raises ImproperlyConfigured exception if DATABASE_URL not in os.environ
150 }
151 AUTH_USER_MODEL = 'users.User'
152 AUTHENTICATION_BACKENDS = [
153 'axes.backends.AxesBackend',
154 'django.contrib.auth.backends.ModelBackend',
155 'django.contrib.auth.backends.RemoteUserBackend',
156 ]
157 HEALTH_CHECKS_ERROR_CODE = 503
158 HEALTH_CHECKS = {
159 'db': 'django_healthchecks.contrib.check_database',
160 }
161
162 MEDIA_URL = env('MEDIA_URL', default='/media/')
163
164 STATIC_URL = env('STATIC_URL', default='/static/')
165 STATIC_ROOT = env('STATIC_ROOT')
166
167 SENTRY_DSN = env('SENTRY_DSN', cast=str, default='')
168
169 if not DEBUG and len(SENTRY_DSN):
170 import sentry_sdk
171 from sentry_sdk.integrations.celery import CeleryIntegration
172 from sentry_sdk.integrations.django import DjangoIntegration
173 from sentry_sdk.integrations.redis import RedisIntegration
174
175 sentry_sdk.init(
176 dsn=SENTRY_DSN,
177 integrations=[DjangoIntegration(), CeleryIntegration(), RedisIntegration()],
178 )
179
180 BROKER_URL = env('CELERY_BACKEND')
181 CELERY_ALWAYS_EAGER = env('CELERY_ALWAYS_EAGER', cast=bool, default=DEBUG) # by default in debug mode we run all celery tasks in foregroud.
182 CELERY_TIMEZONE = TIME_ZONE
183 CELERY_ENABLE_UTC = False
184 CELERYBEAT_SCHEDULE = {
185 'run_started_purchase_trigger': {
186 'task': 'triggers.tasks.check_for_started_purchase_triggers',
187 'schedule': crontab(hour='*', minute=15),
188 },
189 'run_record_feedback_trigger': {
190 'task': 'triggers.tasks.check_for_record_feedback_triggers',
191 'schedule': crontab(hour='*', minute=15),
192 },
193 'ship_unshipped_orders': {
194 'task': 'orders.tasks.ship_unshipped_orders',
195 'schedule': crontab(hour='*', minute='*/2'),
196 },
197 }
198
199
200 AWS_ACCESS_KEY_ID = env('AWS_ACCESS_KEY_ID', default=None)
201 AWS_SECRET_ACCESS_KEY = env('AWS_SECRET_ACCESS_KEY', default=None)
202 AWS_STORAGE_BUCKET_NAME = env('AWS_STORAGE_BUCKET_NAME', default=None)
203 AWS_S3_REGION_NAME = env('AWS_S3_REGION_NAME', default=None)
204 AWS_S3_ENDPOINT_URL = env('AWS_S3_ENDPOINT_URL', default=None)
205
206 EMAIL_ENABLED = env('EMAIL_ENABLED', cast=bool, default=False)
207
208 EMAIL_BACKEND = env('EMAIL_BACKEND', default='django.core.mail.backends.console.EmailBackend')
209
210 MAILJET_API_KEY = env('MAILJET_API_KEY', default='')
211 MAILJET_SECRET_KEY = env('MAILJET_SECRET_KEY', default='')
212 MAILJET_CONTACT_LIST_ID = env('MAILJET_CONTACT_LIST_ID', cast=int, default=None)
213
214 MAILCHIMP_API_KEY = env('MAILCHIMP_API_KEY', default='')
215 MAILCHIMP_CONTACT_LIST_ID = env('MAILCHIMP_CONTACT_LIST_ID', cast=str, default=None)
216
217 DEFAULT_FROM_EMAIL = env('EMAIL_FROM', cast=str, default='')
218 ANYMAIL = {
219 'POSTMARK_SERVER_TOKEN': env('POSTMARK_SERVER_TOKEN', cast=str, default=''),
220 'DEBUG_API_REQUESTS': env('DEBUG'),
221 }
222
223 CLICKMEETING_API_KEY = env('CLICKMEETING_API_KEY', default=None, cast=str)
224
225 ZOOMUS_API_KEY = env('ZOOMUS_API_KEY', default=None, cast=str)
226 ZOOMUS_API_SECRET = env('ZOOMUS_API_SECRET', default=None, cast=str)
227
228 TINKOFF_TERMINAL_KEY = env('TINKOFF_TERMINAL_KEY', default=None)
229 TINKOFF_TERMINAL_PASSWORD = env('TINKOFF_TERMINAL_PASSWORD', default=None)
230
231 SEND_HAPPINESS_MESSAGES = env('SEND_HAPPINESS_MESSAGES', cast=bool, default=False)
232
233 # Uncomment this lines to catch all runtime warnings as errors
234
235 # import warnings # noqa
236 # warnings.filterwarnings(
237 # 'error', r".*",
238 # RuntimeWarning, r".*"
239 # )
240
```
Path: `src/app/integrations/mailjet.py`
Content:
```
1 import mailjet_rest as mailjet
2 from django.conf import settings
3 from django.utils.functional import cached_property
4
5 from users.models import User
6
7
8 class AppMailjetWrongResponseException(Exception):
9 pass
10
11
12 class AppMailjet:
13 def __init__(self):
14 pass
15
16 @cached_property
17 def client(self):
18 return mailjet.Client(
19 auth=(settings.MAILJET_API_KEY, settings.MAILJET_SECRET_KEY),
20 version='v3',
21 )
22
23 def subscribe(self, user: User):
24 response = self.client.contactslist_managecontact.create(
25 id=settings.MAILJET_CONTACT_LIST_ID,
26 data={
27 'Action': 'addnoforce',
28 'Email': user.email,
29 'Properties': {
30 'name': str(user),
31 'firstname': user.first_name,
32 'lastname': user.last_name,
33 },
34 },
35 )
36
37 if response.status_code != 201:
38 raise AppMailjetWrongResponseException(f'Wrong response from mailjet: {response.status_code}. Content: {response.content}')
39
```
Path: `src/app/tasks.py`
Content:
```
1 from typing import List, Union
2
3 from anymail.exceptions import AnymailRequestsAPIError
4 from django.apps import apps
5 from django.conf import settings
6 from requests.exceptions import RequestException
7
8 from app.celery import celery
9 from app.integrations import tg
10 from app.integrations.clickmeeting import ClickMeetingClient, ClickMeetingHTTPException
11 from app.integrations.mailchimp import AppMailchimp, MailchimpException
12 from app.integrations.mailjet import AppMailjet, AppMailjetWrongResponseException
13 from app.integrations.zoomus import ZoomusClient, ZoomusHTTPException
14 from app.mail.owl import TemplOwl
15
16
17 @celery.task(
18 autoretry_for=[AnymailRequestsAPIError],
19 retry_kwargs={
20 'max_retries': 10,
21 'countdown': 5,
22 },
23 )
24 def send_mail(to: Union[List, str], template_id, subject: str = '', ctx: dict = None, disable_antispam=False):
25 TemplOwl(
26 to=to,
27 template_id=template_id,
28 subject=subject,
29 ctx=ctx,
30 disable_antispam=disable_antispam,
31 ).send()
32
33
34 @celery.task(
35 autoretry_for=[RequestException, ClickMeetingHTTPException],
36 retry_kwargs={
37 'max_retries': 10,
38 'countdown': 5,
39 },
40 )
41 def invite_to_clickmeeting(room_url: str, email: str):
42 client = ClickMeetingClient()
43 client.invite(room_url, email)
44
45
46 @celery.task(
47 autoretry_for=[RequestException, ZoomusHTTPException],
48 retry_kwargs={
49 'max_retries': 10,
50 'countdown': 5,
51 },
52 )
53 def invite_to_zoomus(webinar_id: str, user_id: int):
54 user = apps.get_model('users.User').objects.get(pk=user_id)
55
56 client = ZoomusClient()
57 client.invite(webinar_id, user)
58
59
60 @celery.task(
61 autoretry_for=[RequestException, AppMailjetWrongResponseException],
62 retry_kwargs={
63 'max_retries': 10,
64 'countdown': 5,
65 },
66 )
67 def subscribe_to_mailjet(user_id: int):
68 if not all(getattr(settings, x) for x in ['MAILJET_API_KEY', 'MAILJET_SECRET_KEY', 'MAILJET_CONTACT_LIST_ID']):
69 return
70
71 user = apps.get_model('users.User').objects.get(pk=user_id)
72 mailjet = AppMailjet()
73
74 mailjet.subscribe(user)
75
76
77 @celery.task(
78 autoretry_for=[RequestException, MailchimpException],
79 retry_kwargs={
80 'max_retries': 10,
81 'countdown': 5,
82 },
83 )
84 def subscribe_to_mailchimp(user_id: int, list_id=None, tags=None):
85 if list_id is None:
86 list_id = settings.MAILCHIMP_CONTACT_LIST_ID
87
88 if not list_id:
89 return
90
91 mailchimp = AppMailchimp()
92
93 mailchimp.subscribe_django_user(
94 list_id=list_id,
95 user=apps.get_model('users.User').objects.get(pk=user_id),
96 tags=tags,
97 )
98
99
100 @celery.task
101 def send_happiness_message(text):
102 tg.send_happiness_message(text)
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/app/integrations/mailjet.py b/src/app/integrations/mailjet.py
deleted file mode 100644
--- a/src/app/integrations/mailjet.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import mailjet_rest as mailjet
-from django.conf import settings
-from django.utils.functional import cached_property
-
-from users.models import User
-
-
-class AppMailjetWrongResponseException(Exception):
- pass
-
-
-class AppMailjet:
- def __init__(self):
- pass
-
- @cached_property
- def client(self):
- return mailjet.Client(
- auth=(settings.MAILJET_API_KEY, settings.MAILJET_SECRET_KEY),
- version='v3',
- )
-
- def subscribe(self, user: User):
- response = self.client.contactslist_managecontact.create(
- id=settings.MAILJET_CONTACT_LIST_ID,
- data={
- 'Action': 'addnoforce',
- 'Email': user.email,
- 'Properties': {
- 'name': str(user),
- 'firstname': user.first_name,
- 'lastname': user.last_name,
- },
- },
- )
-
- if response.status_code != 201:
- raise AppMailjetWrongResponseException(f'Wrong response from mailjet: {response.status_code}. Content: {response.content}')
diff --git a/src/app/settings.py b/src/app/settings.py
--- a/src/app/settings.py
+++ b/src/app/settings.py
@@ -207,10 +207,6 @@
EMAIL_BACKEND = env('EMAIL_BACKEND', default='django.core.mail.backends.console.EmailBackend')
-MAILJET_API_KEY = env('MAILJET_API_KEY', default='')
-MAILJET_SECRET_KEY = env('MAILJET_SECRET_KEY', default='')
-MAILJET_CONTACT_LIST_ID = env('MAILJET_CONTACT_LIST_ID', cast=int, default=None)
-
MAILCHIMP_API_KEY = env('MAILCHIMP_API_KEY', default='')
MAILCHIMP_CONTACT_LIST_ID = env('MAILCHIMP_CONTACT_LIST_ID', cast=str, default=None)
diff --git a/src/app/tasks.py b/src/app/tasks.py
--- a/src/app/tasks.py
+++ b/src/app/tasks.py
@@ -9,7 +9,6 @@
from app.integrations import tg
from app.integrations.clickmeeting import ClickMeetingClient, ClickMeetingHTTPException
from app.integrations.mailchimp import AppMailchimp, MailchimpException
-from app.integrations.mailjet import AppMailjet, AppMailjetWrongResponseException
from app.integrations.zoomus import ZoomusClient, ZoomusHTTPException
from app.mail.owl import TemplOwl
@@ -57,23 +56,6 @@
client.invite(webinar_id, user)
[email protected](
- autoretry_for=[RequestException, AppMailjetWrongResponseException],
- retry_kwargs={
- 'max_retries': 10,
- 'countdown': 5,
- },
-)
-def subscribe_to_mailjet(user_id: int):
- if not all(getattr(settings, x) for x in ['MAILJET_API_KEY', 'MAILJET_SECRET_KEY', 'MAILJET_CONTACT_LIST_ID']):
- return
-
- user = apps.get_model('users.User').objects.get(pk=user_id)
- mailjet = AppMailjet()
-
- mailjet.subscribe(user)
-
-
@celery.task(
autoretry_for=[RequestException, MailchimpException],
retry_kwargs={
| {"golden_diff": "diff --git a/src/app/integrations/mailjet.py b/src/app/integrations/mailjet.py\ndeleted file mode 100644\n--- a/src/app/integrations/mailjet.py\n+++ /dev/null\n@@ -1,38 +0,0 @@\n-import mailjet_rest as mailjet\n-from django.conf import settings\n-from django.utils.functional import cached_property\n-\n-from users.models import User\n-\n-\n-class AppMailjetWrongResponseException(Exception):\n- pass\n-\n-\n-class AppMailjet:\n- def __init__(self):\n- pass\n-\n- @cached_property\n- def client(self):\n- return mailjet.Client(\n- auth=(settings.MAILJET_API_KEY, settings.MAILJET_SECRET_KEY),\n- version='v3',\n- )\n-\n- def subscribe(self, user: User):\n- response = self.client.contactslist_managecontact.create(\n- id=settings.MAILJET_CONTACT_LIST_ID,\n- data={\n- 'Action': 'addnoforce',\n- 'Email': user.email,\n- 'Properties': {\n- 'name': str(user),\n- 'firstname': user.first_name,\n- 'lastname': user.last_name,\n- },\n- },\n- )\n-\n- if response.status_code != 201:\n- raise AppMailjetWrongResponseException(f'Wrong response from mailjet: {response.status_code}. Content: {response.content}')\ndiff --git a/src/app/settings.py b/src/app/settings.py\n--- a/src/app/settings.py\n+++ b/src/app/settings.py\n@@ -207,10 +207,6 @@\n \n EMAIL_BACKEND = env('EMAIL_BACKEND', default='django.core.mail.backends.console.EmailBackend')\n \n-MAILJET_API_KEY = env('MAILJET_API_KEY', default='')\n-MAILJET_SECRET_KEY = env('MAILJET_SECRET_KEY', default='')\n-MAILJET_CONTACT_LIST_ID = env('MAILJET_CONTACT_LIST_ID', cast=int, default=None)\n-\n MAILCHIMP_API_KEY = env('MAILCHIMP_API_KEY', default='')\n MAILCHIMP_CONTACT_LIST_ID = env('MAILCHIMP_CONTACT_LIST_ID', cast=str, default=None)\n \ndiff --git a/src/app/tasks.py b/src/app/tasks.py\n--- a/src/app/tasks.py\n+++ b/src/app/tasks.py\n@@ -9,7 +9,6 @@\n from app.integrations import tg\n from app.integrations.clickmeeting import ClickMeetingClient, ClickMeetingHTTPException\n from app.integrations.mailchimp import AppMailchimp, MailchimpException\n-from app.integrations.mailjet import AppMailjet, AppMailjetWrongResponseException\n from app.integrations.zoomus import ZoomusClient, ZoomusHTTPException\n from app.mail.owl import TemplOwl\n \n@@ -57,23 +56,6 @@\n client.invite(webinar_id, user)\n \n \[email protected](\n- autoretry_for=[RequestException, AppMailjetWrongResponseException],\n- retry_kwargs={\n- 'max_retries': 10,\n- 'countdown': 5,\n- },\n-)\n-def subscribe_to_mailjet(user_id: int):\n- if not all(getattr(settings, x) for x in ['MAILJET_API_KEY', 'MAILJET_SECRET_KEY', 'MAILJET_CONTACT_LIST_ID']):\n- return\n-\n- user = apps.get_model('users.User').objects.get(pk=user_id)\n- mailjet = AppMailjet()\n-\n- mailjet.subscribe(user)\n-\n-\n @celery.task(\n autoretry_for=[RequestException, MailchimpException],\n retry_kwargs={\n", "issue": "\u0412\u044b\u043f\u0438\u043b\u0438\u0442\u044c \u0438\u043d\u0442\u0435\u0433\u0440\u0430\u0446\u0438\u044e \u0441 mailjet (\u043e\u043d\u0430 \u0434\u0430\u0432\u043d\u043e \u043d\u0435 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u0443\u0435\u0442\u0441\u044f)\n\n", "before_files": [{"content": "import environ\nimport os\nfrom celery.schedules import crontab\n\nroot = environ.Path(__file__) - 2 # three folder back (/a/b/c/ - 3 = /)\nenv = environ.Env(DEBUG=(bool, False)) # set default values and casting\nenviron.Env.read_env() # reading .env file\nSITE_ROOT = root()\n\nUSE_L10N = True\nUSE_i18N = True\n\nLANGUAGE_CODE = 'ru'\nLOCALE_PATHS = ['locale']\n\nINTERNAL_IPS = [\n '127.0.0.1',\n]\nFRONTEND_URL = 
'https://education.borshev.com'\n\nUSE_TZ = False\nTIME_ZONE = env('TIME_ZONE', cast=str, default='Europe/Moscow')\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nTEST_RUNNER = 'app.test.disable_test_command_runner.DisableTestCommandRunner'\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = env('SECRET_KEY', cast=str, default='s3cr3t')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = env('DEBUG', cast=bool, default=False)\nCI = env('CI', cast=bool, default=False)\nANONYMIZE_ENABLED = DEBUG\n\nABSOLUTE_HOST = env('ABSOLUTE_HOST', cast=str, default='https://edu-app.borshev.com')\nALLOWED_HOSTS = [\n 'edu-app.borshev.com',\n 'localhost',\n 'localhost:8000',\n 'education.borshev.com',\n ABSOLUTE_HOST.replace('https://', ''),\n]\n\nCORS_ORIGIN_WHITELIST = [\n 'https://pmdaily.ru',\n 'https://education.borshev.com',\n]\n\nCSRF_TRUSTED_ORIGINS = [\n 'pmdaily.ru',\n 'education.borshev.com',\n 'borshev.com',\n]\n\n\n# Application definition\n\nINSTALLED_APPS = [\n 'app',\n 'users',\n 'orders',\n 'products',\n 'shipping',\n 'tinkoff',\n 'triggers',\n 'magnets',\n\n 'corsheaders',\n 'hattori',\n 'anymail',\n 'rest_framework',\n 'rest_framework.authtoken',\n 'django_filters',\n\n 'axes',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n\n 'debug_toolbar',\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.common.CommonMiddleware',\n # 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.auth.middleware.RemoteUserMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'axes.middleware.AxesMiddleware',\n 'debug_toolbar.middleware.DebugToolbarMiddleware',\n]\n\nif not DEBUG and not CI:\n MIDDLEWARE.insert(1, 'whitenoise.middleware.WhiteNoiseMiddleware')\n\n\nREST_FRAMEWORK = {\n 'DEFAULT_PERMISSION_CLASSES': (\n 'rest_framework.permissions.IsAuthenticated',\n ),\n 'DEFAULT_AUTHENTICATION_CLASSES': (\n 'rest_framework.authentication.TokenAuthentication',\n ),\n 'DEFAULT_RENDERER_CLASSES': [\n 'app.renderers.AppJSONRenderer',\n ],\n 'DEFAULT_VERSIONING_CLASS': 'rest_framework.versioning.URLPathVersioning',\n 'DEFAULT_PAGINATION_CLASS': 'app.pagination.AppPagination',\n 'PAGE_SIZE': 20,\n}\n\nROOT_URLCONF = 'app.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'app.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/2.2/ref/settings/#databases\nDATABASES = {\n 'default': env.db(), # Raises ImproperlyConfigured exception if DATABASE_URL not in os.environ\n}\nAUTH_USER_MODEL = 'users.User'\nAUTHENTICATION_BACKENDS = [\n 
'axes.backends.AxesBackend',\n 'django.contrib.auth.backends.ModelBackend',\n 'django.contrib.auth.backends.RemoteUserBackend',\n]\nHEALTH_CHECKS_ERROR_CODE = 503\nHEALTH_CHECKS = {\n 'db': 'django_healthchecks.contrib.check_database',\n}\n\nMEDIA_URL = env('MEDIA_URL', default='/media/')\n\nSTATIC_URL = env('STATIC_URL', default='/static/')\nSTATIC_ROOT = env('STATIC_ROOT')\n\nSENTRY_DSN = env('SENTRY_DSN', cast=str, default='')\n\nif not DEBUG and len(SENTRY_DSN):\n import sentry_sdk\n from sentry_sdk.integrations.celery import CeleryIntegration\n from sentry_sdk.integrations.django import DjangoIntegration\n from sentry_sdk.integrations.redis import RedisIntegration\n\n sentry_sdk.init(\n dsn=SENTRY_DSN,\n integrations=[DjangoIntegration(), CeleryIntegration(), RedisIntegration()],\n )\n\nBROKER_URL = env('CELERY_BACKEND')\nCELERY_ALWAYS_EAGER = env('CELERY_ALWAYS_EAGER', cast=bool, default=DEBUG) # by default in debug mode we run all celery tasks in foregroud.\nCELERY_TIMEZONE = TIME_ZONE\nCELERY_ENABLE_UTC = False\nCELERYBEAT_SCHEDULE = {\n 'run_started_purchase_trigger': {\n 'task': 'triggers.tasks.check_for_started_purchase_triggers',\n 'schedule': crontab(hour='*', minute=15),\n },\n 'run_record_feedback_trigger': {\n 'task': 'triggers.tasks.check_for_record_feedback_triggers',\n 'schedule': crontab(hour='*', minute=15),\n },\n 'ship_unshipped_orders': {\n 'task': 'orders.tasks.ship_unshipped_orders',\n 'schedule': crontab(hour='*', minute='*/2'),\n },\n}\n\n\nAWS_ACCESS_KEY_ID = env('AWS_ACCESS_KEY_ID', default=None)\nAWS_SECRET_ACCESS_KEY = env('AWS_SECRET_ACCESS_KEY', default=None)\nAWS_STORAGE_BUCKET_NAME = env('AWS_STORAGE_BUCKET_NAME', default=None)\nAWS_S3_REGION_NAME = env('AWS_S3_REGION_NAME', default=None)\nAWS_S3_ENDPOINT_URL = env('AWS_S3_ENDPOINT_URL', default=None)\n\nEMAIL_ENABLED = env('EMAIL_ENABLED', cast=bool, default=False)\n\nEMAIL_BACKEND = env('EMAIL_BACKEND', default='django.core.mail.backends.console.EmailBackend')\n\nMAILJET_API_KEY = env('MAILJET_API_KEY', default='')\nMAILJET_SECRET_KEY = env('MAILJET_SECRET_KEY', default='')\nMAILJET_CONTACT_LIST_ID = env('MAILJET_CONTACT_LIST_ID', cast=int, default=None)\n\nMAILCHIMP_API_KEY = env('MAILCHIMP_API_KEY', default='')\nMAILCHIMP_CONTACT_LIST_ID = env('MAILCHIMP_CONTACT_LIST_ID', cast=str, default=None)\n\nDEFAULT_FROM_EMAIL = env('EMAIL_FROM', cast=str, default='')\nANYMAIL = {\n 'POSTMARK_SERVER_TOKEN': env('POSTMARK_SERVER_TOKEN', cast=str, default=''),\n 'DEBUG_API_REQUESTS': env('DEBUG'),\n}\n\nCLICKMEETING_API_KEY = env('CLICKMEETING_API_KEY', default=None, cast=str)\n\nZOOMUS_API_KEY = env('ZOOMUS_API_KEY', default=None, cast=str)\nZOOMUS_API_SECRET = env('ZOOMUS_API_SECRET', default=None, cast=str)\n\nTINKOFF_TERMINAL_KEY = env('TINKOFF_TERMINAL_KEY', default=None)\nTINKOFF_TERMINAL_PASSWORD = env('TINKOFF_TERMINAL_PASSWORD', default=None)\n\nSEND_HAPPINESS_MESSAGES = env('SEND_HAPPINESS_MESSAGES', cast=bool, default=False)\n\n# Uncomment this lines to catch all runtime warnings as errors\n\n# import warnings # noqa\n# warnings.filterwarnings(\n# 'error', r\".*\",\n# RuntimeWarning, r\".*\"\n# )\n", "path": "src/app/settings.py"}, {"content": "import mailjet_rest as mailjet\nfrom django.conf import settings\nfrom django.utils.functional import cached_property\n\nfrom users.models import User\n\n\nclass AppMailjetWrongResponseException(Exception):\n pass\n\n\nclass AppMailjet:\n def __init__(self):\n pass\n\n @cached_property\n def client(self):\n return mailjet.Client(\n 
auth=(settings.MAILJET_API_KEY, settings.MAILJET_SECRET_KEY),\n version='v3',\n )\n\n def subscribe(self, user: User):\n response = self.client.contactslist_managecontact.create(\n id=settings.MAILJET_CONTACT_LIST_ID,\n data={\n 'Action': 'addnoforce',\n 'Email': user.email,\n 'Properties': {\n 'name': str(user),\n 'firstname': user.first_name,\n 'lastname': user.last_name,\n },\n },\n )\n\n if response.status_code != 201:\n raise AppMailjetWrongResponseException(f'Wrong response from mailjet: {response.status_code}. Content: {response.content}')\n", "path": "src/app/integrations/mailjet.py"}, {"content": "from typing import List, Union\n\nfrom anymail.exceptions import AnymailRequestsAPIError\nfrom django.apps import apps\nfrom django.conf import settings\nfrom requests.exceptions import RequestException\n\nfrom app.celery import celery\nfrom app.integrations import tg\nfrom app.integrations.clickmeeting import ClickMeetingClient, ClickMeetingHTTPException\nfrom app.integrations.mailchimp import AppMailchimp, MailchimpException\nfrom app.integrations.mailjet import AppMailjet, AppMailjetWrongResponseException\nfrom app.integrations.zoomus import ZoomusClient, ZoomusHTTPException\nfrom app.mail.owl import TemplOwl\n\n\[email protected](\n autoretry_for=[AnymailRequestsAPIError],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef send_mail(to: Union[List, str], template_id, subject: str = '', ctx: dict = None, disable_antispam=False):\n TemplOwl(\n to=to,\n template_id=template_id,\n subject=subject,\n ctx=ctx,\n disable_antispam=disable_antispam,\n ).send()\n\n\[email protected](\n autoretry_for=[RequestException, ClickMeetingHTTPException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef invite_to_clickmeeting(room_url: str, email: str):\n client = ClickMeetingClient()\n client.invite(room_url, email)\n\n\[email protected](\n autoretry_for=[RequestException, ZoomusHTTPException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef invite_to_zoomus(webinar_id: str, user_id: int):\n user = apps.get_model('users.User').objects.get(pk=user_id)\n\n client = ZoomusClient()\n client.invite(webinar_id, user)\n\n\[email protected](\n autoretry_for=[RequestException, AppMailjetWrongResponseException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef subscribe_to_mailjet(user_id: int):\n if not all(getattr(settings, x) for x in ['MAILJET_API_KEY', 'MAILJET_SECRET_KEY', 'MAILJET_CONTACT_LIST_ID']):\n return\n\n user = apps.get_model('users.User').objects.get(pk=user_id)\n mailjet = AppMailjet()\n\n mailjet.subscribe(user)\n\n\[email protected](\n autoretry_for=[RequestException, MailchimpException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef subscribe_to_mailchimp(user_id: int, list_id=None, tags=None):\n if list_id is None:\n list_id = settings.MAILCHIMP_CONTACT_LIST_ID\n\n if not list_id:\n return\n\n mailchimp = AppMailchimp()\n\n mailchimp.subscribe_django_user(\n list_id=list_id,\n user=apps.get_model('users.User').objects.get(pk=user_id),\n tags=tags,\n )\n\n\[email protected]\ndef send_happiness_message(text):\n tg.send_happiness_message(text)\n", "path": "src/app/tasks.py"}], "after_files": [{"content": "import environ\nimport os\nfrom celery.schedules import crontab\n\nroot = environ.Path(__file__) - 2 # three folder back (/a/b/c/ - 3 = /)\nenv = environ.Env(DEBUG=(bool, False)) # set default values and casting\nenviron.Env.read_env() # reading .env file\nSITE_ROOT = root()\n\nUSE_L10N = 
True\nUSE_i18N = True\n\nLANGUAGE_CODE = 'ru'\nLOCALE_PATHS = ['locale']\n\nINTERNAL_IPS = [\n '127.0.0.1',\n]\nFRONTEND_URL = 'https://education.borshev.com'\n\nUSE_TZ = False\nTIME_ZONE = env('TIME_ZONE', cast=str, default='Europe/Moscow')\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nTEST_RUNNER = 'app.test.disable_test_command_runner.DisableTestCommandRunner'\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = env('SECRET_KEY', cast=str, default='s3cr3t')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = env('DEBUG', cast=bool, default=False)\nCI = env('CI', cast=bool, default=False)\nANONYMIZE_ENABLED = DEBUG\n\nABSOLUTE_HOST = env('ABSOLUTE_HOST', cast=str, default='https://edu-app.borshev.com')\nALLOWED_HOSTS = [\n 'edu-app.borshev.com',\n 'localhost',\n 'localhost:8000',\n 'education.borshev.com',\n ABSOLUTE_HOST.replace('https://', ''),\n]\n\nCORS_ORIGIN_WHITELIST = [\n 'https://pmdaily.ru',\n 'https://education.borshev.com',\n]\n\nCSRF_TRUSTED_ORIGINS = [\n 'pmdaily.ru',\n 'education.borshev.com',\n 'borshev.com',\n]\n\n\n# Application definition\n\nINSTALLED_APPS = [\n 'app',\n 'users',\n 'orders',\n 'products',\n 'shipping',\n 'tinkoff',\n 'triggers',\n 'magnets',\n\n 'corsheaders',\n 'hattori',\n 'anymail',\n 'rest_framework',\n 'rest_framework.authtoken',\n 'django_filters',\n\n 'axes',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n\n 'debug_toolbar',\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.common.CommonMiddleware',\n # 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.auth.middleware.RemoteUserMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'axes.middleware.AxesMiddleware',\n 'debug_toolbar.middleware.DebugToolbarMiddleware',\n]\n\nif not DEBUG and not CI:\n MIDDLEWARE.insert(1, 'whitenoise.middleware.WhiteNoiseMiddleware')\n\n\nREST_FRAMEWORK = {\n 'DEFAULT_PERMISSION_CLASSES': (\n 'rest_framework.permissions.IsAuthenticated',\n ),\n 'DEFAULT_AUTHENTICATION_CLASSES': (\n 'rest_framework.authentication.TokenAuthentication',\n ),\n 'DEFAULT_RENDERER_CLASSES': [\n 'app.renderers.AppJSONRenderer',\n ],\n 'DEFAULT_VERSIONING_CLASS': 'rest_framework.versioning.URLPathVersioning',\n 'DEFAULT_PAGINATION_CLASS': 'app.pagination.AppPagination',\n 'PAGE_SIZE': 20,\n}\n\nROOT_URLCONF = 'app.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'app.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/2.2/ref/settings/#databases\nDATABASES = {\n 'default': env.db(), # Raises 
ImproperlyConfigured exception if DATABASE_URL not in os.environ\n}\nAUTH_USER_MODEL = 'users.User'\nAUTHENTICATION_BACKENDS = [\n 'axes.backends.AxesBackend',\n 'django.contrib.auth.backends.ModelBackend',\n 'django.contrib.auth.backends.RemoteUserBackend',\n]\nHEALTH_CHECKS_ERROR_CODE = 503\nHEALTH_CHECKS = {\n 'db': 'django_healthchecks.contrib.check_database',\n}\n\nMEDIA_URL = env('MEDIA_URL', default='/media/')\n\nSTATIC_URL = env('STATIC_URL', default='/static/')\nSTATIC_ROOT = env('STATIC_ROOT')\n\nSENTRY_DSN = env('SENTRY_DSN', cast=str, default='')\n\nif not DEBUG and len(SENTRY_DSN):\n import sentry_sdk\n from sentry_sdk.integrations.celery import CeleryIntegration\n from sentry_sdk.integrations.django import DjangoIntegration\n from sentry_sdk.integrations.redis import RedisIntegration\n\n sentry_sdk.init(\n dsn=SENTRY_DSN,\n integrations=[DjangoIntegration(), CeleryIntegration(), RedisIntegration()],\n )\n\nBROKER_URL = env('CELERY_BACKEND')\nCELERY_ALWAYS_EAGER = env('CELERY_ALWAYS_EAGER', cast=bool, default=DEBUG) # by default in debug mode we run all celery tasks in foregroud.\nCELERY_TIMEZONE = TIME_ZONE\nCELERY_ENABLE_UTC = False\nCELERYBEAT_SCHEDULE = {\n 'run_started_purchase_trigger': {\n 'task': 'triggers.tasks.check_for_started_purchase_triggers',\n 'schedule': crontab(hour='*', minute=15),\n },\n 'run_record_feedback_trigger': {\n 'task': 'triggers.tasks.check_for_record_feedback_triggers',\n 'schedule': crontab(hour='*', minute=15),\n },\n 'ship_unshipped_orders': {\n 'task': 'orders.tasks.ship_unshipped_orders',\n 'schedule': crontab(hour='*', minute='*/2'),\n },\n}\n\n\nAWS_ACCESS_KEY_ID = env('AWS_ACCESS_KEY_ID', default=None)\nAWS_SECRET_ACCESS_KEY = env('AWS_SECRET_ACCESS_KEY', default=None)\nAWS_STORAGE_BUCKET_NAME = env('AWS_STORAGE_BUCKET_NAME', default=None)\nAWS_S3_REGION_NAME = env('AWS_S3_REGION_NAME', default=None)\nAWS_S3_ENDPOINT_URL = env('AWS_S3_ENDPOINT_URL', default=None)\n\nEMAIL_ENABLED = env('EMAIL_ENABLED', cast=bool, default=False)\n\nEMAIL_BACKEND = env('EMAIL_BACKEND', default='django.core.mail.backends.console.EmailBackend')\n\nMAILCHIMP_API_KEY = env('MAILCHIMP_API_KEY', default='')\nMAILCHIMP_CONTACT_LIST_ID = env('MAILCHIMP_CONTACT_LIST_ID', cast=str, default=None)\n\nDEFAULT_FROM_EMAIL = env('EMAIL_FROM', cast=str, default='')\nANYMAIL = {\n 'POSTMARK_SERVER_TOKEN': env('POSTMARK_SERVER_TOKEN', cast=str, default=''),\n 'DEBUG_API_REQUESTS': env('DEBUG'),\n}\n\nCLICKMEETING_API_KEY = env('CLICKMEETING_API_KEY', default=None, cast=str)\n\nZOOMUS_API_KEY = env('ZOOMUS_API_KEY', default=None, cast=str)\nZOOMUS_API_SECRET = env('ZOOMUS_API_SECRET', default=None, cast=str)\n\nTINKOFF_TERMINAL_KEY = env('TINKOFF_TERMINAL_KEY', default=None)\nTINKOFF_TERMINAL_PASSWORD = env('TINKOFF_TERMINAL_PASSWORD', default=None)\n\nSEND_HAPPINESS_MESSAGES = env('SEND_HAPPINESS_MESSAGES', cast=bool, default=False)\n\n# Uncomment this lines to catch all runtime warnings as errors\n\n# import warnings # noqa\n# warnings.filterwarnings(\n# 'error', r\".*\",\n# RuntimeWarning, r\".*\"\n# )\n", "path": "src/app/settings.py"}, {"content": null, "path": "src/app/integrations/mailjet.py"}, {"content": "from typing import List, Union\n\nfrom anymail.exceptions import AnymailRequestsAPIError\nfrom django.apps import apps\nfrom django.conf import settings\nfrom requests.exceptions import RequestException\n\nfrom app.celery import celery\nfrom app.integrations import tg\nfrom app.integrations.clickmeeting import ClickMeetingClient, ClickMeetingHTTPException\nfrom 
app.integrations.mailchimp import AppMailchimp, MailchimpException\nfrom app.integrations.zoomus import ZoomusClient, ZoomusHTTPException\nfrom app.mail.owl import TemplOwl\n\n\[email protected](\n autoretry_for=[AnymailRequestsAPIError],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef send_mail(to: Union[List, str], template_id, subject: str = '', ctx: dict = None, disable_antispam=False):\n TemplOwl(\n to=to,\n template_id=template_id,\n subject=subject,\n ctx=ctx,\n disable_antispam=disable_antispam,\n ).send()\n\n\[email protected](\n autoretry_for=[RequestException, ClickMeetingHTTPException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef invite_to_clickmeeting(room_url: str, email: str):\n client = ClickMeetingClient()\n client.invite(room_url, email)\n\n\[email protected](\n autoretry_for=[RequestException, ZoomusHTTPException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef invite_to_zoomus(webinar_id: str, user_id: int):\n user = apps.get_model('users.User').objects.get(pk=user_id)\n\n client = ZoomusClient()\n client.invite(webinar_id, user)\n\n\[email protected](\n autoretry_for=[RequestException, MailchimpException],\n retry_kwargs={\n 'max_retries': 10,\n 'countdown': 5,\n },\n)\ndef subscribe_to_mailchimp(user_id: int, list_id=None, tags=None):\n if list_id is None:\n list_id = settings.MAILCHIMP_CONTACT_LIST_ID\n\n if not list_id:\n return\n\n mailchimp = AppMailchimp()\n\n mailchimp.subscribe_django_user(\n list_id=list_id,\n user=apps.get_model('users.User').objects.get(pk=user_id),\n tags=tags,\n )\n\n\[email protected]\ndef send_happiness_message(text):\n tg.send_happiness_message(text)\n", "path": "src/app/tasks.py"}]} | 3,963 | 782 |
gh_patches_debug_5021 | rasdani/github-patches | git_diff | huggingface__transformers-10070 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BertGenerationTokenizer provides an unexpected value for BertGenerationModel
- `transformers` version: 4.2.2
- PyTorch version (GPU?): 1.7.0+cu101
- tokenizers: @n1t0, @LysandreJik
## Information
In both models, BertGenerationEncoder and BertGenerationDecoder, there is no need for `token_type_ids`; however, BertGenerationTokenizer provides it. This issue is raised if you want to pass the tokenizer results directly to the model with `**`.
If it is meant to be like this and the user should be aware of this behaviour, I think a change should be made in the documentation.
Note: another issue with BertGenerationTokenizer is that it requires the sentencepiece module. Do you prefer that the user install it separately, or should it be included in the transformers dependencies?
--- END ISSUE ---
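For illustration, a minimal sketch of how the mismatch can surface (assuming the `google/bert_for_seq_generation_L-24_bbc_encoder` checkpoint taken from `tokenizer_url` in the file below, with `sentencepiece` installed; this snippet is not part of the original report):
```python
from transformers import BertGenerationEncoder, BertGenerationTokenizer

checkpoint = "google/bert_for_seq_generation_L-24_bbc_encoder"
tokenizer = BertGenerationTokenizer.from_pretrained(checkpoint)
model = BertGenerationEncoder.from_pretrained(checkpoint)

inputs = tokenizer("This is a test sentence.", return_tensors="pt")
print(inputs.keys())  # may include 'token_type_ids'

try:
    outputs = model(**inputs)  # forward() has no token_type_ids argument
except TypeError as err:
    print(err)
```
One way to avoid the extra key, used by the fix recorded later in this entry, is to restrict the tokenizer's `model_input_names` to `input_ids` and `attention_mask`.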
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/transformers/models/bert_generation/tokenization_bert_generation.py`
Content:
```
1 # coding=utf-8
2 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """ Tokenization class for model BertGeneration."""
16
17
18 import os
19 from shutil import copyfile
20 from typing import List, Optional, Tuple
21
22 import sentencepiece as spm
23
24 from ...tokenization_utils import PreTrainedTokenizer
25 from ...utils import logging
26
27
28 logger = logging.get_logger(__name__)
29
30 VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}
31
32 tokenizer_url = "https://huggingface.co/google/bert_for_seq_generation_L-24_bbc_encoder/resolve/main/spiece.model"
33
34
35 class BertGenerationTokenizer(PreTrainedTokenizer):
36 """
37 Construct a BertGeneration tokenizer. Based on `SentencePiece <https://github.com/google/sentencepiece>`__.
38
39 This tokenizer inherits from :class:`~transformers.PreTrainedTokenizer` which contains most of the main methods.
40 Users should refer to this superclass for more information regarding those methods.
41
42 Args:
43 vocab_file (:obj:`str`):
44 `SentencePiece <https://github.com/google/sentencepiece>`__ file (generally has a `.spm` extension) that
45 contains the vocabulary necessary to instantiate a tokenizer.
46 eos_token (:obj:`str`, `optional`, defaults to :obj:`"</s>"`):
47 The end of sequence token.
48 bos_token (:obj:`str`, `optional`, defaults to :obj:`"<s>"`):
49 The begin of sequence token.
50 unk_token (:obj:`str`, `optional`, defaults to :obj:`"<unk>"`):
51 The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
52 token instead.
53 pad_token (:obj:`str`, `optional`, defaults to :obj:`"<pad>"`):
54 The token used for padding, for example when batching sequences of different lengths.
55 """
56
57 vocab_files_names = VOCAB_FILES_NAMES
58 pretrained_vocab_files_map = {"vocab_file": {"bert_for_seq_generation": tokenizer_url}}
59 max_model_input_sizes = {"bert_for_seq_generation": 512}
60 prefix_tokens: List[int] = []
61
62 def __init__(
63 self,
64 vocab_file,
65 bos_token="<s>",
66 eos_token="</s>",
67 unk_token="<unk>",
68 pad_token="<pad>",
69 sep_token="<::::>",
70 **kwargs
71 ):
72 # Add extra_ids to the special token list
73 super().__init__(
74 bos_token=bos_token,
75 eos_token=eos_token,
76 unk_token=unk_token,
77 pad_token=pad_token,
78 sep_token=sep_token,
79 **kwargs,
80 )
81
82 self.vocab_file = vocab_file
83
84 self.sp_model = spm.SentencePieceProcessor()
85 self.sp_model.Load(vocab_file)
86
87 @property
88 def vocab_size(self):
89 return self.sp_model.get_piece_size()
90
91 def get_vocab(self):
92 vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
93 vocab.update(self.added_tokens_encoder)
94 return vocab
95
96 def __getstate__(self):
97 state = self.__dict__.copy()
98 state["sp_model"] = None
99 return state
100
101 def __setstate__(self, d):
102 self.__dict__ = d
103 self.sp_model = spm.SentencePieceProcessor()
104 self.sp_model.Load(self.vocab_file)
105
106 def _tokenize(self, text, sample=False):
107 """Take as input a string and return a list of strings (tokens) for words/sub-words"""
108 if not sample:
109 pieces = self.sp_model.EncodeAsPieces(text)
110 else:
111 pieces = self.sp_model.SampleEncodeAsPieces(text, 64, 0.1)
112 return pieces
113
114 def _convert_token_to_id(self, token):
115 """ Converts a token (str) in an id using the vocab. """
116 return self.sp_model.piece_to_id(token)
117
118 def _convert_id_to_token(self, index):
119 """Converts an index (integer) in a token (str) using the vocab."""
120 token = self.sp_model.IdToPiece(index)
121 return token
122
123 def convert_tokens_to_string(self, tokens):
124 """ Converts a sequence of tokens (string) in a single string. """
125 out_string = self.sp_model.decode_pieces(tokens)
126 return out_string
127
128 def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
129 if not os.path.isdir(save_directory):
130 logger.error("Vocabulary path ({}) should be a directory".format(save_directory))
131 return
132 out_vocab_file = os.path.join(
133 save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
134 )
135
136 if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):
137 copyfile(self.vocab_file, out_vocab_file)
138
139 return (out_vocab_file,)
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/transformers/models/bert_generation/tokenization_bert_generation.py b/src/transformers/models/bert_generation/tokenization_bert_generation.py
--- a/src/transformers/models/bert_generation/tokenization_bert_generation.py
+++ b/src/transformers/models/bert_generation/tokenization_bert_generation.py
@@ -58,6 +58,7 @@
pretrained_vocab_files_map = {"vocab_file": {"bert_for_seq_generation": tokenizer_url}}
max_model_input_sizes = {"bert_for_seq_generation": 512}
prefix_tokens: List[int] = []
+ model_input_names = ["input_ids", "attention_mask"]
def __init__(
self,
| {"golden_diff": "diff --git a/src/transformers/models/bert_generation/tokenization_bert_generation.py b/src/transformers/models/bert_generation/tokenization_bert_generation.py\n--- a/src/transformers/models/bert_generation/tokenization_bert_generation.py\n+++ b/src/transformers/models/bert_generation/tokenization_bert_generation.py\n@@ -58,6 +58,7 @@\n pretrained_vocab_files_map = {\"vocab_file\": {\"bert_for_seq_generation\": tokenizer_url}}\n max_model_input_sizes = {\"bert_for_seq_generation\": 512}\n prefix_tokens: List[int] = []\n+ model_input_names = [\"input_ids\", \"attention_mask\"]\n \n def __init__(\n self,\n", "issue": "BertGenerationTokenizer provides an unexpected value for BertGenerationModel\n- `transformers` version: 4.2.2\r\n- PyTorch version (GPU?): 1.7.0+cu101\r\n- tokenizers: @n1t0, @LysandreJik\r\n\r\n## Information\r\n\r\nin both models BertGenerationEncoder, BertGenerationDecoder, there's no need for `token_type_ids` however the BertGenerationTokenizer provides it, this issue will be raised if you want to input the tokenizer results directly with `**`,\r\n\r\nand if it meant to be like this, and the user should be aware of this behaviour, I think a change should be in the documentation.\r\n\r\nNote: Another issue with BertGenerationTokenizer is the necessity of sentencepiece module, do you prefer that it should for the user to install it separately or it should be included in transformers dependencies.\r\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\" Tokenization class for model BertGeneration.\"\"\"\n\n\nimport os\nfrom shutil import copyfile\nfrom typing import List, Optional, Tuple\n\nimport sentencepiece as spm\n\nfrom ...tokenization_utils import PreTrainedTokenizer\nfrom ...utils import logging\n\n\nlogger = logging.get_logger(__name__)\n\nVOCAB_FILES_NAMES = {\"vocab_file\": \"spiece.model\"}\n\ntokenizer_url = \"https://huggingface.co/google/bert_for_seq_generation_L-24_bbc_encoder/resolve/main/spiece.model\"\n\n\nclass BertGenerationTokenizer(PreTrainedTokenizer):\n \"\"\"\n Construct a BertGeneration tokenizer. Based on `SentencePiece <https://github.com/google/sentencepiece>`__.\n\n This tokenizer inherits from :class:`~transformers.PreTrainedTokenizer` which contains most of the main methods.\n Users should refer to this superclass for more information regarding those methods.\n\n Args:\n vocab_file (:obj:`str`):\n `SentencePiece <https://github.com/google/sentencepiece>`__ file (generally has a `.spm` extension) that\n contains the vocabulary necessary to instantiate a tokenizer.\n eos_token (:obj:`str`, `optional`, defaults to :obj:`\"</s>\"`):\n The end of sequence token.\n bos_token (:obj:`str`, `optional`, defaults to :obj:`\"<s>\"`):\n The begin of sequence token.\n unk_token (:obj:`str`, `optional`, defaults to :obj:`\"<unk>\"`):\n The unknown token. 
A token that is not in the vocabulary cannot be converted to an ID and is set to be this\n token instead.\n pad_token (:obj:`str`, `optional`, defaults to :obj:`\"<pad>\"`):\n The token used for padding, for example when batching sequences of different lengths.\n \"\"\"\n\n vocab_files_names = VOCAB_FILES_NAMES\n pretrained_vocab_files_map = {\"vocab_file\": {\"bert_for_seq_generation\": tokenizer_url}}\n max_model_input_sizes = {\"bert_for_seq_generation\": 512}\n prefix_tokens: List[int] = []\n\n def __init__(\n self,\n vocab_file,\n bos_token=\"<s>\",\n eos_token=\"</s>\",\n unk_token=\"<unk>\",\n pad_token=\"<pad>\",\n sep_token=\"<::::>\",\n **kwargs\n ):\n # Add extra_ids to the special token list\n super().__init__(\n bos_token=bos_token,\n eos_token=eos_token,\n unk_token=unk_token,\n pad_token=pad_token,\n sep_token=sep_token,\n **kwargs,\n )\n\n self.vocab_file = vocab_file\n\n self.sp_model = spm.SentencePieceProcessor()\n self.sp_model.Load(vocab_file)\n\n @property\n def vocab_size(self):\n return self.sp_model.get_piece_size()\n\n def get_vocab(self):\n vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}\n vocab.update(self.added_tokens_encoder)\n return vocab\n\n def __getstate__(self):\n state = self.__dict__.copy()\n state[\"sp_model\"] = None\n return state\n\n def __setstate__(self, d):\n self.__dict__ = d\n self.sp_model = spm.SentencePieceProcessor()\n self.sp_model.Load(self.vocab_file)\n\n def _tokenize(self, text, sample=False):\n \"\"\"Take as input a string and return a list of strings (tokens) for words/sub-words\"\"\"\n if not sample:\n pieces = self.sp_model.EncodeAsPieces(text)\n else:\n pieces = self.sp_model.SampleEncodeAsPieces(text, 64, 0.1)\n return pieces\n\n def _convert_token_to_id(self, token):\n \"\"\" Converts a token (str) in an id using the vocab. \"\"\"\n return self.sp_model.piece_to_id(token)\n\n def _convert_id_to_token(self, index):\n \"\"\"Converts an index (integer) in a token (str) using the vocab.\"\"\"\n token = self.sp_model.IdToPiece(index)\n return token\n\n def convert_tokens_to_string(self, tokens):\n \"\"\" Converts a sequence of tokens (string) in a single string. \"\"\"\n out_string = self.sp_model.decode_pieces(tokens)\n return out_string\n\n def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:\n if not os.path.isdir(save_directory):\n logger.error(\"Vocabulary path ({}) should be a directory\".format(save_directory))\n return\n out_vocab_file = os.path.join(\n save_directory, (filename_prefix + \"-\" if filename_prefix else \"\") + VOCAB_FILES_NAMES[\"vocab_file\"]\n )\n\n if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):\n copyfile(self.vocab_file, out_vocab_file)\n\n return (out_vocab_file,)\n", "path": "src/transformers/models/bert_generation/tokenization_bert_generation.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright (c) 2020, NVIDIA CORPORATION. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\" Tokenization class for model BertGeneration.\"\"\"\n\n\nimport os\nfrom shutil import copyfile\nfrom typing import List, Optional, Tuple\n\nimport sentencepiece as spm\n\nfrom ...tokenization_utils import PreTrainedTokenizer\nfrom ...utils import logging\n\n\nlogger = logging.get_logger(__name__)\n\nVOCAB_FILES_NAMES = {\"vocab_file\": \"spiece.model\"}\n\ntokenizer_url = \"https://huggingface.co/google/bert_for_seq_generation_L-24_bbc_encoder/resolve/main/spiece.model\"\n\n\nclass BertGenerationTokenizer(PreTrainedTokenizer):\n \"\"\"\n Construct a BertGeneration tokenizer. Based on `SentencePiece <https://github.com/google/sentencepiece>`__.\n\n This tokenizer inherits from :class:`~transformers.PreTrainedTokenizer` which contains most of the main methods.\n Users should refer to this superclass for more information regarding those methods.\n\n Args:\n vocab_file (:obj:`str`):\n `SentencePiece <https://github.com/google/sentencepiece>`__ file (generally has a `.spm` extension) that\n contains the vocabulary necessary to instantiate a tokenizer.\n eos_token (:obj:`str`, `optional`, defaults to :obj:`\"</s>\"`):\n The end of sequence token.\n bos_token (:obj:`str`, `optional`, defaults to :obj:`\"<s>\"`):\n The begin of sequence token.\n unk_token (:obj:`str`, `optional`, defaults to :obj:`\"<unk>\"`):\n The unknown token. 
A token that is not in the vocabulary cannot be converted to an ID and is set to be this\n token instead.\n pad_token (:obj:`str`, `optional`, defaults to :obj:`\"<pad>\"`):\n The token used for padding, for example when batching sequences of different lengths.\n \"\"\"\n\n vocab_files_names = VOCAB_FILES_NAMES\n pretrained_vocab_files_map = {\"vocab_file\": {\"bert_for_seq_generation\": tokenizer_url}}\n max_model_input_sizes = {\"bert_for_seq_generation\": 512}\n prefix_tokens: List[int] = []\n model_input_names = [\"input_ids\", \"attention_mask\"]\n\n def __init__(\n self,\n vocab_file,\n bos_token=\"<s>\",\n eos_token=\"</s>\",\n unk_token=\"<unk>\",\n pad_token=\"<pad>\",\n sep_token=\"<::::>\",\n **kwargs\n ):\n # Add extra_ids to the special token list\n super().__init__(\n bos_token=bos_token,\n eos_token=eos_token,\n unk_token=unk_token,\n pad_token=pad_token,\n sep_token=sep_token,\n **kwargs,\n )\n\n self.vocab_file = vocab_file\n\n self.sp_model = spm.SentencePieceProcessor()\n self.sp_model.Load(vocab_file)\n\n @property\n def vocab_size(self):\n return self.sp_model.get_piece_size()\n\n def get_vocab(self):\n vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}\n vocab.update(self.added_tokens_encoder)\n return vocab\n\n def __getstate__(self):\n state = self.__dict__.copy()\n state[\"sp_model\"] = None\n return state\n\n def __setstate__(self, d):\n self.__dict__ = d\n self.sp_model = spm.SentencePieceProcessor()\n self.sp_model.Load(self.vocab_file)\n\n def _tokenize(self, text, sample=False):\n \"\"\"Take as input a string and return a list of strings (tokens) for words/sub-words\"\"\"\n if not sample:\n pieces = self.sp_model.EncodeAsPieces(text)\n else:\n pieces = self.sp_model.SampleEncodeAsPieces(text, 64, 0.1)\n return pieces\n\n def _convert_token_to_id(self, token):\n \"\"\" Converts a token (str) in an id using the vocab. \"\"\"\n return self.sp_model.piece_to_id(token)\n\n def _convert_id_to_token(self, index):\n \"\"\"Converts an index (integer) in a token (str) using the vocab.\"\"\"\n token = self.sp_model.IdToPiece(index)\n return token\n\n def convert_tokens_to_string(self, tokens):\n \"\"\" Converts a sequence of tokens (string) in a single string. \"\"\"\n out_string = self.sp_model.decode_pieces(tokens)\n return out_string\n\n def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:\n if not os.path.isdir(save_directory):\n logger.error(\"Vocabulary path ({}) should be a directory\".format(save_directory))\n return\n out_vocab_file = os.path.join(\n save_directory, (filename_prefix + \"-\" if filename_prefix else \"\") + VOCAB_FILES_NAMES[\"vocab_file\"]\n )\n\n if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):\n copyfile(self.vocab_file, out_vocab_file)\n\n return (out_vocab_file,)\n", "path": "src/transformers/models/bert_generation/tokenization_bert_generation.py"}]} | 1,956 | 148 |
gh_patches_debug_22964 | rasdani/github-patches | git_diff | Cloud-CV__EvalAI-2132 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update seed script to add challenge slug
## Current Behaviour:
Currently, when the challenge is created using the seed script, the `slug` field is not populated.
## Expected Behaviour:
When the seed script is run, the `slug` field must be populated with the challenge title, and it must include a year in the name to keep it unique, e.g. `abc-challenge-2019`.
--- END ISSUE ---
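As a rough illustration (a hypothetical sketch; the helper name and exact format are assumptions, not necessarily the change the maintainers made), the slug could be derived from the challenge title plus the current year:
```python
from datetime import datetime

from django.utils.text import slugify


def get_challenge_slug(title):
    """Build a slug like 'abc-challenge-2019' from a challenge title."""
    return "{}-{}".format(slugify(title), datetime.utcnow().year)
```
The seed script could then pass `slug=get_challenge_slug(title)` when creating each `Challenge` object.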
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/seed.py`
Content:
```
1 # Command to run : python manage.py shell < scripts/seed.py
2 import os
3 import random
4 import string
5
6 from datetime import timedelta
7
8 from django.conf import settings
9 from django.contrib.auth.models import User
10 from django.core.files.uploadedfile import SimpleUploadedFile
11 from django.utils import timezone
12
13 from allauth.account.models import EmailAddress
14 from faker import Factory
15
16 from challenges.models import Challenge, ChallengePhase, DatasetSplit, Leaderboard, ChallengePhaseSplit
17 from hosts.models import ChallengeHostTeam, ChallengeHost
18 from participants.models import Participant, ParticipantTeam
19
20 fake = Factory.create()
21
22 NUMBER_OF_CHALLENGES = 1
23 NUMBER_OF_PHASES = 2
24 NUMBER_OF_DATASET_SPLITS = 2
25 DATASET_SPLIT_ITERATOR = 0
26
27 try:
28 xrange # Python 2
29 except NameError:
30 xrange = range # Python 3
31
32
33 def check_database():
34 if len(EmailAddress.objects.all()) > 0:
35 print("Are you sure you want to wipe the existing development database and reseed it? (Y/N)")
36 if settings.TEST or input().lower() == "y":
37 destroy_database()
38 return True
39 else:
40 return False
41 else:
42 return True
43
44
45 def destroy_database():
46 print("Destroying existing database...")
47 print("Destroying Participant objects...")
48 Participant.objects.all().delete()
49 print("Destroying ParticipantTeam objects...")
50 ParticipantTeam.objects.all().delete()
51 print("Destroying ChallengePhaseSplit objects...")
52 ChallengePhaseSplit.objects.all().delete()
53 print("Destroying DatasetSplit objects...")
54 DatasetSplit.objects.all().delete()
55 print("Destroying ChallengePhase objects...")
56 ChallengePhase.objects.all().delete()
57 print("Destroying Leaderboard objects...")
58 Leaderboard.objects.all().delete()
59 print("Destroying Challenge objects...")
60 Challenge.objects.all().delete()
61 print("Destroying ChallengeHostTeam objects...")
62 ChallengeHostTeam.objects.all().delete()
63 print("Destroying ChallengeHost objects...")
64 ChallengeHost.objects.all().delete()
65 print("Destroying User objects...")
66 User.objects.all().delete()
67 print("Destroying EmailAddress objects...")
68 EmailAddress.objects.all().delete()
69 return True
70
71
72 def create_user(is_admin, username=""):
73 """
74 Creates superuser, participant user, host user and returns it.
75 """
76 if is_admin:
77 username = "admin"
78 email = "[email protected]"
79 else:
80 email = "%[email protected]" % (username)
81 user = User.objects.create_user(
82 email=email,
83 username=username,
84 password="password",
85 is_staff=is_admin,
86 is_superuser=is_admin,
87 )
88 EmailAddress.objects.create(user=user, email=email, verified=True, primary=True)
89 print("{} was created with username: {} password: password".format("Super user" if is_admin else "User", username))
90 return user
91
92
93 def create_challenge_host_team(user):
94 """
95 Creates challenge host team and returns it.
96 """
97 team_name = "{} Host Team".format(fake.city())
98 team = ChallengeHostTeam.objects.create(
99 team_name=team_name,
100 created_by=user,
101 )
102 print("Challenge Host Team created with team_name: {} created_by: {}".format(team_name, user.username))
103 ChallengeHost.objects.create(user=user, team_name=team, status=ChallengeHost.SELF, permissions=ChallengeHost.ADMIN)
104 print("Challenge Host created with user: {} team_name: {}".format(user.username, team_name))
105 return team
106
107
108 def create_challenges(number_of_challenges, host_team=None):
109 """
110 Creates past challenge, on-going challenge and upcoming challenge.
111 """
112 for i in xrange(number_of_challenges):
113 if (i % 3 == 0):
114 create_challenge("{} Challenge".format(fake.first_name()),
115 timezone.now() - timedelta(days=100),
116 timezone.now() + timedelta(days=500),
117 host_team
118 )
119 elif (i % 3 == 1):
120 create_challenge("{} Challenge".format(fake.first_name()),
121 timezone.now() - timedelta(days=500),
122 timezone.now() - timedelta(days=100),
123 host_team
124 )
125 elif (i % 3 == 2):
126 create_challenge("{} Challenge".format(fake.first_name()),
127 timezone.now() + timedelta(days=100),
128 timezone.now() + timedelta(days=500),
129 host_team
130 )
131
132
133 def create_challenge(title, start_date, end_date, host_team):
134 """
135 Creates a challenge.
136 """
137 evaluation_script = open(
138 os.path.join(settings.BASE_DIR, 'examples', 'example1', 'sample_evaluation_script.zip'), 'rb')
139 queue = ''.join(random.choice(string.ascii_letters) for _ in range(75))
140 Challenge.objects.create(
141 title=title,
142 short_description=fake.paragraph(),
143 description=fake.paragraph(),
144 terms_and_conditions=fake.paragraph(),
145 submission_guidelines=fake.paragraph(),
146 evaluation_details=fake.paragraph(),
147 evaluation_script=SimpleUploadedFile(evaluation_script.name, evaluation_script.read()),
148 approved_by_admin=True,
149 creator=host_team,
150 published=True,
151 enable_forum=True,
152 anonymous_leaderboard=False,
153 start_date=start_date,
154 end_date=end_date,
155 queue=queue,
156 )
157 print("Challenge created with title: {} creator: {} start_date: {} end_date: {}".format(title,
158 host_team.team_name,
159 start_date, end_date))
160
161
162 def create_challenge_phases(challenge, number_of_phases=1):
163 """
164 Creates challenge phases for the created challenges and returns it.
165 """
166 challenge_phases = []
167 for i in range(number_of_phases):
168 name = "{} Phase".format(fake.first_name())
169 with open(os.path.join(settings.BASE_DIR, 'examples', 'example1', 'test_annotation.txt'), 'rb') as data_file:
170 data = data_file.read()
171 data = data or None
172 challenge_phase = ChallengePhase.objects.create(
173 name=name,
174 description=fake.paragraph(),
175 leaderboard_public=True,
176 is_public=True,
177 start_date=challenge.start_date,
178 end_date=challenge.end_date,
179 challenge=challenge,
180 test_annotation=SimpleUploadedFile(fake.file_name(extension="txt"), data, content_type="text/plain"),
181 codename="{}{}".format("phase", i + 1),
182 )
183 challenge_phases.append(challenge_phase)
184 print("Challenge Phase created with name: {} challenge: {}".format(name, challenge.title))
185 return challenge_phases
186
187
188 def create_leaderboard():
189 """
190 Creates Leaderboard schema and returns it.
191 """
192 schema = {
193 'labels': ['score', ],
194 'default_order_by': 'score',
195 }
196 leaderboard = Leaderboard.objects.create(
197 schema=schema
198 )
199 print("Leaderboard created")
200 return leaderboard
201
202
203 def create_dataset_splits(number_of_splits):
204 """
205 Creates dataset splits and returns it.
206 """
207 dataset_splits = []
208 for i in range(number_of_splits):
209 global DATASET_SPLIT_ITERATOR
210 name = "Split {}".format(DATASET_SPLIT_ITERATOR + 1)
211 codename = "{}{}".format('split', DATASET_SPLIT_ITERATOR + 1)
212 dataset_split = DatasetSplit.objects.create(
213 name=name,
214 codename=codename,
215 )
216 dataset_splits.append(dataset_split)
217 DATASET_SPLIT_ITERATOR += 1
218 print("Dataset Split created with name: {} codename: {}".format(name, codename))
219 return dataset_splits
220
221
222 def create_challenge_phase_splits(challenge_phase, leaderboard, dataset_split):
223 """
224 Creates a challenge phase split.
225 """
226 ChallengePhaseSplit.objects.create(
227 challenge_phase=challenge_phase,
228 leaderboard=leaderboard,
229 dataset_split=dataset_split,
230 visibility=ChallengePhaseSplit.PUBLIC
231 )
232 print("Challenge Phase Split created with challenge_phase: {} dataset_split: {}".format(challenge_phase.name,
233 dataset_split.name))
234
235
236 def create_participant_team(user):
237 """
238 Creates participant team and returns it.
239 """
240 team_name = "{} Participant Team".format(fake.city())
241 team = ParticipantTeam.objects.create(
242 team_name=team_name,
243 created_by=user,
244 )
245 print("Participant Team created with team_name: {} created_by: {}".format(team_name, user.username))
246 Participant.objects.create(user=user, team=team, status="Self")
247 print("Participant created with user: {} team_name: {}".format(user.username, team_name))
248 return team
249
250
251 def run(*args):
252 NUMBER_OF_CHALLENGES = int(args[0])
253 status = check_database()
254 if status is False:
255 print("Seeding aborted.")
256 return 0
257 print("Seeding...")
258 # Create superuser
259 create_user(is_admin=True)
260 # Create host user
261 host_user = create_user(is_admin=False, username="host")
262 # Create challenge host team with challenge host
263 challenge_host_team = create_challenge_host_team(user=host_user)
264 # Create challenge
265 create_challenges(number_of_challenges=NUMBER_OF_CHALLENGES, host_team=challenge_host_team)
266
267 # Fetch all the created challenges
268 challenges = Challenge.objects.all()
269 for challenge in challenges:
270 # Create a leaderboard object for each challenge
271 leaderboard = create_leaderboard()
272 # Create Phases for a challenge
273 challenge_phases = create_challenge_phases(challenge, number_of_phases=NUMBER_OF_PHASES)
274 # Create Dataset Split for each Challenge
275 dataset_splits = create_dataset_splits(number_of_splits=NUMBER_OF_DATASET_SPLITS)
276 # Create Challenge Phase Split for each Phase and Dataset Split
277 for challenge_phase in challenge_phases:
278 for dataset_split in dataset_splits:
279 create_challenge_phase_splits(challenge_phase, leaderboard, dataset_split)
280 participant_user = create_user(is_admin=False, username="participant")
281 create_participant_team(user=participant_user)
282 print('Database successfully seeded.')
283
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/seed.py b/scripts/seed.py
--- a/scripts/seed.py
+++ b/scripts/seed.py
@@ -1,4 +1,5 @@
# Command to run : python manage.py shell < scripts/seed.py
+import datetime
import os
import random
import string
@@ -137,6 +138,9 @@
evaluation_script = open(
os.path.join(settings.BASE_DIR, 'examples', 'example1', 'sample_evaluation_script.zip'), 'rb')
queue = ''.join(random.choice(string.ascii_letters) for _ in range(75))
+ year = datetime.date.today().year
+ slug = '{t}-{y}'.format(t=title, y=year)
+ slug = slug.lower().replace(" ", "-")
Challenge.objects.create(
title=title,
short_description=fake.paragraph(),
@@ -150,6 +154,7 @@
published=True,
enable_forum=True,
anonymous_leaderboard=False,
+ slug=slug,
start_date=start_date,
end_date=end_date,
queue=queue,
| {"golden_diff": "diff --git a/scripts/seed.py b/scripts/seed.py\n--- a/scripts/seed.py\n+++ b/scripts/seed.py\n@@ -1,4 +1,5 @@\n # Command to run : python manage.py shell < scripts/seed.py\n+import datetime\n import os\n import random\n import string\n@@ -137,6 +138,9 @@\n evaluation_script = open(\n os.path.join(settings.BASE_DIR, 'examples', 'example1', 'sample_evaluation_script.zip'), 'rb')\n queue = ''.join(random.choice(string.ascii_letters) for _ in range(75))\n+ year = datetime.date.today().year\n+ slug = '{t}-{y}'.format(t=title, y=year)\n+ slug = slug.lower().replace(\" \", \"-\")\n Challenge.objects.create(\n title=title,\n short_description=fake.paragraph(),\n@@ -150,6 +154,7 @@\n published=True,\n enable_forum=True,\n anonymous_leaderboard=False,\n+ slug=slug,\n start_date=start_date,\n end_date=end_date,\n queue=queue,\n", "issue": "Update seed script to add challenge slug\n## Current Behaviour:\r\nCurrently, when the challenge is created using the seed script, the `slug` field is not populated.\r\n\r\n## Expected Behaviour:\r\nWhen the seed script is run, the `slug` field must be populated with the challenge title and it must include a year in the name to keep it unique like `abc-challenge-2019`\n", "before_files": [{"content": "# Command to run : python manage.py shell < scripts/seed.py\nimport os\nimport random\nimport string\n\nfrom datetime import timedelta\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.core.files.uploadedfile import SimpleUploadedFile\nfrom django.utils import timezone\n\nfrom allauth.account.models import EmailAddress\nfrom faker import Factory\n\nfrom challenges.models import Challenge, ChallengePhase, DatasetSplit, Leaderboard, ChallengePhaseSplit\nfrom hosts.models import ChallengeHostTeam, ChallengeHost\nfrom participants.models import Participant, ParticipantTeam\n\nfake = Factory.create()\n\nNUMBER_OF_CHALLENGES = 1\nNUMBER_OF_PHASES = 2\nNUMBER_OF_DATASET_SPLITS = 2\nDATASET_SPLIT_ITERATOR = 0\n\ntry:\n xrange # Python 2\nexcept NameError:\n xrange = range # Python 3\n\n\ndef check_database():\n if len(EmailAddress.objects.all()) > 0:\n print(\"Are you sure you want to wipe the existing development database and reseed it? 
(Y/N)\")\n if settings.TEST or input().lower() == \"y\":\n destroy_database()\n return True\n else:\n return False\n else:\n return True\n\n\ndef destroy_database():\n print(\"Destroying existing database...\")\n print(\"Destroying Participant objects...\")\n Participant.objects.all().delete()\n print(\"Destroying ParticipantTeam objects...\")\n ParticipantTeam.objects.all().delete()\n print(\"Destroying ChallengePhaseSplit objects...\")\n ChallengePhaseSplit.objects.all().delete()\n print(\"Destroying DatasetSplit objects...\")\n DatasetSplit.objects.all().delete()\n print(\"Destroying ChallengePhase objects...\")\n ChallengePhase.objects.all().delete()\n print(\"Destroying Leaderboard objects...\")\n Leaderboard.objects.all().delete()\n print(\"Destroying Challenge objects...\")\n Challenge.objects.all().delete()\n print(\"Destroying ChallengeHostTeam objects...\")\n ChallengeHostTeam.objects.all().delete()\n print(\"Destroying ChallengeHost objects...\")\n ChallengeHost.objects.all().delete()\n print(\"Destroying User objects...\")\n User.objects.all().delete()\n print(\"Destroying EmailAddress objects...\")\n EmailAddress.objects.all().delete()\n return True\n\n\ndef create_user(is_admin, username=\"\"):\n \"\"\"\n Creates superuser, participant user, host user and returns it.\n \"\"\"\n if is_admin:\n username = \"admin\"\n email = \"[email protected]\"\n else:\n email = \"%[email protected]\" % (username)\n user = User.objects.create_user(\n email=email,\n username=username,\n password=\"password\",\n is_staff=is_admin,\n is_superuser=is_admin,\n )\n EmailAddress.objects.create(user=user, email=email, verified=True, primary=True)\n print(\"{} was created with username: {} password: password\".format(\"Super user\" if is_admin else \"User\", username))\n return user\n\n\ndef create_challenge_host_team(user):\n \"\"\"\n Creates challenge host team and returns it.\n \"\"\"\n team_name = \"{} Host Team\".format(fake.city())\n team = ChallengeHostTeam.objects.create(\n team_name=team_name,\n created_by=user,\n )\n print(\"Challenge Host Team created with team_name: {} created_by: {}\".format(team_name, user.username))\n ChallengeHost.objects.create(user=user, team_name=team, status=ChallengeHost.SELF, permissions=ChallengeHost.ADMIN)\n print(\"Challenge Host created with user: {} team_name: {}\".format(user.username, team_name))\n return team\n\n\ndef create_challenges(number_of_challenges, host_team=None):\n \"\"\"\n Creates past challenge, on-going challenge and upcoming challenge.\n \"\"\"\n for i in xrange(number_of_challenges):\n if (i % 3 == 0):\n create_challenge(\"{} Challenge\".format(fake.first_name()),\n timezone.now() - timedelta(days=100),\n timezone.now() + timedelta(days=500),\n host_team\n )\n elif (i % 3 == 1):\n create_challenge(\"{} Challenge\".format(fake.first_name()),\n timezone.now() - timedelta(days=500),\n timezone.now() - timedelta(days=100),\n host_team\n )\n elif (i % 3 == 2):\n create_challenge(\"{} Challenge\".format(fake.first_name()),\n timezone.now() + timedelta(days=100),\n timezone.now() + timedelta(days=500),\n host_team\n )\n\n\ndef create_challenge(title, start_date, end_date, host_team):\n \"\"\"\n Creates a challenge.\n \"\"\"\n evaluation_script = open(\n os.path.join(settings.BASE_DIR, 'examples', 'example1', 'sample_evaluation_script.zip'), 'rb')\n queue = ''.join(random.choice(string.ascii_letters) for _ in range(75))\n Challenge.objects.create(\n title=title,\n short_description=fake.paragraph(),\n description=fake.paragraph(),\n 
terms_and_conditions=fake.paragraph(),\n submission_guidelines=fake.paragraph(),\n evaluation_details=fake.paragraph(),\n evaluation_script=SimpleUploadedFile(evaluation_script.name, evaluation_script.read()),\n approved_by_admin=True,\n creator=host_team,\n published=True,\n enable_forum=True,\n anonymous_leaderboard=False,\n start_date=start_date,\n end_date=end_date,\n queue=queue,\n )\n print(\"Challenge created with title: {} creator: {} start_date: {} end_date: {}\".format(title,\n host_team.team_name,\n start_date, end_date))\n\n\ndef create_challenge_phases(challenge, number_of_phases=1):\n \"\"\"\n Creates challenge phases for the created challenges and returns it.\n \"\"\"\n challenge_phases = []\n for i in range(number_of_phases):\n name = \"{} Phase\".format(fake.first_name())\n with open(os.path.join(settings.BASE_DIR, 'examples', 'example1', 'test_annotation.txt'), 'rb') as data_file:\n data = data_file.read()\n data = data or None\n challenge_phase = ChallengePhase.objects.create(\n name=name,\n description=fake.paragraph(),\n leaderboard_public=True,\n is_public=True,\n start_date=challenge.start_date,\n end_date=challenge.end_date,\n challenge=challenge,\n test_annotation=SimpleUploadedFile(fake.file_name(extension=\"txt\"), data, content_type=\"text/plain\"),\n codename=\"{}{}\".format(\"phase\", i + 1),\n )\n challenge_phases.append(challenge_phase)\n print(\"Challenge Phase created with name: {} challenge: {}\".format(name, challenge.title))\n return challenge_phases\n\n\ndef create_leaderboard():\n \"\"\"\n Creates Leaderboard schema and returns it.\n \"\"\"\n schema = {\n 'labels': ['score', ],\n 'default_order_by': 'score',\n }\n leaderboard = Leaderboard.objects.create(\n schema=schema\n )\n print(\"Leaderboard created\")\n return leaderboard\n\n\ndef create_dataset_splits(number_of_splits):\n \"\"\"\n Creates dataset splits and returns it.\n \"\"\"\n dataset_splits = []\n for i in range(number_of_splits):\n global DATASET_SPLIT_ITERATOR\n name = \"Split {}\".format(DATASET_SPLIT_ITERATOR + 1)\n codename = \"{}{}\".format('split', DATASET_SPLIT_ITERATOR + 1)\n dataset_split = DatasetSplit.objects.create(\n name=name,\n codename=codename,\n )\n dataset_splits.append(dataset_split)\n DATASET_SPLIT_ITERATOR += 1\n print(\"Dataset Split created with name: {} codename: {}\".format(name, codename))\n return dataset_splits\n\n\ndef create_challenge_phase_splits(challenge_phase, leaderboard, dataset_split):\n \"\"\"\n Creates a challenge phase split.\n \"\"\"\n ChallengePhaseSplit.objects.create(\n challenge_phase=challenge_phase,\n leaderboard=leaderboard,\n dataset_split=dataset_split,\n visibility=ChallengePhaseSplit.PUBLIC\n )\n print(\"Challenge Phase Split created with challenge_phase: {} dataset_split: {}\".format(challenge_phase.name,\n dataset_split.name))\n\n\ndef create_participant_team(user):\n \"\"\"\n Creates participant team and returns it.\n \"\"\"\n team_name = \"{} Participant Team\".format(fake.city())\n team = ParticipantTeam.objects.create(\n team_name=team_name,\n created_by=user,\n )\n print(\"Participant Team created with team_name: {} created_by: {}\".format(team_name, user.username))\n Participant.objects.create(user=user, team=team, status=\"Self\")\n print(\"Participant created with user: {} team_name: {}\".format(user.username, team_name))\n return team\n\n\ndef run(*args):\n NUMBER_OF_CHALLENGES = int(args[0])\n status = check_database()\n if status is False:\n print(\"Seeding aborted.\")\n return 0\n print(\"Seeding...\")\n # Create 
superuser\n create_user(is_admin=True)\n # Create host user\n host_user = create_user(is_admin=False, username=\"host\")\n # Create challenge host team with challenge host\n challenge_host_team = create_challenge_host_team(user=host_user)\n # Create challenge\n create_challenges(number_of_challenges=NUMBER_OF_CHALLENGES, host_team=challenge_host_team)\n\n # Fetch all the created challenges\n challenges = Challenge.objects.all()\n for challenge in challenges:\n # Create a leaderboard object for each challenge\n leaderboard = create_leaderboard()\n # Create Phases for a challenge\n challenge_phases = create_challenge_phases(challenge, number_of_phases=NUMBER_OF_PHASES)\n # Create Dataset Split for each Challenge\n dataset_splits = create_dataset_splits(number_of_splits=NUMBER_OF_DATASET_SPLITS)\n # Create Challenge Phase Split for each Phase and Dataset Split\n for challenge_phase in challenge_phases:\n for dataset_split in dataset_splits:\n create_challenge_phase_splits(challenge_phase, leaderboard, dataset_split)\n participant_user = create_user(is_admin=False, username=\"participant\")\n create_participant_team(user=participant_user)\n print('Database successfully seeded.')\n", "path": "scripts/seed.py"}], "after_files": [{"content": "# Command to run : python manage.py shell < scripts/seed.py\nimport datetime\nimport os\nimport random\nimport string\n\nfrom datetime import timedelta\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.core.files.uploadedfile import SimpleUploadedFile\nfrom django.utils import timezone\n\nfrom allauth.account.models import EmailAddress\nfrom faker import Factory\n\nfrom challenges.models import Challenge, ChallengePhase, DatasetSplit, Leaderboard, ChallengePhaseSplit\nfrom hosts.models import ChallengeHostTeam, ChallengeHost\nfrom participants.models import Participant, ParticipantTeam\n\nfake = Factory.create()\n\nNUMBER_OF_CHALLENGES = 1\nNUMBER_OF_PHASES = 2\nNUMBER_OF_DATASET_SPLITS = 2\nDATASET_SPLIT_ITERATOR = 0\n\ntry:\n xrange # Python 2\nexcept NameError:\n xrange = range # Python 3\n\n\ndef check_database():\n if len(EmailAddress.objects.all()) > 0:\n print(\"Are you sure you want to wipe the existing development database and reseed it? 
(Y/N)\")\n if settings.TEST or input().lower() == \"y\":\n destroy_database()\n return True\n else:\n return False\n else:\n return True\n\n\ndef destroy_database():\n print(\"Destroying existing database...\")\n print(\"Destroying Participant objects...\")\n Participant.objects.all().delete()\n print(\"Destroying ParticipantTeam objects...\")\n ParticipantTeam.objects.all().delete()\n print(\"Destroying ChallengePhaseSplit objects...\")\n ChallengePhaseSplit.objects.all().delete()\n print(\"Destroying DatasetSplit objects...\")\n DatasetSplit.objects.all().delete()\n print(\"Destroying ChallengePhase objects...\")\n ChallengePhase.objects.all().delete()\n print(\"Destroying Leaderboard objects...\")\n Leaderboard.objects.all().delete()\n print(\"Destroying Challenge objects...\")\n Challenge.objects.all().delete()\n print(\"Destroying ChallengeHostTeam objects...\")\n ChallengeHostTeam.objects.all().delete()\n print(\"Destroying ChallengeHost objects...\")\n ChallengeHost.objects.all().delete()\n print(\"Destroying User objects...\")\n User.objects.all().delete()\n print(\"Destroying EmailAddress objects...\")\n EmailAddress.objects.all().delete()\n return True\n\n\ndef create_user(is_admin, username=\"\"):\n \"\"\"\n Creates superuser, participant user, host user and returns it.\n \"\"\"\n if is_admin:\n username = \"admin\"\n email = \"[email protected]\"\n else:\n email = \"%[email protected]\" % (username)\n user = User.objects.create_user(\n email=email,\n username=username,\n password=\"password\",\n is_staff=is_admin,\n is_superuser=is_admin,\n )\n EmailAddress.objects.create(user=user, email=email, verified=True, primary=True)\n print(\"{} was created with username: {} password: password\".format(\"Super user\" if is_admin else \"User\", username))\n return user\n\n\ndef create_challenge_host_team(user):\n \"\"\"\n Creates challenge host team and returns it.\n \"\"\"\n team_name = \"{} Host Team\".format(fake.city())\n team = ChallengeHostTeam.objects.create(\n team_name=team_name,\n created_by=user,\n )\n print(\"Challenge Host Team created with team_name: {} created_by: {}\".format(team_name, user.username))\n ChallengeHost.objects.create(user=user, team_name=team, status=ChallengeHost.SELF, permissions=ChallengeHost.ADMIN)\n print(\"Challenge Host created with user: {} team_name: {}\".format(user.username, team_name))\n return team\n\n\ndef create_challenges(number_of_challenges, host_team=None):\n \"\"\"\n Creates past challenge, on-going challenge and upcoming challenge.\n \"\"\"\n for i in xrange(number_of_challenges):\n if (i % 3 == 0):\n create_challenge(\"{} Challenge\".format(fake.first_name()),\n timezone.now() - timedelta(days=100),\n timezone.now() + timedelta(days=500),\n host_team\n )\n elif (i % 3 == 1):\n create_challenge(\"{} Challenge\".format(fake.first_name()),\n timezone.now() - timedelta(days=500),\n timezone.now() - timedelta(days=100),\n host_team\n )\n elif (i % 3 == 2):\n create_challenge(\"{} Challenge\".format(fake.first_name()),\n timezone.now() + timedelta(days=100),\n timezone.now() + timedelta(days=500),\n host_team\n )\n\n\ndef create_challenge(title, start_date, end_date, host_team):\n \"\"\"\n Creates a challenge.\n \"\"\"\n evaluation_script = open(\n os.path.join(settings.BASE_DIR, 'examples', 'example1', 'sample_evaluation_script.zip'), 'rb')\n queue = ''.join(random.choice(string.ascii_letters) for _ in range(75))\n year = datetime.date.today().year\n slug = '{t}-{y}'.format(t=title, y=year)\n slug = slug.lower().replace(\" \", \"-\")\n 
Challenge.objects.create(\n title=title,\n short_description=fake.paragraph(),\n description=fake.paragraph(),\n terms_and_conditions=fake.paragraph(),\n submission_guidelines=fake.paragraph(),\n evaluation_details=fake.paragraph(),\n evaluation_script=SimpleUploadedFile(evaluation_script.name, evaluation_script.read()),\n approved_by_admin=True,\n creator=host_team,\n published=True,\n enable_forum=True,\n anonymous_leaderboard=False,\n slug=slug,\n start_date=start_date,\n end_date=end_date,\n queue=queue,\n )\n print(\"Challenge created with title: {} creator: {} start_date: {} end_date: {}\".format(title,\n host_team.team_name,\n start_date, end_date))\n\n\ndef create_challenge_phases(challenge, number_of_phases=1):\n \"\"\"\n Creates challenge phases for the created challenges and returns it.\n \"\"\"\n challenge_phases = []\n for i in range(number_of_phases):\n name = \"{} Phase\".format(fake.first_name())\n with open(os.path.join(settings.BASE_DIR, 'examples', 'example1', 'test_annotation.txt'), 'rb') as data_file:\n data = data_file.read()\n data = data or None\n challenge_phase = ChallengePhase.objects.create(\n name=name,\n description=fake.paragraph(),\n leaderboard_public=True,\n is_public=True,\n start_date=challenge.start_date,\n end_date=challenge.end_date,\n challenge=challenge,\n test_annotation=SimpleUploadedFile(fake.file_name(extension=\"txt\"), data, content_type=\"text/plain\"),\n codename=\"{}{}\".format(\"phase\", i + 1),\n )\n challenge_phases.append(challenge_phase)\n print(\"Challenge Phase created with name: {} challenge: {}\".format(name, challenge.title))\n return challenge_phases\n\n\ndef create_leaderboard():\n \"\"\"\n Creates Leaderboard schema and returns it.\n \"\"\"\n schema = {\n 'labels': ['score', ],\n 'default_order_by': 'score',\n }\n leaderboard = Leaderboard.objects.create(\n schema=schema\n )\n print(\"Leaderboard created\")\n return leaderboard\n\n\ndef create_dataset_splits(number_of_splits):\n \"\"\"\n Creates dataset splits and returns it.\n \"\"\"\n dataset_splits = []\n for i in range(number_of_splits):\n global DATASET_SPLIT_ITERATOR\n name = \"Split {}\".format(DATASET_SPLIT_ITERATOR + 1)\n codename = \"{}{}\".format('split', DATASET_SPLIT_ITERATOR + 1)\n dataset_split = DatasetSplit.objects.create(\n name=name,\n codename=codename,\n )\n dataset_splits.append(dataset_split)\n DATASET_SPLIT_ITERATOR += 1\n print(\"Dataset Split created with name: {} codename: {}\".format(name, codename))\n return dataset_splits\n\n\ndef create_challenge_phase_splits(challenge_phase, leaderboard, dataset_split):\n \"\"\"\n Creates a challenge phase split.\n \"\"\"\n ChallengePhaseSplit.objects.create(\n challenge_phase=challenge_phase,\n leaderboard=leaderboard,\n dataset_split=dataset_split,\n visibility=ChallengePhaseSplit.PUBLIC\n )\n print(\"Challenge Phase Split created with challenge_phase: {} dataset_split: {}\".format(challenge_phase.name,\n dataset_split.name))\n\n\ndef create_participant_team(user):\n \"\"\"\n Creates participant team and returns it.\n \"\"\"\n team_name = \"{} Participant Team\".format(fake.city())\n team = ParticipantTeam.objects.create(\n team_name=team_name,\n created_by=user,\n )\n print(\"Participant Team created with team_name: {} created_by: {}\".format(team_name, user.username))\n Participant.objects.create(user=user, team=team, status=\"Self\")\n print(\"Participant created with user: {} team_name: {}\".format(user.username, team_name))\n return team\n\n\ndef run(*args):\n NUMBER_OF_CHALLENGES = int(args[0])\n status = 
check_database()\n if status is False:\n print(\"Seeding aborted.\")\n return 0\n print(\"Seeding...\")\n # Create superuser\n create_user(is_admin=True)\n # Create host user\n host_user = create_user(is_admin=False, username=\"host\")\n # Create challenge host team with challenge host\n challenge_host_team = create_challenge_host_team(user=host_user)\n # Create challenge\n create_challenges(number_of_challenges=NUMBER_OF_CHALLENGES, host_team=challenge_host_team)\n\n # Fetch all the created challenges\n challenges = Challenge.objects.all()\n for challenge in challenges:\n # Create a leaderboard object for each challenge\n leaderboard = create_leaderboard()\n # Create Phases for a challenge\n challenge_phases = create_challenge_phases(challenge, number_of_phases=NUMBER_OF_PHASES)\n # Create Dataset Split for each Challenge\n dataset_splits = create_dataset_splits(number_of_splits=NUMBER_OF_DATASET_SPLITS)\n # Create Challenge Phase Split for each Phase and Dataset Split\n for challenge_phase in challenge_phases:\n for dataset_split in dataset_splits:\n create_challenge_phase_splits(challenge_phase, leaderboard, dataset_split)\n participant_user = create_user(is_admin=False, username=\"participant\")\n create_participant_team(user=participant_user)\n print('Database successfully seeded.')\n", "path": "scripts/seed.py"}]} | 3,200 | 238 |
gh_patches_debug_38909 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2330 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BC_AWS_PUBLIC_1 does not take Conditions into account
**Describe the bug**
BC_AWS_PUBLIC_1 seems to only look for the principal in an ECR Resource Policy. If you want to control access to a repo based on the principal's AWS Organization ID, you have to do this using a condition instead of a Principal. It seems like checkov does not take this into account.
Example:
``` yaml
Repository:
Type: AWS::ECR::Repository
Properties:
RepositoryName: !Ref RepoName
RepositoryPolicyText:
Version: "2012-10-17"
Statement:
- Sid: OrgRead
Effect: Allow
Principal: "*"
Action:
- ecr:GetAuthorizationToken
- ecr:BatchCheckLayerAvailability
- ecr:GetDownloadUrlForLayer
- ecr:GetRepositoryPolicy
- ecr:DescribeRepositories
- ecr:ListImages
- ecr:DescribeImages
- ecr:BatchGetImage
- ecr:DescribeImageScanFindings
Condition:
ForAllValues:StringEquals:
aws:PrincipalOrgID: "o-12345678"
```
**To Reproduce**
Create the above mentioned resource in cloudformation and scan it with checkov
**Expected behavior**
I expect checkov to look for the usage of `aws:PrincipalOrgID` or similar global condition keys and not fail the check when they are used to make the repository non-public.
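For illustration only, a minimal sketch (not checkov's actual code; the helper name is an assumption) of a condition-aware check along those lines:

```python
# Hypothetical helper: a wildcard principal counts as constrained when the
# statement carries an aws:PrincipalOrgID condition.
def is_constrained_by_org(statement):
    condition = statement.get("Condition", {})
    for operator, keys in condition.items():
        if "StringEquals" in operator and "aws:PrincipalOrgID" in keys:
            return True
    return False


statement = {
    "Effect": "Allow",
    "Principal": "*",
    "Condition": {"ForAllValues:StringEquals": {"aws:PrincipalOrgID": "o-12345678"}},
}
assert is_constrained_by_org(statement)
```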
**Desktop (please complete the following information):**
- OS: macOS 11.5.2
- Checkov Version 2.0.390
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/ECRPolicy.py`
Content:
```
1
2 from checkov.common.models.enums import CheckResult, CheckCategories
3 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
4 from typing import List
5
6
7 class ECRPolicy(BaseResourceCheck):
8 def __init__(self):
9 name = "Ensure ECR policy is not set to public"
10 id = "CKV_AWS_32"
11 supported_resources = ['aws_ecr_repository_policy']
12 categories = [CheckCategories.GENERAL_SECURITY]
13 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
14
15 def scan_resource_conf(self, conf):
16 """
17 Looks for public * policy for ecr repository:
18 https://www.terraform.io/docs/providers/aws/r/ecr_repository_policy.html
19 :param conf: aws_ecr_repository configuration
20 :return: <CheckResult>
21 """
22 if "policy" in conf.keys():
23 policy = conf["policy"][0]
24 if type(policy) is str:
25 return CheckResult.PASSED
26 if policy['Statement'][0] and type(policy['Statement'][0]) is dict:
27 statement = policy['Statement'][0]
28 if statement['Principal'] and type(statement['Principal']) is str:
29 principal = statement['Principal']
30 if principal == "*":
31 self.evaluated_keys = ["policy/Statement/Principal"]
32 return CheckResult.FAILED
33 return CheckResult.PASSED
34
35 def get_evaluated_keys(self) -> List[str]:
36 return ['policy']
37
38
39 check = ECRPolicy()
40
```
Path: `checkov/cloudformation/checks/resource/aws/ECRPolicy.py`
Content:
```
1 import json
2 from typing import List
3
4 from checkov.common.parsers.node import StrNode
5 from checkov.common.models.enums import CheckResult, CheckCategories
6 from checkov.cloudformation.checks.resource.base_resource_check import BaseResourceCheck
7
8
9 class ECRPolicy(BaseResourceCheck):
10 def __init__(self):
11 name = "Ensure ECR policy is not set to public"
12 id = "CKV_AWS_32"
13 supported_resources = ['AWS::ECR::Repository']
14 categories = [CheckCategories.GENERAL_SECURITY]
15 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
16
17 def scan_resource_conf(self, conf):
18 """
19 Looks for public * policy for ecr repository:
20 https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html
21 :param conf: aws_ecr_repository configuration
22 :return: <CheckResult>
23 """
24 self.evaluated_keys = ["Properties/RepositoryPolicyText/Statement"]
25 if 'Properties' in conf.keys():
26 if 'RepositoryPolicyText' in conf['Properties'].keys():
27 policy_text = conf['Properties']['RepositoryPolicyText']
28 if type(policy_text) in (str, StrNode):
29 policy_text = json.loads(str(policy_text))
30 if 'Statement' in policy_text.keys():
31 for statement_index, statement in enumerate(policy_text['Statement']):
32 if 'Principal' in statement.keys():
33 for principal_index, principal in enumerate(statement['Principal']):
34 if principal == "*":
35 self.evaluated_keys = [f"Properties/RepositoryPolicyText/Statement/[{statement_index}]/Principal/[{principal_index}]"]
36 return CheckResult.FAILED
37 return CheckResult.PASSED
38
39 check = ECRPolicy()
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/cloudformation/checks/resource/aws/ECRPolicy.py b/checkov/cloudformation/checks/resource/aws/ECRPolicy.py
--- a/checkov/cloudformation/checks/resource/aws/ECRPolicy.py
+++ b/checkov/cloudformation/checks/resource/aws/ECRPolicy.py
@@ -31,9 +31,22 @@
for statement_index, statement in enumerate(policy_text['Statement']):
if 'Principal' in statement.keys():
for principal_index, principal in enumerate(statement['Principal']):
- if principal == "*":
+ if principal == "*" and not self.check_for_constrained_condition(statement):
self.evaluated_keys = [f"Properties/RepositoryPolicyText/Statement/[{statement_index}]/Principal/[{principal_index}]"]
return CheckResult.FAILED
return CheckResult.PASSED
+ def check_for_constrained_condition(self, statement):
+ """
+ Checks to see if there is a constraint on a a wildcarded principal
+ :param statement: statement from aws_repository_configuration
+ :return: true if there is a constraint
+ """
+ if 'Condition' in statement.keys():
+ condition = statement['Condition']
+ if 'ForAllValues:StringEquals' in condition.keys():
+ if 'aws:PrincipalOrgID' in condition['ForAllValues:StringEquals'].keys():
+ return True
+ return False
+
check = ECRPolicy()
diff --git a/checkov/terraform/checks/resource/aws/ECRPolicy.py b/checkov/terraform/checks/resource/aws/ECRPolicy.py
--- a/checkov/terraform/checks/resource/aws/ECRPolicy.py
+++ b/checkov/terraform/checks/resource/aws/ECRPolicy.py
@@ -27,7 +27,7 @@
statement = policy['Statement'][0]
if statement['Principal'] and type(statement['Principal']) is str:
principal = statement['Principal']
- if principal == "*":
+ if principal == "*" and not self.check_for_constrained_condition(statement):
self.evaluated_keys = ["policy/Statement/Principal"]
return CheckResult.FAILED
return CheckResult.PASSED
@@ -35,5 +35,17 @@
def get_evaluated_keys(self) -> List[str]:
return ['policy']
+ def check_for_constrained_condition(self, statement):
+ """
+ Checks to see if there is a constraint on a a wildcarded principal
+ :param statement: statement from aws_repository_configuration
+ :return: True if there is a constraint
+ """
+ if 'Condition' in statement and isinstance(statement['Condition'], dict):
+ condition = statement['Condition']
+ if 'ForAllValues:StringEquals' in condition and isinstance(condition['ForAllValues:StringEquals'], dict):
+ if 'aws:PrincipalOrgID' in condition['ForAllValues:StringEquals']:
+ return True
+ return False
check = ECRPolicy()
| {"golden_diff": "diff --git a/checkov/cloudformation/checks/resource/aws/ECRPolicy.py b/checkov/cloudformation/checks/resource/aws/ECRPolicy.py\n--- a/checkov/cloudformation/checks/resource/aws/ECRPolicy.py\n+++ b/checkov/cloudformation/checks/resource/aws/ECRPolicy.py\n@@ -31,9 +31,22 @@\n for statement_index, statement in enumerate(policy_text['Statement']):\n if 'Principal' in statement.keys():\n for principal_index, principal in enumerate(statement['Principal']):\n- if principal == \"*\":\n+ if principal == \"*\" and not self.check_for_constrained_condition(statement):\n self.evaluated_keys = [f\"Properties/RepositoryPolicyText/Statement/[{statement_index}]/Principal/[{principal_index}]\"]\n return CheckResult.FAILED\n return CheckResult.PASSED\n \n+ def check_for_constrained_condition(self, statement):\n+ \"\"\"\n+ Checks to see if there is a constraint on a a wildcarded principal\n+ :param statement: statement from aws_repository_configuration\n+ :return: true if there is a constraint\n+ \"\"\"\n+ if 'Condition' in statement.keys():\n+ condition = statement['Condition']\n+ if 'ForAllValues:StringEquals' in condition.keys():\n+ if 'aws:PrincipalOrgID' in condition['ForAllValues:StringEquals'].keys():\n+ return True\n+ return False\n+\n check = ECRPolicy()\ndiff --git a/checkov/terraform/checks/resource/aws/ECRPolicy.py b/checkov/terraform/checks/resource/aws/ECRPolicy.py\n--- a/checkov/terraform/checks/resource/aws/ECRPolicy.py\n+++ b/checkov/terraform/checks/resource/aws/ECRPolicy.py\n@@ -27,7 +27,7 @@\n statement = policy['Statement'][0]\n if statement['Principal'] and type(statement['Principal']) is str:\n principal = statement['Principal']\n- if principal == \"*\":\n+ if principal == \"*\" and not self.check_for_constrained_condition(statement):\n self.evaluated_keys = [\"policy/Statement/Principal\"]\n return CheckResult.FAILED\n return CheckResult.PASSED\n@@ -35,5 +35,17 @@\n def get_evaluated_keys(self) -> List[str]:\n return ['policy']\n \n+ def check_for_constrained_condition(self, statement):\n+ \"\"\"\n+ Checks to see if there is a constraint on a a wildcarded principal\n+ :param statement: statement from aws_repository_configuration\n+ :return: True if there is a constraint\n+ \"\"\"\n+ if 'Condition' in statement and isinstance(statement['Condition'], dict):\n+ condition = statement['Condition']\n+ if 'ForAllValues:StringEquals' in condition and isinstance(condition['ForAllValues:StringEquals'], dict):\n+ if 'aws:PrincipalOrgID' in condition['ForAllValues:StringEquals']:\n+ return True\n+ return False\n \n check = ECRPolicy()\n", "issue": "BC_AWS_PUBLIC_1 does not take Conditions into account\n**Describe the bug**\r\nBC_AWS_PUBLIC_1 seems to only look for the principal in a ECR Resource Policy. If you want to control access to a repo based on the pricipals AWS Organization ID, you have to do this using a condition instead of a Principal. 
It seems like checkov does not take this into account.\r\n\r\nExample:\r\n\r\n``` yaml\r\n Repository:\r\n Type: AWS::ECR::Repository\r\n Properties:\r\n RepositoryName: !Ref RepoName\r\n RepositoryPolicyText:\r\n Version: \"2012-10-17\"\r\n Statement:\r\n - Sid: OrgRead\r\n Effect: Allow\r\n Principal: \"*\"\r\n Action:\r\n - ecr:GetAuthorizationToken\r\n - ecr:BatchCheckLayerAvailability\r\n - ecr:GetDownloadUrlForLayer\r\n - ecr:GetRepositoryPolicy\r\n - ecr:DescribeRepositories\r\n - ecr:ListImages\r\n - ecr:DescribeImages\r\n - ecr:BatchGetImage\r\n - ecr:DescribeImageScanFindings\r\n Condition:\r\n ForAllValues:StringEquals:\r\n aws:PrincipalOrgID: \"o-12345678\"\r\n``` \r\n\r\n**To Reproduce**\r\nCreate the above mentioned resource in cloudformation and scan it with checkov\r\n\r\n**Expected behavior**\r\nI expect checkov to look for the usage of a PrincipalOrgID or similar global condition keys and not fail the check when they are used to make the repository non public.\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: macOS 11.5.2\r\n - Checkov Version 2.0.390\r\n\n", "before_files": [{"content": "\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom typing import List\n\n\nclass ECRPolicy(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure ECR policy is not set to public\"\n id = \"CKV_AWS_32\"\n supported_resources = ['aws_ecr_repository_policy']\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n \"\"\"\n Looks for public * policy for ecr repository:\n https://www.terraform.io/docs/providers/aws/r/ecr_repository_policy.html\n :param conf: aws_ecr_repository configuration\n :return: <CheckResult>\n \"\"\"\n if \"policy\" in conf.keys():\n policy = conf[\"policy\"][0]\n if type(policy) is str:\n return CheckResult.PASSED\n if policy['Statement'][0] and type(policy['Statement'][0]) is dict:\n statement = policy['Statement'][0]\n if statement['Principal'] and type(statement['Principal']) is str:\n principal = statement['Principal']\n if principal == \"*\":\n self.evaluated_keys = [\"policy/Statement/Principal\"]\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n def get_evaluated_keys(self) -> List[str]:\n return ['policy']\n\n\ncheck = ECRPolicy()\n", "path": "checkov/terraform/checks/resource/aws/ECRPolicy.py"}, {"content": "import json\nfrom typing import List\n\nfrom checkov.common.parsers.node import StrNode\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.cloudformation.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass ECRPolicy(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure ECR policy is not set to public\"\n id = \"CKV_AWS_32\"\n supported_resources = ['AWS::ECR::Repository']\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n \"\"\"\n Looks for public * policy for ecr repository:\n https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html\n :param conf: aws_ecr_repository configuration\n :return: <CheckResult>\n \"\"\"\n self.evaluated_keys = [\"Properties/RepositoryPolicyText/Statement\"]\n if 'Properties' in conf.keys():\n if 
'RepositoryPolicyText' in conf['Properties'].keys():\n policy_text = conf['Properties']['RepositoryPolicyText']\n if type(policy_text) in (str, StrNode):\n policy_text = json.loads(str(policy_text))\n if 'Statement' in policy_text.keys():\n for statement_index, statement in enumerate(policy_text['Statement']):\n if 'Principal' in statement.keys():\n for principal_index, principal in enumerate(statement['Principal']):\n if principal == \"*\":\n self.evaluated_keys = [f\"Properties/RepositoryPolicyText/Statement/[{statement_index}]/Principal/[{principal_index}]\"]\n return CheckResult.FAILED\n return CheckResult.PASSED\n\ncheck = ECRPolicy()\n", "path": "checkov/cloudformation/checks/resource/aws/ECRPolicy.py"}], "after_files": [{"content": "\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom typing import List\n\n\nclass ECRPolicy(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure ECR policy is not set to public\"\n id = \"CKV_AWS_32\"\n supported_resources = ['aws_ecr_repository_policy']\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n \"\"\"\n Looks for public * policy for ecr repository:\n https://www.terraform.io/docs/providers/aws/r/ecr_repository_policy.html\n :param conf: aws_ecr_repository configuration\n :return: <CheckResult>\n \"\"\"\n if \"policy\" in conf.keys():\n policy = conf[\"policy\"][0]\n if type(policy) is str:\n return CheckResult.PASSED\n if policy['Statement'][0] and type(policy['Statement'][0]) is dict:\n statement = policy['Statement'][0]\n if statement['Principal'] and type(statement['Principal']) is str:\n principal = statement['Principal']\n if principal == \"*\" and not self.check_for_constrained_condition(statement):\n self.evaluated_keys = [\"policy/Statement/Principal\"]\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n def get_evaluated_keys(self) -> List[str]:\n return ['policy']\n\n def check_for_constrained_condition(self, statement):\n \"\"\"\n Checks to see if there is a constraint on a a wildcarded principal\n :param statement: statement from aws_repository_configuration\n :return: True if there is a constraint\n \"\"\"\n if 'Condition' in statement and isinstance(statement['Condition'], dict):\n condition = statement['Condition']\n if 'ForAllValues:StringEquals' in condition and isinstance(condition['ForAllValues:StringEquals'], dict):\n if 'aws:PrincipalOrgID' in condition['ForAllValues:StringEquals']:\n return True\n return False\n\ncheck = ECRPolicy()\n", "path": "checkov/terraform/checks/resource/aws/ECRPolicy.py"}, {"content": "import json\nfrom typing import List\n\nfrom checkov.common.parsers.node import StrNode\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.cloudformation.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass ECRPolicy(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure ECR policy is not set to public\"\n id = \"CKV_AWS_32\"\n supported_resources = ['AWS::ECR::Repository']\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n \"\"\"\n Looks for public * policy for ecr repository:\n 
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecr-repository.html\n :param conf: aws_ecr_repository configuration\n :return: <CheckResult>\n \"\"\"\n self.evaluated_keys = [\"Properties/RepositoryPolicyText/Statement\"]\n if 'Properties' in conf.keys():\n if 'RepositoryPolicyText' in conf['Properties'].keys():\n policy_text = conf['Properties']['RepositoryPolicyText']\n if type(policy_text) in (str, StrNode):\n policy_text = json.loads(str(policy_text))\n if 'Statement' in policy_text.keys():\n for statement_index, statement in enumerate(policy_text['Statement']):\n if 'Principal' in statement.keys():\n for principal_index, principal in enumerate(statement['Principal']):\n if principal == \"*\" and not self.check_for_constrained_condition(statement):\n self.evaluated_keys = [f\"Properties/RepositoryPolicyText/Statement/[{statement_index}]/Principal/[{principal_index}]\"]\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n def check_for_constrained_condition(self, statement):\n \"\"\"\n Checks to see if there is a constraint on a a wildcarded principal\n :param statement: statement from aws_repository_configuration\n :return: true if there is a constraint\n \"\"\"\n if 'Condition' in statement.keys():\n condition = statement['Condition']\n if 'ForAllValues:StringEquals' in condition.keys():\n if 'aws:PrincipalOrgID' in condition['ForAllValues:StringEquals'].keys():\n return True\n return False\n\ncheck = ECRPolicy()\n", "path": "checkov/cloudformation/checks/resource/aws/ECRPolicy.py"}]} | 1,498 | 641 |
gh_patches_debug_30456 | rasdani/github-patches | git_diff | enthought__chaco-734 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Replace mutable default argument in plot_factory create_*_plot functions
Replace the use of the mutable default `data` argument to the `create_*_plot` functions in the `plot_factory` module (a common replacement pattern is sketched after the links below).
https://github.com/enthought/chaco/blob/ef7227706642f7d26042717ad9b60648d7572068/chaco/plot_factory.py#L43
https://github.com/enthought/chaco/blob/ef7227706642f7d26042717ad9b60648d7572068/chaco/plot_factory.py#L91
https://github.com/enthought/chaco/blob/ef7227706642f7d26042717ad9b60648d7572068/chaco/plot_factory.py#L136
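
For illustration, the usual Python idiom for avoiding a shared mutable default, shown as a sketch (the body is stubbed out, and the project may prefer a different sentinel such as an empty tuple):

```python
# Standard fix sketch: default to None and build a fresh list per call.
# Only the argument handling is the point here.
def create_line_plot(data=None, index_bounds=None, **renderer_traits):
    if data is None:
        data = []
    return data  # placeholder for the real renderer construction


# Every call now gets its own list instead of sharing one default object.
assert create_line_plot() is not create_line_plot()
```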
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chaco/plot_factory.py`
Content:
```
1 """
2 Contains convenience functions to create ready-made PlotRenderer instances of
3 various types.
4 """
5
6 from numpy import array, ndarray, transpose, cos, sin
7
8 # Local relative imports
9 from .abstract_data_source import AbstractDataSource
10 from .array_data_source import ArrayDataSource
11 from .axis import PlotAxis
12 from .barplot import BarPlot
13 from .data_range_1d import DataRange1D
14 from .grid import PlotGrid
15 from .linear_mapper import LinearMapper
16 from .scatterplot import ScatterPlot
17 from .polar_mapper import PolarMapper
18 from .lineplot import LinePlot
19 from .polar_line_renderer import PolarLineRenderer
20
21
22 def _create_data_sources(data, index_sort="none"):
23 """
24 Returns datasources for index and value based on the inputs. Assumes that
25 the index data is unsorted unless otherwise specified.
26 """
27 if (type(data) == ndarray) or (len(data) == 2):
28 index, value = data
29 if type(index) in (list, tuple, ndarray):
30 index = ArrayDataSource(array(index), sort_order=index_sort)
31 elif not isinstance(index, AbstractDataSource):
32 raise RuntimeError(
33 "Need an array or list of values or a DataSource, got %s instead."
34 % type(index)
35 )
36
37 if type(value) in (list, tuple, ndarray):
38 value = ArrayDataSource(array(value))
39 elif not isinstance(value, AbstractDataSource):
40 raise RuntimeError(
41 "Need an array or list of values or a DataSource, got %s instead."
42 % type(index)
43 )
44
45 return index, value
46 else:
47 raise RuntimeError("Unable to create datasources.")
48
49
50 def create_scatter_plot(
51 data=[],
52 index_bounds=None,
53 value_bounds=None,
54 orientation="h",
55 color="green",
56 marker="square",
57 marker_size=4,
58 bgcolor="transparent",
59 outline_color="black",
60 border_visible=True,
61 add_grid=False,
62 add_axis=False,
63 index_sort="none",
64 **renderer_traits
65 ):
66 """Creates a ScatterPlot renderer from a single Nx2 data array or a tuple
67 of two length-N 1-D arrays. The data must be sorted on the index if any
68 reverse-mapping tools are to be used.
69
70 Pre-existing "index" and "value" datasources can be passed in.
71 """
72
73 index, value = _create_data_sources(data)
74
75 if index_bounds is not None:
76 index_range = DataRange1D(low=index_bounds[0], high=index_bounds[1])
77 else:
78 index_range = DataRange1D()
79 index_range.add(index)
80 index_mapper = LinearMapper(range=index_range)
81
82 if value_bounds is not None:
83 value_range = DataRange1D(low=value_bounds[0], high=value_bounds[1])
84 else:
85 value_range = DataRange1D()
86 value_range.add(value)
87 value_mapper = LinearMapper(range=value_range)
88
89 plot = ScatterPlot(
90 index=index,
91 value=value,
92 index_mapper=index_mapper,
93 value_mapper=value_mapper,
94 orientation=orientation,
95 marker=marker,
96 marker_size=marker_size,
97 color=color,
98 bgcolor=bgcolor,
99 outline_color=outline_color,
100 border_visible=border_visible,
101 **renderer_traits
102 )
103
104 if add_grid:
105 add_default_grids(plot, orientation)
106 if add_axis:
107 add_default_axes(plot, orientation)
108 return plot
109
110
111 def create_line_plot(
112 data=[],
113 index_bounds=None,
114 value_bounds=None,
115 orientation="h",
116 color="red",
117 width=1.0,
118 dash="solid",
119 value_mapper_class=LinearMapper,
120 bgcolor="transparent",
121 border_visible=False,
122 add_grid=False,
123 add_axis=False,
124 index_sort="none",
125 **renderer_traits
126 ):
127 """Creates a LinePlot renderer from a single Nx2 data array or a tuple of
128 two length-N 1-D arrays. The data must be sorted on the index if any
129 reverse-mapping tools are to be used.
130
131 Pre-existing "index" and "value" datasources can be passed in.
132 """
133 index, value = _create_data_sources(data, index_sort)
134
135 if index_bounds is not None:
136 index_range = DataRange1D(low=index_bounds[0], high=index_bounds[1])
137 else:
138 index_range = DataRange1D()
139 index_range.add(index)
140 index_mapper = LinearMapper(range=index_range)
141
142 if value_bounds is not None:
143 value_range = DataRange1D(low=value_bounds[0], high=value_bounds[1])
144 else:
145 value_range = DataRange1D()
146 value_range.add(value)
147 value_mapper = value_mapper_class(range=value_range)
148
149 plot = LinePlot(
150 index=index,
151 value=value,
152 index_mapper=index_mapper,
153 value_mapper=value_mapper,
154 orientation=orientation,
155 color=color,
156 bgcolor=bgcolor,
157 line_width=width,
158 line_style=dash,
159 border_visible=border_visible,
160 **renderer_traits
161 )
162
163 if add_grid:
164 add_default_grids(plot, orientation)
165 if add_axis:
166 add_default_axes(plot, orientation)
167 return plot
168
169
170 def create_bar_plot(
171 data=[],
172 index_bounds=None,
173 value_bounds=None,
174 orientation="h",
175 color="red",
176 bar_width=10.0,
177 value_mapper_class=LinearMapper,
178 line_color="black",
179 fill_color="red",
180 line_width=1,
181 bgcolor="transparent",
182 border_visible=False,
183 antialias=True,
184 add_grid=False,
185 add_axis=False,
186 **renderer_traits
187 ):
188 """Creates a BarPlot renderer from a single Nx2 data array or a tuple of
189 two length-N 1-D arrays. The data must be sorted on the index if any
190 reverse-mapping tools are to be used.
191
192 Pre-existing "index" and "value" datasources can be passed in.
193 """
194 index, value = _create_data_sources(data)
195
196 if index_bounds is not None:
197 index_range = DataRange1D(low=index_bounds[0], high=index_bounds[1])
198 else:
199 index_range = DataRange1D()
200 index_range.add(index)
201 index_mapper = LinearMapper(range=index_range)
202
203 if value_bounds is not None:
204 value_range = DataRange1D(low=value_bounds[0], high=value_bounds[1])
205 else:
206 value_range = DataRange1D()
207 value_range.add(value)
208 value_mapper = value_mapper_class(range=value_range)
209
210 # Create the plot
211 plot = BarPlot(
212 index=index,
213 value=value,
214 value_mapper=value_mapper,
215 index_mapper=index_mapper,
216 orientation=orientation,
217 line_color=line_color,
218 fill_color=fill_color,
219 line_width=line_width,
220 bar_width=bar_width,
221 antialias=antialias,
222 **renderer_traits
223 )
224
225 if add_grid:
226 add_default_grids(plot, orientation)
227 if add_axis:
228 add_default_axes(plot, orientation)
229 return plot
230
231
232 def create_polar_plot(
233 data,
234 orientation="h",
235 color="black",
236 width=1.0,
237 dash="solid",
238 grid="dot",
239 value_mapper_class=PolarMapper,
240 **renderer_traits
241 ):
242 """Creates a polar plot renderer from a single Nx2 data array or a tuple
243 of two length-N 1-D arrays. The data must be sorted on the index if any
244 reverse-mapping tools are to be used.
245
246 Pre-existing "index" and "value" datasources can be passed in.
247 """
248 if (type(data) != ndarray) and (len(data) == 2):
249 data = transpose(array(data))
250
251 r_data, t_data = transpose(data)
252 index_data = r_data * cos(t_data)
253 value_data = r_data * sin(t_data)
254
255 index = ArrayDataSource(index_data, sort_order="ascending")
256 # Typically the value data is unsorted
257 value = ArrayDataSource(value_data)
258
259 index_range = DataRange1D()
260 index_range.add(index)
261 index_mapper = PolarMapper(range=index_range)
262
263 value_range = DataRange1D()
264 value_range.add(value)
265 value_mapper = value_mapper_class(range=value_range)
266
267 plot = PolarLineRenderer(
268 index=index,
269 value=value,
270 index_mapper=index_mapper,
271 value_mapper=value_mapper,
272 orientation=orientation,
273 color=color,
274 line_width=width,
275 line_style=dash,
276 grid_style=grid,
277 **renderer_traits
278 )
279
280 return plot
281
282
283 def add_default_axes(
284 plot, orientation="normal", vtitle="", htitle="", axis_class=PlotAxis
285 ):
286 """
287 Creates left and bottom axes for a plot. Assumes that the index is
288 horizontal and value is vertical by default; set *orientation* to
289 something other than "normal" if they are flipped.
290 """
291 if orientation in ("normal", "h"):
292 v_mapper = plot.value_mapper
293 h_mapper = plot.index_mapper
294 else:
295 v_mapper = plot.index_mapper
296 h_mapper = plot.value_mapper
297
298 left = axis_class(
299 orientation="left", title=vtitle, mapper=v_mapper, component=plot
300 )
301
302 bottom = axis_class(
303 orientation="bottom", title=htitle, mapper=h_mapper, component=plot
304 )
305
306 plot.underlays.append(left)
307 plot.underlays.append(bottom)
308 return left, bottom
309
310
311 def add_default_grids(plot, orientation="normal"):
312 """
313 Creates horizontal and vertical gridlines for a plot. Assumes that the
314 index is horizontal and value is vertical by default; set orientation to
315 something other than "normal" if they are flipped.
316 """
317 if orientation in ("normal", "h"):
318 v_mapper = plot.index_mapper
319 h_mapper = plot.value_mapper
320 else:
321 v_mapper = plot.value_mapper
322 h_mapper = plot.index_mapper
323
324 vgrid = PlotGrid(
325 mapper=v_mapper,
326 orientation="vertical",
327 component=plot,
328 line_color="lightgray",
329 line_style="dot",
330 )
331
332 hgrid = PlotGrid(
333 mapper=h_mapper,
334 orientation="horizontal",
335 component=plot,
336 line_color="lightgray",
337 line_style="dot",
338 )
339
340 plot.underlays.append(vgrid)
341 plot.underlays.append(hgrid)
342 return hgrid, vgrid
343
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
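For context on the patch that follows: the chaco functions above declare a mutable default (`data=[]`), which Python evaluates once at function definition time, so the same list object is shared by every call that omits the argument. A minimal sketch of that pitfall and of the `None`-sentinel idiom the patch applies (hypothetical function names, not from the chaco codebase):

```python
def append_item(item, bucket=[]):
    # The default list is created once, when the function is defined,
    # so every call that omits `bucket` mutates the same object.
    bucket.append(item)
    return bucket

append_item(1)  # [1]
append_item(2)  # [1, 2] -- state leaks between calls

def append_item_safe(item, bucket=None):
    # None sentinel: allocate a fresh list on each call instead.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

append_item_safe(1)  # [1]
append_item_safe(2)  # [2]
```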
| diff --git a/chaco/plot_factory.py b/chaco/plot_factory.py
--- a/chaco/plot_factory.py
+++ b/chaco/plot_factory.py
@@ -48,7 +48,7 @@
def create_scatter_plot(
- data=[],
+ data=None,
index_bounds=None,
value_bounds=None,
orientation="h",
@@ -69,6 +69,8 @@
Pre-existing "index" and "value" datasources can be passed in.
"""
+ if data is None:
+ data = []
index, value = _create_data_sources(data)
@@ -109,7 +111,7 @@
def create_line_plot(
- data=[],
+ data=None,
index_bounds=None,
value_bounds=None,
orientation="h",
@@ -130,6 +132,9 @@
Pre-existing "index" and "value" datasources can be passed in.
"""
+ if data is None:
+ data = []
+
index, value = _create_data_sources(data, index_sort)
if index_bounds is not None:
@@ -168,7 +173,7 @@
def create_bar_plot(
- data=[],
+ data=None,
index_bounds=None,
value_bounds=None,
orientation="h",
@@ -191,6 +196,9 @@
Pre-existing "index" and "value" datasources can be passed in.
"""
+ if data is None:
+ data = []
+
index, value = _create_data_sources(data)
if index_bounds is not None:
| {"golden_diff": "diff --git a/chaco/plot_factory.py b/chaco/plot_factory.py\n--- a/chaco/plot_factory.py\n+++ b/chaco/plot_factory.py\n@@ -48,7 +48,7 @@\n \n \n def create_scatter_plot(\n- data=[],\n+ data=None,\n index_bounds=None,\n value_bounds=None,\n orientation=\"h\",\n@@ -69,6 +69,8 @@\n \n Pre-existing \"index\" and \"value\" datasources can be passed in.\n \"\"\"\n+ if data is None:\n+ data = []\n \n index, value = _create_data_sources(data)\n \n@@ -109,7 +111,7 @@\n \n \n def create_line_plot(\n- data=[],\n+ data=None,\n index_bounds=None,\n value_bounds=None,\n orientation=\"h\",\n@@ -130,6 +132,9 @@\n \n Pre-existing \"index\" and \"value\" datasources can be passed in.\n \"\"\"\n+ if data is None:\n+ data = []\n+\n index, value = _create_data_sources(data, index_sort)\n \n if index_bounds is not None:\n@@ -168,7 +173,7 @@\n \n \n def create_bar_plot(\n- data=[],\n+ data=None,\n index_bounds=None,\n value_bounds=None,\n orientation=\"h\",\n@@ -191,6 +196,9 @@\n \n Pre-existing \"index\" and \"value\" datasources can be passed in.\n \"\"\"\n+ if data is None:\n+ data = []\n+\n index, value = _create_data_sources(data)\n \n if index_bounds is not None:\n", "issue": "Replace mutable default argument in plot_factory create_*_plot functions\nReplace the use of mutable default `data` argument to the `create_*_plot` functions in the `plot_factory` module.\r\n\r\nhttps://github.com/enthought/chaco/blob/ef7227706642f7d26042717ad9b60648d7572068/chaco/plot_factory.py#L43\r\nhttps://github.com/enthought/chaco/blob/ef7227706642f7d26042717ad9b60648d7572068/chaco/plot_factory.py#L91\r\nhttps://github.com/enthought/chaco/blob/ef7227706642f7d26042717ad9b60648d7572068/chaco/plot_factory.py#L136\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nContains convenience functions to create ready-made PlotRenderer instances of\nvarious types.\n\"\"\"\n\nfrom numpy import array, ndarray, transpose, cos, sin\n\n# Local relative imports\nfrom .abstract_data_source import AbstractDataSource\nfrom .array_data_source import ArrayDataSource\nfrom .axis import PlotAxis\nfrom .barplot import BarPlot\nfrom .data_range_1d import DataRange1D\nfrom .grid import PlotGrid\nfrom .linear_mapper import LinearMapper\nfrom .scatterplot import ScatterPlot\nfrom .polar_mapper import PolarMapper\nfrom .lineplot import LinePlot\nfrom .polar_line_renderer import PolarLineRenderer\n\n\ndef _create_data_sources(data, index_sort=\"none\"):\n \"\"\"\n Returns datasources for index and value based on the inputs. 
Assumes that\n the index data is unsorted unless otherwise specified.\n \"\"\"\n if (type(data) == ndarray) or (len(data) == 2):\n index, value = data\n if type(index) in (list, tuple, ndarray):\n index = ArrayDataSource(array(index), sort_order=index_sort)\n elif not isinstance(index, AbstractDataSource):\n raise RuntimeError(\n \"Need an array or list of values or a DataSource, got %s instead.\"\n % type(index)\n )\n\n if type(value) in (list, tuple, ndarray):\n value = ArrayDataSource(array(value))\n elif not isinstance(value, AbstractDataSource):\n raise RuntimeError(\n \"Need an array or list of values or a DataSource, got %s instead.\"\n % type(index)\n )\n\n return index, value\n else:\n raise RuntimeError(\"Unable to create datasources.\")\n\n\ndef create_scatter_plot(\n data=[],\n index_bounds=None,\n value_bounds=None,\n orientation=\"h\",\n color=\"green\",\n marker=\"square\",\n marker_size=4,\n bgcolor=\"transparent\",\n outline_color=\"black\",\n border_visible=True,\n add_grid=False,\n add_axis=False,\n index_sort=\"none\",\n **renderer_traits\n):\n \"\"\"Creates a ScatterPlot renderer from a single Nx2 data array or a tuple\n of two length-N 1-D arrays. The data must be sorted on the index if any\n reverse-mapping tools are to be used.\n\n Pre-existing \"index\" and \"value\" datasources can be passed in.\n \"\"\"\n\n index, value = _create_data_sources(data)\n\n if index_bounds is not None:\n index_range = DataRange1D(low=index_bounds[0], high=index_bounds[1])\n else:\n index_range = DataRange1D()\n index_range.add(index)\n index_mapper = LinearMapper(range=index_range)\n\n if value_bounds is not None:\n value_range = DataRange1D(low=value_bounds[0], high=value_bounds[1])\n else:\n value_range = DataRange1D()\n value_range.add(value)\n value_mapper = LinearMapper(range=value_range)\n\n plot = ScatterPlot(\n index=index,\n value=value,\n index_mapper=index_mapper,\n value_mapper=value_mapper,\n orientation=orientation,\n marker=marker,\n marker_size=marker_size,\n color=color,\n bgcolor=bgcolor,\n outline_color=outline_color,\n border_visible=border_visible,\n **renderer_traits\n )\n\n if add_grid:\n add_default_grids(plot, orientation)\n if add_axis:\n add_default_axes(plot, orientation)\n return plot\n\n\ndef create_line_plot(\n data=[],\n index_bounds=None,\n value_bounds=None,\n orientation=\"h\",\n color=\"red\",\n width=1.0,\n dash=\"solid\",\n value_mapper_class=LinearMapper,\n bgcolor=\"transparent\",\n border_visible=False,\n add_grid=False,\n add_axis=False,\n index_sort=\"none\",\n **renderer_traits\n):\n \"\"\"Creates a LinePlot renderer from a single Nx2 data array or a tuple of\n two length-N 1-D arrays. 
The data must be sorted on the index if any\n reverse-mapping tools are to be used.\n\n Pre-existing \"index\" and \"value\" datasources can be passed in.\n \"\"\"\n index, value = _create_data_sources(data, index_sort)\n\n if index_bounds is not None:\n index_range = DataRange1D(low=index_bounds[0], high=index_bounds[1])\n else:\n index_range = DataRange1D()\n index_range.add(index)\n index_mapper = LinearMapper(range=index_range)\n\n if value_bounds is not None:\n value_range = DataRange1D(low=value_bounds[0], high=value_bounds[1])\n else:\n value_range = DataRange1D()\n value_range.add(value)\n value_mapper = value_mapper_class(range=value_range)\n\n plot = LinePlot(\n index=index,\n value=value,\n index_mapper=index_mapper,\n value_mapper=value_mapper,\n orientation=orientation,\n color=color,\n bgcolor=bgcolor,\n line_width=width,\n line_style=dash,\n border_visible=border_visible,\n **renderer_traits\n )\n\n if add_grid:\n add_default_grids(plot, orientation)\n if add_axis:\n add_default_axes(plot, orientation)\n return plot\n\n\ndef create_bar_plot(\n data=[],\n index_bounds=None,\n value_bounds=None,\n orientation=\"h\",\n color=\"red\",\n bar_width=10.0,\n value_mapper_class=LinearMapper,\n line_color=\"black\",\n fill_color=\"red\",\n line_width=1,\n bgcolor=\"transparent\",\n border_visible=False,\n antialias=True,\n add_grid=False,\n add_axis=False,\n **renderer_traits\n):\n \"\"\"Creates a BarPlot renderer from a single Nx2 data array or a tuple of\n two length-N 1-D arrays. The data must be sorted on the index if any\n reverse-mapping tools are to be used.\n\n Pre-existing \"index\" and \"value\" datasources can be passed in.\n \"\"\"\n index, value = _create_data_sources(data)\n\n if index_bounds is not None:\n index_range = DataRange1D(low=index_bounds[0], high=index_bounds[1])\n else:\n index_range = DataRange1D()\n index_range.add(index)\n index_mapper = LinearMapper(range=index_range)\n\n if value_bounds is not None:\n value_range = DataRange1D(low=value_bounds[0], high=value_bounds[1])\n else:\n value_range = DataRange1D()\n value_range.add(value)\n value_mapper = value_mapper_class(range=value_range)\n\n # Create the plot\n plot = BarPlot(\n index=index,\n value=value,\n value_mapper=value_mapper,\n index_mapper=index_mapper,\n orientation=orientation,\n line_color=line_color,\n fill_color=fill_color,\n line_width=line_width,\n bar_width=bar_width,\n antialias=antialias,\n **renderer_traits\n )\n\n if add_grid:\n add_default_grids(plot, orientation)\n if add_axis:\n add_default_axes(plot, orientation)\n return plot\n\n\ndef create_polar_plot(\n data,\n orientation=\"h\",\n color=\"black\",\n width=1.0,\n dash=\"solid\",\n grid=\"dot\",\n value_mapper_class=PolarMapper,\n **renderer_traits\n):\n \"\"\"Creates a polar plot renderer from a single Nx2 data array or a tuple\n of two length-N 1-D arrays. 
The data must be sorted on the index if any\n reverse-mapping tools are to be used.\n\n Pre-existing \"index\" and \"value\" datasources can be passed in.\n \"\"\"\n if (type(data) != ndarray) and (len(data) == 2):\n data = transpose(array(data))\n\n r_data, t_data = transpose(data)\n index_data = r_data * cos(t_data)\n value_data = r_data * sin(t_data)\n\n index = ArrayDataSource(index_data, sort_order=\"ascending\")\n # Typically the value data is unsorted\n value = ArrayDataSource(value_data)\n\n index_range = DataRange1D()\n index_range.add(index)\n index_mapper = PolarMapper(range=index_range)\n\n value_range = DataRange1D()\n value_range.add(value)\n value_mapper = value_mapper_class(range=value_range)\n\n plot = PolarLineRenderer(\n index=index,\n value=value,\n index_mapper=index_mapper,\n value_mapper=value_mapper,\n orientation=orientation,\n color=color,\n line_width=width,\n line_style=dash,\n grid_style=grid,\n **renderer_traits\n )\n\n return plot\n\n\ndef add_default_axes(\n plot, orientation=\"normal\", vtitle=\"\", htitle=\"\", axis_class=PlotAxis\n):\n \"\"\"\n Creates left and bottom axes for a plot. Assumes that the index is\n horizontal and value is vertical by default; set *orientation* to\n something other than \"normal\" if they are flipped.\n \"\"\"\n if orientation in (\"normal\", \"h\"):\n v_mapper = plot.value_mapper\n h_mapper = plot.index_mapper\n else:\n v_mapper = plot.index_mapper\n h_mapper = plot.value_mapper\n\n left = axis_class(\n orientation=\"left\", title=vtitle, mapper=v_mapper, component=plot\n )\n\n bottom = axis_class(\n orientation=\"bottom\", title=htitle, mapper=h_mapper, component=plot\n )\n\n plot.underlays.append(left)\n plot.underlays.append(bottom)\n return left, bottom\n\n\ndef add_default_grids(plot, orientation=\"normal\"):\n \"\"\"\n Creates horizontal and vertical gridlines for a plot. Assumes that the\n index is horizontal and value is vertical by default; set orientation to\n something other than \"normal\" if they are flipped.\n \"\"\"\n if orientation in (\"normal\", \"h\"):\n v_mapper = plot.index_mapper\n h_mapper = plot.value_mapper\n else:\n v_mapper = plot.value_mapper\n h_mapper = plot.index_mapper\n\n vgrid = PlotGrid(\n mapper=v_mapper,\n orientation=\"vertical\",\n component=plot,\n line_color=\"lightgray\",\n line_style=\"dot\",\n )\n\n hgrid = PlotGrid(\n mapper=h_mapper,\n orientation=\"horizontal\",\n component=plot,\n line_color=\"lightgray\",\n line_style=\"dot\",\n )\n\n plot.underlays.append(vgrid)\n plot.underlays.append(hgrid)\n return hgrid, vgrid\n", "path": "chaco/plot_factory.py"}], "after_files": [{"content": "\"\"\"\nContains convenience functions to create ready-made PlotRenderer instances of\nvarious types.\n\"\"\"\n\nfrom numpy import array, ndarray, transpose, cos, sin\n\n# Local relative imports\nfrom .abstract_data_source import AbstractDataSource\nfrom .array_data_source import ArrayDataSource\nfrom .axis import PlotAxis\nfrom .barplot import BarPlot\nfrom .data_range_1d import DataRange1D\nfrom .grid import PlotGrid\nfrom .linear_mapper import LinearMapper\nfrom .scatterplot import ScatterPlot\nfrom .polar_mapper import PolarMapper\nfrom .lineplot import LinePlot\nfrom .polar_line_renderer import PolarLineRenderer\n\n\ndef _create_data_sources(data, index_sort=\"none\"):\n \"\"\"\n Returns datasources for index and value based on the inputs. 
Assumes that\n the index data is unsorted unless otherwise specified.\n \"\"\"\n if (type(data) == ndarray) or (len(data) == 2):\n index, value = data\n if type(index) in (list, tuple, ndarray):\n index = ArrayDataSource(array(index), sort_order=index_sort)\n elif not isinstance(index, AbstractDataSource):\n raise RuntimeError(\n \"Need an array or list of values or a DataSource, got %s instead.\"\n % type(index)\n )\n\n if type(value) in (list, tuple, ndarray):\n value = ArrayDataSource(array(value))\n elif not isinstance(value, AbstractDataSource):\n raise RuntimeError(\n \"Need an array or list of values or a DataSource, got %s instead.\"\n % type(index)\n )\n\n return index, value\n else:\n raise RuntimeError(\"Unable to create datasources.\")\n\n\ndef create_scatter_plot(\n data=None,\n index_bounds=None,\n value_bounds=None,\n orientation=\"h\",\n color=\"green\",\n marker=\"square\",\n marker_size=4,\n bgcolor=\"transparent\",\n outline_color=\"black\",\n border_visible=True,\n add_grid=False,\n add_axis=False,\n index_sort=\"none\",\n **renderer_traits\n):\n \"\"\"Creates a ScatterPlot renderer from a single Nx2 data array or a tuple\n of two length-N 1-D arrays. The data must be sorted on the index if any\n reverse-mapping tools are to be used.\n\n Pre-existing \"index\" and \"value\" datasources can be passed in.\n \"\"\"\n if data is None:\n data = []\n\n index, value = _create_data_sources(data)\n\n if index_bounds is not None:\n index_range = DataRange1D(low=index_bounds[0], high=index_bounds[1])\n else:\n index_range = DataRange1D()\n index_range.add(index)\n index_mapper = LinearMapper(range=index_range)\n\n if value_bounds is not None:\n value_range = DataRange1D(low=value_bounds[0], high=value_bounds[1])\n else:\n value_range = DataRange1D()\n value_range.add(value)\n value_mapper = LinearMapper(range=value_range)\n\n plot = ScatterPlot(\n index=index,\n value=value,\n index_mapper=index_mapper,\n value_mapper=value_mapper,\n orientation=orientation,\n marker=marker,\n marker_size=marker_size,\n color=color,\n bgcolor=bgcolor,\n outline_color=outline_color,\n border_visible=border_visible,\n **renderer_traits\n )\n\n if add_grid:\n add_default_grids(plot, orientation)\n if add_axis:\n add_default_axes(plot, orientation)\n return plot\n\n\ndef create_line_plot(\n data=None,\n index_bounds=None,\n value_bounds=None,\n orientation=\"h\",\n color=\"red\",\n width=1.0,\n dash=\"solid\",\n value_mapper_class=LinearMapper,\n bgcolor=\"transparent\",\n border_visible=False,\n add_grid=False,\n add_axis=False,\n index_sort=\"none\",\n **renderer_traits\n):\n \"\"\"Creates a LinePlot renderer from a single Nx2 data array or a tuple of\n two length-N 1-D arrays. 
The data must be sorted on the index if any\n reverse-mapping tools are to be used.\n\n Pre-existing \"index\" and \"value\" datasources can be passed in.\n \"\"\"\n if data is None:\n data = []\n\n index, value = _create_data_sources(data, index_sort)\n\n if index_bounds is not None:\n index_range = DataRange1D(low=index_bounds[0], high=index_bounds[1])\n else:\n index_range = DataRange1D()\n index_range.add(index)\n index_mapper = LinearMapper(range=index_range)\n\n if value_bounds is not None:\n value_range = DataRange1D(low=value_bounds[0], high=value_bounds[1])\n else:\n value_range = DataRange1D()\n value_range.add(value)\n value_mapper = value_mapper_class(range=value_range)\n\n plot = LinePlot(\n index=index,\n value=value,\n index_mapper=index_mapper,\n value_mapper=value_mapper,\n orientation=orientation,\n color=color,\n bgcolor=bgcolor,\n line_width=width,\n line_style=dash,\n border_visible=border_visible,\n **renderer_traits\n )\n\n if add_grid:\n add_default_grids(plot, orientation)\n if add_axis:\n add_default_axes(plot, orientation)\n return plot\n\n\ndef create_bar_plot(\n data=None,\n index_bounds=None,\n value_bounds=None,\n orientation=\"h\",\n color=\"red\",\n bar_width=10.0,\n value_mapper_class=LinearMapper,\n line_color=\"black\",\n fill_color=\"red\",\n line_width=1,\n bgcolor=\"transparent\",\n border_visible=False,\n antialias=True,\n add_grid=False,\n add_axis=False,\n **renderer_traits\n):\n \"\"\"Creates a BarPlot renderer from a single Nx2 data array or a tuple of\n two length-N 1-D arrays. The data must be sorted on the index if any\n reverse-mapping tools are to be used.\n\n Pre-existing \"index\" and \"value\" datasources can be passed in.\n \"\"\"\n if data is None:\n data = []\n\n index, value = _create_data_sources(data)\n\n if index_bounds is not None:\n index_range = DataRange1D(low=index_bounds[0], high=index_bounds[1])\n else:\n index_range = DataRange1D()\n index_range.add(index)\n index_mapper = LinearMapper(range=index_range)\n\n if value_bounds is not None:\n value_range = DataRange1D(low=value_bounds[0], high=value_bounds[1])\n else:\n value_range = DataRange1D()\n value_range.add(value)\n value_mapper = value_mapper_class(range=value_range)\n\n # Create the plot\n plot = BarPlot(\n index=index,\n value=value,\n value_mapper=value_mapper,\n index_mapper=index_mapper,\n orientation=orientation,\n line_color=line_color,\n fill_color=fill_color,\n line_width=line_width,\n bar_width=bar_width,\n antialias=antialias,\n **renderer_traits\n )\n\n if add_grid:\n add_default_grids(plot, orientation)\n if add_axis:\n add_default_axes(plot, orientation)\n return plot\n\n\ndef create_polar_plot(\n data,\n orientation=\"h\",\n color=\"black\",\n width=1.0,\n dash=\"solid\",\n grid=\"dot\",\n value_mapper_class=PolarMapper,\n **renderer_traits\n):\n \"\"\"Creates a polar plot renderer from a single Nx2 data array or a tuple\n of two length-N 1-D arrays. 
The data must be sorted on the index if any\n reverse-mapping tools are to be used.\n\n Pre-existing \"index\" and \"value\" datasources can be passed in.\n \"\"\"\n if (type(data) != ndarray) and (len(data) == 2):\n data = transpose(array(data))\n\n r_data, t_data = transpose(data)\n index_data = r_data * cos(t_data)\n value_data = r_data * sin(t_data)\n\n index = ArrayDataSource(index_data, sort_order=\"ascending\")\n # Typically the value data is unsorted\n value = ArrayDataSource(value_data)\n\n index_range = DataRange1D()\n index_range.add(index)\n index_mapper = PolarMapper(range=index_range)\n\n value_range = DataRange1D()\n value_range.add(value)\n value_mapper = value_mapper_class(range=value_range)\n\n plot = PolarLineRenderer(\n index=index,\n value=value,\n index_mapper=index_mapper,\n value_mapper=value_mapper,\n orientation=orientation,\n color=color,\n line_width=width,\n line_style=dash,\n grid_style=grid,\n **renderer_traits\n )\n\n return plot\n\n\ndef add_default_axes(\n plot, orientation=\"normal\", vtitle=\"\", htitle=\"\", axis_class=PlotAxis\n):\n \"\"\"\n Creates left and bottom axes for a plot. Assumes that the index is\n horizontal and value is vertical by default; set *orientation* to\n something other than \"normal\" if they are flipped.\n \"\"\"\n if orientation in (\"normal\", \"h\"):\n v_mapper = plot.value_mapper\n h_mapper = plot.index_mapper\n else:\n v_mapper = plot.index_mapper\n h_mapper = plot.value_mapper\n\n left = axis_class(\n orientation=\"left\", title=vtitle, mapper=v_mapper, component=plot\n )\n\n bottom = axis_class(\n orientation=\"bottom\", title=htitle, mapper=h_mapper, component=plot\n )\n\n plot.underlays.append(left)\n plot.underlays.append(bottom)\n return left, bottom\n\n\ndef add_default_grids(plot, orientation=\"normal\"):\n \"\"\"\n Creates horizontal and vertical gridlines for a plot. Assumes that the\n index is horizontal and value is vertical by default; set orientation to\n something other than \"normal\" if they are flipped.\n \"\"\"\n if orientation in (\"normal\", \"h\"):\n v_mapper = plot.index_mapper\n h_mapper = plot.value_mapper\n else:\n v_mapper = plot.value_mapper\n h_mapper = plot.index_mapper\n\n vgrid = PlotGrid(\n mapper=v_mapper,\n orientation=\"vertical\",\n component=plot,\n line_color=\"lightgray\",\n line_style=\"dot\",\n )\n\n hgrid = PlotGrid(\n mapper=h_mapper,\n orientation=\"horizontal\",\n component=plot,\n line_color=\"lightgray\",\n line_style=\"dot\",\n )\n\n plot.underlays.append(vgrid)\n plot.underlays.append(hgrid)\n return hgrid, vgrid\n", "path": "chaco/plot_factory.py"}]} | 3,656 | 364 |
gh_patches_debug_12228 | rasdani/github-patches | git_diff | google__osv.dev-487 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make ecosystem counts in the UI more accurate.
Currently these counts include withdrawn/invalid entries.
Context: https://github.com/google/osv/issues/468#issuecomment-1169755263
--- END ISSUE ---
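For orientation, a hedged sketch of the kind of filtered count the issue implies — only public, fully processed entries contribute — assuming the `Bug` model and `BugStatus` enum that appear in the file listing below; this is an illustration, not necessarily the project's exact fix:

```python
import osv  # assumes the datastore models used in the listing below


def count_visible_bugs(ecosystem):
    # Exclude withdrawn/invalid records by counting only public, processed bugs.
    return osv.Bug.query(
        osv.Bug.ecosystem == ecosystem,
        osv.Bug.public == True,  # pylint: disable=singleton-comparison
        osv.Bug.status == osv.BugStatus.PROCESSED).count()
```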
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gcp/appengine/frontend_handlers.py`
Content:
```
1 # Copyright 2021 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Handlers for the OSV web frontend."""
15
16 import json
17 import os
18 import math
19
20 from flask import abort
21 from flask import Blueprint
22 from flask import make_response
23 from flask import redirect
24 from flask import render_template
25 from flask import request
26 import markdown2
27
28 import cache
29 import osv
30 import rate_limiter
31 import source_mapper
32 import utils
33
34 blueprint = Blueprint('frontend_handlers', __name__)
35
36 _PAGE_SIZE = 16
37 _PAGE_LOOKAHEAD = 4
38 _REQUESTS_PER_MIN = 30
39
40 if utils.is_prod():
41 redis_host = os.environ.get('REDISHOST', 'localhost')
42 redis_port = int(os.environ.get('REDISPORT', 6379))
43 limiter = rate_limiter.RateLimiter(
44 redis_host, redis_port, requests_per_min=_REQUESTS_PER_MIN)
45
46 @blueprint.before_request
47 def check_rate_limit():
48 ip_addr = request.headers.get('X-Appengine-User-Ip', 'unknown')
49 if not limiter.check_request(ip_addr):
50 abort(429)
51
52
53 @blueprint.before_request
54 def check_cors_preflight():
55 """Handle CORS preflight requests."""
56 if request.method != 'OPTIONS':
57 return None
58
59 response = make_response()
60 response.headers.add('Access-Control-Allow-Origin', 'http://localhost:8080')
61 response.headers.add('Access-Control-Allow-Methods', '*')
62 response.headers.add('Access-Control-Allow-Headers', '*')
63 return response
64
65
66 @blueprint.after_request
67 def add_cors_headers(response):
68 """Add CORS headers."""
69 response.headers.add('Access-Control-Allow-Origin', 'http://localhost:8080')
70 return response
71
72
73 @blueprint.route('/v2/')
74 def index_v2():
75 return redirect('/')
76
77
78 @blueprint.route('/v2/<path:subpath>')
79 def index_v2_with_subpath(subpath):
80 return redirect('/' + subpath)
81
82
83 @blueprint.route('/')
84 def index():
85 return render_template(
86 'home.html', ecosystem_counts=osv_get_ecosystem_counts_cached())
87
88
89 @blueprint.route('/about')
90 def about():
91 return render_template('about.html')
92
93
94 @blueprint.route('/list')
95 def list_vulnerabilities():
96 """Main page."""
97 query = request.args.get('q', '')
98 page = int(request.args.get('page', 1))
99 ecosystem = request.args.get('ecosystem')
100 results = osv_query(query, page, False, ecosystem)
101
102 # Fetch ecosystems by default. As an optimization, skip when rendering page
103 # fragments.
104 ecosystem_counts = osv_get_ecosystem_counts_cached(
105 ) if not request.headers.get('Turbo-Frame') else None
106
107 return render_template(
108 'list.html',
109 page=page,
110 query=query,
111 selected_ecosystem=ecosystem,
112 ecosystem_counts=ecosystem_counts,
113 vulnerabilities=results['items'])
114
115
116 @blueprint.route('/vulnerability/<vuln_id>')
117 def vulnerability(vuln_id):
118 """Vulnerability page."""
119 vuln = osv_get_by_id(vuln_id)
120 return render_template('vulnerability.html', vulnerability=vuln)
121
122
123 def bug_to_response(bug, detailed=True):
124 """Convert a Bug entity to a response object."""
125 response = osv.vulnerability_to_dict(bug.to_vulnerability())
126 response.update({
127 'isFixed': bug.is_fixed,
128 'invalid': bug.status == osv.BugStatus.INVALID
129 })
130
131 if detailed:
132 add_links(response)
133 add_source_info(bug, response)
134 return response
135
136
137 def add_links(bug):
138 """Add VCS links where possible."""
139
140 for entry in bug.get('affected', []):
141 for i, affected_range in enumerate(entry.get('ranges', [])):
142 affected_range['id'] = i
143 if affected_range['type'] != 'GIT':
144 continue
145
146 repo_url = affected_range.get('repo')
147 if not repo_url:
148 continue
149
150 for event in affected_range.get('events', []):
151 if event.get('introduced'):
152 event['introduced_link'] = _commit_to_link(repo_url,
153 event['introduced'])
154 continue
155
156 if event.get('fixed'):
157 event['fixed_link'] = _commit_to_link(repo_url, event['fixed'])
158 continue
159
160 if event.get('limit'):
161 event['limit_link'] = _commit_to_link(repo_url, event['limit'])
162 continue
163
164
165 def add_source_info(bug, response):
166 """Add source information to `response`."""
167 if bug.source_of_truth == osv.SourceOfTruth.INTERNAL:
168 response['source'] = 'INTERNAL'
169 return
170
171 source_repo = osv.get_source_repository(bug.source)
172 if not source_repo or not source_repo.link:
173 return
174
175 source_path = osv.source_path(source_repo, bug)
176 response['source'] = source_repo.link + source_path
177 response['source_link'] = response['source']
178
179
180 def _commit_to_link(repo_url, commit):
181 """Convert commit to link."""
182 vcs = source_mapper.get_vcs_viewer_for_url(repo_url)
183 if not vcs:
184 return None
185
186 if ':' not in commit:
187 return vcs.get_source_url_for_revision(commit)
188
189 commit_parts = commit.split(':')
190 if len(commit_parts) != 2:
191 return None
192
193 start, end = commit_parts
194 if start == 'unknown':
195 return None
196
197 return vcs.get_source_url_for_revision_diff(start, end)
198
199
200 def osv_get_ecosystems():
201 """Get list of ecosystems."""
202 query = osv.Bug.query(projection=[osv.Bug.ecosystem], distinct=True)
203 return sorted([bug.ecosystem[0] for bug in query if bug.ecosystem],
204 key=str.lower)
205
206
207 # TODO: Figure out how to skip cache when testing
208 @cache.instance.cached(
209 timeout=24 * 60 * 60, key_prefix='osv_get_ecosystem_counts')
210 def osv_get_ecosystem_counts_cached():
211 """Get count of vulnerabilities per ecosystem, cached"""
212 return osv_get_ecosystem_counts()
213
214
215 def osv_get_ecosystem_counts():
216 """Get count of vulnerabilities per ecosystem."""
217 counts = {}
218 ecosystems = osv_get_ecosystems()
219 for ecosystem in ecosystems:
220 base_ecosystem = ecosystem.split(':')[0]
221 if base_ecosystem not in counts:
222 counts[base_ecosystem] = 0
223
224 counts[base_ecosystem] += osv.Bug.query(
225 osv.Bug.ecosystem == ecosystem).count()
226
227 return counts
228
229
230 def osv_query(search_string, page, affected_only, ecosystem):
231 """Run an OSV query."""
232 query = osv.Bug.query(osv.Bug.status == osv.BugStatus.PROCESSED,
233 osv.Bug.public == True) # pylint: disable=singleton-comparison
234
235 if search_string:
236 query = query.filter(osv.Bug.search_indices == search_string.lower())
237
238 if affected_only:
239 query = query.filter(osv.Bug.has_affected == True) # pylint: disable=singleton-comparison
240
241 if ecosystem:
242 query = query.filter(osv.Bug.ecosystem == ecosystem)
243
244 query = query.order(-osv.Bug.last_modified)
245 total = query.count()
246 results = {
247 'total': total,
248 'items': [],
249 }
250
251 bugs, _, _ = query.fetch_page(
252 page_size=_PAGE_SIZE, offset=(page - 1) * _PAGE_SIZE)
253 for bug in bugs:
254 results['items'].append(bug_to_response(bug, detailed=False))
255
256 return results
257
258
259 def osv_get_by_id(vuln_id):
260 """Gets bug details from its id. If invalid, aborts the request."""
261 if not vuln_id:
262 abort(400)
263 return None
264
265 bug = osv.Bug.get_by_id(vuln_id)
266 if not bug:
267 abort(404)
268 return None
269
270 if bug.status == osv.BugStatus.UNPROCESSED:
271 abort(404)
272 return None
273
274 if not bug.public:
275 abort(403)
276 return None
277
278 return bug_to_response(bug)
279
280
281 @blueprint.app_template_filter('event_type')
282 def event_type(event):
283 """Get the type from an event."""
284 if event.get('introduced'):
285 return 'Introduced'
286 if event.get('fixed'):
287 return 'Fixed'
288 if event.get('limit'):
289 return 'Limit'
290 if event.get('last_affected'):
291 return 'Last affected'
292
293 return None
294
295
296 @blueprint.app_template_filter('event_link')
297 def event_link(event):
298 """Get the link from an event."""
299 if event.get('introduced_link'):
300 return event['introduced_link']
301 if event.get('fixed_link'):
302 return event['fixed_link']
303 if event.get('limit_link'):
304 return event['limit_link']
305 if event.get('last_affected_link'):
306 return event['last_affected_link']
307
308 return None
309
310
311 @blueprint.app_template_filter('event_value')
312 def event_value(event):
313 """Get the value from an event."""
314 if event.get('introduced'):
315 return event['introduced']
316 if event.get('fixed'):
317 return event['fixed']
318 if event.get('limit'):
319 return event['limit']
320 if event.get('last_affected'):
321 return event['last_affected']
322
323 return None
324
325
326 @blueprint.app_template_filter('should_collapse')
327 def should_collapse(affected):
328 """Whether if we should collapse the package tab bar."""
329 total_package_length = sum(
330 [len(entry.get('package', {}).get('name', '')) for entry in affected])
331 return total_package_length > 70 or len(affected) > 5
332
333
334 @blueprint.app_template_filter('group_versions')
335 def group_versions(versions):
336 """Group versions by prefix."""
337 groups = {}
338
339 for version in sorted(versions):
340 if '.' not in version:
341 groups.setdefault('Other', []).append(version)
342 continue
343
344 label = version.split('.')[0] + '.*'
345 groups.setdefault(label, []).append(version)
346
347 return groups
348
349
350 @blueprint.app_template_filter('markdown')
351 def markdown(text):
352 """Render markdown."""
353 if text:
354 return markdown2.markdown(text, extras=['fenced-code-blocks'])
355
356 return ''
357
358
359 @blueprint.app_template_filter('display_json')
360 def display_json(data):
361 # We can't use the default `tojson` filter as it's intended for code (and
362 # escapes characters like '<' to '\u003c'). We want to render the JSON for
363 # display purposes and use HTML escaping ('<') instead so it's rendered
364 # as '<'.
365 return json.dumps(data, indent=4)
366
367
368 @blueprint.app_template_filter('log')
369 def logarithm(n):
370 return math.log(n)
371
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gcp/appengine/frontend_handlers.py b/gcp/appengine/frontend_handlers.py
--- a/gcp/appengine/frontend_handlers.py
+++ b/gcp/appengine/frontend_handlers.py
@@ -217,12 +217,15 @@
counts = {}
ecosystems = osv_get_ecosystems()
for ecosystem in ecosystems:
- base_ecosystem = ecosystem.split(':')[0]
- if base_ecosystem not in counts:
- counts[base_ecosystem] = 0
+ if ':' in ecosystem:
+ # Count by the base ecosystem index. Otherwise we'll overcount as a
+ # single entry may refer to multiple sub-ecosystems.
+ continue
- counts[base_ecosystem] += osv.Bug.query(
- osv.Bug.ecosystem == ecosystem).count()
+ counts[ecosystem] = osv.Bug.query(
+ osv.Bug.ecosystem == ecosystem,
+ osv.Bug.public == True, # pylint: disable=singleton-comparison
+ osv.Bug.status == osv.BugStatus.PROCESSED).count()
return counts
| {"golden_diff": "diff --git a/gcp/appengine/frontend_handlers.py b/gcp/appengine/frontend_handlers.py\n--- a/gcp/appengine/frontend_handlers.py\n+++ b/gcp/appengine/frontend_handlers.py\n@@ -217,12 +217,15 @@\n counts = {}\n ecosystems = osv_get_ecosystems()\n for ecosystem in ecosystems:\n- base_ecosystem = ecosystem.split(':')[0]\n- if base_ecosystem not in counts:\n- counts[base_ecosystem] = 0\n+ if ':' in ecosystem:\n+ # Count by the base ecosystem index. Otherwise we'll overcount as a\n+ # single entry may refer to multiple sub-ecosystems.\n+ continue\n \n- counts[base_ecosystem] += osv.Bug.query(\n- osv.Bug.ecosystem == ecosystem).count()\n+ counts[ecosystem] = osv.Bug.query(\n+ osv.Bug.ecosystem == ecosystem,\n+ osv.Bug.public == True, # pylint: disable=singleton-comparison\n+ osv.Bug.status == osv.BugStatus.PROCESSED).count()\n \n return counts\n", "issue": "Make ecosystem counts in UI more accurate. \nCurrently these counts include withdrawn/invalid entries.\r\n\r\nContext: https://github.com/google/osv/issues/468#issuecomment-1169755263\n", "before_files": [{"content": "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Handlers for the OSV web frontend.\"\"\"\n\nimport json\nimport os\nimport math\n\nfrom flask import abort\nfrom flask import Blueprint\nfrom flask import make_response\nfrom flask import redirect\nfrom flask import render_template\nfrom flask import request\nimport markdown2\n\nimport cache\nimport osv\nimport rate_limiter\nimport source_mapper\nimport utils\n\nblueprint = Blueprint('frontend_handlers', __name__)\n\n_PAGE_SIZE = 16\n_PAGE_LOOKAHEAD = 4\n_REQUESTS_PER_MIN = 30\n\nif utils.is_prod():\n redis_host = os.environ.get('REDISHOST', 'localhost')\n redis_port = int(os.environ.get('REDISPORT', 6379))\n limiter = rate_limiter.RateLimiter(\n redis_host, redis_port, requests_per_min=_REQUESTS_PER_MIN)\n\n @blueprint.before_request\n def check_rate_limit():\n ip_addr = request.headers.get('X-Appengine-User-Ip', 'unknown')\n if not limiter.check_request(ip_addr):\n abort(429)\n\n\[email protected]_request\ndef check_cors_preflight():\n \"\"\"Handle CORS preflight requests.\"\"\"\n if request.method != 'OPTIONS':\n return None\n\n response = make_response()\n response.headers.add('Access-Control-Allow-Origin', 'http://localhost:8080')\n response.headers.add('Access-Control-Allow-Methods', '*')\n response.headers.add('Access-Control-Allow-Headers', '*')\n return response\n\n\[email protected]_request\ndef add_cors_headers(response):\n \"\"\"Add CORS headers.\"\"\"\n response.headers.add('Access-Control-Allow-Origin', 'http://localhost:8080')\n return response\n\n\[email protected]('/v2/')\ndef index_v2():\n return redirect('/')\n\n\[email protected]('/v2/<path:subpath>')\ndef index_v2_with_subpath(subpath):\n return redirect('/' + subpath)\n\n\[email protected]('/')\ndef index():\n return render_template(\n 'home.html', ecosystem_counts=osv_get_ecosystem_counts_cached())\n\n\[email protected]('/about')\ndef about():\n return 
render_template('about.html')\n\n\[email protected]('/list')\ndef list_vulnerabilities():\n \"\"\"Main page.\"\"\"\n query = request.args.get('q', '')\n page = int(request.args.get('page', 1))\n ecosystem = request.args.get('ecosystem')\n results = osv_query(query, page, False, ecosystem)\n\n # Fetch ecosystems by default. As an optimization, skip when rendering page\n # fragments.\n ecosystem_counts = osv_get_ecosystem_counts_cached(\n ) if not request.headers.get('Turbo-Frame') else None\n\n return render_template(\n 'list.html',\n page=page,\n query=query,\n selected_ecosystem=ecosystem,\n ecosystem_counts=ecosystem_counts,\n vulnerabilities=results['items'])\n\n\[email protected]('/vulnerability/<vuln_id>')\ndef vulnerability(vuln_id):\n \"\"\"Vulnerability page.\"\"\"\n vuln = osv_get_by_id(vuln_id)\n return render_template('vulnerability.html', vulnerability=vuln)\n\n\ndef bug_to_response(bug, detailed=True):\n \"\"\"Convert a Bug entity to a response object.\"\"\"\n response = osv.vulnerability_to_dict(bug.to_vulnerability())\n response.update({\n 'isFixed': bug.is_fixed,\n 'invalid': bug.status == osv.BugStatus.INVALID\n })\n\n if detailed:\n add_links(response)\n add_source_info(bug, response)\n return response\n\n\ndef add_links(bug):\n \"\"\"Add VCS links where possible.\"\"\"\n\n for entry in bug.get('affected', []):\n for i, affected_range in enumerate(entry.get('ranges', [])):\n affected_range['id'] = i\n if affected_range['type'] != 'GIT':\n continue\n\n repo_url = affected_range.get('repo')\n if not repo_url:\n continue\n\n for event in affected_range.get('events', []):\n if event.get('introduced'):\n event['introduced_link'] = _commit_to_link(repo_url,\n event['introduced'])\n continue\n\n if event.get('fixed'):\n event['fixed_link'] = _commit_to_link(repo_url, event['fixed'])\n continue\n\n if event.get('limit'):\n event['limit_link'] = _commit_to_link(repo_url, event['limit'])\n continue\n\n\ndef add_source_info(bug, response):\n \"\"\"Add source information to `response`.\"\"\"\n if bug.source_of_truth == osv.SourceOfTruth.INTERNAL:\n response['source'] = 'INTERNAL'\n return\n\n source_repo = osv.get_source_repository(bug.source)\n if not source_repo or not source_repo.link:\n return\n\n source_path = osv.source_path(source_repo, bug)\n response['source'] = source_repo.link + source_path\n response['source_link'] = response['source']\n\n\ndef _commit_to_link(repo_url, commit):\n \"\"\"Convert commit to link.\"\"\"\n vcs = source_mapper.get_vcs_viewer_for_url(repo_url)\n if not vcs:\n return None\n\n if ':' not in commit:\n return vcs.get_source_url_for_revision(commit)\n\n commit_parts = commit.split(':')\n if len(commit_parts) != 2:\n return None\n\n start, end = commit_parts\n if start == 'unknown':\n return None\n\n return vcs.get_source_url_for_revision_diff(start, end)\n\n\ndef osv_get_ecosystems():\n \"\"\"Get list of ecosystems.\"\"\"\n query = osv.Bug.query(projection=[osv.Bug.ecosystem], distinct=True)\n return sorted([bug.ecosystem[0] for bug in query if bug.ecosystem],\n key=str.lower)\n\n\n# TODO: Figure out how to skip cache when testing\[email protected](\n timeout=24 * 60 * 60, key_prefix='osv_get_ecosystem_counts')\ndef osv_get_ecosystem_counts_cached():\n \"\"\"Get count of vulnerabilities per ecosystem, cached\"\"\"\n return osv_get_ecosystem_counts()\n\n\ndef osv_get_ecosystem_counts():\n \"\"\"Get count of vulnerabilities per ecosystem.\"\"\"\n counts = {}\n ecosystems = osv_get_ecosystems()\n for ecosystem in ecosystems:\n base_ecosystem = 
ecosystem.split(':')[0]\n if base_ecosystem not in counts:\n counts[base_ecosystem] = 0\n\n counts[base_ecosystem] += osv.Bug.query(\n osv.Bug.ecosystem == ecosystem).count()\n\n return counts\n\n\ndef osv_query(search_string, page, affected_only, ecosystem):\n \"\"\"Run an OSV query.\"\"\"\n query = osv.Bug.query(osv.Bug.status == osv.BugStatus.PROCESSED,\n osv.Bug.public == True) # pylint: disable=singleton-comparison\n\n if search_string:\n query = query.filter(osv.Bug.search_indices == search_string.lower())\n\n if affected_only:\n query = query.filter(osv.Bug.has_affected == True) # pylint: disable=singleton-comparison\n\n if ecosystem:\n query = query.filter(osv.Bug.ecosystem == ecosystem)\n\n query = query.order(-osv.Bug.last_modified)\n total = query.count()\n results = {\n 'total': total,\n 'items': [],\n }\n\n bugs, _, _ = query.fetch_page(\n page_size=_PAGE_SIZE, offset=(page - 1) * _PAGE_SIZE)\n for bug in bugs:\n results['items'].append(bug_to_response(bug, detailed=False))\n\n return results\n\n\ndef osv_get_by_id(vuln_id):\n \"\"\"Gets bug details from its id. If invalid, aborts the request.\"\"\"\n if not vuln_id:\n abort(400)\n return None\n\n bug = osv.Bug.get_by_id(vuln_id)\n if not bug:\n abort(404)\n return None\n\n if bug.status == osv.BugStatus.UNPROCESSED:\n abort(404)\n return None\n\n if not bug.public:\n abort(403)\n return None\n\n return bug_to_response(bug)\n\n\[email protected]_template_filter('event_type')\ndef event_type(event):\n \"\"\"Get the type from an event.\"\"\"\n if event.get('introduced'):\n return 'Introduced'\n if event.get('fixed'):\n return 'Fixed'\n if event.get('limit'):\n return 'Limit'\n if event.get('last_affected'):\n return 'Last affected'\n\n return None\n\n\[email protected]_template_filter('event_link')\ndef event_link(event):\n \"\"\"Get the link from an event.\"\"\"\n if event.get('introduced_link'):\n return event['introduced_link']\n if event.get('fixed_link'):\n return event['fixed_link']\n if event.get('limit_link'):\n return event['limit_link']\n if event.get('last_affected_link'):\n return event['last_affected_link']\n\n return None\n\n\[email protected]_template_filter('event_value')\ndef event_value(event):\n \"\"\"Get the value from an event.\"\"\"\n if event.get('introduced'):\n return event['introduced']\n if event.get('fixed'):\n return event['fixed']\n if event.get('limit'):\n return event['limit']\n if event.get('last_affected'):\n return event['last_affected']\n\n return None\n\n\[email protected]_template_filter('should_collapse')\ndef should_collapse(affected):\n \"\"\"Whether if we should collapse the package tab bar.\"\"\"\n total_package_length = sum(\n [len(entry.get('package', {}).get('name', '')) for entry in affected])\n return total_package_length > 70 or len(affected) > 5\n\n\[email protected]_template_filter('group_versions')\ndef group_versions(versions):\n \"\"\"Group versions by prefix.\"\"\"\n groups = {}\n\n for version in sorted(versions):\n if '.' 
not in version:\n groups.setdefault('Other', []).append(version)\n continue\n\n label = version.split('.')[0] + '.*'\n groups.setdefault(label, []).append(version)\n\n return groups\n\n\[email protected]_template_filter('markdown')\ndef markdown(text):\n \"\"\"Render markdown.\"\"\"\n if text:\n return markdown2.markdown(text, extras=['fenced-code-blocks'])\n\n return ''\n\n\[email protected]_template_filter('display_json')\ndef display_json(data):\n # We can't use the default `tojson` filter as it's intended for code (and\n # escapes characters like '<' to '\\u003c'). We want to render the JSON for\n # display purposes and use HTML escaping ('<') instead so it's rendered\n # as '<'.\n return json.dumps(data, indent=4)\n\n\[email protected]_template_filter('log')\ndef logarithm(n):\n return math.log(n)\n", "path": "gcp/appengine/frontend_handlers.py"}], "after_files": [{"content": "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Handlers for the OSV web frontend.\"\"\"\n\nimport json\nimport os\nimport math\n\nfrom flask import abort\nfrom flask import Blueprint\nfrom flask import make_response\nfrom flask import redirect\nfrom flask import render_template\nfrom flask import request\nimport markdown2\n\nimport cache\nimport osv\nimport rate_limiter\nimport source_mapper\nimport utils\n\nblueprint = Blueprint('frontend_handlers', __name__)\n\n_PAGE_SIZE = 16\n_PAGE_LOOKAHEAD = 4\n_REQUESTS_PER_MIN = 30\n\nif utils.is_prod():\n redis_host = os.environ.get('REDISHOST', 'localhost')\n redis_port = int(os.environ.get('REDISPORT', 6379))\n limiter = rate_limiter.RateLimiter(\n redis_host, redis_port, requests_per_min=_REQUESTS_PER_MIN)\n\n @blueprint.before_request\n def check_rate_limit():\n ip_addr = request.headers.get('X-Appengine-User-Ip', 'unknown')\n if not limiter.check_request(ip_addr):\n abort(429)\n\n\[email protected]_request\ndef check_cors_preflight():\n \"\"\"Handle CORS preflight requests.\"\"\"\n if request.method != 'OPTIONS':\n return None\n\n response = make_response()\n response.headers.add('Access-Control-Allow-Origin', 'http://localhost:8080')\n response.headers.add('Access-Control-Allow-Methods', '*')\n response.headers.add('Access-Control-Allow-Headers', '*')\n return response\n\n\[email protected]_request\ndef add_cors_headers(response):\n \"\"\"Add CORS headers.\"\"\"\n response.headers.add('Access-Control-Allow-Origin', 'http://localhost:8080')\n return response\n\n\[email protected]('/v2/')\ndef index_v2():\n return redirect('/')\n\n\[email protected]('/v2/<path:subpath>')\ndef index_v2_with_subpath(subpath):\n return redirect('/' + subpath)\n\n\[email protected]('/')\ndef index():\n return render_template(\n 'home.html', ecosystem_counts=osv_get_ecosystem_counts_cached())\n\n\[email protected]('/about')\ndef about():\n return render_template('about.html')\n\n\[email protected]('/list')\ndef list_vulnerabilities():\n \"\"\"Main page.\"\"\"\n query = request.args.get('q', '')\n page = int(request.args.get('page', 1))\n ecosystem = 
request.args.get('ecosystem')\n results = osv_query(query, page, False, ecosystem)\n\n # Fetch ecosystems by default. As an optimization, skip when rendering page\n # fragments.\n ecosystem_counts = osv_get_ecosystem_counts_cached(\n ) if not request.headers.get('Turbo-Frame') else None\n\n return render_template(\n 'list.html',\n page=page,\n query=query,\n selected_ecosystem=ecosystem,\n ecosystem_counts=ecosystem_counts,\n vulnerabilities=results['items'])\n\n\[email protected]('/vulnerability/<vuln_id>')\ndef vulnerability(vuln_id):\n \"\"\"Vulnerability page.\"\"\"\n vuln = osv_get_by_id(vuln_id)\n return render_template('vulnerability.html', vulnerability=vuln)\n\n\ndef bug_to_response(bug, detailed=True):\n \"\"\"Convert a Bug entity to a response object.\"\"\"\n response = osv.vulnerability_to_dict(bug.to_vulnerability())\n response.update({\n 'isFixed': bug.is_fixed,\n 'invalid': bug.status == osv.BugStatus.INVALID\n })\n\n if detailed:\n add_links(response)\n add_source_info(bug, response)\n return response\n\n\ndef add_links(bug):\n \"\"\"Add VCS links where possible.\"\"\"\n\n for entry in bug.get('affected', []):\n for i, affected_range in enumerate(entry.get('ranges', [])):\n affected_range['id'] = i\n if affected_range['type'] != 'GIT':\n continue\n\n repo_url = affected_range.get('repo')\n if not repo_url:\n continue\n\n for event in affected_range.get('events', []):\n if event.get('introduced'):\n event['introduced_link'] = _commit_to_link(repo_url,\n event['introduced'])\n continue\n\n if event.get('fixed'):\n event['fixed_link'] = _commit_to_link(repo_url, event['fixed'])\n continue\n\n if event.get('limit'):\n event['limit_link'] = _commit_to_link(repo_url, event['limit'])\n continue\n\n\ndef add_source_info(bug, response):\n \"\"\"Add source information to `response`.\"\"\"\n if bug.source_of_truth == osv.SourceOfTruth.INTERNAL:\n response['source'] = 'INTERNAL'\n return\n\n source_repo = osv.get_source_repository(bug.source)\n if not source_repo or not source_repo.link:\n return\n\n source_path = osv.source_path(source_repo, bug)\n response['source'] = source_repo.link + source_path\n response['source_link'] = response['source']\n\n\ndef _commit_to_link(repo_url, commit):\n \"\"\"Convert commit to link.\"\"\"\n vcs = source_mapper.get_vcs_viewer_for_url(repo_url)\n if not vcs:\n return None\n\n if ':' not in commit:\n return vcs.get_source_url_for_revision(commit)\n\n commit_parts = commit.split(':')\n if len(commit_parts) != 2:\n return None\n\n start, end = commit_parts\n if start == 'unknown':\n return None\n\n return vcs.get_source_url_for_revision_diff(start, end)\n\n\ndef osv_get_ecosystems():\n \"\"\"Get list of ecosystems.\"\"\"\n query = osv.Bug.query(projection=[osv.Bug.ecosystem], distinct=True)\n return sorted([bug.ecosystem[0] for bug in query if bug.ecosystem],\n key=str.lower)\n\n\n# TODO: Figure out how to skip cache when testing\[email protected](\n timeout=24 * 60 * 60, key_prefix='osv_get_ecosystem_counts')\ndef osv_get_ecosystem_counts_cached():\n \"\"\"Get count of vulnerabilities per ecosystem, cached\"\"\"\n return osv_get_ecosystem_counts()\n\n\ndef osv_get_ecosystem_counts():\n \"\"\"Get count of vulnerabilities per ecosystem.\"\"\"\n counts = {}\n ecosystems = osv_get_ecosystems()\n for ecosystem in ecosystems:\n if ':' in ecosystem:\n # Count by the base ecosystem index. 
Otherwise we'll overcount as a\n # single entry may refer to multiple sub-ecosystems.\n continue\n\n counts[ecosystem] = osv.Bug.query(\n osv.Bug.ecosystem == ecosystem,\n osv.Bug.public == True, # pylint: disable=singleton-comparison\n osv.Bug.status == osv.BugStatus.PROCESSED).count()\n\n return counts\n\n\ndef osv_query(search_string, page, affected_only, ecosystem):\n \"\"\"Run an OSV query.\"\"\"\n query = osv.Bug.query(osv.Bug.status == osv.BugStatus.PROCESSED,\n osv.Bug.public == True) # pylint: disable=singleton-comparison\n\n if search_string:\n query = query.filter(osv.Bug.search_indices == search_string.lower())\n\n if affected_only:\n query = query.filter(osv.Bug.has_affected == True) # pylint: disable=singleton-comparison\n\n if ecosystem:\n query = query.filter(osv.Bug.ecosystem == ecosystem)\n\n query = query.order(-osv.Bug.last_modified)\n total = query.count()\n results = {\n 'total': total,\n 'items': [],\n }\n\n bugs, _, _ = query.fetch_page(\n page_size=_PAGE_SIZE, offset=(page - 1) * _PAGE_SIZE)\n for bug in bugs:\n results['items'].append(bug_to_response(bug, detailed=False))\n\n return results\n\n\ndef osv_get_by_id(vuln_id):\n \"\"\"Gets bug details from its id. If invalid, aborts the request.\"\"\"\n if not vuln_id:\n abort(400)\n return None\n\n bug = osv.Bug.get_by_id(vuln_id)\n if not bug:\n abort(404)\n return None\n\n if bug.status == osv.BugStatus.UNPROCESSED:\n abort(404)\n return None\n\n if not bug.public:\n abort(403)\n return None\n\n return bug_to_response(bug)\n\n\[email protected]_template_filter('event_type')\ndef event_type(event):\n \"\"\"Get the type from an event.\"\"\"\n if event.get('introduced'):\n return 'Introduced'\n if event.get('fixed'):\n return 'Fixed'\n if event.get('limit'):\n return 'Limit'\n if event.get('last_affected'):\n return 'Last affected'\n\n return None\n\n\[email protected]_template_filter('event_link')\ndef event_link(event):\n \"\"\"Get the link from an event.\"\"\"\n if event.get('introduced_link'):\n return event['introduced_link']\n if event.get('fixed_link'):\n return event['fixed_link']\n if event.get('limit_link'):\n return event['limit_link']\n if event.get('last_affected_link'):\n return event['last_affected_link']\n\n return None\n\n\[email protected]_template_filter('event_value')\ndef event_value(event):\n \"\"\"Get the value from an event.\"\"\"\n if event.get('introduced'):\n return event['introduced']\n if event.get('fixed'):\n return event['fixed']\n if event.get('limit'):\n return event['limit']\n if event.get('last_affected'):\n return event['last_affected']\n\n return None\n\n\[email protected]_template_filter('should_collapse')\ndef should_collapse(affected):\n \"\"\"Whether if we should collapse the package tab bar.\"\"\"\n total_package_length = sum(\n [len(entry.get('package', {}).get('name', '')) for entry in affected])\n return total_package_length > 70 or len(affected) > 5\n\n\[email protected]_template_filter('group_versions')\ndef group_versions(versions):\n \"\"\"Group versions by prefix.\"\"\"\n groups = {}\n\n for version in sorted(versions):\n if '.' 
not in version:\n groups.setdefault('Other', []).append(version)\n continue\n\n label = version.split('.')[0] + '.*'\n groups.setdefault(label, []).append(version)\n\n return groups\n\n\[email protected]_template_filter('markdown')\ndef markdown(text):\n \"\"\"Render markdown.\"\"\"\n if text:\n return markdown2.markdown(text, extras=['fenced-code-blocks'])\n\n return ''\n\n\[email protected]_template_filter('display_json')\ndef display_json(data):\n # We can't use the default `tojson` filter as it's intended for code (and\n # escapes characters like '<' to '\\u003c'). We want to render the JSON for\n # display purposes and use HTML escaping ('<') instead so it's rendered\n # as '<'.\n return json.dumps(data, indent=4)\n\n\[email protected]_template_filter('log')\ndef logarithm(n):\n return math.log(n)\n", "path": "gcp/appengine/frontend_handlers.py"}]} | 3,862 | 257 |
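For reference, the `group_versions` template filter in the OSV frontend handler above buckets version strings by their leading component and collects non-dotted strings under "Other". A small self-contained sketch of that behaviour (plain Python, no Flask required; the sample version list is made up for illustration):

```python
def group_versions(versions):
    """Group version strings by major-version prefix (mirrors the filter above)."""
    groups = {}
    for version in sorted(versions):          # note: plain string sort, not semver-aware
        if '.' not in version:
            groups.setdefault('Other', []).append(version)
            continue
        label = version.split('.')[0] + '.*'
        groups.setdefault(label, []).append(version)
    return groups

print(group_versions(['2.0.0', '1.2.3', '1.10.0', 'unknown']))
# {'1.*': ['1.10.0', '1.2.3'], '2.*': ['2.0.0'], 'Other': ['unknown']}
```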
gh_patches_debug_66083 | rasdani/github-patches | git_diff | freedomofpress__securedrop-3917 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Posting a Reply should return a UUID
## Description
The API should not return 204, but should return 200 with an object containing the UUID of the reply. Without doing that, there is no way for a client to sanely sync its local copy of a conversation with the server's copy.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/journalist_app/api.py`
Content:
```
1 from datetime import datetime, timedelta
2 from functools import wraps
3 import json
4 from werkzeug.exceptions import default_exceptions # type: ignore
5
6 from flask import abort, Blueprint, current_app, jsonify, request
7
8 from db import db
9 from journalist_app import utils
10 from models import (Journalist, Reply, Source, Submission,
11 LoginThrottledException, InvalidUsernameException,
12 BadTokenException, WrongPasswordException)
13 from store import NotEncrypted
14
15
16 TOKEN_EXPIRATION_MINS = 60 * 8
17
18
19 def get_user_object(request):
20 """Helper function to use in token_required views that need a user
21 object
22 """
23 auth_token = request.headers.get('Authorization').split(" ")[1]
24 user = Journalist.validate_api_token_and_get_user(auth_token)
25 return user
26
27
28 def token_required(f):
29 @wraps(f)
30 def decorated_function(*args, **kwargs):
31 try:
32 auth_header = request.headers['Authorization']
33 except KeyError:
34 return abort(403, 'API token not found in Authorization header.')
35
36 if auth_header:
37 auth_token = auth_header.split(" ")[1]
38 else:
39 auth_token = ''
40 if not Journalist.validate_api_token_and_get_user(auth_token):
41 return abort(403, 'API token is invalid or expired.')
42 return f(*args, **kwargs)
43 return decorated_function
44
45
46 def get_or_404(model, object_id, column=''):
47 if column:
48 result = model.query.filter(column == object_id).one_or_none()
49 else:
50 result = model.query.get(object_id)
51 if result is None:
52 abort(404)
53 return result
54
55
56 def make_blueprint(config):
57 api = Blueprint('api', __name__)
58
59 @api.route('/')
60 def get_endpoints():
61 endpoints = {'sources_url': '/api/v1/sources',
62 'current_user_url': '/api/v1/user',
63 'submissions_url': '/api/v1/submissions',
64 'replies_url': '/api/v1/replies',
65 'auth_token_url': '/api/v1/token'}
66 return jsonify(endpoints), 200
67
68 # Before every post, we validate the payload before processing the request
69 @api.before_request
70 def validate_data():
71 if request.method == 'POST':
72 # flag and star can have empty payloads
73 if not request.data:
74 if ('flag' not in request.path and 'star' not in request.path):
75 return abort(400, 'malformed request')
76 # other requests must have valid JSON payload
77 else:
78 try:
79 json.loads(request.data)
80 except (ValueError):
81 return abort(400, 'malformed request')
82
83 @api.route('/token', methods=['POST'])
84 def get_token():
85 creds = json.loads(request.data)
86
87 username = creds.get('username', None)
88 passphrase = creds.get('passphrase', None)
89 one_time_code = creds.get('one_time_code', None)
90
91 if username is None:
92 return abort(400, 'username field is missing')
93 if passphrase is None:
94 return abort(400, 'passphrase field is missing')
95 if one_time_code is None:
96 return abort(400, 'one_time_code field is missing')
97
98 try:
99 journalist = Journalist.login(username, passphrase, one_time_code)
100 token_expiry = datetime.utcnow() + timedelta(
101 seconds=TOKEN_EXPIRATION_MINS * 60)
102 response = jsonify({'token': journalist.generate_api_token(
103 expiration=TOKEN_EXPIRATION_MINS * 60),
104 'expiration': token_expiry.isoformat() + 'Z'})
105
106 # Update access metadata
107 journalist.last_access = datetime.utcnow()
108 db.session.add(journalist)
109 db.session.commit()
110
111 return response, 200
112 except (LoginThrottledException, InvalidUsernameException,
113 BadTokenException, WrongPasswordException):
114 return abort(403, 'Token authentication failed.')
115
116 @api.route('/sources', methods=['GET'])
117 @token_required
118 def get_all_sources():
119 sources = Source.query.filter_by(pending=False).all()
120 return jsonify(
121 {'sources': [source.to_json() for source in sources]}), 200
122
123 @api.route('/sources/<source_uuid>', methods=['GET', 'DELETE'])
124 @token_required
125 def single_source(source_uuid):
126 if request.method == 'GET':
127 source = get_or_404(Source, source_uuid, column=Source.uuid)
128 return jsonify(source.to_json()), 200
129 elif request.method == 'DELETE':
130 source = get_or_404(Source, source_uuid, column=Source.uuid)
131 utils.delete_collection(source.filesystem_id)
132 return jsonify({'message': 'Source and submissions deleted'}), 200
133
134 @api.route('/sources/<source_uuid>/add_star', methods=['POST'])
135 @token_required
136 def add_star(source_uuid):
137 source = get_or_404(Source, source_uuid, column=Source.uuid)
138 utils.make_star_true(source.filesystem_id)
139 db.session.commit()
140 return jsonify({'message': 'Star added'}), 201
141
142 @api.route('/sources/<source_uuid>/remove_star', methods=['DELETE'])
143 @token_required
144 def remove_star(source_uuid):
145 source = get_or_404(Source, source_uuid, column=Source.uuid)
146 utils.make_star_false(source.filesystem_id)
147 db.session.commit()
148 return jsonify({'message': 'Star removed'}), 200
149
150 @api.route('/sources/<source_uuid>/flag', methods=['POST'])
151 @token_required
152 def flag(source_uuid):
153 source = get_or_404(Source, source_uuid,
154 column=Source.uuid)
155 source.flagged = True
156 db.session.commit()
157 return jsonify({'message': 'Source flagged for reply'}), 200
158
159 @api.route('/sources/<source_uuid>/submissions', methods=['GET'])
160 @token_required
161 def all_source_submissions(source_uuid):
162 source = get_or_404(Source, source_uuid, column=Source.uuid)
163 return jsonify(
164 {'submissions': [submission.to_json() for
165 submission in source.submissions]}), 200
166
167 @api.route('/sources/<source_uuid>/submissions/<submission_uuid>/download', # noqa
168 methods=['GET'])
169 @token_required
170 def download_submission(source_uuid, submission_uuid):
171 source = get_or_404(Source, source_uuid, column=Source.uuid)
172 submission = get_or_404(Submission, submission_uuid,
173 column=Submission.uuid)
174
175 # Mark as downloaded
176 submission.downloaded = True
177 db.session.commit()
178
179 return utils.serve_file_with_etag(source, submission.filename)
180
181 @api.route('/sources/<source_uuid>/replies/<reply_uuid>/download',
182 methods=['GET'])
183 @token_required
184 def download_reply(source_uuid, reply_uuid):
185 source = get_or_404(Source, source_uuid, column=Source.uuid)
186 reply = get_or_404(Reply, reply_uuid, column=Reply.uuid)
187
188 return utils.serve_file_with_etag(source, reply.filename)
189
190 @api.route('/sources/<source_uuid>/submissions/<submission_uuid>',
191 methods=['GET', 'DELETE'])
192 @token_required
193 def single_submission(source_uuid, submission_uuid):
194 if request.method == 'GET':
195 source = get_or_404(Source, source_uuid, column=Source.uuid)
196 submission = get_or_404(Submission, submission_uuid,
197 column=Submission.uuid)
198 return jsonify(submission.to_json()), 200
199 elif request.method == 'DELETE':
200 submission = get_or_404(Submission, submission_uuid,
201 column=Submission.uuid)
202 source = get_or_404(Source, source_uuid, column=Source.uuid)
203 utils.delete_file(source.filesystem_id, submission.filename,
204 submission)
205 return jsonify({'message': 'Submission deleted'}), 200
206
207 @api.route('/sources/<source_uuid>/replies', methods=['GET', 'POST'])
208 @token_required
209 def all_source_replies(source_uuid):
210 if request.method == 'GET':
211 source = get_or_404(Source, source_uuid, column=Source.uuid)
212 return jsonify(
213 {'replies': [reply.to_json() for
214 reply in source.replies]}), 200
215 elif request.method == 'POST':
216 source = get_or_404(Source, source_uuid,
217 column=Source.uuid)
218 if request.json is None:
219 abort(400, 'please send requests in valid JSON')
220
221 if 'reply' not in request.json:
222 abort(400, 'reply not found in request body')
223
224 user = get_user_object(request)
225
226 data = json.loads(request.data)
227 if not data['reply']:
228 abort(400, 'reply should not be empty')
229
230 source.interaction_count += 1
231 try:
232 filename = current_app.storage.save_pre_encrypted_reply(
233 source.filesystem_id,
234 source.interaction_count,
235 source.journalist_filename,
236 data['reply'])
237 except NotEncrypted:
238 return jsonify(
239 {'message': 'You must encrypt replies client side'}), 400
240
241 reply = Reply(user, source,
242 current_app.storage.path(source.filesystem_id,
243 filename))
244 db.session.add(reply)
245 db.session.add(source)
246 db.session.commit()
247 return jsonify({'message': 'Your reply has been stored'}), 201
248
249 @api.route('/sources/<source_uuid>/replies/<reply_uuid>',
250 methods=['GET', 'DELETE'])
251 @token_required
252 def single_reply(source_uuid, reply_uuid):
253 source = get_or_404(Source, source_uuid, column=Source.uuid)
254 reply = get_or_404(Reply, reply_uuid, column=Reply.uuid)
255 if request.method == 'GET':
256 return jsonify(reply.to_json()), 200
257 elif request.method == 'DELETE':
258 utils.delete_file(source.filesystem_id, reply.filename,
259 reply)
260 return jsonify({'message': 'Reply deleted'}), 200
261
262 @api.route('/submissions', methods=['GET'])
263 @token_required
264 def get_all_submissions():
265 submissions = Submission.query.all()
266 return jsonify({'submissions': [submission.to_json() for
267 submission in submissions]}), 200
268
269 @api.route('/replies', methods=['GET'])
270 @token_required
271 def get_all_replies():
272 replies = Reply.query.all()
273 return jsonify(
274 {'replies': [reply.to_json() for reply in replies]}), 200
275
276 @api.route('/user', methods=['GET'])
277 @token_required
278 def get_current_user():
279 user = get_user_object(request)
280 return jsonify(user.to_json()), 200
281
282 def _handle_http_exception(error):
283 # Workaround for no blueprint-level 404/5 error handlers, see:
284 # https://github.com/pallets/flask/issues/503#issuecomment-71383286
285 response = jsonify({'error': error.name,
286 'message': error.description})
287
288 return response, error.code
289
290 for code in default_exceptions:
291 api.errorhandler(code)(_handle_http_exception)
292
293 return api
294
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/journalist_app/api.py b/securedrop/journalist_app/api.py
--- a/securedrop/journalist_app/api.py
+++ b/securedrop/journalist_app/api.py
@@ -244,7 +244,8 @@
db.session.add(reply)
db.session.add(source)
db.session.commit()
- return jsonify({'message': 'Your reply has been stored'}), 201
+ return jsonify({'message': 'Your reply has been stored',
+ 'uuid': reply.uuid}), 201
@api.route('/sources/<source_uuid>/replies/<reply_uuid>',
methods=['GET', 'DELETE'])
| {"golden_diff": "diff --git a/securedrop/journalist_app/api.py b/securedrop/journalist_app/api.py\n--- a/securedrop/journalist_app/api.py\n+++ b/securedrop/journalist_app/api.py\n@@ -244,7 +244,8 @@\n db.session.add(reply)\n db.session.add(source)\n db.session.commit()\n- return jsonify({'message': 'Your reply has been stored'}), 201\n+ return jsonify({'message': 'Your reply has been stored',\n+ 'uuid': reply.uuid}), 201\n \n @api.route('/sources/<source_uuid>/replies/<reply_uuid>',\n methods=['GET', 'DELETE'])\n", "issue": "Posting a Reply should return a UUID\n## Description\r\n\r\nThe API should not return 204, but should return 200 with an object containing the UUID of the reply. Without doing that, there is no way for a client to sanely sync its local copy of a conversation with the server's copy.\n", "before_files": [{"content": "from datetime import datetime, timedelta\nfrom functools import wraps\nimport json\nfrom werkzeug.exceptions import default_exceptions # type: ignore\n\nfrom flask import abort, Blueprint, current_app, jsonify, request\n\nfrom db import db\nfrom journalist_app import utils\nfrom models import (Journalist, Reply, Source, Submission,\n LoginThrottledException, InvalidUsernameException,\n BadTokenException, WrongPasswordException)\nfrom store import NotEncrypted\n\n\nTOKEN_EXPIRATION_MINS = 60 * 8\n\n\ndef get_user_object(request):\n \"\"\"Helper function to use in token_required views that need a user\n object\n \"\"\"\n auth_token = request.headers.get('Authorization').split(\" \")[1]\n user = Journalist.validate_api_token_and_get_user(auth_token)\n return user\n\n\ndef token_required(f):\n @wraps(f)\n def decorated_function(*args, **kwargs):\n try:\n auth_header = request.headers['Authorization']\n except KeyError:\n return abort(403, 'API token not found in Authorization header.')\n\n if auth_header:\n auth_token = auth_header.split(\" \")[1]\n else:\n auth_token = ''\n if not Journalist.validate_api_token_and_get_user(auth_token):\n return abort(403, 'API token is invalid or expired.')\n return f(*args, **kwargs)\n return decorated_function\n\n\ndef get_or_404(model, object_id, column=''):\n if column:\n result = model.query.filter(column == object_id).one_or_none()\n else:\n result = model.query.get(object_id)\n if result is None:\n abort(404)\n return result\n\n\ndef make_blueprint(config):\n api = Blueprint('api', __name__)\n\n @api.route('/')\n def get_endpoints():\n endpoints = {'sources_url': '/api/v1/sources',\n 'current_user_url': '/api/v1/user',\n 'submissions_url': '/api/v1/submissions',\n 'replies_url': '/api/v1/replies',\n 'auth_token_url': '/api/v1/token'}\n return jsonify(endpoints), 200\n\n # Before every post, we validate the payload before processing the request\n @api.before_request\n def validate_data():\n if request.method == 'POST':\n # flag and star can have empty payloads\n if not request.data:\n if ('flag' not in request.path and 'star' not in request.path):\n return abort(400, 'malformed request')\n # other requests must have valid JSON payload\n else:\n try:\n json.loads(request.data)\n except (ValueError):\n return abort(400, 'malformed request')\n\n @api.route('/token', methods=['POST'])\n def get_token():\n creds = json.loads(request.data)\n\n username = creds.get('username', None)\n passphrase = creds.get('passphrase', None)\n one_time_code = creds.get('one_time_code', None)\n\n if username is None:\n return abort(400, 'username field is missing')\n if passphrase is None:\n return abort(400, 'passphrase field is missing')\n 
if one_time_code is None:\n return abort(400, 'one_time_code field is missing')\n\n try:\n journalist = Journalist.login(username, passphrase, one_time_code)\n token_expiry = datetime.utcnow() + timedelta(\n seconds=TOKEN_EXPIRATION_MINS * 60)\n response = jsonify({'token': journalist.generate_api_token(\n expiration=TOKEN_EXPIRATION_MINS * 60),\n 'expiration': token_expiry.isoformat() + 'Z'})\n\n # Update access metadata\n journalist.last_access = datetime.utcnow()\n db.session.add(journalist)\n db.session.commit()\n\n return response, 200\n except (LoginThrottledException, InvalidUsernameException,\n BadTokenException, WrongPasswordException):\n return abort(403, 'Token authentication failed.')\n\n @api.route('/sources', methods=['GET'])\n @token_required\n def get_all_sources():\n sources = Source.query.filter_by(pending=False).all()\n return jsonify(\n {'sources': [source.to_json() for source in sources]}), 200\n\n @api.route('/sources/<source_uuid>', methods=['GET', 'DELETE'])\n @token_required\n def single_source(source_uuid):\n if request.method == 'GET':\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n return jsonify(source.to_json()), 200\n elif request.method == 'DELETE':\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n utils.delete_collection(source.filesystem_id)\n return jsonify({'message': 'Source and submissions deleted'}), 200\n\n @api.route('/sources/<source_uuid>/add_star', methods=['POST'])\n @token_required\n def add_star(source_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n utils.make_star_true(source.filesystem_id)\n db.session.commit()\n return jsonify({'message': 'Star added'}), 201\n\n @api.route('/sources/<source_uuid>/remove_star', methods=['DELETE'])\n @token_required\n def remove_star(source_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n utils.make_star_false(source.filesystem_id)\n db.session.commit()\n return jsonify({'message': 'Star removed'}), 200\n\n @api.route('/sources/<source_uuid>/flag', methods=['POST'])\n @token_required\n def flag(source_uuid):\n source = get_or_404(Source, source_uuid,\n column=Source.uuid)\n source.flagged = True\n db.session.commit()\n return jsonify({'message': 'Source flagged for reply'}), 200\n\n @api.route('/sources/<source_uuid>/submissions', methods=['GET'])\n @token_required\n def all_source_submissions(source_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n return jsonify(\n {'submissions': [submission.to_json() for\n submission in source.submissions]}), 200\n\n @api.route('/sources/<source_uuid>/submissions/<submission_uuid>/download', # noqa\n methods=['GET'])\n @token_required\n def download_submission(source_uuid, submission_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n submission = get_or_404(Submission, submission_uuid,\n column=Submission.uuid)\n\n # Mark as downloaded\n submission.downloaded = True\n db.session.commit()\n\n return utils.serve_file_with_etag(source, submission.filename)\n\n @api.route('/sources/<source_uuid>/replies/<reply_uuid>/download',\n methods=['GET'])\n @token_required\n def download_reply(source_uuid, reply_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n reply = get_or_404(Reply, reply_uuid, column=Reply.uuid)\n\n return utils.serve_file_with_etag(source, reply.filename)\n\n @api.route('/sources/<source_uuid>/submissions/<submission_uuid>',\n methods=['GET', 'DELETE'])\n @token_required\n def single_submission(source_uuid, 
submission_uuid):\n if request.method == 'GET':\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n submission = get_or_404(Submission, submission_uuid,\n column=Submission.uuid)\n return jsonify(submission.to_json()), 200\n elif request.method == 'DELETE':\n submission = get_or_404(Submission, submission_uuid,\n column=Submission.uuid)\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n utils.delete_file(source.filesystem_id, submission.filename,\n submission)\n return jsonify({'message': 'Submission deleted'}), 200\n\n @api.route('/sources/<source_uuid>/replies', methods=['GET', 'POST'])\n @token_required\n def all_source_replies(source_uuid):\n if request.method == 'GET':\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n return jsonify(\n {'replies': [reply.to_json() for\n reply in source.replies]}), 200\n elif request.method == 'POST':\n source = get_or_404(Source, source_uuid,\n column=Source.uuid)\n if request.json is None:\n abort(400, 'please send requests in valid JSON')\n\n if 'reply' not in request.json:\n abort(400, 'reply not found in request body')\n\n user = get_user_object(request)\n\n data = json.loads(request.data)\n if not data['reply']:\n abort(400, 'reply should not be empty')\n\n source.interaction_count += 1\n try:\n filename = current_app.storage.save_pre_encrypted_reply(\n source.filesystem_id,\n source.interaction_count,\n source.journalist_filename,\n data['reply'])\n except NotEncrypted:\n return jsonify(\n {'message': 'You must encrypt replies client side'}), 400\n\n reply = Reply(user, source,\n current_app.storage.path(source.filesystem_id,\n filename))\n db.session.add(reply)\n db.session.add(source)\n db.session.commit()\n return jsonify({'message': 'Your reply has been stored'}), 201\n\n @api.route('/sources/<source_uuid>/replies/<reply_uuid>',\n methods=['GET', 'DELETE'])\n @token_required\n def single_reply(source_uuid, reply_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n reply = get_or_404(Reply, reply_uuid, column=Reply.uuid)\n if request.method == 'GET':\n return jsonify(reply.to_json()), 200\n elif request.method == 'DELETE':\n utils.delete_file(source.filesystem_id, reply.filename,\n reply)\n return jsonify({'message': 'Reply deleted'}), 200\n\n @api.route('/submissions', methods=['GET'])\n @token_required\n def get_all_submissions():\n submissions = Submission.query.all()\n return jsonify({'submissions': [submission.to_json() for\n submission in submissions]}), 200\n\n @api.route('/replies', methods=['GET'])\n @token_required\n def get_all_replies():\n replies = Reply.query.all()\n return jsonify(\n {'replies': [reply.to_json() for reply in replies]}), 200\n\n @api.route('/user', methods=['GET'])\n @token_required\n def get_current_user():\n user = get_user_object(request)\n return jsonify(user.to_json()), 200\n\n def _handle_http_exception(error):\n # Workaround for no blueprint-level 404/5 error handlers, see:\n # https://github.com/pallets/flask/issues/503#issuecomment-71383286\n response = jsonify({'error': error.name,\n 'message': error.description})\n\n return response, error.code\n\n for code in default_exceptions:\n api.errorhandler(code)(_handle_http_exception)\n\n return api\n", "path": "securedrop/journalist_app/api.py"}], "after_files": [{"content": "from datetime import datetime, timedelta\nfrom functools import wraps\nimport json\nfrom werkzeug.exceptions import default_exceptions # type: ignore\n\nfrom flask import abort, Blueprint, current_app, jsonify, 
request\n\nfrom db import db\nfrom journalist_app import utils\nfrom models import (Journalist, Reply, Source, Submission,\n LoginThrottledException, InvalidUsernameException,\n BadTokenException, WrongPasswordException)\nfrom store import NotEncrypted\n\n\nTOKEN_EXPIRATION_MINS = 60 * 8\n\n\ndef get_user_object(request):\n \"\"\"Helper function to use in token_required views that need a user\n object\n \"\"\"\n auth_token = request.headers.get('Authorization').split(\" \")[1]\n user = Journalist.validate_api_token_and_get_user(auth_token)\n return user\n\n\ndef token_required(f):\n @wraps(f)\n def decorated_function(*args, **kwargs):\n try:\n auth_header = request.headers['Authorization']\n except KeyError:\n return abort(403, 'API token not found in Authorization header.')\n\n if auth_header:\n auth_token = auth_header.split(\" \")[1]\n else:\n auth_token = ''\n if not Journalist.validate_api_token_and_get_user(auth_token):\n return abort(403, 'API token is invalid or expired.')\n return f(*args, **kwargs)\n return decorated_function\n\n\ndef get_or_404(model, object_id, column=''):\n if column:\n result = model.query.filter(column == object_id).one_or_none()\n else:\n result = model.query.get(object_id)\n if result is None:\n abort(404)\n return result\n\n\ndef make_blueprint(config):\n api = Blueprint('api', __name__)\n\n @api.route('/')\n def get_endpoints():\n endpoints = {'sources_url': '/api/v1/sources',\n 'current_user_url': '/api/v1/user',\n 'submissions_url': '/api/v1/submissions',\n 'replies_url': '/api/v1/replies',\n 'auth_token_url': '/api/v1/token'}\n return jsonify(endpoints), 200\n\n # Before every post, we validate the payload before processing the request\n @api.before_request\n def validate_data():\n if request.method == 'POST':\n # flag and star can have empty payloads\n if not request.data:\n if ('flag' not in request.path and 'star' not in request.path):\n return abort(400, 'malformed request')\n # other requests must have valid JSON payload\n else:\n try:\n json.loads(request.data)\n except (ValueError):\n return abort(400, 'malformed request')\n\n @api.route('/token', methods=['POST'])\n def get_token():\n creds = json.loads(request.data)\n\n username = creds.get('username', None)\n passphrase = creds.get('passphrase', None)\n one_time_code = creds.get('one_time_code', None)\n\n if username is None:\n return abort(400, 'username field is missing')\n if passphrase is None:\n return abort(400, 'passphrase field is missing')\n if one_time_code is None:\n return abort(400, 'one_time_code field is missing')\n\n try:\n journalist = Journalist.login(username, passphrase, one_time_code)\n token_expiry = datetime.utcnow() + timedelta(\n seconds=TOKEN_EXPIRATION_MINS * 60)\n response = jsonify({'token': journalist.generate_api_token(\n expiration=TOKEN_EXPIRATION_MINS * 60),\n 'expiration': token_expiry.isoformat() + 'Z'})\n\n # Update access metadata\n journalist.last_access = datetime.utcnow()\n db.session.add(journalist)\n db.session.commit()\n\n return response, 200\n except (LoginThrottledException, InvalidUsernameException,\n BadTokenException, WrongPasswordException):\n return abort(403, 'Token authentication failed.')\n\n @api.route('/sources', methods=['GET'])\n @token_required\n def get_all_sources():\n sources = Source.query.filter_by(pending=False).all()\n return jsonify(\n {'sources': [source.to_json() for source in sources]}), 200\n\n @api.route('/sources/<source_uuid>', methods=['GET', 'DELETE'])\n @token_required\n def single_source(source_uuid):\n if 
request.method == 'GET':\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n return jsonify(source.to_json()), 200\n elif request.method == 'DELETE':\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n utils.delete_collection(source.filesystem_id)\n return jsonify({'message': 'Source and submissions deleted'}), 200\n\n @api.route('/sources/<source_uuid>/add_star', methods=['POST'])\n @token_required\n def add_star(source_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n utils.make_star_true(source.filesystem_id)\n db.session.commit()\n return jsonify({'message': 'Star added'}), 201\n\n @api.route('/sources/<source_uuid>/remove_star', methods=['DELETE'])\n @token_required\n def remove_star(source_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n utils.make_star_false(source.filesystem_id)\n db.session.commit()\n return jsonify({'message': 'Star removed'}), 200\n\n @api.route('/sources/<source_uuid>/flag', methods=['POST'])\n @token_required\n def flag(source_uuid):\n source = get_or_404(Source, source_uuid,\n column=Source.uuid)\n source.flagged = True\n db.session.commit()\n return jsonify({'message': 'Source flagged for reply'}), 200\n\n @api.route('/sources/<source_uuid>/submissions', methods=['GET'])\n @token_required\n def all_source_submissions(source_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n return jsonify(\n {'submissions': [submission.to_json() for\n submission in source.submissions]}), 200\n\n @api.route('/sources/<source_uuid>/submissions/<submission_uuid>/download', # noqa\n methods=['GET'])\n @token_required\n def download_submission(source_uuid, submission_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n submission = get_or_404(Submission, submission_uuid,\n column=Submission.uuid)\n\n # Mark as downloaded\n submission.downloaded = True\n db.session.commit()\n\n return utils.serve_file_with_etag(source, submission.filename)\n\n @api.route('/sources/<source_uuid>/replies/<reply_uuid>/download',\n methods=['GET'])\n @token_required\n def download_reply(source_uuid, reply_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n reply = get_or_404(Reply, reply_uuid, column=Reply.uuid)\n\n return utils.serve_file_with_etag(source, reply.filename)\n\n @api.route('/sources/<source_uuid>/submissions/<submission_uuid>',\n methods=['GET', 'DELETE'])\n @token_required\n def single_submission(source_uuid, submission_uuid):\n if request.method == 'GET':\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n submission = get_or_404(Submission, submission_uuid,\n column=Submission.uuid)\n return jsonify(submission.to_json()), 200\n elif request.method == 'DELETE':\n submission = get_or_404(Submission, submission_uuid,\n column=Submission.uuid)\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n utils.delete_file(source.filesystem_id, submission.filename,\n submission)\n return jsonify({'message': 'Submission deleted'}), 200\n\n @api.route('/sources/<source_uuid>/replies', methods=['GET', 'POST'])\n @token_required\n def all_source_replies(source_uuid):\n if request.method == 'GET':\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n return jsonify(\n {'replies': [reply.to_json() for\n reply in source.replies]}), 200\n elif request.method == 'POST':\n source = get_or_404(Source, source_uuid,\n column=Source.uuid)\n if request.json is None:\n abort(400, 'please send requests in valid JSON')\n\n if 'reply' not in 
request.json:\n abort(400, 'reply not found in request body')\n\n user = get_user_object(request)\n\n data = json.loads(request.data)\n if not data['reply']:\n abort(400, 'reply should not be empty')\n\n source.interaction_count += 1\n try:\n filename = current_app.storage.save_pre_encrypted_reply(\n source.filesystem_id,\n source.interaction_count,\n source.journalist_filename,\n data['reply'])\n except NotEncrypted:\n return jsonify(\n {'message': 'You must encrypt replies client side'}), 400\n\n reply = Reply(user, source,\n current_app.storage.path(source.filesystem_id,\n filename))\n db.session.add(reply)\n db.session.add(source)\n db.session.commit()\n return jsonify({'message': 'Your reply has been stored',\n 'uuid': reply.uuid}), 201\n\n @api.route('/sources/<source_uuid>/replies/<reply_uuid>',\n methods=['GET', 'DELETE'])\n @token_required\n def single_reply(source_uuid, reply_uuid):\n source = get_or_404(Source, source_uuid, column=Source.uuid)\n reply = get_or_404(Reply, reply_uuid, column=Reply.uuid)\n if request.method == 'GET':\n return jsonify(reply.to_json()), 200\n elif request.method == 'DELETE':\n utils.delete_file(source.filesystem_id, reply.filename,\n reply)\n return jsonify({'message': 'Reply deleted'}), 200\n\n @api.route('/submissions', methods=['GET'])\n @token_required\n def get_all_submissions():\n submissions = Submission.query.all()\n return jsonify({'submissions': [submission.to_json() for\n submission in submissions]}), 200\n\n @api.route('/replies', methods=['GET'])\n @token_required\n def get_all_replies():\n replies = Reply.query.all()\n return jsonify(\n {'replies': [reply.to_json() for reply in replies]}), 200\n\n @api.route('/user', methods=['GET'])\n @token_required\n def get_current_user():\n user = get_user_object(request)\n return jsonify(user.to_json()), 200\n\n def _handle_http_exception(error):\n # Workaround for no blueprint-level 404/5 error handlers, see:\n # https://github.com/pallets/flask/issues/503#issuecomment-71383286\n response = jsonify({'error': error.name,\n 'message': error.description})\n\n return response, error.code\n\n for code in default_exceptions:\n api.errorhandler(code)(_handle_http_exception)\n\n return api\n", "path": "securedrop/journalist_app/api.py"}]} | 3,598 | 150 |
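A minimal client-side sketch of the reply endpoint as patched above, assuming the `requests` library, a placeholder base URL, and a token already obtained from `/api/v1/token`; the endpoint path, header handling, and response shape come from the record, everything else is illustrative:

```python
import requests

API_BASE = "http://127.0.0.1:8081/api/v1"   # placeholder journalist API address
TOKEN = "..."                                # value returned by POST /api/v1/token
HEADERS = {"Authorization": f"Token {TOKEN}"}

def post_reply(source_uuid: str, encrypted_reply: str) -> str:
    """POST a pre-encrypted reply and return the UUID the patched API now includes."""
    resp = requests.post(
        f"{API_BASE}/sources/{source_uuid}/replies",
        json={"reply": encrypted_reply},
        headers=HEADERS,
    )
    resp.raise_for_status()                  # expects 201 on success
    return resp.json()["uuid"]               # field added by the patch above
```

With the UUID in hand, a client can record the new reply locally instead of re-fetching the whole conversation to find it.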
gh_patches_debug_48734 | rasdani/github-patches | git_diff | microsoft__torchgeo-2100 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PatternNet download link is broken
### Description
[PatterNet dataset](https://torchgeo.readthedocs.io/en/stable/_modules/torchgeo/datasets/patternnet.html#PatternNet) [download link ](https://drive.google.com/file/d/127lxXYqzO6Bd0yZhvEbgIfz95HaEnr9K) is broken.
It seems it is officially hosted [here](https://nuisteducn1-my.sharepoint.com/:u:/g/personal/zhouwx_nuist_edu_cn/EYSPYqBztbBBqS27B7uM_mEB3R9maNJze8M1Qg9Q6cnPBQ?e=MSf977). Was someone mirroring it on a personal Google Drive?
### Steps to reproduce
```
from torchgeo.datasets import PatternNet
patternet = PatternNet(root='./', transforms=None, download=True, checksum=True)
```
### Version
0.5.2
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchgeo/datasets/patternnet.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 """PatternNet dataset."""
5
6 import os
7 from collections.abc import Callable
8 from typing import cast
9
10 import matplotlib.pyplot as plt
11 from matplotlib.figure import Figure
12 from torch import Tensor
13
14 from .errors import DatasetNotFoundError
15 from .geo import NonGeoClassificationDataset
16 from .utils import download_url, extract_archive
17
18
19 class PatternNet(NonGeoClassificationDataset):
20 """PatternNet dataset.
21
22 The `PatternNet <https://sites.google.com/view/zhouwx/dataset>`__
23 dataset is a dataset for remote sensing scene classification and image retrieval.
24
25 Dataset features:
26
27 * 30,400 images with 6-50 cm per pixel resolution (256x256 px)
28 * three spectral bands - RGB
29 * 38 scene classes, 800 images per class
30
31 Dataset format:
32
33 * images are three-channel jpgs
34
35 Dataset classes:
36
37 0. airplane
38 1. baseball_field
39 2. basketball_court
40 3. beach
41 4. bridge
42 5. cemetery
43 6. chaparral
44 7. christmas_tree_farm
45 8. closed_road
46 9. coastal_mansion
47 10. crosswalk
48 11. dense_residential
49 12. ferry_terminal
50 13. football_field
51 14. forest
52 15. freeway
53 16. golf_course
54 17. harbor
55 18. intersection
56 19. mobile_home_park
57 20. nursing_home
58 21. oil_gas_field
59 22. oil_well
60 23. overpass
61 24. parking_lot
62 25. parking_space
63 26. railway
64 27. river
65 28. runway
66 29. runway_marking
67 30. shipping_yard
68 31. solar_panel
69 32. sparse_residential
70 33. storage_tank
71 34. swimming_pool
72 35. tennis_court
73 36. transformer_station
74 37. wastewater_treatment_plant
75
76 If you use this dataset in your research, please cite the following paper:
77
78 * https://doi.org/10.1016/j.isprsjprs.2018.01.004
79 """
80
81 url = 'https://drive.google.com/file/d/127lxXYqzO6Bd0yZhvEbgIfz95HaEnr9K'
82 md5 = '96d54b3224c5350a98d55d5a7e6984ad'
83 filename = 'PatternNet.zip'
84 directory = os.path.join('PatternNet', 'images')
85
86 def __init__(
87 self,
88 root: str = 'data',
89 transforms: Callable[[dict[str, Tensor]], dict[str, Tensor]] | None = None,
90 download: bool = False,
91 checksum: bool = False,
92 ) -> None:
93 """Initialize a new PatternNet dataset instance.
94
95 Args:
96 root: root directory where dataset can be found
97 transforms: a function/transform that takes input sample and its target as
98 entry and returns a transformed version
99 download: if True, download dataset and store it in the root directory
100 checksum: if True, check the MD5 of the downloaded files (may be slow)
101
102 Raises:
103 DatasetNotFoundError: If dataset is not found and *download* is False.
104 """
105 self.root = root
106 self.download = download
107 self.checksum = checksum
108 self._verify()
109 super().__init__(root=os.path.join(root, self.directory), transforms=transforms)
110
111 def _verify(self) -> None:
112 """Verify the integrity of the dataset."""
113 # Check if the files already exist
114 filepath = os.path.join(self.root, self.directory)
115 if os.path.exists(filepath):
116 return
117
118 # Check if zip file already exists (if so then extract)
119 filepath = os.path.join(self.root, self.filename)
120 if os.path.exists(filepath):
121 self._extract()
122 return
123
124 # Check if the user requested to download the dataset
125 if not self.download:
126 raise DatasetNotFoundError(self)
127
128 # Download and extract the dataset
129 self._download()
130 self._extract()
131
132 def _download(self) -> None:
133 """Download the dataset."""
134 download_url(
135 self.url,
136 self.root,
137 filename=self.filename,
138 md5=self.md5 if self.checksum else None,
139 )
140
141 def _extract(self) -> None:
142 """Extract the dataset."""
143 filepath = os.path.join(self.root, self.filename)
144 extract_archive(filepath)
145
146 def plot(
147 self,
148 sample: dict[str, Tensor],
149 show_titles: bool = True,
150 suptitle: str | None = None,
151 ) -> Figure:
152 """Plot a sample from the dataset.
153
154 Args:
155 sample: a sample returned by :meth:`NonGeoClassificationDataset.__getitem__`
156 show_titles: flag indicating whether to show titles above each panel
157 suptitle: optional suptitle to use for figure
158
159 Returns:
160 a matplotlib Figure with the rendered sample
161
162 .. versionadded:: 0.2
163 """
164 image, label = sample['image'], cast(int, sample['label'].item())
165
166 showing_predictions = 'prediction' in sample
167 if showing_predictions:
168 prediction = cast(int, sample['prediction'].item())
169
170 fig, ax = plt.subplots(1, 1, figsize=(10, 10))
171
172 ax.imshow(image.permute(1, 2, 0))
173 ax.axis('off')
174
175 if show_titles:
176 title = f'Label: {self.classes[label]}'
177 if showing_predictions:
178 title += f'\nPrediction: {self.classes[prediction]}'
179 ax.set_title(title)
180
181 if suptitle is not None:
182 plt.suptitle(suptitle)
183
184 return fig
185
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchgeo/datasets/patternnet.py b/torchgeo/datasets/patternnet.py
--- a/torchgeo/datasets/patternnet.py
+++ b/torchgeo/datasets/patternnet.py
@@ -78,7 +78,7 @@
* https://doi.org/10.1016/j.isprsjprs.2018.01.004
"""
- url = 'https://drive.google.com/file/d/127lxXYqzO6Bd0yZhvEbgIfz95HaEnr9K'
+ url = 'https://hf.co/datasets/torchgeo/PatternNet/resolve/2dbd901b00e301967a5c5146b25454f5d3455ad0/PatternNet.zip'
md5 = '96d54b3224c5350a98d55d5a7e6984ad'
filename = 'PatternNet.zip'
directory = os.path.join('PatternNet', 'images')
| {"golden_diff": "diff --git a/torchgeo/datasets/patternnet.py b/torchgeo/datasets/patternnet.py\n--- a/torchgeo/datasets/patternnet.py\n+++ b/torchgeo/datasets/patternnet.py\n@@ -78,7 +78,7 @@\n * https://doi.org/10.1016/j.isprsjprs.2018.01.004\n \"\"\"\n \n- url = 'https://drive.google.com/file/d/127lxXYqzO6Bd0yZhvEbgIfz95HaEnr9K'\n+ url = 'https://hf.co/datasets/torchgeo/PatternNet/resolve/2dbd901b00e301967a5c5146b25454f5d3455ad0/PatternNet.zip'\n md5 = '96d54b3224c5350a98d55d5a7e6984ad'\n filename = 'PatternNet.zip'\n directory = os.path.join('PatternNet', 'images')\n", "issue": "PatternNet download link is broken\n### Description\r\n\r\n[PatterNet dataset](https://torchgeo.readthedocs.io/en/stable/_modules/torchgeo/datasets/patternnet.html#PatternNet) [download link ](https://drive.google.com/file/d/127lxXYqzO6Bd0yZhvEbgIfz95HaEnr9K) is broken.\r\n\r\nIt seems it is officially hosted [here](https://nuisteducn1-my.sharepoint.com/:u:/g/personal/zhouwx_nuist_edu_cn/EYSPYqBztbBBqS27B7uM_mEB3R9maNJze8M1Qg9Q6cnPBQ?e=MSf977). Was someone mirroring it on a personal Google Drive? \r\n\r\n### Steps to reproduce\r\n\r\n```\r\nfrom torchgeo.datasets import PatternNet\r\n\r\npatternet = PatternNet(root='./', transforms=None, download=True, checksum=True)\r\n```\r\n\r\n### Version\r\n\r\n0.5.2\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"PatternNet dataset.\"\"\"\n\nimport os\nfrom collections.abc import Callable\nfrom typing import cast\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.figure import Figure\nfrom torch import Tensor\n\nfrom .errors import DatasetNotFoundError\nfrom .geo import NonGeoClassificationDataset\nfrom .utils import download_url, extract_archive\n\n\nclass PatternNet(NonGeoClassificationDataset):\n \"\"\"PatternNet dataset.\n\n The `PatternNet <https://sites.google.com/view/zhouwx/dataset>`__\n dataset is a dataset for remote sensing scene classification and image retrieval.\n\n Dataset features:\n\n * 30,400 images with 6-50 cm per pixel resolution (256x256 px)\n * three spectral bands - RGB\n * 38 scene classes, 800 images per class\n\n Dataset format:\n\n * images are three-channel jpgs\n\n Dataset classes:\n\n 0. airplane\n 1. baseball_field\n 2. basketball_court\n 3. beach\n 4. bridge\n 5. cemetery\n 6. chaparral\n 7. christmas_tree_farm\n 8. closed_road\n 9. coastal_mansion\n 10. crosswalk\n 11. dense_residential\n 12. ferry_terminal\n 13. football_field\n 14. forest\n 15. freeway\n 16. golf_course\n 17. harbor\n 18. intersection\n 19. mobile_home_park\n 20. nursing_home\n 21. oil_gas_field\n 22. oil_well\n 23. overpass\n 24. parking_lot\n 25. parking_space\n 26. railway\n 27. river\n 28. runway\n 29. runway_marking\n 30. shipping_yard\n 31. solar_panel\n 32. sparse_residential\n 33. storage_tank\n 34. swimming_pool\n 35. tennis_court\n 36. transformer_station\n 37. 
wastewater_treatment_plant\n\n If you use this dataset in your research, please cite the following paper:\n\n * https://doi.org/10.1016/j.isprsjprs.2018.01.004\n \"\"\"\n\n url = 'https://drive.google.com/file/d/127lxXYqzO6Bd0yZhvEbgIfz95HaEnr9K'\n md5 = '96d54b3224c5350a98d55d5a7e6984ad'\n filename = 'PatternNet.zip'\n directory = os.path.join('PatternNet', 'images')\n\n def __init__(\n self,\n root: str = 'data',\n transforms: Callable[[dict[str, Tensor]], dict[str, Tensor]] | None = None,\n download: bool = False,\n checksum: bool = False,\n ) -> None:\n \"\"\"Initialize a new PatternNet dataset instance.\n\n Args:\n root: root directory where dataset can be found\n transforms: a function/transform that takes input sample and its target as\n entry and returns a transformed version\n download: if True, download dataset and store it in the root directory\n checksum: if True, check the MD5 of the downloaded files (may be slow)\n\n Raises:\n DatasetNotFoundError: If dataset is not found and *download* is False.\n \"\"\"\n self.root = root\n self.download = download\n self.checksum = checksum\n self._verify()\n super().__init__(root=os.path.join(root, self.directory), transforms=transforms)\n\n def _verify(self) -> None:\n \"\"\"Verify the integrity of the dataset.\"\"\"\n # Check if the files already exist\n filepath = os.path.join(self.root, self.directory)\n if os.path.exists(filepath):\n return\n\n # Check if zip file already exists (if so then extract)\n filepath = os.path.join(self.root, self.filename)\n if os.path.exists(filepath):\n self._extract()\n return\n\n # Check if the user requested to download the dataset\n if not self.download:\n raise DatasetNotFoundError(self)\n\n # Download and extract the dataset\n self._download()\n self._extract()\n\n def _download(self) -> None:\n \"\"\"Download the dataset.\"\"\"\n download_url(\n self.url,\n self.root,\n filename=self.filename,\n md5=self.md5 if self.checksum else None,\n )\n\n def _extract(self) -> None:\n \"\"\"Extract the dataset.\"\"\"\n filepath = os.path.join(self.root, self.filename)\n extract_archive(filepath)\n\n def plot(\n self,\n sample: dict[str, Tensor],\n show_titles: bool = True,\n suptitle: str | None = None,\n ) -> Figure:\n \"\"\"Plot a sample from the dataset.\n\n Args:\n sample: a sample returned by :meth:`NonGeoClassificationDataset.__getitem__`\n show_titles: flag indicating whether to show titles above each panel\n suptitle: optional suptitle to use for figure\n\n Returns:\n a matplotlib Figure with the rendered sample\n\n .. versionadded:: 0.2\n \"\"\"\n image, label = sample['image'], cast(int, sample['label'].item())\n\n showing_predictions = 'prediction' in sample\n if showing_predictions:\n prediction = cast(int, sample['prediction'].item())\n\n fig, ax = plt.subplots(1, 1, figsize=(10, 10))\n\n ax.imshow(image.permute(1, 2, 0))\n ax.axis('off')\n\n if show_titles:\n title = f'Label: {self.classes[label]}'\n if showing_predictions:\n title += f'\\nPrediction: {self.classes[prediction]}'\n ax.set_title(title)\n\n if suptitle is not None:\n plt.suptitle(suptitle)\n\n return fig\n", "path": "torchgeo/datasets/patternnet.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"PatternNet dataset.\"\"\"\n\nimport os\nfrom collections.abc import Callable\nfrom typing import cast\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.figure import Figure\nfrom torch import Tensor\n\nfrom .errors import DatasetNotFoundError\nfrom .geo import NonGeoClassificationDataset\nfrom .utils import download_url, extract_archive\n\n\nclass PatternNet(NonGeoClassificationDataset):\n \"\"\"PatternNet dataset.\n\n The `PatternNet <https://sites.google.com/view/zhouwx/dataset>`__\n dataset is a dataset for remote sensing scene classification and image retrieval.\n\n Dataset features:\n\n * 30,400 images with 6-50 cm per pixel resolution (256x256 px)\n * three spectral bands - RGB\n * 38 scene classes, 800 images per class\n\n Dataset format:\n\n * images are three-channel jpgs\n\n Dataset classes:\n\n 0. airplane\n 1. baseball_field\n 2. basketball_court\n 3. beach\n 4. bridge\n 5. cemetery\n 6. chaparral\n 7. christmas_tree_farm\n 8. closed_road\n 9. coastal_mansion\n 10. crosswalk\n 11. dense_residential\n 12. ferry_terminal\n 13. football_field\n 14. forest\n 15. freeway\n 16. golf_course\n 17. harbor\n 18. intersection\n 19. mobile_home_park\n 20. nursing_home\n 21. oil_gas_field\n 22. oil_well\n 23. overpass\n 24. parking_lot\n 25. parking_space\n 26. railway\n 27. river\n 28. runway\n 29. runway_marking\n 30. shipping_yard\n 31. solar_panel\n 32. sparse_residential\n 33. storage_tank\n 34. swimming_pool\n 35. tennis_court\n 36. transformer_station\n 37. wastewater_treatment_plant\n\n If you use this dataset in your research, please cite the following paper:\n\n * https://doi.org/10.1016/j.isprsjprs.2018.01.004\n \"\"\"\n\n url = 'https://hf.co/datasets/torchgeo/PatternNet/resolve/2dbd901b00e301967a5c5146b25454f5d3455ad0/PatternNet.zip'\n md5 = '96d54b3224c5350a98d55d5a7e6984ad'\n filename = 'PatternNet.zip'\n directory = os.path.join('PatternNet', 'images')\n\n def __init__(\n self,\n root: str = 'data',\n transforms: Callable[[dict[str, Tensor]], dict[str, Tensor]] | None = None,\n download: bool = False,\n checksum: bool = False,\n ) -> None:\n \"\"\"Initialize a new PatternNet dataset instance.\n\n Args:\n root: root directory where dataset can be found\n transforms: a function/transform that takes input sample and its target as\n entry and returns a transformed version\n download: if True, download dataset and store it in the root directory\n checksum: if True, check the MD5 of the downloaded files (may be slow)\n\n Raises:\n DatasetNotFoundError: If dataset is not found and *download* is False.\n \"\"\"\n self.root = root\n self.download = download\n self.checksum = checksum\n self._verify()\n super().__init__(root=os.path.join(root, self.directory), transforms=transforms)\n\n def _verify(self) -> None:\n \"\"\"Verify the integrity of the dataset.\"\"\"\n # Check if the files already exist\n filepath = os.path.join(self.root, self.directory)\n if os.path.exists(filepath):\n return\n\n # Check if zip file already exists (if so then extract)\n filepath = os.path.join(self.root, self.filename)\n if os.path.exists(filepath):\n self._extract()\n return\n\n # Check if the user requested to download the dataset\n if not self.download:\n raise DatasetNotFoundError(self)\n\n # Download and extract the dataset\n self._download()\n self._extract()\n\n def _download(self) -> None:\n \"\"\"Download the dataset.\"\"\"\n download_url(\n self.url,\n self.root,\n filename=self.filename,\n md5=self.md5 if self.checksum else 
None,\n )\n\n def _extract(self) -> None:\n \"\"\"Extract the dataset.\"\"\"\n filepath = os.path.join(self.root, self.filename)\n extract_archive(filepath)\n\n def plot(\n self,\n sample: dict[str, Tensor],\n show_titles: bool = True,\n suptitle: str | None = None,\n ) -> Figure:\n \"\"\"Plot a sample from the dataset.\n\n Args:\n sample: a sample returned by :meth:`NonGeoClassificationDataset.__getitem__`\n show_titles: flag indicating whether to show titles above each panel\n suptitle: optional suptitle to use for figure\n\n Returns:\n a matplotlib Figure with the rendered sample\n\n .. versionadded:: 0.2\n \"\"\"\n image, label = sample['image'], cast(int, sample['label'].item())\n\n showing_predictions = 'prediction' in sample\n if showing_predictions:\n prediction = cast(int, sample['prediction'].item())\n\n fig, ax = plt.subplots(1, 1, figsize=(10, 10))\n\n ax.imshow(image.permute(1, 2, 0))\n ax.axis('off')\n\n if show_titles:\n title = f'Label: {self.classes[label]}'\n if showing_predictions:\n title += f'\\nPrediction: {self.classes[prediction]}'\n ax.set_title(title)\n\n if suptitle is not None:\n plt.suptitle(suptitle)\n\n return fig\n", "path": "torchgeo/datasets/patternnet.py"}]} | 2,305 | 256 |
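To sanity-check the mirrored archive referenced in the patch above, the MD5 (unchanged by the patch) can be verified locally with the standard library; `PatternNet(..., checksum=True)` performs the equivalent check during download. The local file path below is a placeholder:

```python
import hashlib

EXPECTED_MD5 = "96d54b3224c5350a98d55d5a7e6984ad"   # value from the dataset class above

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

assert md5sum("PatternNet.zip") == EXPECTED_MD5      # placeholder local path
```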
gh_patches_debug_35698 | rasdani/github-patches | git_diff | nv-legate__cunumeric-766 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cunumeric does not issue unimplemented warnings for functions in submodules
For example, calling unsupported functions in the `random` module silently fall back to numpy instead of issuing a warning.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cunumeric/random/__init__.py`
Content:
```
1 # Copyright 2021-2022 NVIDIA Corporation
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 from __future__ import annotations
16
17 import numpy.random as _nprandom
18 from cunumeric.coverage import clone_module
19 from cunumeric.runtime import runtime
20
21 if runtime.has_curand:
22 from cunumeric.random.random import *
23 from cunumeric.random.bitgenerator import *
24 from cunumeric.random.generator import *
25 else:
26 from cunumeric.random.legacy import *
27
28 clone_module(_nprandom, globals())
29
30 del clone_module
31 del _nprandom
32
```
Path: `cunumeric/coverage.py`
Content:
```
1 # Copyright 2022 NVIDIA Corporation
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 from __future__ import annotations
16
17 import warnings
18 from dataclasses import dataclass
19 from functools import wraps
20 from types import FunctionType, MethodDescriptorType, MethodType, ModuleType
21 from typing import Any, Container, Mapping, Optional, cast
22
23 import numpy as np
24 from legate.core import track_provenance
25 from typing_extensions import Protocol
26
27 from .runtime import runtime
28 from .utils import find_last_user_frames, find_last_user_stacklevel
29
30 __all__ = ("clone_module", "clone_np_ndarray")
31
32 FALLBACK_WARNING = (
33 "cuNumeric has not implemented {name} "
34 + "and is falling back to canonical numpy. "
35 + "You may notice significantly decreased performance "
36 + "for this function call."
37 )
38
39 MOD_INTERNAL = {"__dir__", "__getattr__"}
40
41 NDARRAY_INTERNAL = {
42 "__array_finalize__",
43 "__array_function__",
44 "__array_interface__",
45 "__array_prepare__",
46 "__array_priority__",
47 "__array_struct__",
48 "__array_ufunc__",
49 "__array_wrap__",
50 }
51
52
53 def filter_namespace(
54 ns: Mapping[str, Any],
55 *,
56 omit_names: Optional[Container[str]] = None,
57 omit_types: tuple[type, ...] = (),
58 ) -> dict[str, Any]:
59 omit_names = omit_names or set()
60 return {
61 attr: value
62 for attr, value in ns.items()
63 if attr not in omit_names and not isinstance(value, omit_types)
64 }
65
66
67 class AnyCallable(Protocol):
68 def __call__(self, *args: Any, **kwargs: Any) -> Any:
69 ...
70
71
72 @dataclass(frozen=True)
73 class CuWrapperMetadata:
74 implemented: bool
75 single: bool = False
76 multi: bool = False
77
78
79 class CuWrapped(AnyCallable, Protocol):
80 _cunumeric: CuWrapperMetadata
81 __wrapped__: AnyCallable
82 __name__: str
83 __qualname__: str
84
85
86 def implemented(
87 func: AnyCallable, prefix: str, name: str, reporting: bool = True
88 ) -> CuWrapped:
89 name = f"{prefix}.{name}"
90
91 wrapper: CuWrapped
92
93 if reporting:
94
95 @wraps(func)
96 @track_provenance(runtime.legate_context)
97 def wrapper(*args: Any, **kwargs: Any) -> Any:
98 location = find_last_user_frames(
99 not runtime.args.report_dump_callstack
100 )
101 runtime.record_api_call(
102 name=name,
103 location=location,
104 implemented=True,
105 )
106 return func(*args, **kwargs)
107
108 else:
109
110 @wraps(func)
111 @track_provenance(runtime.legate_context)
112 def wrapper(*args: Any, **kwargs: Any) -> Any:
113 return func(*args, **kwargs)
114
115 # This is incredibly ugly and unpleasant, but @wraps(func) doesn't handle
116 # ufuncs the way we need it to. The alternative would be to vendor and
117 # modify a custom version of @wraps
118 if hasattr(wrapper.__wrapped__, "_name"):
119 wrapper.__name__ = wrapper.__wrapped__._name
120 wrapper.__qualname__ = wrapper.__wrapped__._name
121
122 # TODO (bev) Scraping text to set flags seems a bit fragile. It would be
123 # preferable to start with flags, and use those to update docstrings.
124 multi = "Multiple GPUs" in (getattr(func, "__doc__", None) or "")
125 single = "Single GPU" in (getattr(func, "__doc__", None) or "") or multi
126
127 wrapper._cunumeric = CuWrapperMetadata(
128 implemented=True, single=single, multi=multi
129 )
130
131 return wrapper
132
133
134 def unimplemented(
135 func: AnyCallable,
136 prefix: str,
137 name: str,
138 reporting: bool = True,
139 self_fallback: Optional[str] = None,
140 ) -> CuWrapped:
141 name = f"{prefix}.{name}"
142
143 # Skip over NumPy's `__array_function__` dispatch wrapper, if present.
144 # NumPy adds `__array_function__` dispatch logic through decorators, but
145 # still makes the underlying code (which converts all array-like arguments
146 # to `numpy.ndarray` through `__array__`) available in the
147 # `_implementation` field.
148 # We have to skip the dispatch wrapper, otherwise we will trigger an
149 # infinite loop. Say we're dealing with a call to `cunumeric.foo`, and are
150 # trying to fall back to `numpy.foo`. If we didn't skip the dispatch
151 # wrapper of `numpy.foo`, then NumPy would ask
152 # `cunumeric.ndarray.__array_function__` to handle the call to `numpy.foo`,
153 # then `cunumeric.ndarray.__array_function__` would call `cunumeric.foo`,
154 # and we would end up here again.
155 func = getattr(func, "_implementation", func)
156
157 wrapper: CuWrapped
158
159 if reporting:
160
161 @wraps(func)
162 def wrapper(*args: Any, **kwargs: Any) -> Any:
163 location = find_last_user_frames(
164 not runtime.args.report_dump_callstack
165 )
166 runtime.record_api_call(
167 name=name,
168 location=location,
169 implemented=False,
170 )
171 if self_fallback:
172 self_value = getattr(args[0], self_fallback)()
173 args = (self_value,) + args[1:]
174 return func(*args, **kwargs)
175
176 else:
177
178 @wraps(func)
179 def wrapper(*args: Any, **kwargs: Any) -> Any:
180 stacklevel = find_last_user_stacklevel()
181 warnings.warn(
182 FALLBACK_WARNING.format(name=name),
183 stacklevel=stacklevel,
184 category=RuntimeWarning,
185 )
186 if self_fallback:
187 self_value = getattr(args[0], self_fallback)()
188 args = (self_value,) + args[1:]
189 return func(*args, **kwargs)
190
191 wrapper._cunumeric = CuWrapperMetadata(implemented=False)
192
193 return wrapper
194
195
196 def clone_module(
197 origin_module: ModuleType, new_globals: dict[str, Any]
198 ) -> None:
199 """Copy attributes from one module to another, excluding submodules
200
201 Function types are wrapped with a decorator to report API calls. All
202 other values are copied as-is.
203
204 Parameters
205 ----------
206     origin_module : ModuleType
207 Existing module to clone attributes from
208
209 new_globals : dict
210 a globals() dict for the new module to clone into
211
212 Returns
213 -------
214 None
215
216 """
217 mod_name = origin_module.__name__
218
219 missing = filter_namespace(
220 origin_module.__dict__,
221 omit_names=set(new_globals).union(MOD_INTERNAL),
222 omit_types=(ModuleType,),
223 )
224
225 reporting = runtime.args.report_coverage
226
227 from ._ufunc.ufunc import ufunc as lgufunc
228
229 for attr, value in new_globals.items():
230 # Only need to wrap things that are in the origin module to begin with
231 if attr not in origin_module.__dict__:
232 continue
233 if isinstance(value, (FunctionType, lgufunc)):
234 wrapped = implemented(
235 cast(AnyCallable, value), mod_name, attr, reporting=reporting
236 )
237 new_globals[attr] = wrapped
238
239 from numpy import ufunc as npufunc
240
241 for attr, value in missing.items():
242 if isinstance(value, (FunctionType, npufunc)):
243 wrapped = unimplemented(value, mod_name, attr, reporting=reporting)
244 new_globals[attr] = wrapped
245 else:
246 new_globals[attr] = value
247
248
249 def should_wrap(obj: object) -> bool:
250 return isinstance(obj, (FunctionType, MethodType, MethodDescriptorType))
251
252
253 def clone_np_ndarray(cls: type) -> type:
254 """Copy attributes from np.ndarray to cunumeric.ndarray
255
256 Method types are wrapped with a decorator to report API calls. All
257 other values are copied as-is.
258
259 """
260
261 origin_class = np.ndarray
262
263 class_name = f"{origin_class.__module__}.{origin_class.__name__}"
264
265 missing = filter_namespace(
266 origin_class.__dict__,
267 # this simply omits ndarray internal methods for any class. If
268 # we ever need to wrap more classes we may need to generalize to
269 # per-class specification of internal names to skip
270 omit_names=set(cls.__dict__).union(NDARRAY_INTERNAL),
271 )
272
273 reporting = runtime.args.report_coverage
274
275 for attr, value in cls.__dict__.items():
276 # Only need to wrap things that are in the origin class to begin with
277 if not hasattr(origin_class, attr):
278 continue
279 if should_wrap(value):
280 wrapped = implemented(value, class_name, attr, reporting=reporting)
281 setattr(cls, attr, wrapped)
282
283 for attr, value in missing.items():
284 if should_wrap(value):
285 wrapped = unimplemented(
286 value,
287 class_name,
288 attr,
289 reporting=reporting,
290 self_fallback="__array__",
291 )
292 setattr(cls, attr, wrapped)
293 else:
294 setattr(cls, attr, value)
295
296 return cls
297
298
299 def is_implemented(obj: Any) -> bool:
300 return hasattr(obj, "_cunumeric") and obj._cunumeric.implemented
301
302
303 def is_single(obj: Any) -> bool:
304 return hasattr(obj, "_cunumeric") and obj._cunumeric.single
305
306
307 def is_multi(obj: Any) -> bool:
308 return hasattr(obj, "_cunumeric") and obj._cunumeric.multi
309
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cunumeric/coverage.py b/cunumeric/coverage.py
--- a/cunumeric/coverage.py
+++ b/cunumeric/coverage.py
@@ -17,7 +17,13 @@
import warnings
from dataclasses import dataclass
from functools import wraps
-from types import FunctionType, MethodDescriptorType, MethodType, ModuleType
+from types import (
+ BuiltinFunctionType,
+ FunctionType,
+ MethodDescriptorType,
+ MethodType,
+ ModuleType,
+)
from typing import Any, Container, Mapping, Optional, cast
import numpy as np
@@ -194,7 +200,9 @@
def clone_module(
- origin_module: ModuleType, new_globals: dict[str, Any]
+ origin_module: ModuleType,
+ new_globals: dict[str, Any],
+ include_builtin_function_type: bool = False,
) -> None:
"""Copy attributes from one module to another, excluding submodules
@@ -230,7 +238,10 @@
# Only need to wrap things that are in the origin module to begin with
if attr not in origin_module.__dict__:
continue
- if isinstance(value, (FunctionType, lgufunc)):
+ if isinstance(value, (FunctionType, lgufunc)) or (
+ include_builtin_function_type
+ and isinstance(value, BuiltinFunctionType)
+ ):
wrapped = implemented(
cast(AnyCallable, value), mod_name, attr, reporting=reporting
)
@@ -239,7 +250,10 @@
from numpy import ufunc as npufunc
for attr, value in missing.items():
- if isinstance(value, (FunctionType, npufunc)):
+ if isinstance(value, (FunctionType, npufunc)) or (
+ include_builtin_function_type
+ and isinstance(value, BuiltinFunctionType)
+ ):
wrapped = unimplemented(value, mod_name, attr, reporting=reporting)
new_globals[attr] = wrapped
else:
diff --git a/cunumeric/random/__init__.py b/cunumeric/random/__init__.py
--- a/cunumeric/random/__init__.py
+++ b/cunumeric/random/__init__.py
@@ -25,7 +25,7 @@
else:
from cunumeric.random.legacy import *
-clone_module(_nprandom, globals())
+clone_module(_nprandom, globals(), include_builtin_function_type=True)
del clone_module
del _nprandom
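For context on why the patch above adds `BuiltinFunctionType`: most callables in `numpy.random` are implemented in C, so the original `FunctionType` check never matched them and they were copied through without a reporting wrapper. The snippet below is a minimal check written for illustration; it is not part of this record.

```python
# Minimal check (illustration only): numpy.random callables are C builtins,
# so an isinstance(..., FunctionType) test skips them.
import types

import numpy.random

print(type(numpy.random.randint))                                   # builtin_function_or_method
print(isinstance(numpy.random.randint, types.FunctionType))         # False: skipped before the patch
print(isinstance(numpy.random.randint, types.BuiltinFunctionType))  # True: wrapped after the patch
```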
| {"golden_diff": "diff --git a/cunumeric/coverage.py b/cunumeric/coverage.py\n--- a/cunumeric/coverage.py\n+++ b/cunumeric/coverage.py\n@@ -17,7 +17,13 @@\n import warnings\n from dataclasses import dataclass\n from functools import wraps\n-from types import FunctionType, MethodDescriptorType, MethodType, ModuleType\n+from types import (\n+ BuiltinFunctionType,\n+ FunctionType,\n+ MethodDescriptorType,\n+ MethodType,\n+ ModuleType,\n+)\n from typing import Any, Container, Mapping, Optional, cast\n \n import numpy as np\n@@ -194,7 +200,9 @@\n \n \n def clone_module(\n- origin_module: ModuleType, new_globals: dict[str, Any]\n+ origin_module: ModuleType,\n+ new_globals: dict[str, Any],\n+ include_builtin_function_type: bool = False,\n ) -> None:\n \"\"\"Copy attributes from one module to another, excluding submodules\n \n@@ -230,7 +238,10 @@\n # Only need to wrap things that are in the origin module to begin with\n if attr not in origin_module.__dict__:\n continue\n- if isinstance(value, (FunctionType, lgufunc)):\n+ if isinstance(value, (FunctionType, lgufunc)) or (\n+ include_builtin_function_type\n+ and isinstance(value, BuiltinFunctionType)\n+ ):\n wrapped = implemented(\n cast(AnyCallable, value), mod_name, attr, reporting=reporting\n )\n@@ -239,7 +250,10 @@\n from numpy import ufunc as npufunc\n \n for attr, value in missing.items():\n- if isinstance(value, (FunctionType, npufunc)):\n+ if isinstance(value, (FunctionType, npufunc)) or (\n+ include_builtin_function_type\n+ and isinstance(value, BuiltinFunctionType)\n+ ):\n wrapped = unimplemented(value, mod_name, attr, reporting=reporting)\n new_globals[attr] = wrapped\n else:\ndiff --git a/cunumeric/random/__init__.py b/cunumeric/random/__init__.py\n--- a/cunumeric/random/__init__.py\n+++ b/cunumeric/random/__init__.py\n@@ -25,7 +25,7 @@\n else:\n from cunumeric.random.legacy import *\n \n-clone_module(_nprandom, globals())\n+clone_module(_nprandom, globals(), include_builtin_function_type=True)\n \n del clone_module\n del _nprandom\n", "issue": "cunumeric does not issue unimplemented warnings for functions in submodules\nFor example, calling unsupported functions in the `random` module silently fall back to numpy instead of issuing a warning.\n", "before_files": [{"content": "# Copyright 2021-2022 NVIDIA Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom __future__ import annotations\n\nimport numpy.random as _nprandom\nfrom cunumeric.coverage import clone_module\nfrom cunumeric.runtime import runtime\n\nif runtime.has_curand:\n from cunumeric.random.random import *\n from cunumeric.random.bitgenerator import *\n from cunumeric.random.generator import *\nelse:\n from cunumeric.random.legacy import *\n\nclone_module(_nprandom, globals())\n\ndel clone_module\ndel _nprandom\n", "path": "cunumeric/random/__init__.py"}, {"content": "# Copyright 2022 NVIDIA Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain 
a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom __future__ import annotations\n\nimport warnings\nfrom dataclasses import dataclass\nfrom functools import wraps\nfrom types import FunctionType, MethodDescriptorType, MethodType, ModuleType\nfrom typing import Any, Container, Mapping, Optional, cast\n\nimport numpy as np\nfrom legate.core import track_provenance\nfrom typing_extensions import Protocol\n\nfrom .runtime import runtime\nfrom .utils import find_last_user_frames, find_last_user_stacklevel\n\n__all__ = (\"clone_module\", \"clone_np_ndarray\")\n\nFALLBACK_WARNING = (\n \"cuNumeric has not implemented {name} \"\n + \"and is falling back to canonical numpy. \"\n + \"You may notice significantly decreased performance \"\n + \"for this function call.\"\n)\n\nMOD_INTERNAL = {\"__dir__\", \"__getattr__\"}\n\nNDARRAY_INTERNAL = {\n \"__array_finalize__\",\n \"__array_function__\",\n \"__array_interface__\",\n \"__array_prepare__\",\n \"__array_priority__\",\n \"__array_struct__\",\n \"__array_ufunc__\",\n \"__array_wrap__\",\n}\n\n\ndef filter_namespace(\n ns: Mapping[str, Any],\n *,\n omit_names: Optional[Container[str]] = None,\n omit_types: tuple[type, ...] = (),\n) -> dict[str, Any]:\n omit_names = omit_names or set()\n return {\n attr: value\n for attr, value in ns.items()\n if attr not in omit_names and not isinstance(value, omit_types)\n }\n\n\nclass AnyCallable(Protocol):\n def __call__(self, *args: Any, **kwargs: Any) -> Any:\n ...\n\n\n@dataclass(frozen=True)\nclass CuWrapperMetadata:\n implemented: bool\n single: bool = False\n multi: bool = False\n\n\nclass CuWrapped(AnyCallable, Protocol):\n _cunumeric: CuWrapperMetadata\n __wrapped__: AnyCallable\n __name__: str\n __qualname__: str\n\n\ndef implemented(\n func: AnyCallable, prefix: str, name: str, reporting: bool = True\n) -> CuWrapped:\n name = f\"{prefix}.{name}\"\n\n wrapper: CuWrapped\n\n if reporting:\n\n @wraps(func)\n @track_provenance(runtime.legate_context)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n location = find_last_user_frames(\n not runtime.args.report_dump_callstack\n )\n runtime.record_api_call(\n name=name,\n location=location,\n implemented=True,\n )\n return func(*args, **kwargs)\n\n else:\n\n @wraps(func)\n @track_provenance(runtime.legate_context)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n return func(*args, **kwargs)\n\n # This is incredibly ugly and unpleasant, but @wraps(func) doesn't handle\n # ufuncs the way we need it to. The alternative would be to vendor and\n # modify a custom version of @wraps\n if hasattr(wrapper.__wrapped__, \"_name\"):\n wrapper.__name__ = wrapper.__wrapped__._name\n wrapper.__qualname__ = wrapper.__wrapped__._name\n\n # TODO (bev) Scraping text to set flags seems a bit fragile. 
It would be\n # preferable to start with flags, and use those to update docstrings.\n multi = \"Multiple GPUs\" in (getattr(func, \"__doc__\", None) or \"\")\n single = \"Single GPU\" in (getattr(func, \"__doc__\", None) or \"\") or multi\n\n wrapper._cunumeric = CuWrapperMetadata(\n implemented=True, single=single, multi=multi\n )\n\n return wrapper\n\n\ndef unimplemented(\n func: AnyCallable,\n prefix: str,\n name: str,\n reporting: bool = True,\n self_fallback: Optional[str] = None,\n) -> CuWrapped:\n name = f\"{prefix}.{name}\"\n\n # Skip over NumPy's `__array_function__` dispatch wrapper, if present.\n # NumPy adds `__array_function__` dispatch logic through decorators, but\n # still makes the underlying code (which converts all array-like arguments\n # to `numpy.ndarray` through `__array__`) available in the\n # `_implementation` field.\n # We have to skip the dispatch wrapper, otherwise we will trigger an\n # infinite loop. Say we're dealing with a call to `cunumeric.foo`, and are\n # trying to fall back to `numpy.foo`. If we didn't skip the dispatch\n # wrapper of `numpy.foo`, then NumPy would ask\n # `cunumeric.ndarray.__array_function__` to handle the call to `numpy.foo`,\n # then `cunumeric.ndarray.__array_function__` would call `cunumeric.foo`,\n # and we would end up here again.\n func = getattr(func, \"_implementation\", func)\n\n wrapper: CuWrapped\n\n if reporting:\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n location = find_last_user_frames(\n not runtime.args.report_dump_callstack\n )\n runtime.record_api_call(\n name=name,\n location=location,\n implemented=False,\n )\n if self_fallback:\n self_value = getattr(args[0], self_fallback)()\n args = (self_value,) + args[1:]\n return func(*args, **kwargs)\n\n else:\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n stacklevel = find_last_user_stacklevel()\n warnings.warn(\n FALLBACK_WARNING.format(name=name),\n stacklevel=stacklevel,\n category=RuntimeWarning,\n )\n if self_fallback:\n self_value = getattr(args[0], self_fallback)()\n args = (self_value,) + args[1:]\n return func(*args, **kwargs)\n\n wrapper._cunumeric = CuWrapperMetadata(implemented=False)\n\n return wrapper\n\n\ndef clone_module(\n origin_module: ModuleType, new_globals: dict[str, Any]\n) -> None:\n \"\"\"Copy attributes from one module to another, excluding submodules\n\n Function types are wrapped with a decorator to report API calls. 
All\n other values are copied as-is.\n\n Parameters\n ----------\n origin_module : ModuleTpe\n Existing module to clone attributes from\n\n new_globals : dict\n a globals() dict for the new module to clone into\n\n Returns\n -------\n None\n\n \"\"\"\n mod_name = origin_module.__name__\n\n missing = filter_namespace(\n origin_module.__dict__,\n omit_names=set(new_globals).union(MOD_INTERNAL),\n omit_types=(ModuleType,),\n )\n\n reporting = runtime.args.report_coverage\n\n from ._ufunc.ufunc import ufunc as lgufunc\n\n for attr, value in new_globals.items():\n # Only need to wrap things that are in the origin module to begin with\n if attr not in origin_module.__dict__:\n continue\n if isinstance(value, (FunctionType, lgufunc)):\n wrapped = implemented(\n cast(AnyCallable, value), mod_name, attr, reporting=reporting\n )\n new_globals[attr] = wrapped\n\n from numpy import ufunc as npufunc\n\n for attr, value in missing.items():\n if isinstance(value, (FunctionType, npufunc)):\n wrapped = unimplemented(value, mod_name, attr, reporting=reporting)\n new_globals[attr] = wrapped\n else:\n new_globals[attr] = value\n\n\ndef should_wrap(obj: object) -> bool:\n return isinstance(obj, (FunctionType, MethodType, MethodDescriptorType))\n\n\ndef clone_np_ndarray(cls: type) -> type:\n \"\"\"Copy attributes from np.ndarray to cunumeric.ndarray\n\n Method types are wrapped with a decorator to report API calls. All\n other values are copied as-is.\n\n \"\"\"\n\n origin_class = np.ndarray\n\n class_name = f\"{origin_class.__module__}.{origin_class.__name__}\"\n\n missing = filter_namespace(\n origin_class.__dict__,\n # this simply omits ndarray internal methods for any class. If\n # we ever need to wrap more classes we may need to generalize to\n # per-class specification of internal names to skip\n omit_names=set(cls.__dict__).union(NDARRAY_INTERNAL),\n )\n\n reporting = runtime.args.report_coverage\n\n for attr, value in cls.__dict__.items():\n # Only need to wrap things that are in the origin class to begin with\n if not hasattr(origin_class, attr):\n continue\n if should_wrap(value):\n wrapped = implemented(value, class_name, attr, reporting=reporting)\n setattr(cls, attr, wrapped)\n\n for attr, value in missing.items():\n if should_wrap(value):\n wrapped = unimplemented(\n value,\n class_name,\n attr,\n reporting=reporting,\n self_fallback=\"__array__\",\n )\n setattr(cls, attr, wrapped)\n else:\n setattr(cls, attr, value)\n\n return cls\n\n\ndef is_implemented(obj: Any) -> bool:\n return hasattr(obj, \"_cunumeric\") and obj._cunumeric.implemented\n\n\ndef is_single(obj: Any) -> bool:\n return hasattr(obj, \"_cunumeric\") and obj._cunumeric.single\n\n\ndef is_multi(obj: Any) -> bool:\n return hasattr(obj, \"_cunumeric\") and obj._cunumeric.multi\n", "path": "cunumeric/coverage.py"}], "after_files": [{"content": "# Copyright 2021-2022 NVIDIA Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom __future__ import annotations\n\nimport numpy.random as _nprandom\nfrom cunumeric.coverage 
import clone_module\nfrom cunumeric.runtime import runtime\n\nif runtime.has_curand:\n from cunumeric.random.random import *\n from cunumeric.random.bitgenerator import *\n from cunumeric.random.generator import *\nelse:\n from cunumeric.random.legacy import *\n\nclone_module(_nprandom, globals(), include_builtin_function_type=True)\n\ndel clone_module\ndel _nprandom\n", "path": "cunumeric/random/__init__.py"}, {"content": "# Copyright 2022 NVIDIA Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom __future__ import annotations\n\nimport warnings\nfrom dataclasses import dataclass\nfrom functools import wraps\nfrom types import (\n BuiltinFunctionType,\n FunctionType,\n MethodDescriptorType,\n MethodType,\n ModuleType,\n)\nfrom typing import Any, Container, Mapping, Optional, cast\n\nimport numpy as np\nfrom legate.core import track_provenance\nfrom typing_extensions import Protocol\n\nfrom .runtime import runtime\nfrom .utils import find_last_user_frames, find_last_user_stacklevel\n\n__all__ = (\"clone_module\", \"clone_np_ndarray\")\n\nFALLBACK_WARNING = (\n \"cuNumeric has not implemented {name} \"\n + \"and is falling back to canonical numpy. \"\n + \"You may notice significantly decreased performance \"\n + \"for this function call.\"\n)\n\nMOD_INTERNAL = {\"__dir__\", \"__getattr__\"}\n\nNDARRAY_INTERNAL = {\n \"__array_finalize__\",\n \"__array_function__\",\n \"__array_interface__\",\n \"__array_prepare__\",\n \"__array_priority__\",\n \"__array_struct__\",\n \"__array_ufunc__\",\n \"__array_wrap__\",\n}\n\n\ndef filter_namespace(\n ns: Mapping[str, Any],\n *,\n omit_names: Optional[Container[str]] = None,\n omit_types: tuple[type, ...] = (),\n) -> dict[str, Any]:\n omit_names = omit_names or set()\n return {\n attr: value\n for attr, value in ns.items()\n if attr not in omit_names and not isinstance(value, omit_types)\n }\n\n\nclass AnyCallable(Protocol):\n def __call__(self, *args: Any, **kwargs: Any) -> Any:\n ...\n\n\n@dataclass(frozen=True)\nclass CuWrapperMetadata:\n implemented: bool\n single: bool = False\n multi: bool = False\n\n\nclass CuWrapped(AnyCallable, Protocol):\n _cunumeric: CuWrapperMetadata\n __wrapped__: AnyCallable\n __name__: str\n __qualname__: str\n\n\ndef implemented(\n func: AnyCallable, prefix: str, name: str, reporting: bool = True\n) -> CuWrapped:\n name = f\"{prefix}.{name}\"\n\n wrapper: CuWrapped\n\n if reporting:\n\n @wraps(func)\n @track_provenance(runtime.legate_context)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n location = find_last_user_frames(\n not runtime.args.report_dump_callstack\n )\n runtime.record_api_call(\n name=name,\n location=location,\n implemented=True,\n )\n return func(*args, **kwargs)\n\n else:\n\n @wraps(func)\n @track_provenance(runtime.legate_context)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n return func(*args, **kwargs)\n\n # This is incredibly ugly and unpleasant, but @wraps(func) doesn't handle\n # ufuncs the way we need it to. 
The alternative would be to vendor and\n # modify a custom version of @wraps\n if hasattr(wrapper.__wrapped__, \"_name\"):\n wrapper.__name__ = wrapper.__wrapped__._name\n wrapper.__qualname__ = wrapper.__wrapped__._name\n\n # TODO (bev) Scraping text to set flags seems a bit fragile. It would be\n # preferable to start with flags, and use those to update docstrings.\n multi = \"Multiple GPUs\" in (getattr(func, \"__doc__\", None) or \"\")\n single = \"Single GPU\" in (getattr(func, \"__doc__\", None) or \"\") or multi\n\n wrapper._cunumeric = CuWrapperMetadata(\n implemented=True, single=single, multi=multi\n )\n\n return wrapper\n\n\ndef unimplemented(\n func: AnyCallable,\n prefix: str,\n name: str,\n reporting: bool = True,\n self_fallback: Optional[str] = None,\n) -> CuWrapped:\n name = f\"{prefix}.{name}\"\n\n # Skip over NumPy's `__array_function__` dispatch wrapper, if present.\n # NumPy adds `__array_function__` dispatch logic through decorators, but\n # still makes the underlying code (which converts all array-like arguments\n # to `numpy.ndarray` through `__array__`) available in the\n # `_implementation` field.\n # We have to skip the dispatch wrapper, otherwise we will trigger an\n # infinite loop. Say we're dealing with a call to `cunumeric.foo`, and are\n # trying to fall back to `numpy.foo`. If we didn't skip the dispatch\n # wrapper of `numpy.foo`, then NumPy would ask\n # `cunumeric.ndarray.__array_function__` to handle the call to `numpy.foo`,\n # then `cunumeric.ndarray.__array_function__` would call `cunumeric.foo`,\n # and we would end up here again.\n func = getattr(func, \"_implementation\", func)\n\n wrapper: CuWrapped\n\n if reporting:\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n location = find_last_user_frames(\n not runtime.args.report_dump_callstack\n )\n runtime.record_api_call(\n name=name,\n location=location,\n implemented=False,\n )\n if self_fallback:\n self_value = getattr(args[0], self_fallback)()\n args = (self_value,) + args[1:]\n return func(*args, **kwargs)\n\n else:\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n stacklevel = find_last_user_stacklevel()\n warnings.warn(\n FALLBACK_WARNING.format(name=name),\n stacklevel=stacklevel,\n category=RuntimeWarning,\n )\n if self_fallback:\n self_value = getattr(args[0], self_fallback)()\n args = (self_value,) + args[1:]\n return func(*args, **kwargs)\n\n wrapper._cunumeric = CuWrapperMetadata(implemented=False)\n\n return wrapper\n\n\ndef clone_module(\n origin_module: ModuleType,\n new_globals: dict[str, Any],\n include_builtin_function_type: bool = False,\n) -> None:\n \"\"\"Copy attributes from one module to another, excluding submodules\n\n Function types are wrapped with a decorator to report API calls. 
All\n other values are copied as-is.\n\n Parameters\n ----------\n origin_module : ModuleTpe\n Existing module to clone attributes from\n\n new_globals : dict\n a globals() dict for the new module to clone into\n\n Returns\n -------\n None\n\n \"\"\"\n mod_name = origin_module.__name__\n\n missing = filter_namespace(\n origin_module.__dict__,\n omit_names=set(new_globals).union(MOD_INTERNAL),\n omit_types=(ModuleType,),\n )\n\n reporting = runtime.args.report_coverage\n\n from ._ufunc.ufunc import ufunc as lgufunc\n\n for attr, value in new_globals.items():\n # Only need to wrap things that are in the origin module to begin with\n if attr not in origin_module.__dict__:\n continue\n if isinstance(value, (FunctionType, lgufunc)) or (\n include_builtin_function_type\n and isinstance(value, BuiltinFunctionType)\n ):\n wrapped = implemented(\n cast(AnyCallable, value), mod_name, attr, reporting=reporting\n )\n new_globals[attr] = wrapped\n\n from numpy import ufunc as npufunc\n\n for attr, value in missing.items():\n if isinstance(value, (FunctionType, npufunc)) or (\n include_builtin_function_type\n and isinstance(value, BuiltinFunctionType)\n ):\n wrapped = unimplemented(value, mod_name, attr, reporting=reporting)\n new_globals[attr] = wrapped\n else:\n new_globals[attr] = value\n\n\ndef should_wrap(obj: object) -> bool:\n return isinstance(obj, (FunctionType, MethodType, MethodDescriptorType))\n\n\ndef clone_np_ndarray(cls: type) -> type:\n \"\"\"Copy attributes from np.ndarray to cunumeric.ndarray\n\n Method types are wrapped with a decorator to report API calls. All\n other values are copied as-is.\n\n \"\"\"\n\n origin_class = np.ndarray\n\n class_name = f\"{origin_class.__module__}.{origin_class.__name__}\"\n\n missing = filter_namespace(\n origin_class.__dict__,\n # this simply omits ndarray internal methods for any class. If\n # we ever need to wrap more classes we may need to generalize to\n # per-class specification of internal names to skip\n omit_names=set(cls.__dict__).union(NDARRAY_INTERNAL),\n )\n\n reporting = runtime.args.report_coverage\n\n for attr, value in cls.__dict__.items():\n # Only need to wrap things that are in the origin class to begin with\n if not hasattr(origin_class, attr):\n continue\n if should_wrap(value):\n wrapped = implemented(value, class_name, attr, reporting=reporting)\n setattr(cls, attr, wrapped)\n\n for attr, value in missing.items():\n if should_wrap(value):\n wrapped = unimplemented(\n value,\n class_name,\n attr,\n reporting=reporting,\n self_fallback=\"__array__\",\n )\n setattr(cls, attr, wrapped)\n else:\n setattr(cls, attr, value)\n\n return cls\n\n\ndef is_implemented(obj: Any) -> bool:\n return hasattr(obj, \"_cunumeric\") and obj._cunumeric.implemented\n\n\ndef is_single(obj: Any) -> bool:\n return hasattr(obj, \"_cunumeric\") and obj._cunumeric.single\n\n\ndef is_multi(obj: Any) -> bool:\n return hasattr(obj, \"_cunumeric\") and obj._cunumeric.multi\n", "path": "cunumeric/coverage.py"}]} | 3,639 | 557 |
gh_patches_debug_88 | rasdani/github-patches | git_diff | watchdogpolska__small_eod-479 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tagi (Tags)
Create the Tags screen.
The table contains one column:
* tag (the `name` field in the serializer)
The screen is paginated with a page size of 20.
The screen is available from the side menu.
--- END ISSUE ---
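The issue above asks for a paginated Tags screen with a page size of 20. On the Django REST Framework side that is usually expressed with a pagination class; the sketch below is only an illustration, and the class name and its placement are assumptions rather than part of the repository files shown next.

```python
from rest_framework.pagination import PageNumberPagination


class TagPagination(PageNumberPagination):
    # Hypothetical pagination class for the Tags list endpoint (20 items per page).
    page_size = 20
```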
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend-project/small_eod/tags/serializers.py`
Content:
```
1 from rest_framework import serializers
2 from .models import Tag
3
4
5 class TagSerializer(serializers.ModelSerializer):
6 class Meta:
7 model = Tag
8 fields = [
9 "name",
10 ]
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend-project/small_eod/tags/serializers.py b/backend-project/small_eod/tags/serializers.py
--- a/backend-project/small_eod/tags/serializers.py
+++ b/backend-project/small_eod/tags/serializers.py
@@ -7,4 +7,5 @@
model = Tag
fields = [
"name",
+ "id",
]
| {"golden_diff": "diff --git a/backend-project/small_eod/tags/serializers.py b/backend-project/small_eod/tags/serializers.py\n--- a/backend-project/small_eod/tags/serializers.py\n+++ b/backend-project/small_eod/tags/serializers.py\n@@ -7,4 +7,5 @@\n model = Tag\n fields = [\n \"name\",\n+ \"id\",\n ]\n", "issue": "Tagi\nStworzenie ekranu Tagi\r\n\r\ntabela zawiera kolumn\u0119 \r\n* tag (pole `name` w serializatorze)\r\n\r\nEkran paginowy wielko\u015b\u0107 strony 20\r\nEkran dost\u0119pny z menu bocznego\n", "before_files": [{"content": "from rest_framework import serializers\nfrom .models import Tag\n\n\nclass TagSerializer(serializers.ModelSerializer):\n class Meta:\n model = Tag\n fields = [\n \"name\",\n ]\n", "path": "backend-project/small_eod/tags/serializers.py"}], "after_files": [{"content": "from rest_framework import serializers\nfrom .models import Tag\n\n\nclass TagSerializer(serializers.ModelSerializer):\n class Meta:\n model = Tag\n fields = [\n \"name\",\n \"id\",\n ]\n", "path": "backend-project/small_eod/tags/serializers.py"}]} | 373 | 82 |
gh_patches_debug_31692 | rasdani/github-patches | git_diff | ARM-DOE__ACT-540 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ARM Data API Returns Text when File not Available
I tried to download gucceilpblhtM1.a0, but it's not available through the webservice because it is an a0-level file. Instead, the API returned a plain-text message, which ended up being written into the downloaded file. We should add a check to ensure that these cases are caught and such files are not produced.
```
This data file is not available on /data/archive. To download this file, please an order via Data Discovery. https://adc.arm.gov/discovery
```
--- END ISSUE ---
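Because the webservice answers with HTTP 200 and a plain-text notice rather than an error status, the natural fix is to inspect the response body before writing it to disk. The sketch below is a rough, self-contained illustration of that idea; the function name and error wording are assumptions, and the actual patch appears later in this record.

```python
from urllib.request import urlopen


def fetch_arm_file(save_data_url):
    """Download one file body, refusing to save the 'not available' notice."""
    body = urlopen(save_data_url).read()
    # The service returns this sentence as plain text instead of a 404.
    if b'this data file is not available' in body.lower():
        raise OSError('File is not on /data/archive; order it via ARM Data Discovery.')
    return body
```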
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `act/discovery/get_armfiles.py`
Content:
```
1 """
2 Script for downloading data from ARM's Live Data Webservice
3
4 """
5
6 import argparse
7 import json
8 import os
9 import sys
10 from datetime import timedelta
11
12 try:
13 from urllib.request import urlopen
14 except ImportError:
15 from urllib import urlopen
16
17 from act.utils import date_parser
18
19
20 def download_data(username, token, datastream, startdate, enddate, time=None, output=None):
21 """
22 This tool will help users utilize the ARM Live Data Webservice to download
23 ARM data.
24
25 Parameters
26 ----------
27 username : str
28 The username to use for logging into the ADC archive.
29 token : str
30 The access token for accessing the ADC archive.
31 datastream : str
32 The name of the datastream to acquire.
33 startdate : str
34 The start date of the data to acquire. Formats accepted are
35 YYYY-MM-DD, DD.MM.YYYY, DD/MM/YYYY, YYYYMMDD, YYYY/MM/DD or
36 any of the previous formats with THH:MM:SS added onto the end
37 (ex. 2020-09-15T12:00:00).
38 enddate : str
39 The end date of the data to acquire. Formats accepted are
40 YYYY-MM-DD, DD.MM.YYYY, DD/MM/YYYY, YYYYMMDD or YYYY/MM/DD, or
41 any of the previous formats with THH:MM:SS added onto the end
42 (ex. 2020-09-15T13:00:00).
43 time: str or None
44 The specific time. Format is HHMMSS. Set to None to download all files
45 in the given date interval.
46 output : str
47 The output directory for the data. Set to None to make a folder in the
48 current working directory with the same name as *datastream* to place
49 the files in.
50
51 Returns
52 -------
53 files : list
54 Returns list of files retrieved
55
56 Notes
57 -----
58 This programmatic interface allows users to query and automate
59 machine-to-machine downloads of ARM data. This tool uses a REST URL and
60 specific parameters (saveData, query), user ID and access token, a
61 datastream name, a start date, and an end date, and data files matching
62 the criteria will be returned to the user and downloaded.
63
64 By using this web service, users can setup cron jobs and automatically
65 download data from /data/archive into their workspace. This will also
66 eliminate the manual step of following a link in an email to download data.
67 All other data files, which are not on the spinning
68 disk (on HPSS), will have to go through the regular ordering process.
69 More information about this REST API and tools can be found on `ARM Live
70 <https://adc.arm.gov/armlive/#scripts>`_.
71
72 To login/register for an access token click `here
73 <https://adc.arm.gov/armlive/livedata/home>`_.
74
75 Author: Michael Giansiracusa
76 Email: [email protected]
77
78 Web Tools Contact: Ranjeet Devarakonda [email protected]
79
80 Examples
81 --------
82 This code will download the netCDF files from the sgpmetE13.b1 datastream
83 and place them in a directory named sgpmetE13.b1. The data from 14 Jan to
84 20 Jan 2017 will be downloaded. Replace *userName* and *XXXXXXXXXXXXXXXX*
85 with your username and token for ARM Data Discovery. See the Notes for
86 information on how to obtain a username and token.
87
88 .. code-block:: python
89
90 act.discovery.download_data(
91 "userName", "XXXXXXXXXXXXXXXX", "sgpmetE13.b1", "2017-01-14", "2017-01-20"
92 )
93
94 """
95 # default start and end are empty
96 start, end = '', ''
97 # start and end strings for query_url are constructed
98 # if the arguments were provided
99 if startdate:
100 start_datetime = date_parser(startdate, return_datetime=True)
101 start = start_datetime.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'
102 start = f'&start={start}'
103 if enddate:
104 end_datetime = date_parser(enddate, return_datetime=True)
105 # If the start and end date are the same, and a day to the end date
106 if start_datetime == end_datetime:
107 end_datetime += timedelta(hours=23, minutes=59, seconds=59)
108 end = end_datetime.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'
109 end = f'&end={end}'
110 # build the url to query the web service using the arguments provided
111 query_url = (
112 'https://adc.arm.gov/armlive/livedata/query?' + 'user={0}&ds={1}{2}{3}&wt=json'
113 ).format(':'.join([username, token]), datastream, start, end)
114
115 # get url response, read the body of the message,
116 # and decode from bytes type to utf-8 string
117 response_body = urlopen(query_url).read().decode('utf-8')
118 # if the response is an html doc, then there was an error with the user
119 if response_body[1:14] == '!DOCTYPE html':
120 raise ConnectionRefusedError('Error with user. Check username or token.')
121
122 # parse into json object
123 response_body_json = json.loads(response_body)
124
125 # construct output directory
126 if output:
127 # output files to directory specified
128 output_dir = os.path.join(output)
129 else:
130 # if no folder given, add datastream folder
131 # to current working dir to prevent file mix-up
132 output_dir = os.path.join(os.getcwd(), datastream)
133
134 # not testing, response is successful and files were returned
135 if response_body_json is None:
136 print('ARM Data Live Webservice does not appear to be functioning')
137 return []
138
139 num_files = len(response_body_json['files'])
140 file_names = []
141 if response_body_json['status'] == 'success' and num_files > 0:
142 for fname in response_body_json['files']:
143 if time is not None:
144 if time not in fname:
145 continue
146 print(f'[DOWNLOADING] {fname}')
147 # construct link to web service saveData function
148 save_data_url = (
149 'https://adc.arm.gov/armlive/livedata/' + 'saveData?user={0}&file={1}'
150 ).format(':'.join([username, token]), fname)
151 output_file = os.path.join(output_dir, fname)
152 # make directory if it doesn't exist
153 if not os.path.isdir(output_dir):
154 os.makedirs(output_dir)
155 # create file and write bytes to file
156 with open(output_file, 'wb') as open_bytes_file:
157 open_bytes_file.write(urlopen(save_data_url).read())
158 file_names.append(output_file)
159 else:
160 print(
161 'No files returned or url status error.\n' 'Check datastream name, start, and end date.'
162 )
163
164 return file_names
165
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/act/discovery/get_armfiles.py b/act/discovery/get_armfiles.py
--- a/act/discovery/get_armfiles.py
+++ b/act/discovery/get_armfiles.py
@@ -4,6 +4,7 @@
"""
import argparse
+import contextlib
import json
import os
import sys
@@ -143,17 +144,28 @@
if time is not None:
if time not in fname:
continue
- print(f'[DOWNLOADING] {fname}')
# construct link to web service saveData function
save_data_url = (
'https://adc.arm.gov/armlive/livedata/' + 'saveData?user={0}&file={1}'
).format(':'.join([username, token]), fname)
output_file = os.path.join(output_dir, fname)
+
+ # if datastream is not in data archive, return error
+ with contextlib.closing(urlopen(save_data_url).read().decode().lower()) as check:
+ if 'this data file is not available' in check:
+ raise OSError(
+ "The datastream '" + datastream
+ + "' is not available on /data/archive. To download "
+ + "this file, place an order via Data Discovery. "
+ + "https://adc.arm.gov/discovery")
+
# make directory if it doesn't exist
if not os.path.isdir(output_dir):
os.makedirs(output_dir)
+
# create file and write bytes to file
with open(output_file, 'wb') as open_bytes_file:
+ print(f'[DOWNLOADING] {fname}')
open_bytes_file.write(urlopen(save_data_url).read())
file_names.append(output_file)
else:
| {"golden_diff": "diff --git a/act/discovery/get_armfiles.py b/act/discovery/get_armfiles.py\n--- a/act/discovery/get_armfiles.py\n+++ b/act/discovery/get_armfiles.py\n@@ -4,6 +4,7 @@\n \"\"\"\n \n import argparse\n+import contextlib\n import json\n import os\n import sys\n@@ -143,17 +144,28 @@\n if time is not None:\n if time not in fname:\n continue\n- print(f'[DOWNLOADING] {fname}')\n # construct link to web service saveData function\n save_data_url = (\n 'https://adc.arm.gov/armlive/livedata/' + 'saveData?user={0}&file={1}'\n ).format(':'.join([username, token]), fname)\n output_file = os.path.join(output_dir, fname)\n+\n+ # if datastream is not in data archive, return error\n+ with contextlib.closing(urlopen(save_data_url).read().decode().lower()) as check:\n+ if 'this data file is not available' in check:\n+ raise OSError(\n+ \"The datastream '\" + datastream\n+ + \"' is not available on /data/archive. To download \"\n+ + \"this file, place an order via Data Discovery. \"\n+ + \"https://adc.arm.gov/discovery\")\n+\n # make directory if it doesn't exist\n if not os.path.isdir(output_dir):\n os.makedirs(output_dir)\n+\n # create file and write bytes to file\n with open(output_file, 'wb') as open_bytes_file:\n+ print(f'[DOWNLOADING] {fname}')\n open_bytes_file.write(urlopen(save_data_url).read())\n file_names.append(output_file)\n else:\n", "issue": "ARM Data API Returns Text when File not Available \nI tried to download gucceilpblhtM1.a0 but it's not available through the webservice due to it being an a0-level file. Instead the API returned text that was included in the file. We should put in a check to ensure that these cases are caught and files are not produced.\r\n\r\n```\r\nThis data file is not available on /data/archive. To download this file, please an order via Data Discovery. https://adc.arm.gov/discovery\r\n```\n", "before_files": [{"content": "\"\"\"\nScript for downloading data from ARM's Live Data Webservice\n\n\"\"\"\n\nimport argparse\nimport json\nimport os\nimport sys\nfrom datetime import timedelta\n\ntry:\n from urllib.request import urlopen\nexcept ImportError:\n from urllib import urlopen\n\nfrom act.utils import date_parser\n\n\ndef download_data(username, token, datastream, startdate, enddate, time=None, output=None):\n \"\"\"\n This tool will help users utilize the ARM Live Data Webservice to download\n ARM data.\n\n Parameters\n ----------\n username : str\n The username to use for logging into the ADC archive.\n token : str\n The access token for accessing the ADC archive.\n datastream : str\n The name of the datastream to acquire.\n startdate : str\n The start date of the data to acquire. Formats accepted are\n YYYY-MM-DD, DD.MM.YYYY, DD/MM/YYYY, YYYYMMDD, YYYY/MM/DD or\n any of the previous formats with THH:MM:SS added onto the end\n (ex. 2020-09-15T12:00:00).\n enddate : str\n The end date of the data to acquire. Formats accepted are\n YYYY-MM-DD, DD.MM.YYYY, DD/MM/YYYY, YYYYMMDD or YYYY/MM/DD, or\n any of the previous formats with THH:MM:SS added onto the end\n (ex. 2020-09-15T13:00:00).\n time: str or None\n The specific time. Format is HHMMSS. Set to None to download all files\n in the given date interval.\n output : str\n The output directory for the data. 
Set to None to make a folder in the\n current working directory with the same name as *datastream* to place\n the files in.\n\n Returns\n -------\n files : list\n Returns list of files retrieved\n\n Notes\n -----\n This programmatic interface allows users to query and automate\n machine-to-machine downloads of ARM data. This tool uses a REST URL and\n specific parameters (saveData, query), user ID and access token, a\n datastream name, a start date, and an end date, and data files matching\n the criteria will be returned to the user and downloaded.\n\n By using this web service, users can setup cron jobs and automatically\n download data from /data/archive into their workspace. This will also\n eliminate the manual step of following a link in an email to download data.\n All other data files, which are not on the spinning\n disk (on HPSS), will have to go through the regular ordering process.\n More information about this REST API and tools can be found on `ARM Live\n <https://adc.arm.gov/armlive/#scripts>`_.\n\n To login/register for an access token click `here\n <https://adc.arm.gov/armlive/livedata/home>`_.\n\n Author: Michael Giansiracusa\n Email: [email protected]\n\n Web Tools Contact: Ranjeet Devarakonda [email protected]\n\n Examples\n --------\n This code will download the netCDF files from the sgpmetE13.b1 datastream\n and place them in a directory named sgpmetE13.b1. The data from 14 Jan to\n 20 Jan 2017 will be downloaded. Replace *userName* and *XXXXXXXXXXXXXXXX*\n with your username and token for ARM Data Discovery. See the Notes for\n information on how to obtain a username and token.\n\n .. code-block:: python\n\n act.discovery.download_data(\n \"userName\", \"XXXXXXXXXXXXXXXX\", \"sgpmetE13.b1\", \"2017-01-14\", \"2017-01-20\"\n )\n\n \"\"\"\n # default start and end are empty\n start, end = '', ''\n # start and end strings for query_url are constructed\n # if the arguments were provided\n if startdate:\n start_datetime = date_parser(startdate, return_datetime=True)\n start = start_datetime.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'\n start = f'&start={start}'\n if enddate:\n end_datetime = date_parser(enddate, return_datetime=True)\n # If the start and end date are the same, and a day to the end date\n if start_datetime == end_datetime:\n end_datetime += timedelta(hours=23, minutes=59, seconds=59)\n end = end_datetime.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'\n end = f'&end={end}'\n # build the url to query the web service using the arguments provided\n query_url = (\n 'https://adc.arm.gov/armlive/livedata/query?' + 'user={0}&ds={1}{2}{3}&wt=json'\n ).format(':'.join([username, token]), datastream, start, end)\n\n # get url response, read the body of the message,\n # and decode from bytes type to utf-8 string\n response_body = urlopen(query_url).read().decode('utf-8')\n # if the response is an html doc, then there was an error with the user\n if response_body[1:14] == '!DOCTYPE html':\n raise ConnectionRefusedError('Error with user. 
Check username or token.')\n\n # parse into json object\n response_body_json = json.loads(response_body)\n\n # construct output directory\n if output:\n # output files to directory specified\n output_dir = os.path.join(output)\n else:\n # if no folder given, add datastream folder\n # to current working dir to prevent file mix-up\n output_dir = os.path.join(os.getcwd(), datastream)\n\n # not testing, response is successful and files were returned\n if response_body_json is None:\n print('ARM Data Live Webservice does not appear to be functioning')\n return []\n\n num_files = len(response_body_json['files'])\n file_names = []\n if response_body_json['status'] == 'success' and num_files > 0:\n for fname in response_body_json['files']:\n if time is not None:\n if time not in fname:\n continue\n print(f'[DOWNLOADING] {fname}')\n # construct link to web service saveData function\n save_data_url = (\n 'https://adc.arm.gov/armlive/livedata/' + 'saveData?user={0}&file={1}'\n ).format(':'.join([username, token]), fname)\n output_file = os.path.join(output_dir, fname)\n # make directory if it doesn't exist\n if not os.path.isdir(output_dir):\n os.makedirs(output_dir)\n # create file and write bytes to file\n with open(output_file, 'wb') as open_bytes_file:\n open_bytes_file.write(urlopen(save_data_url).read())\n file_names.append(output_file)\n else:\n print(\n 'No files returned or url status error.\\n' 'Check datastream name, start, and end date.'\n )\n\n return file_names\n", "path": "act/discovery/get_armfiles.py"}], "after_files": [{"content": "\"\"\"\nScript for downloading data from ARM's Live Data Webservice\n\n\"\"\"\n\nimport argparse\nimport contextlib\nimport json\nimport os\nimport sys\nfrom datetime import timedelta\n\ntry:\n from urllib.request import urlopen\nexcept ImportError:\n from urllib import urlopen\n\nfrom act.utils import date_parser\n\n\ndef download_data(username, token, datastream, startdate, enddate, time=None, output=None):\n \"\"\"\n This tool will help users utilize the ARM Live Data Webservice to download\n ARM data.\n\n Parameters\n ----------\n username : str\n The username to use for logging into the ADC archive.\n token : str\n The access token for accessing the ADC archive.\n datastream : str\n The name of the datastream to acquire.\n startdate : str\n The start date of the data to acquire. Formats accepted are\n YYYY-MM-DD, DD.MM.YYYY, DD/MM/YYYY, YYYYMMDD, YYYY/MM/DD or\n any of the previous formats with THH:MM:SS added onto the end\n (ex. 2020-09-15T12:00:00).\n enddate : str\n The end date of the data to acquire. Formats accepted are\n YYYY-MM-DD, DD.MM.YYYY, DD/MM/YYYY, YYYYMMDD or YYYY/MM/DD, or\n any of the previous formats with THH:MM:SS added onto the end\n (ex. 2020-09-15T13:00:00).\n time: str or None\n The specific time. Format is HHMMSS. Set to None to download all files\n in the given date interval.\n output : str\n The output directory for the data. Set to None to make a folder in the\n current working directory with the same name as *datastream* to place\n the files in.\n\n Returns\n -------\n files : list\n Returns list of files retrieved\n\n Notes\n -----\n This programmatic interface allows users to query and automate\n machine-to-machine downloads of ARM data. 
This tool uses a REST URL and\n specific parameters (saveData, query), user ID and access token, a\n datastream name, a start date, and an end date, and data files matching\n the criteria will be returned to the user and downloaded.\n\n By using this web service, users can setup cron jobs and automatically\n download data from /data/archive into their workspace. This will also\n eliminate the manual step of following a link in an email to download data.\n All other data files, which are not on the spinning\n disk (on HPSS), will have to go through the regular ordering process.\n More information about this REST API and tools can be found on `ARM Live\n <https://adc.arm.gov/armlive/#scripts>`_.\n\n To login/register for an access token click `here\n <https://adc.arm.gov/armlive/livedata/home>`_.\n\n Author: Michael Giansiracusa\n Email: [email protected]\n\n Web Tools Contact: Ranjeet Devarakonda [email protected]\n\n Examples\n --------\n This code will download the netCDF files from the sgpmetE13.b1 datastream\n and place them in a directory named sgpmetE13.b1. The data from 14 Jan to\n 20 Jan 2017 will be downloaded. Replace *userName* and *XXXXXXXXXXXXXXXX*\n with your username and token for ARM Data Discovery. See the Notes for\n information on how to obtain a username and token.\n\n .. code-block:: python\n\n act.discovery.download_data(\n \"userName\", \"XXXXXXXXXXXXXXXX\", \"sgpmetE13.b1\", \"2017-01-14\", \"2017-01-20\"\n )\n\n \"\"\"\n # default start and end are empty\n start, end = '', ''\n # start and end strings for query_url are constructed\n # if the arguments were provided\n if startdate:\n start_datetime = date_parser(startdate, return_datetime=True)\n start = start_datetime.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'\n start = f'&start={start}'\n if enddate:\n end_datetime = date_parser(enddate, return_datetime=True)\n # If the start and end date are the same, and a day to the end date\n if start_datetime == end_datetime:\n end_datetime += timedelta(hours=23, minutes=59, seconds=59)\n end = end_datetime.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'\n end = f'&end={end}'\n # build the url to query the web service using the arguments provided\n query_url = (\n 'https://adc.arm.gov/armlive/livedata/query?' + 'user={0}&ds={1}{2}{3}&wt=json'\n ).format(':'.join([username, token]), datastream, start, end)\n\n # get url response, read the body of the message,\n # and decode from bytes type to utf-8 string\n response_body = urlopen(query_url).read().decode('utf-8')\n # if the response is an html doc, then there was an error with the user\n if response_body[1:14] == '!DOCTYPE html':\n raise ConnectionRefusedError('Error with user. 
Check username or token.')\n\n # parse into json object\n response_body_json = json.loads(response_body)\n\n # construct output directory\n if output:\n # output files to directory specified\n output_dir = os.path.join(output)\n else:\n # if no folder given, add datastream folder\n # to current working dir to prevent file mix-up\n output_dir = os.path.join(os.getcwd(), datastream)\n\n # not testing, response is successful and files were returned\n if response_body_json is None:\n print('ARM Data Live Webservice does not appear to be functioning')\n return []\n\n num_files = len(response_body_json['files'])\n file_names = []\n if response_body_json['status'] == 'success' and num_files > 0:\n for fname in response_body_json['files']:\n if time is not None:\n if time not in fname:\n continue\n # construct link to web service saveData function\n save_data_url = (\n 'https://adc.arm.gov/armlive/livedata/' + 'saveData?user={0}&file={1}'\n ).format(':'.join([username, token]), fname)\n output_file = os.path.join(output_dir, fname)\n\n # if datastream is not in data archive, return error\n with contextlib.closing(urlopen(save_data_url).read().decode().lower()) as check:\n if 'this data file is not available' in check:\n raise OSError(\n \"The datastream '\" + datastream\n + \"' is not available on /data/archive. To download \"\n + \"this file, place an order via Data Discovery. \"\n + \"https://adc.arm.gov/discovery\")\n\n # make directory if it doesn't exist\n if not os.path.isdir(output_dir):\n os.makedirs(output_dir)\n\n # create file and write bytes to file\n with open(output_file, 'wb') as open_bytes_file:\n print(f'[DOWNLOADING] {fname}')\n open_bytes_file.write(urlopen(save_data_url).read())\n file_names.append(output_file)\n else:\n print(\n 'No files returned or url status error.\\n' 'Check datastream name, start, and end date.'\n )\n\n return file_names\n", "path": "act/discovery/get_armfiles.py"}]} | 2,357 | 381 |
gh_patches_debug_59569 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-18343 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Gfycat cajax json endpoint is gone
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.11.23*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.11.23**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
$ youtube-dl https://gfycat.com/belovedsparseamericanbobtail -v
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['https://gfycat.com/belovedsparseamericanbobtail', '-v']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.11.23
[debug] Python version 3.6.5 (CPython) - Darwin-16.7.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 3.2.2, ffprobe 3.2.2
[debug] Proxy map: {}
[Gfycat] belovedsparseamericanbobtail: Downloading video info
ERROR: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: 'Not Found'>); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 605, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2211, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 532, in open
response = meth(req, response)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 642, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 564, in error
result = self._call_chain(*args)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 756, in http_error_302
return self.parent.open(new, timeout=req.timeout)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 532, in open
response = meth(req, response)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 642, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 570, in error
return self._call_chain(*args)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 650, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
```
---
### Description of your *issue*, suggested solution and other information
Gfycat downloads no longer work because Gfycat removed the API youtube-dl uses:
> Yes we announced deprecation of this endpoint 1.5 years ago. Are you using it for an app still?
>
> This is the link to our current API. https://developers.gfycat.com/api/#introduction
https://www.reddit.com/r/gfycat/comments/a17ewc/embeded_gfycats_returning_failed_to_load_resource/eanbmrh/
So if you wanted to get https://gfycat.com/belovedsparseamericanbobtail then `youtube-dl` would look for the meta at https://gfycat.com/cajax/get/belovedsparseamericanbobtail (you can try the link, it'll 404).
https://github.com/rg3/youtube-dl/blob/d9df8f120b325766181fb474a8c534e51df78f17/youtube_dl/extractor/gfycat.py#L55-L57
It's not obvious how to me how to fix this because the new api uses an oauth2 bearer token, and there isn't a key store AFAIK built into youtube-dl. The closest thing looks like the `.netrc` file support, or possibly a custom [config file entry](https://github.com/rg3/youtube-dl#configuration)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/gfycat.py`
Content:
```
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 from .common import InfoExtractor
5 from ..utils import (
6 int_or_none,
7 float_or_none,
8 qualities,
9 ExtractorError,
10 )
11
12
13 class GfycatIE(InfoExtractor):
14 _VALID_URL = r'https?://(?:www\.)?gfycat\.com/(?:ifr/|gifs/detail/)?(?P<id>[^/?#]+)'
15 _TESTS = [{
16 'url': 'http://gfycat.com/DeadlyDecisiveGermanpinscher',
17 'info_dict': {
18 'id': 'DeadlyDecisiveGermanpinscher',
19 'ext': 'mp4',
20 'title': 'Ghost in the Shell',
21 'timestamp': 1410656006,
22 'upload_date': '20140914',
23 'uploader': 'anonymous',
24 'duration': 10.4,
25 'view_count': int,
26 'like_count': int,
27 'dislike_count': int,
28 'categories': list,
29 'age_limit': 0,
30 }
31 }, {
32 'url': 'http://gfycat.com/ifr/JauntyTimelyAmazontreeboa',
33 'info_dict': {
34 'id': 'JauntyTimelyAmazontreeboa',
35 'ext': 'mp4',
36 'title': 'JauntyTimelyAmazontreeboa',
37 'timestamp': 1411720126,
38 'upload_date': '20140926',
39 'uploader': 'anonymous',
40 'duration': 3.52,
41 'view_count': int,
42 'like_count': int,
43 'dislike_count': int,
44 'categories': list,
45 'age_limit': 0,
46 }
47 }, {
48 'url': 'https://gfycat.com/gifs/detail/UnconsciousLankyIvorygull',
49 'only_matching': True
50 }]
51
52 def _real_extract(self, url):
53 video_id = self._match_id(url)
54
55 gfy = self._download_json(
56 'http://gfycat.com/cajax/get/%s' % video_id,
57 video_id, 'Downloading video info')
58 if 'error' in gfy:
59 raise ExtractorError('Gfycat said: ' + gfy['error'], expected=True)
60 gfy = gfy['gfyItem']
61
62 title = gfy.get('title') or gfy['gfyName']
63 description = gfy.get('description')
64 timestamp = int_or_none(gfy.get('createDate'))
65 uploader = gfy.get('userName')
66 view_count = int_or_none(gfy.get('views'))
67 like_count = int_or_none(gfy.get('likes'))
68 dislike_count = int_or_none(gfy.get('dislikes'))
69 age_limit = 18 if gfy.get('nsfw') == '1' else 0
70
71 width = int_or_none(gfy.get('width'))
72 height = int_or_none(gfy.get('height'))
73 fps = int_or_none(gfy.get('frameRate'))
74 num_frames = int_or_none(gfy.get('numFrames'))
75
76 duration = float_or_none(num_frames, fps) if num_frames and fps else None
77
78 categories = gfy.get('tags') or gfy.get('extraLemmas') or []
79
80 FORMATS = ('gif', 'webm', 'mp4')
81 quality = qualities(FORMATS)
82
83 formats = []
84 for format_id in FORMATS:
85 video_url = gfy.get('%sUrl' % format_id)
86 if not video_url:
87 continue
88 filesize = int_or_none(gfy.get('%sSize' % format_id))
89 formats.append({
90 'url': video_url,
91 'format_id': format_id,
92 'width': width,
93 'height': height,
94 'fps': fps,
95 'filesize': filesize,
96 'quality': quality(format_id),
97 })
98 self._sort_formats(formats)
99
100 return {
101 'id': video_id,
102 'title': title,
103 'description': description,
104 'timestamp': timestamp,
105 'uploader': uploader,
106 'duration': duration,
107 'view_count': view_count,
108 'like_count': like_count,
109 'dislike_count': dislike_count,
110 'categories': categories,
111 'age_limit': age_limit,
112 'formats': formats,
113 }
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/youtube_dl/extractor/gfycat.py b/youtube_dl/extractor/gfycat.py
--- a/youtube_dl/extractor/gfycat.py
+++ b/youtube_dl/extractor/gfycat.py
@@ -53,7 +53,7 @@
video_id = self._match_id(url)
gfy = self._download_json(
- 'http://gfycat.com/cajax/get/%s' % video_id,
+ 'https://api.gfycat.com/v1/gfycats/%s' % video_id,
video_id, 'Downloading video info')
if 'error' in gfy:
raise ExtractorError('Gfycat said: ' + gfy['error'], expected=True)
| {"golden_diff": "diff --git a/youtube_dl/extractor/gfycat.py b/youtube_dl/extractor/gfycat.py\n--- a/youtube_dl/extractor/gfycat.py\n+++ b/youtube_dl/extractor/gfycat.py\n@@ -53,7 +53,7 @@\n video_id = self._match_id(url)\n \n gfy = self._download_json(\n- 'http://gfycat.com/cajax/get/%s' % video_id,\n+ 'https://api.gfycat.com/v1/gfycats/%s' % video_id,\n video_id, 'Downloading video info')\n if 'error' in gfy:\n raise ExtractorError('Gfycat said: ' + gfy['error'], expected=True)\n", "issue": "Gfycat cajax json endpoint is gone\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.11.23*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.11.23**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [x] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:\r\n\r\nAdd the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):\r\n\r\n```\r\n$ youtube-dl https://gfycat.com/belovedsparseamericanbobtail -v\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: ['https://gfycat.com/belovedsparseamericanbobtail', '-v']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2018.11.23\r\n[debug] Python version 3.6.5 (CPython) - Darwin-16.7.0-x86_64-i386-64bit\r\n[debug] exe versions: ffmpeg 3.2.2, ffprobe 3.2.2\r\n[debug] Proxy map: {}\r\n[Gfycat] belovedsparseamericanbobtail: Downloading video info\r\nERROR: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: 'Not Found'>); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py\", line 605, in _request_webpage\r\n return self._downloader.urlopen(url_or_request)\r\n File \"/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 2211, in urlopen\r\n return self._opener.open(req, timeout=self._socket_timeout)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py\", line 532, in open\r\n response = meth(req, response)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py\", line 642, in http_response\r\n 'http', request, response, code, msg, hdrs)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py\", line 564, in error\r\n result = self._call_chain(*args)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py\", line 504, in _call_chain\r\n result = func(*args)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py\", line 756, in http_error_302\r\n return self.parent.open(new, timeout=req.timeout)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py\", line 532, in open\r\n response = meth(req, response)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py\", line 642, in http_response\r\n 'http', request, response, code, msg, hdrs)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py\", line 570, in error\r\n return self._call_chain(*args)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py\", line 504, in _call_chain\r\n result = func(*args)\r\n File \"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py\", line 650, in http_error_default\r\n raise HTTPError(req.full_url, code, msg, hdrs, fp)\r\n\r\n```\r\n\r\n---\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nGfycat downloads no longer work because Gfycat removed the API youtube-dl uses:\r\n\r\n> Yes we announced deprecation of this endpoint 1.5 years ago. Are you using it for an app still?\r\n>\r\n> This is the link to our current API. https://developers.gfycat.com/api/#introduction\r\nhttps://www.reddit.com/r/gfycat/comments/a17ewc/embeded_gfycats_returning_failed_to_load_resource/eanbmrh/\r\n\r\nSo if you wanted to get https://gfycat.com/belovedsparseamericanbobtail then `youtube-dl` would look for the meta at https://gfycat.com/cajax/get/belovedsparseamericanbobtail (you can try the link, it'll 404).\r\nhttps://github.com/rg3/youtube-dl/blob/d9df8f120b325766181fb474a8c534e51df78f17/youtube_dl/extractor/gfycat.py#L55-L57\r\n\r\nIt's not obvious how to me how to fix this because the new api uses an oauth2 bearer token, and there isn't a key store AFAIK built into youtube-dl. 
The closest thing looks like the `.netrc` file support, or possibly a custom [config file entry](https://github.com/rg3/youtube-dl#configuration)\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n int_or_none,\n float_or_none,\n qualities,\n ExtractorError,\n)\n\n\nclass GfycatIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?gfycat\\.com/(?:ifr/|gifs/detail/)?(?P<id>[^/?#]+)'\n _TESTS = [{\n 'url': 'http://gfycat.com/DeadlyDecisiveGermanpinscher',\n 'info_dict': {\n 'id': 'DeadlyDecisiveGermanpinscher',\n 'ext': 'mp4',\n 'title': 'Ghost in the Shell',\n 'timestamp': 1410656006,\n 'upload_date': '20140914',\n 'uploader': 'anonymous',\n 'duration': 10.4,\n 'view_count': int,\n 'like_count': int,\n 'dislike_count': int,\n 'categories': list,\n 'age_limit': 0,\n }\n }, {\n 'url': 'http://gfycat.com/ifr/JauntyTimelyAmazontreeboa',\n 'info_dict': {\n 'id': 'JauntyTimelyAmazontreeboa',\n 'ext': 'mp4',\n 'title': 'JauntyTimelyAmazontreeboa',\n 'timestamp': 1411720126,\n 'upload_date': '20140926',\n 'uploader': 'anonymous',\n 'duration': 3.52,\n 'view_count': int,\n 'like_count': int,\n 'dislike_count': int,\n 'categories': list,\n 'age_limit': 0,\n }\n }, {\n 'url': 'https://gfycat.com/gifs/detail/UnconsciousLankyIvorygull',\n 'only_matching': True\n }]\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n gfy = self._download_json(\n 'http://gfycat.com/cajax/get/%s' % video_id,\n video_id, 'Downloading video info')\n if 'error' in gfy:\n raise ExtractorError('Gfycat said: ' + gfy['error'], expected=True)\n gfy = gfy['gfyItem']\n\n title = gfy.get('title') or gfy['gfyName']\n description = gfy.get('description')\n timestamp = int_or_none(gfy.get('createDate'))\n uploader = gfy.get('userName')\n view_count = int_or_none(gfy.get('views'))\n like_count = int_or_none(gfy.get('likes'))\n dislike_count = int_or_none(gfy.get('dislikes'))\n age_limit = 18 if gfy.get('nsfw') == '1' else 0\n\n width = int_or_none(gfy.get('width'))\n height = int_or_none(gfy.get('height'))\n fps = int_or_none(gfy.get('frameRate'))\n num_frames = int_or_none(gfy.get('numFrames'))\n\n duration = float_or_none(num_frames, fps) if num_frames and fps else None\n\n categories = gfy.get('tags') or gfy.get('extraLemmas') or []\n\n FORMATS = ('gif', 'webm', 'mp4')\n quality = qualities(FORMATS)\n\n formats = []\n for format_id in FORMATS:\n video_url = gfy.get('%sUrl' % format_id)\n if not video_url:\n continue\n filesize = int_or_none(gfy.get('%sSize' % format_id))\n formats.append({\n 'url': video_url,\n 'format_id': format_id,\n 'width': width,\n 'height': height,\n 'fps': fps,\n 'filesize': filesize,\n 'quality': quality(format_id),\n })\n self._sort_formats(formats)\n\n return {\n 'id': video_id,\n 'title': title,\n 'description': description,\n 'timestamp': timestamp,\n 'uploader': uploader,\n 'duration': duration,\n 'view_count': view_count,\n 'like_count': like_count,\n 'dislike_count': dislike_count,\n 'categories': categories,\n 'age_limit': age_limit,\n 'formats': formats,\n }\n", "path": "youtube_dl/extractor/gfycat.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n int_or_none,\n float_or_none,\n qualities,\n ExtractorError,\n)\n\n\nclass GfycatIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?gfycat\\.com/(?:ifr/|gifs/detail/)?(?P<id>[^/?#]+)'\n _TESTS = [{\n 'url': 
'http://gfycat.com/DeadlyDecisiveGermanpinscher',\n 'info_dict': {\n 'id': 'DeadlyDecisiveGermanpinscher',\n 'ext': 'mp4',\n 'title': 'Ghost in the Shell',\n 'timestamp': 1410656006,\n 'upload_date': '20140914',\n 'uploader': 'anonymous',\n 'duration': 10.4,\n 'view_count': int,\n 'like_count': int,\n 'dislike_count': int,\n 'categories': list,\n 'age_limit': 0,\n }\n }, {\n 'url': 'http://gfycat.com/ifr/JauntyTimelyAmazontreeboa',\n 'info_dict': {\n 'id': 'JauntyTimelyAmazontreeboa',\n 'ext': 'mp4',\n 'title': 'JauntyTimelyAmazontreeboa',\n 'timestamp': 1411720126,\n 'upload_date': '20140926',\n 'uploader': 'anonymous',\n 'duration': 3.52,\n 'view_count': int,\n 'like_count': int,\n 'dislike_count': int,\n 'categories': list,\n 'age_limit': 0,\n }\n }, {\n 'url': 'https://gfycat.com/gifs/detail/UnconsciousLankyIvorygull',\n 'only_matching': True\n }]\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n gfy = self._download_json(\n 'https://api.gfycat.com/v1/gfycats/%s' % video_id,\n video_id, 'Downloading video info')\n if 'error' in gfy:\n raise ExtractorError('Gfycat said: ' + gfy['error'], expected=True)\n gfy = gfy['gfyItem']\n\n title = gfy.get('title') or gfy['gfyName']\n description = gfy.get('description')\n timestamp = int_or_none(gfy.get('createDate'))\n uploader = gfy.get('userName')\n view_count = int_or_none(gfy.get('views'))\n like_count = int_or_none(gfy.get('likes'))\n dislike_count = int_or_none(gfy.get('dislikes'))\n age_limit = 18 if gfy.get('nsfw') == '1' else 0\n\n width = int_or_none(gfy.get('width'))\n height = int_or_none(gfy.get('height'))\n fps = int_or_none(gfy.get('frameRate'))\n num_frames = int_or_none(gfy.get('numFrames'))\n\n duration = float_or_none(num_frames, fps) if num_frames and fps else None\n\n categories = gfy.get('tags') or gfy.get('extraLemmas') or []\n\n FORMATS = ('gif', 'webm', 'mp4')\n quality = qualities(FORMATS)\n\n formats = []\n for format_id in FORMATS:\n video_url = gfy.get('%sUrl' % format_id)\n if not video_url:\n continue\n filesize = int_or_none(gfy.get('%sSize' % format_id))\n formats.append({\n 'url': video_url,\n 'format_id': format_id,\n 'width': width,\n 'height': height,\n 'fps': fps,\n 'filesize': filesize,\n 'quality': quality(format_id),\n })\n self._sort_formats(formats)\n\n return {\n 'id': video_id,\n 'title': title,\n 'description': description,\n 'timestamp': timestamp,\n 'uploader': uploader,\n 'duration': duration,\n 'view_count': view_count,\n 'like_count': like_count,\n 'dislike_count': dislike_count,\n 'categories': categories,\n 'age_limit': age_limit,\n 'formats': formats,\n }\n", "path": "youtube_dl/extractor/gfycat.py"}]} | 3,185 | 170 |
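The patch above simply swaps the metadata URL and adds no token handling. As a rough standalone sketch — the `fetch_gfycat_item` helper is hypothetical, and it assumes the `https://api.gfycat.com/v1/gfycats/<id>` response carries the same top-level `gfyItem` object the extractor parses — the lookup can be reproduced outside youtube-dl like this:

```python
# Minimal sketch: fetch Gfycat metadata from the endpoint used in the patch.
# Assumes the response carries a top-level "gfyItem" object, as the extractor expects.
import json
from urllib.request import urlopen


def fetch_gfycat_item(video_id):
    url = 'https://api.gfycat.com/v1/gfycats/%s' % video_id
    with urlopen(url) as response:
        data = json.loads(response.read().decode('utf-8'))
    return data['gfyItem']


if __name__ == '__main__':
    item = fetch_gfycat_item('belovedsparseamericanbobtail')
    print(item.get('title') or item.get('gfyName'), item.get('mp4Url'))
```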
gh_patches_debug_37653 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-2149 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Update timerbot example
https://github.com/python-telegram-bot/python-telegram-bot/blob/3b4559dd95d14a4c9a9ca54237b5fe547386b666/examples/timerbot.py#L56-L58
If the job has already run, those lines give an error after v13.0, as APS removes the job after it has run. But as jobs can't be persisted natively, I think we should just restructure the example to give the jobs the `chat_id` as their name and use `get_jobs_by_name` to remove them if necessary.
Edit: And we should definitely remove the `pass_*` kwargs; those are deprecated since v12 …
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/timerbot.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # This program is dedicated to the public domain under the CC0 license.
4
5 """
6 Simple Bot to send timed Telegram messages.
7
8 This Bot uses the Updater class to handle the bot and the JobQueue to send
9 timed messages.
10
11 First, a few handler functions are defined. Then, those functions are passed to
12 the Dispatcher and registered at their respective places.
13 Then, the bot is started and runs until we press Ctrl-C on the command line.
14
15 Usage:
16 Basic Alarm Bot example, sends a message after a set time.
17 Press Ctrl-C on the command line or send a signal to the process to stop the
18 bot.
19 """
20
21 import logging
22
23 from telegram.ext import Updater, CommandHandler
24
25 # Enable logging
26 logging.basicConfig(
27 format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.INFO
28 )
29
30 logger = logging.getLogger(__name__)
31
32
33 # Define a few command handlers. These usually take the two arguments update and
34 # context. Error handlers also receive the raised TelegramError object in error.
35 def start(update, context):
36 update.message.reply_text('Hi! Use /set <seconds> to set a timer')
37
38
39 def alarm(context):
40 """Send the alarm message."""
41 job = context.job
42 context.bot.send_message(job.context, text='Beep!')
43
44
45 def set_timer(update, context):
46 """Add a job to the queue."""
47 chat_id = update.message.chat_id
48 try:
49 # args[0] should contain the time for the timer in seconds
50 due = int(context.args[0])
51 if due < 0:
52 update.message.reply_text('Sorry we can not go back to future!')
53 return
54
55 # Add job to queue and stop current one if there is a timer already
56 if 'job' in context.chat_data:
57 old_job = context.chat_data['job']
58 old_job.schedule_removal()
59 new_job = context.job_queue.run_once(alarm, due, context=chat_id)
60 context.chat_data['job'] = new_job
61
62 update.message.reply_text('Timer successfully set!')
63
64 except (IndexError, ValueError):
65 update.message.reply_text('Usage: /set <seconds>')
66
67
68 def unset(update, context):
69 """Remove the job if the user changed their mind."""
70 if 'job' not in context.chat_data:
71 update.message.reply_text('You have no active timer')
72 return
73
74 job = context.chat_data['job']
75 job.schedule_removal()
76 del context.chat_data['job']
77
78 update.message.reply_text('Timer successfully unset!')
79
80
81 def main():
82 """Run bot."""
83 # Create the Updater and pass it your bot's token.
84 # Make sure to set use_context=True to use the new context based callbacks
85 # Post version 12 this will no longer be necessary
86 updater = Updater("TOKEN", use_context=True)
87
88 # Get the dispatcher to register handlers
89 dp = updater.dispatcher
90
91 # on different commands - answer in Telegram
92 dp.add_handler(CommandHandler("start", start))
93 dp.add_handler(CommandHandler("help", start))
94 dp.add_handler(
95 CommandHandler("set", set_timer, pass_args=True, pass_job_queue=True, pass_chat_data=True)
96 )
97 dp.add_handler(CommandHandler("unset", unset, pass_chat_data=True))
98
99 # Start the Bot
100 updater.start_polling()
101
102 # Block until you press Ctrl-C or the process receives SIGINT, SIGTERM or
103 # SIGABRT. This should be used most of the time, since start_polling() is
104 # non-blocking and will stop the bot gracefully.
105 updater.idle()
106
107
108 if __name__ == '__main__':
109 main()
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/timerbot.py b/examples/timerbot.py
--- a/examples/timerbot.py
+++ b/examples/timerbot.py
@@ -42,6 +42,16 @@
context.bot.send_message(job.context, text='Beep!')
+def remove_job_if_exists(name, context):
+ """Remove job with given name. Returns whether job was removed."""
+ current_jobs = context.job_queue.get_jobs_by_name(name)
+ if not current_jobs:
+ return False
+ for job in current_jobs:
+ job.schedule_removal()
+ return True
+
+
def set_timer(update, context):
"""Add a job to the queue."""
chat_id = update.message.chat_id
@@ -52,14 +62,13 @@
update.message.reply_text('Sorry we can not go back to future!')
return
- # Add job to queue and stop current one if there is a timer already
- if 'job' in context.chat_data:
- old_job = context.chat_data['job']
- old_job.schedule_removal()
- new_job = context.job_queue.run_once(alarm, due, context=chat_id)
- context.chat_data['job'] = new_job
+ job_removed = remove_job_if_exists(str(chat_id), context)
+ context.job_queue.run_once(alarm, due, context=chat_id, name=str(chat_id))
- update.message.reply_text('Timer successfully set!')
+ text = 'Timer successfully set!'
+ if job_removed:
+ text += ' Old one was removed.'
+ update.message.reply_text(text)
except (IndexError, ValueError):
update.message.reply_text('Usage: /set <seconds>')
@@ -67,15 +76,10 @@
def unset(update, context):
"""Remove the job if the user changed their mind."""
- if 'job' not in context.chat_data:
- update.message.reply_text('You have no active timer')
- return
-
- job = context.chat_data['job']
- job.schedule_removal()
- del context.chat_data['job']
-
- update.message.reply_text('Timer successfully unset!')
+ chat_id = update.message.chat_id
+ job_removed = remove_job_if_exists(str(chat_id), context)
+ text = 'Timer successfully cancelled!' if job_removed else 'You have no active timer.'
+ update.message.reply_text(text)
def main():
@@ -91,10 +95,8 @@
# on different commands - answer in Telegram
dp.add_handler(CommandHandler("start", start))
dp.add_handler(CommandHandler("help", start))
- dp.add_handler(
- CommandHandler("set", set_timer, pass_args=True, pass_job_queue=True, pass_chat_data=True)
- )
- dp.add_handler(CommandHandler("unset", unset, pass_chat_data=True))
+ dp.add_handler(CommandHandler("set", set_timer))
+ dp.add_handler(CommandHandler("unset", unset))
# Start the Bot
updater.start_polling()
| {"golden_diff": "diff --git a/examples/timerbot.py b/examples/timerbot.py\n--- a/examples/timerbot.py\n+++ b/examples/timerbot.py\n@@ -42,6 +42,16 @@\n context.bot.send_message(job.context, text='Beep!')\n \n \n+def remove_job_if_exists(name, context):\n+ \"\"\"Remove job with given name. Returns whether job was removed.\"\"\"\n+ current_jobs = context.job_queue.get_jobs_by_name(name)\n+ if not current_jobs:\n+ return False\n+ for job in current_jobs:\n+ job.schedule_removal()\n+ return True\n+\n+\n def set_timer(update, context):\n \"\"\"Add a job to the queue.\"\"\"\n chat_id = update.message.chat_id\n@@ -52,14 +62,13 @@\n update.message.reply_text('Sorry we can not go back to future!')\n return\n \n- # Add job to queue and stop current one if there is a timer already\n- if 'job' in context.chat_data:\n- old_job = context.chat_data['job']\n- old_job.schedule_removal()\n- new_job = context.job_queue.run_once(alarm, due, context=chat_id)\n- context.chat_data['job'] = new_job\n+ job_removed = remove_job_if_exists(str(chat_id), context)\n+ context.job_queue.run_once(alarm, due, context=chat_id, name=str(chat_id))\n \n- update.message.reply_text('Timer successfully set!')\n+ text = 'Timer successfully set!'\n+ if job_removed:\n+ text += ' Old one was removed.'\n+ update.message.reply_text(text)\n \n except (IndexError, ValueError):\n update.message.reply_text('Usage: /set <seconds>')\n@@ -67,15 +76,10 @@\n \n def unset(update, context):\n \"\"\"Remove the job if the user changed their mind.\"\"\"\n- if 'job' not in context.chat_data:\n- update.message.reply_text('You have no active timer')\n- return\n-\n- job = context.chat_data['job']\n- job.schedule_removal()\n- del context.chat_data['job']\n-\n- update.message.reply_text('Timer successfully unset!')\n+ chat_id = update.message.chat_id\n+ job_removed = remove_job_if_exists(str(chat_id), context)\n+ text = 'Timer successfully cancelled!' if job_removed else 'You have no active timer.'\n+ update.message.reply_text(text)\n \n \n def main():\n@@ -91,10 +95,8 @@\n # on different commands - answer in Telegram\n dp.add_handler(CommandHandler(\"start\", start))\n dp.add_handler(CommandHandler(\"help\", start))\n- dp.add_handler(\n- CommandHandler(\"set\", set_timer, pass_args=True, pass_job_queue=True, pass_chat_data=True)\n- )\n- dp.add_handler(CommandHandler(\"unset\", unset, pass_chat_data=True))\n+ dp.add_handler(CommandHandler(\"set\", set_timer))\n+ dp.add_handler(CommandHandler(\"unset\", unset))\n \n # Start the Bot\n updater.start_polling()\n", "issue": "[BUG] Update timerbot example\nhttps://github.com/python-telegram-bot/python-telegram-bot/blob/3b4559dd95d14a4c9a9ca54237b5fe547386b666/examples/timerbot.py#L56-L58\r\n\r\nIf the job has already run, those lines give an error after v13.0, as APS remvos the job after it has run. But as jobs can't be persisted natively, I think we should just restructure the example to give the jobs the `chat_id` as name and use `get_jobs_by_name` to remove them if necessary\r\n\r\nEdit: And we should definetely remove the `pass_*` kwargs, those are deprecated since v12 \u2026\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# This program is dedicated to the public domain under the CC0 license.\n\n\"\"\"\nSimple Bot to send timed Telegram messages.\n\nThis Bot uses the Updater class to handle the bot and the JobQueue to send\ntimed messages.\n\nFirst, a few handler functions are defined. 
Then, those functions are passed to\nthe Dispatcher and registered at their respective places.\nThen, the bot is started and runs until we press Ctrl-C on the command line.\n\nUsage:\nBasic Alarm Bot example, sends a message after a set time.\nPress Ctrl-C on the command line or send a signal to the process to stop the\nbot.\n\"\"\"\n\nimport logging\n\nfrom telegram.ext import Updater, CommandHandler\n\n# Enable logging\nlogging.basicConfig(\n format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.INFO\n)\n\nlogger = logging.getLogger(__name__)\n\n\n# Define a few command handlers. These usually take the two arguments update and\n# context. Error handlers also receive the raised TelegramError object in error.\ndef start(update, context):\n update.message.reply_text('Hi! Use /set <seconds> to set a timer')\n\n\ndef alarm(context):\n \"\"\"Send the alarm message.\"\"\"\n job = context.job\n context.bot.send_message(job.context, text='Beep!')\n\n\ndef set_timer(update, context):\n \"\"\"Add a job to the queue.\"\"\"\n chat_id = update.message.chat_id\n try:\n # args[0] should contain the time for the timer in seconds\n due = int(context.args[0])\n if due < 0:\n update.message.reply_text('Sorry we can not go back to future!')\n return\n\n # Add job to queue and stop current one if there is a timer already\n if 'job' in context.chat_data:\n old_job = context.chat_data['job']\n old_job.schedule_removal()\n new_job = context.job_queue.run_once(alarm, due, context=chat_id)\n context.chat_data['job'] = new_job\n\n update.message.reply_text('Timer successfully set!')\n\n except (IndexError, ValueError):\n update.message.reply_text('Usage: /set <seconds>')\n\n\ndef unset(update, context):\n \"\"\"Remove the job if the user changed their mind.\"\"\"\n if 'job' not in context.chat_data:\n update.message.reply_text('You have no active timer')\n return\n\n job = context.chat_data['job']\n job.schedule_removal()\n del context.chat_data['job']\n\n update.message.reply_text('Timer successfully unset!')\n\n\ndef main():\n \"\"\"Run bot.\"\"\"\n # Create the Updater and pass it your bot's token.\n # Make sure to set use_context=True to use the new context based callbacks\n # Post version 12 this will no longer be necessary\n updater = Updater(\"TOKEN\", use_context=True)\n\n # Get the dispatcher to register handlers\n dp = updater.dispatcher\n\n # on different commands - answer in Telegram\n dp.add_handler(CommandHandler(\"start\", start))\n dp.add_handler(CommandHandler(\"help\", start))\n dp.add_handler(\n CommandHandler(\"set\", set_timer, pass_args=True, pass_job_queue=True, pass_chat_data=True)\n )\n dp.add_handler(CommandHandler(\"unset\", unset, pass_chat_data=True))\n\n # Start the Bot\n updater.start_polling()\n\n # Block until you press Ctrl-C or the process receives SIGINT, SIGTERM or\n # SIGABRT. This should be used most of the time, since start_polling() is\n # non-blocking and will stop the bot gracefully.\n updater.idle()\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/timerbot.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# This program is dedicated to the public domain under the CC0 license.\n\n\"\"\"\nSimple Bot to send timed Telegram messages.\n\nThis Bot uses the Updater class to handle the bot and the JobQueue to send\ntimed messages.\n\nFirst, a few handler functions are defined. 
Then, those functions are passed to\nthe Dispatcher and registered at their respective places.\nThen, the bot is started and runs until we press Ctrl-C on the command line.\n\nUsage:\nBasic Alarm Bot example, sends a message after a set time.\nPress Ctrl-C on the command line or send a signal to the process to stop the\nbot.\n\"\"\"\n\nimport logging\n\nfrom telegram.ext import Updater, CommandHandler\n\n# Enable logging\nlogging.basicConfig(\n format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.INFO\n)\n\nlogger = logging.getLogger(__name__)\n\n\n# Define a few command handlers. These usually take the two arguments update and\n# context. Error handlers also receive the raised TelegramError object in error.\ndef start(update, context):\n update.message.reply_text('Hi! Use /set <seconds> to set a timer')\n\n\ndef alarm(context):\n \"\"\"Send the alarm message.\"\"\"\n job = context.job\n context.bot.send_message(job.context, text='Beep!')\n\n\ndef remove_job_if_exists(name, context):\n \"\"\"Remove job with given name. Returns whether job was removed.\"\"\"\n current_jobs = context.job_queue.get_jobs_by_name(name)\n if not current_jobs:\n return False\n for job in current_jobs:\n job.schedule_removal()\n return True\n\n\ndef set_timer(update, context):\n \"\"\"Add a job to the queue.\"\"\"\n chat_id = update.message.chat_id\n try:\n # args[0] should contain the time for the timer in seconds\n due = int(context.args[0])\n if due < 0:\n update.message.reply_text('Sorry we can not go back to future!')\n return\n\n job_removed = remove_job_if_exists(str(chat_id), context)\n context.job_queue.run_once(alarm, due, context=chat_id, name=str(chat_id))\n\n text = 'Timer successfully set!'\n if job_removed:\n text += ' Old one was removed.'\n update.message.reply_text(text)\n\n except (IndexError, ValueError):\n update.message.reply_text('Usage: /set <seconds>')\n\n\ndef unset(update, context):\n \"\"\"Remove the job if the user changed their mind.\"\"\"\n chat_id = update.message.chat_id\n job_removed = remove_job_if_exists(str(chat_id), context)\n text = 'Timer successfully cancelled!' if job_removed else 'You have no active timer.'\n update.message.reply_text(text)\n\n\ndef main():\n \"\"\"Run bot.\"\"\"\n # Create the Updater and pass it your bot's token.\n # Make sure to set use_context=True to use the new context based callbacks\n # Post version 12 this will no longer be necessary\n updater = Updater(\"TOKEN\", use_context=True)\n\n # Get the dispatcher to register handlers\n dp = updater.dispatcher\n\n # on different commands - answer in Telegram\n dp.add_handler(CommandHandler(\"start\", start))\n dp.add_handler(CommandHandler(\"help\", start))\n dp.add_handler(CommandHandler(\"set\", set_timer))\n dp.add_handler(CommandHandler(\"unset\", unset))\n\n # Start the Bot\n updater.start_polling()\n\n # Block until you press Ctrl-C or the process receives SIGINT, SIGTERM or\n # SIGABRT. This should be used most of the time, since start_polling() is\n # non-blocking and will stop the bot gracefully.\n updater.idle()\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/timerbot.py"}]} | 1,457 | 667 |
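The fix above boils down to keying jobs by chat id instead of stashing `Job` objects in `chat_data`. A minimal sketch of that pattern — the `schedule_alarm` helper is hypothetical, assuming a `context.job_queue` as provided by `telegram.ext` v13 and using only the calls that appear in the patch:

```python
# Sketch of the name-based pattern: schedule under the chat id, look jobs up by name later.
def schedule_alarm(context, chat_id, due, callback):
    # Drop any previously scheduled job for this chat before adding a new one.
    for job in context.job_queue.get_jobs_by_name(str(chat_id)):
        job.schedule_removal()
    context.job_queue.run_once(callback, due, context=chat_id, name=str(chat_id))
```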
gh_patches_debug_29449 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-4910 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Breadcrumb in module "Priorisierung mit Karte" always says "list"
**URL:** https://meinberlin-dev.liqd.net/maptopicprio/2022-00910/
**user:** any
**expected behaviour:** When coming from the map I don't want to get confused by a button which says "list" even though you are going back to the map.
**behaviour:** In module "Priorisierung mit Karte", the breadcrumb in the detail view always says "list". When you, for example, come from the map, you go back to the map but the breadcrumb says "list".
**important screensize:** -
**device & browser:** mac ff
**Comment/Question:** in the other modules with ideas/proposals (except Bürger*innenhaushalt), it says "Back" ("Zurück") and neither "list" nor "map". Probably related to this issue: https://github.com/liqd/a4-meinberlin/issues/4816
Screenshot?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/budgeting/views.py`
Content:
```
1 from urllib.parse import parse_qs
2 from urllib.parse import urlparse
3
4 import django_filters
5 from django.urls import resolve
6 from django.urls import reverse
7 from django.utils.translation import gettext_lazy as _
8
9 from adhocracy4.categories import filters as category_filters
10 from adhocracy4.exports.views import DashboardExportView
11 from adhocracy4.filters import filters as a4_filters
12 from adhocracy4.labels import filters as label_filters
13 from adhocracy4.modules.predicates import module_is_between_phases
14 from adhocracy4.projects.mixins import DisplayProjectOrModuleMixin
15 from meinberlin.apps.ideas import views as idea_views
16 from meinberlin.apps.moderatorremark.forms import ModeratorRemarkForm
17 from meinberlin.apps.projects.views import ArchivedWidget
18 from meinberlin.apps.votes.forms import TokenForm
19 from meinberlin.apps.votes.models import VotingToken
20
21 from . import forms
22 from . import models
23
24
25 def get_ordering_choices(view):
26 choices = (("-created", _("Most recent")),)
27 if view.module.has_feature("rate", models.Proposal):
28 choices += (("-positive_rating_count", _("Most popular")),)
29 elif view.module.has_feature("support", models.Proposal):
30 choices += (("-positive_rating_count", _("Most support")),)
31 choices += (
32 ("-comment_count", _("Most commented")),
33 ("dailyrandom", _("Random")),
34 )
35 return choices
36
37
38 def get_default_ordering(view):
39 if module_is_between_phases(
40 "meinberlin_budgeting:support", "meinberlin_budgeting:voting", view.module
41 ):
42 return "-positive_rating_count"
43 elif (
44 view.module.has_feature("vote", models.Proposal)
45 and view.module.module_has_finished
46 ):
47 return "-token_vote_count"
48 return "dailyrandom"
49
50
51 class ProposalFilterSet(a4_filters.DefaultsFilterSet):
52 defaults = {"is_archived": "false"}
53 category = category_filters.CategoryFilter()
54 labels = label_filters.LabelFilter()
55 ordering = a4_filters.DistinctOrderingWithDailyRandomFilter(
56 choices=get_ordering_choices
57 )
58 is_archived = django_filters.BooleanFilter(widget=ArchivedWidget)
59
60 class Meta:
61 model = models.Proposal
62 fields = ["category", "labels", "is_archived"]
63
64 def __init__(self, data, *args, **kwargs):
65 self.defaults["ordering"] = get_default_ordering(kwargs["view"])
66 super().__init__(data, *args, **kwargs)
67
68
69 class ProposalListView(idea_views.AbstractIdeaListView, DisplayProjectOrModuleMixin):
70 model = models.Proposal
71 filter_set = ProposalFilterSet
72
73 def has_valid_token_in_session(self, request):
74 """Return whether a valid token is stored in the session.
75
76 The token is valid if it is valid for the respective module.
77 """
78 if "voting_tokens" in request.session:
79 module_key = str(self.module.id)
80 if module_key in request.session["voting_tokens"]:
81 return (
82 VotingToken.get_voting_token_by_hash(
83 token_hash=request.session["voting_tokens"][module_key],
84 module=self.module,
85 )
86 is not None
87 )
88 return False
89
90 def dispatch(self, request, **kwargs):
91 self.mode = request.GET.get("mode", "map")
92 if self.mode == "map":
93 self.paginate_by = 0
94 return super().dispatch(request, **kwargs)
95
96 def get_queryset(self):
97 return super().get_queryset().filter(module=self.module)
98
99 def get_context_data(self, **kwargs):
100 if "token_form" not in kwargs:
101 token_form = TokenForm(module_id=self.module.id)
102 kwargs["token_form"] = token_form
103 kwargs["valid_token_present"] = self.has_valid_token_in_session(self.request)
104 return super().get_context_data(**kwargs)
105
106 def post(self, request, *args, **kwargs):
107 self.object_list = self.get_queryset()
108 token_form = TokenForm(request.POST, module_id=self.module.id)
109 if token_form.is_valid():
110 if "voting_tokens" in request.session:
111 request.session["voting_tokens"][
112 str(self.module.id)
113 ] = token_form.cleaned_data["token"]
114 request.session.save()
115 else:
116 request.session["voting_tokens"] = {
117 str(self.module.id): token_form.cleaned_data["token"]
118 }
119 kwargs["valid_token_present"] = True
120 self.mode = "list"
121 kwargs["token_form"] = token_form
122 context = super().get_context_data(**kwargs)
123 return self.render_to_response(context)
124
125
126 class ProposalDetailView(idea_views.AbstractIdeaDetailView):
127 model = models.Proposal
128 queryset = (
129 models.Proposal.objects.annotate_positive_rating_count()
130 .annotate_negative_rating_count()
131 .annotate_token_vote_count()
132 )
133 permission_required = "meinberlin_budgeting.view_proposal"
134
135 def get_back(self):
136 """
137 Get last page to return to if was project or module view.
138
139 To make sure all the filters and the display mode (map or list)
140 are remembered when going back, we check if the referer is a
141 module or project detail view and add the appropriate back url.
142 """
143 if "Referer" in self.request.headers:
144 referer = self.request.headers["Referer"]
145 parsed_url = urlparse(referer)
146 match = resolve(parsed_url.path)
147 if match.url_name == "project-detail" or match.url_name == "module-detail":
148 back_mode = None
149 back_string = _("map")
150 if "mode" in parse_qs(parsed_url.query):
151 back_mode = parse_qs(parsed_url.query)["mode"][0]
152 if back_mode == "list":
153 back_string = _("list")
154 back_link = referer + "#proposal_{}".format(self.object.id)
155 return back_link, back_string
156 return None, None
157 return None, None
158
159 def has_valid_token_in_session(self, request):
160 """Return whether a valid token is stored in the session.
161
162 The token is valid if it is valid for the respective module.
163 """
164 if "voting_tokens" in request.session:
165 module_key = str(self.module.id)
166 if module_key in request.session["voting_tokens"]:
167 return (
168 VotingToken.get_voting_token_by_hash(
169 token_hash=request.session["voting_tokens"][module_key],
170 module=self.module,
171 )
172 is not None
173 )
174 return False
175
176 def get_context_data(self, **kwargs):
177 context = super().get_context_data(**kwargs)
178 back_link, back_string = self.get_back()
179 context["back"] = back_link
180 context["back_string"] = back_string
181 context["has_valid_token_in_session"] = self.has_valid_token_in_session(
182 self.request
183 )
184 return context
185
186
187 class ProposalCreateView(idea_views.AbstractIdeaCreateView):
188 model = models.Proposal
189 form_class = forms.ProposalForm
190 permission_required = "meinberlin_budgeting.add_proposal"
191 template_name = "meinberlin_budgeting/proposal_create_form.html"
192
193 def get_form_kwargs(self):
194 kwargs = super().get_form_kwargs()
195 kwargs["user"] = self.request.user
196 return kwargs
197
198
199 class ProposalUpdateView(idea_views.AbstractIdeaUpdateView):
200 model = models.Proposal
201 form_class = forms.ProposalForm
202 permission_required = "meinberlin_budgeting.change_proposal"
203 template_name = "meinberlin_budgeting/proposal_update_form.html"
204
205 def get_form_kwargs(self):
206 kwargs = super().get_form_kwargs()
207 kwargs["user"] = self.request.user
208 return kwargs
209
210
211 class ProposalDeleteView(idea_views.AbstractIdeaDeleteView):
212 model = models.Proposal
213 success_message = _("Your budget request has been deleted")
214 permission_required = "meinberlin_budgeting.change_proposal"
215 template_name = "meinberlin_budgeting/proposal_confirm_delete.html"
216
217
218 class ProposalModerateView(idea_views.AbstractIdeaModerateView):
219 model = models.Proposal
220 permission_required = "meinberlin_budgeting.moderate_proposal"
221 template_name = "meinberlin_budgeting/proposal_moderate_form.html"
222 moderateable_form_class = forms.ProposalModerateForm
223 remark_form_class = ModeratorRemarkForm
224
225
226 class ProposalDashboardExportView(DashboardExportView):
227 template_name = "a4exports/export_dashboard.html"
228
229 def get_context_data(self, **kwargs):
230 context = super().get_context_data(**kwargs)
231 context["export"] = reverse(
232 "a4dashboard:budgeting-export", kwargs={"module_slug": self.module.slug}
233 )
234 context["comment_export"] = reverse(
235 "a4dashboard:budgeting-comment-export",
236 kwargs={"module_slug": self.module.slug},
237 )
238 return context
239
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/budgeting/views.py b/meinberlin/apps/budgeting/views.py
--- a/meinberlin/apps/budgeting/views.py
+++ b/meinberlin/apps/budgeting/views.py
@@ -140,21 +140,22 @@
are remembered when going back, we check if the referer is a
module or project detail view and add the appropriate back url.
"""
+ back_link = self.module.get_detail_url
+ back_string = _("map")
if "Referer" in self.request.headers:
referer = self.request.headers["Referer"]
parsed_url = urlparse(referer)
match = resolve(parsed_url.path)
if match.url_name == "project-detail" or match.url_name == "module-detail":
back_mode = None
- back_string = _("map")
if "mode" in parse_qs(parsed_url.query):
back_mode = parse_qs(parsed_url.query)["mode"][0]
if back_mode == "list":
back_string = _("list")
back_link = referer + "#proposal_{}".format(self.object.id)
return back_link, back_string
- return None, None
- return None, None
+ return back_link, back_string
+ return back_link, back_string
def has_valid_token_in_session(self, request):
"""Return whether a valid token is stored in the session.
| {"golden_diff": "diff --git a/meinberlin/apps/budgeting/views.py b/meinberlin/apps/budgeting/views.py\n--- a/meinberlin/apps/budgeting/views.py\n+++ b/meinberlin/apps/budgeting/views.py\n@@ -140,21 +140,22 @@\n are remembered when going back, we check if the referer is a\n module or project detail view and add the appropriate back url.\n \"\"\"\n+ back_link = self.module.get_detail_url\n+ back_string = _(\"map\")\n if \"Referer\" in self.request.headers:\n referer = self.request.headers[\"Referer\"]\n parsed_url = urlparse(referer)\n match = resolve(parsed_url.path)\n if match.url_name == \"project-detail\" or match.url_name == \"module-detail\":\n back_mode = None\n- back_string = _(\"map\")\n if \"mode\" in parse_qs(parsed_url.query):\n back_mode = parse_qs(parsed_url.query)[\"mode\"][0]\n if back_mode == \"list\":\n back_string = _(\"list\")\n back_link = referer + \"#proposal_{}\".format(self.object.id)\n return back_link, back_string\n- return None, None\n- return None, None\n+ return back_link, back_string\n+ return back_link, back_string\n \n def has_valid_token_in_session(self, request):\n \"\"\"Return whether a valid token is stored in the session.\n", "issue": "Breadcrumb in modul \"Priorisierung mit Karte\" is saying always \"list\" \n**URL:** https://meinberlin-dev.liqd.net/maptopicprio/2022-00910/\r\n**user:** any\r\n**expected behaviour:** When coming from map I don't want to get confused by a button which says \"list\" even though you going back to the map.\r\n**behaviour:** In modul \"Priorisierung mit Karte\" in the detail view the breadcrumb says always list. When you for example come from the map, you go back to map but breadcrumb says \"list\".\r\n**important screensize:** -\r\n**device & browser:** mac ff\r\n**Comment/Question:** in the other moduls with ideas/proposals (except B\u00fcrger*innenhaushalt), it's saying \"Back\" (\"Zur\u00fcck\") and neither \"list\" nor \"map\". Probably related to this issue: https://github.com/liqd/a4-meinberlin/issues/4816\r\n\r\nScreenshot?\r\n\n", "before_files": [{"content": "from urllib.parse import parse_qs\nfrom urllib.parse import urlparse\n\nimport django_filters\nfrom django.urls import resolve\nfrom django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\n\nfrom adhocracy4.categories import filters as category_filters\nfrom adhocracy4.exports.views import DashboardExportView\nfrom adhocracy4.filters import filters as a4_filters\nfrom adhocracy4.labels import filters as label_filters\nfrom adhocracy4.modules.predicates import module_is_between_phases\nfrom adhocracy4.projects.mixins import DisplayProjectOrModuleMixin\nfrom meinberlin.apps.ideas import views as idea_views\nfrom meinberlin.apps.moderatorremark.forms import ModeratorRemarkForm\nfrom meinberlin.apps.projects.views import ArchivedWidget\nfrom meinberlin.apps.votes.forms import TokenForm\nfrom meinberlin.apps.votes.models import VotingToken\n\nfrom . import forms\nfrom . 
import models\n\n\ndef get_ordering_choices(view):\n choices = ((\"-created\", _(\"Most recent\")),)\n if view.module.has_feature(\"rate\", models.Proposal):\n choices += ((\"-positive_rating_count\", _(\"Most popular\")),)\n elif view.module.has_feature(\"support\", models.Proposal):\n choices += ((\"-positive_rating_count\", _(\"Most support\")),)\n choices += (\n (\"-comment_count\", _(\"Most commented\")),\n (\"dailyrandom\", _(\"Random\")),\n )\n return choices\n\n\ndef get_default_ordering(view):\n if module_is_between_phases(\n \"meinberlin_budgeting:support\", \"meinberlin_budgeting:voting\", view.module\n ):\n return \"-positive_rating_count\"\n elif (\n view.module.has_feature(\"vote\", models.Proposal)\n and view.module.module_has_finished\n ):\n return \"-token_vote_count\"\n return \"dailyrandom\"\n\n\nclass ProposalFilterSet(a4_filters.DefaultsFilterSet):\n defaults = {\"is_archived\": \"false\"}\n category = category_filters.CategoryFilter()\n labels = label_filters.LabelFilter()\n ordering = a4_filters.DistinctOrderingWithDailyRandomFilter(\n choices=get_ordering_choices\n )\n is_archived = django_filters.BooleanFilter(widget=ArchivedWidget)\n\n class Meta:\n model = models.Proposal\n fields = [\"category\", \"labels\", \"is_archived\"]\n\n def __init__(self, data, *args, **kwargs):\n self.defaults[\"ordering\"] = get_default_ordering(kwargs[\"view\"])\n super().__init__(data, *args, **kwargs)\n\n\nclass ProposalListView(idea_views.AbstractIdeaListView, DisplayProjectOrModuleMixin):\n model = models.Proposal\n filter_set = ProposalFilterSet\n\n def has_valid_token_in_session(self, request):\n \"\"\"Return whether a valid token is stored in the session.\n\n The token is valid if it is valid for the respective module.\n \"\"\"\n if \"voting_tokens\" in request.session:\n module_key = str(self.module.id)\n if module_key in request.session[\"voting_tokens\"]:\n return (\n VotingToken.get_voting_token_by_hash(\n token_hash=request.session[\"voting_tokens\"][module_key],\n module=self.module,\n )\n is not None\n )\n return False\n\n def dispatch(self, request, **kwargs):\n self.mode = request.GET.get(\"mode\", \"map\")\n if self.mode == \"map\":\n self.paginate_by = 0\n return super().dispatch(request, **kwargs)\n\n def get_queryset(self):\n return super().get_queryset().filter(module=self.module)\n\n def get_context_data(self, **kwargs):\n if \"token_form\" not in kwargs:\n token_form = TokenForm(module_id=self.module.id)\n kwargs[\"token_form\"] = token_form\n kwargs[\"valid_token_present\"] = self.has_valid_token_in_session(self.request)\n return super().get_context_data(**kwargs)\n\n def post(self, request, *args, **kwargs):\n self.object_list = self.get_queryset()\n token_form = TokenForm(request.POST, module_id=self.module.id)\n if token_form.is_valid():\n if \"voting_tokens\" in request.session:\n request.session[\"voting_tokens\"][\n str(self.module.id)\n ] = token_form.cleaned_data[\"token\"]\n request.session.save()\n else:\n request.session[\"voting_tokens\"] = {\n str(self.module.id): token_form.cleaned_data[\"token\"]\n }\n kwargs[\"valid_token_present\"] = True\n self.mode = \"list\"\n kwargs[\"token_form\"] = token_form\n context = super().get_context_data(**kwargs)\n return self.render_to_response(context)\n\n\nclass ProposalDetailView(idea_views.AbstractIdeaDetailView):\n model = models.Proposal\n queryset = (\n models.Proposal.objects.annotate_positive_rating_count()\n .annotate_negative_rating_count()\n .annotate_token_vote_count()\n )\n permission_required = 
\"meinberlin_budgeting.view_proposal\"\n\n def get_back(self):\n \"\"\"\n Get last page to return to if was project or module view.\n\n To make sure all the filters and the display mode (map or list)\n are remembered when going back, we check if the referer is a\n module or project detail view and add the appropriate back url.\n \"\"\"\n if \"Referer\" in self.request.headers:\n referer = self.request.headers[\"Referer\"]\n parsed_url = urlparse(referer)\n match = resolve(parsed_url.path)\n if match.url_name == \"project-detail\" or match.url_name == \"module-detail\":\n back_mode = None\n back_string = _(\"map\")\n if \"mode\" in parse_qs(parsed_url.query):\n back_mode = parse_qs(parsed_url.query)[\"mode\"][0]\n if back_mode == \"list\":\n back_string = _(\"list\")\n back_link = referer + \"#proposal_{}\".format(self.object.id)\n return back_link, back_string\n return None, None\n return None, None\n\n def has_valid_token_in_session(self, request):\n \"\"\"Return whether a valid token is stored in the session.\n\n The token is valid if it is valid for the respective module.\n \"\"\"\n if \"voting_tokens\" in request.session:\n module_key = str(self.module.id)\n if module_key in request.session[\"voting_tokens\"]:\n return (\n VotingToken.get_voting_token_by_hash(\n token_hash=request.session[\"voting_tokens\"][module_key],\n module=self.module,\n )\n is not None\n )\n return False\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n back_link, back_string = self.get_back()\n context[\"back\"] = back_link\n context[\"back_string\"] = back_string\n context[\"has_valid_token_in_session\"] = self.has_valid_token_in_session(\n self.request\n )\n return context\n\n\nclass ProposalCreateView(idea_views.AbstractIdeaCreateView):\n model = models.Proposal\n form_class = forms.ProposalForm\n permission_required = \"meinberlin_budgeting.add_proposal\"\n template_name = \"meinberlin_budgeting/proposal_create_form.html\"\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"user\"] = self.request.user\n return kwargs\n\n\nclass ProposalUpdateView(idea_views.AbstractIdeaUpdateView):\n model = models.Proposal\n form_class = forms.ProposalForm\n permission_required = \"meinberlin_budgeting.change_proposal\"\n template_name = \"meinberlin_budgeting/proposal_update_form.html\"\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"user\"] = self.request.user\n return kwargs\n\n\nclass ProposalDeleteView(idea_views.AbstractIdeaDeleteView):\n model = models.Proposal\n success_message = _(\"Your budget request has been deleted\")\n permission_required = \"meinberlin_budgeting.change_proposal\"\n template_name = \"meinberlin_budgeting/proposal_confirm_delete.html\"\n\n\nclass ProposalModerateView(idea_views.AbstractIdeaModerateView):\n model = models.Proposal\n permission_required = \"meinberlin_budgeting.moderate_proposal\"\n template_name = \"meinberlin_budgeting/proposal_moderate_form.html\"\n moderateable_form_class = forms.ProposalModerateForm\n remark_form_class = ModeratorRemarkForm\n\n\nclass ProposalDashboardExportView(DashboardExportView):\n template_name = \"a4exports/export_dashboard.html\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"export\"] = reverse(\n \"a4dashboard:budgeting-export\", kwargs={\"module_slug\": self.module.slug}\n )\n context[\"comment_export\"] = reverse(\n \"a4dashboard:budgeting-comment-export\",\n kwargs={\"module_slug\": 
self.module.slug},\n )\n return context\n", "path": "meinberlin/apps/budgeting/views.py"}], "after_files": [{"content": "from urllib.parse import parse_qs\nfrom urllib.parse import urlparse\n\nimport django_filters\nfrom django.urls import resolve\nfrom django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\n\nfrom adhocracy4.categories import filters as category_filters\nfrom adhocracy4.exports.views import DashboardExportView\nfrom adhocracy4.filters import filters as a4_filters\nfrom adhocracy4.labels import filters as label_filters\nfrom adhocracy4.modules.predicates import module_is_between_phases\nfrom adhocracy4.projects.mixins import DisplayProjectOrModuleMixin\nfrom meinberlin.apps.ideas import views as idea_views\nfrom meinberlin.apps.moderatorremark.forms import ModeratorRemarkForm\nfrom meinberlin.apps.projects.views import ArchivedWidget\nfrom meinberlin.apps.votes.forms import TokenForm\nfrom meinberlin.apps.votes.models import VotingToken\n\nfrom . import forms\nfrom . import models\n\n\ndef get_ordering_choices(view):\n choices = ((\"-created\", _(\"Most recent\")),)\n if view.module.has_feature(\"rate\", models.Proposal):\n choices += ((\"-positive_rating_count\", _(\"Most popular\")),)\n elif view.module.has_feature(\"support\", models.Proposal):\n choices += ((\"-positive_rating_count\", _(\"Most support\")),)\n choices += (\n (\"-comment_count\", _(\"Most commented\")),\n (\"dailyrandom\", _(\"Random\")),\n )\n return choices\n\n\ndef get_default_ordering(view):\n if module_is_between_phases(\n \"meinberlin_budgeting:support\", \"meinberlin_budgeting:voting\", view.module\n ):\n return \"-positive_rating_count\"\n elif (\n view.module.has_feature(\"vote\", models.Proposal)\n and view.module.module_has_finished\n ):\n return \"-token_vote_count\"\n return \"dailyrandom\"\n\n\nclass ProposalFilterSet(a4_filters.DefaultsFilterSet):\n defaults = {\"is_archived\": \"false\"}\n category = category_filters.CategoryFilter()\n labels = label_filters.LabelFilter()\n ordering = a4_filters.DistinctOrderingWithDailyRandomFilter(\n choices=get_ordering_choices\n )\n is_archived = django_filters.BooleanFilter(widget=ArchivedWidget)\n\n class Meta:\n model = models.Proposal\n fields = [\"category\", \"labels\", \"is_archived\"]\n\n def __init__(self, data, *args, **kwargs):\n self.defaults[\"ordering\"] = get_default_ordering(kwargs[\"view\"])\n super().__init__(data, *args, **kwargs)\n\n\nclass ProposalListView(idea_views.AbstractIdeaListView, DisplayProjectOrModuleMixin):\n model = models.Proposal\n filter_set = ProposalFilterSet\n\n def has_valid_token_in_session(self, request):\n \"\"\"Return whether a valid token is stored in the session.\n\n The token is valid if it is valid for the respective module.\n \"\"\"\n if \"voting_tokens\" in request.session:\n module_key = str(self.module.id)\n if module_key in request.session[\"voting_tokens\"]:\n return (\n VotingToken.get_voting_token_by_hash(\n token_hash=request.session[\"voting_tokens\"][module_key],\n module=self.module,\n )\n is not None\n )\n return False\n\n def dispatch(self, request, **kwargs):\n self.mode = request.GET.get(\"mode\", \"map\")\n if self.mode == \"map\":\n self.paginate_by = 0\n return super().dispatch(request, **kwargs)\n\n def get_queryset(self):\n return super().get_queryset().filter(module=self.module)\n\n def get_context_data(self, **kwargs):\n if \"token_form\" not in kwargs:\n token_form = TokenForm(module_id=self.module.id)\n kwargs[\"token_form\"] = token_form\n 
kwargs[\"valid_token_present\"] = self.has_valid_token_in_session(self.request)\n return super().get_context_data(**kwargs)\n\n def post(self, request, *args, **kwargs):\n self.object_list = self.get_queryset()\n token_form = TokenForm(request.POST, module_id=self.module.id)\n if token_form.is_valid():\n if \"voting_tokens\" in request.session:\n request.session[\"voting_tokens\"][\n str(self.module.id)\n ] = token_form.cleaned_data[\"token\"]\n request.session.save()\n else:\n request.session[\"voting_tokens\"] = {\n str(self.module.id): token_form.cleaned_data[\"token\"]\n }\n kwargs[\"valid_token_present\"] = True\n self.mode = \"list\"\n kwargs[\"token_form\"] = token_form\n context = super().get_context_data(**kwargs)\n return self.render_to_response(context)\n\n\nclass ProposalDetailView(idea_views.AbstractIdeaDetailView):\n model = models.Proposal\n queryset = (\n models.Proposal.objects.annotate_positive_rating_count()\n .annotate_negative_rating_count()\n .annotate_token_vote_count()\n )\n permission_required = \"meinberlin_budgeting.view_proposal\"\n\n def get_back(self):\n \"\"\"\n Get last page to return to if was project or module view.\n\n To make sure all the filters and the display mode (map or list)\n are remembered when going back, we check if the referer is a\n module or project detail view and add the appropriate back url.\n \"\"\"\n back_link = self.module.get_detail_url\n back_string = _(\"map\")\n if \"Referer\" in self.request.headers:\n referer = self.request.headers[\"Referer\"]\n parsed_url = urlparse(referer)\n match = resolve(parsed_url.path)\n if match.url_name == \"project-detail\" or match.url_name == \"module-detail\":\n back_mode = None\n if \"mode\" in parse_qs(parsed_url.query):\n back_mode = parse_qs(parsed_url.query)[\"mode\"][0]\n if back_mode == \"list\":\n back_string = _(\"list\")\n back_link = referer + \"#proposal_{}\".format(self.object.id)\n return back_link, back_string\n return back_link, back_string\n return back_link, back_string\n\n def has_valid_token_in_session(self, request):\n \"\"\"Return whether a valid token is stored in the session.\n\n The token is valid if it is valid for the respective module.\n \"\"\"\n if \"voting_tokens\" in request.session:\n module_key = str(self.module.id)\n if module_key in request.session[\"voting_tokens\"]:\n return (\n VotingToken.get_voting_token_by_hash(\n token_hash=request.session[\"voting_tokens\"][module_key],\n module=self.module,\n )\n is not None\n )\n return False\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n back_link, back_string = self.get_back()\n context[\"back\"] = back_link\n context[\"back_string\"] = back_string\n context[\"has_valid_token_in_session\"] = self.has_valid_token_in_session(\n self.request\n )\n return context\n\n\nclass ProposalCreateView(idea_views.AbstractIdeaCreateView):\n model = models.Proposal\n form_class = forms.ProposalForm\n permission_required = \"meinberlin_budgeting.add_proposal\"\n template_name = \"meinberlin_budgeting/proposal_create_form.html\"\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"user\"] = self.request.user\n return kwargs\n\n\nclass ProposalUpdateView(idea_views.AbstractIdeaUpdateView):\n model = models.Proposal\n form_class = forms.ProposalForm\n permission_required = \"meinberlin_budgeting.change_proposal\"\n template_name = \"meinberlin_budgeting/proposal_update_form.html\"\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs[\"user\"] = 
self.request.user\n return kwargs\n\n\nclass ProposalDeleteView(idea_views.AbstractIdeaDeleteView):\n model = models.Proposal\n success_message = _(\"Your budget request has been deleted\")\n permission_required = \"meinberlin_budgeting.change_proposal\"\n template_name = \"meinberlin_budgeting/proposal_confirm_delete.html\"\n\n\nclass ProposalModerateView(idea_views.AbstractIdeaModerateView):\n model = models.Proposal\n permission_required = \"meinberlin_budgeting.moderate_proposal\"\n template_name = \"meinberlin_budgeting/proposal_moderate_form.html\"\n moderateable_form_class = forms.ProposalModerateForm\n remark_form_class = ModeratorRemarkForm\n\n\nclass ProposalDashboardExportView(DashboardExportView):\n template_name = \"a4exports/export_dashboard.html\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"export\"] = reverse(\n \"a4dashboard:budgeting-export\", kwargs={\"module_slug\": self.module.slug}\n )\n context[\"comment_export\"] = reverse(\n \"a4dashboard:budgeting-comment-export\",\n kwargs={\"module_slug\": self.module.slug},\n )\n return context\n", "path": "meinberlin/apps/budgeting/views.py"}]} | 3,006 | 311 |
gh_patches_debug_32102 | rasdani/github-patches | git_diff | getsentry__sentry-2226 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
500 error when rename team slug
Error raised if same slug already exists
Traceback:
```
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/core/handlers/base.py", line 112, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/utils/decorators.py", line 29, in _wrapper
return bound_func(*args, **kwargs)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/utils/decorators.py", line 99, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/utils/decorators.py", line 25, in bound_func
return func(self, *args2, **kwargs2)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/sentry/web/frontend/base.py", line 144, in dispatch
return self.handle(request, *args, **kwargs)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/sentry/web/frontend/team_settings.py", line 37, in handle
team = form.save()
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/forms/models.py", line 446, in save
construct=False)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/forms/models.py", line 99, in save_instance
instance.save()
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/sentry/models/team.py", line 141, in save
super(Team, self).save(*args, **kwargs)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/base.py", line 545, in save
force_update=force_update, update_fields=update_fields)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/base.py", line 573, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/base.py", line 635, in _save_table
forced_update)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/base.py", line 679, in _do_update
return filtered._update(values) > 0
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/query.py", line 510, in _update
return query.get_compiler(self.db).execute_sql(None)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 980, in execute_sql
cursor = super(SQLUpdateCompiler, self).execute_sql(result_type)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 786, in execute_sql
cursor.execute(sql, params)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/backends/util.py", line 53, in execute
return self.cursor.execute(sql, params)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/utils.py", line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/backends/util.py", line 53, in execute
return self.cursor.execute(sql, params)
IntegrityError: duplicate key value violates unique constraint "sentry_team_organization_id_1e0ece47434a2ed_uniq"
DETAIL: Key (organization_id, slug)=(1, adwile) already exists.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/web/frontend/team_settings.py`
Content:
```
1 from __future__ import absolute_import
2
3 from django import forms
4 from django.contrib import messages
5 from django.core.urlresolvers import reverse
6 from django.http import HttpResponseRedirect
7 from django.utils.translation import ugettext_lazy as _
8
9 from sentry.models import AuditLogEntry, AuditLogEntryEvent, Team
10 from sentry.web.frontend.base import TeamView
11
12
13 class EditTeamForm(forms.ModelForm):
14 class Meta:
15 fields = ('name', 'slug',)
16 model = Team
17
18
19 class TeamSettingsView(TeamView):
20 required_scope = 'team:write'
21
22 def get_form(self, request, team):
23 return EditTeamForm(request.POST or None, instance=team)
24
25 def handle(self, request, organization, team):
26 form = self.get_form(request, team)
27 if form.is_valid():
28 team = form.save()
29
30 AuditLogEntry.objects.create(
31 organization=organization,
32 actor=request.user,
33 ip_address=request.META['REMOTE_ADDR'],
34 target_object=team.id,
35 event=AuditLogEntryEvent.TEAM_EDIT,
36 data=team.get_audit_log_data(),
37 )
38
39 messages.add_message(request, messages.SUCCESS,
40 _('Changes to your team were saved.'))
41
42 return HttpResponseRedirect(reverse('sentry-manage-team', args=[organization.slug, team.slug]))
43
44 if request.user.is_active_superuser():
45 can_remove_team = True
46 else:
47 can_remove_team = request.access.has_team_scope(team, 'team:delete')
48
49 context = {
50 'form': form,
51 'can_remove_team': can_remove_team,
52 }
53
54 return self.respond('sentry/teams/manage.html', context)
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/web/frontend/team_settings.py b/src/sentry/web/frontend/team_settings.py
--- a/src/sentry/web/frontend/team_settings.py
+++ b/src/sentry/web/frontend/team_settings.py
@@ -3,6 +3,7 @@
from django import forms
from django.contrib import messages
from django.core.urlresolvers import reverse
+from django.db import IntegrityError
from django.http import HttpResponseRedirect
from django.utils.translation import ugettext_lazy as _
@@ -23,21 +24,27 @@
return EditTeamForm(request.POST or None, instance=team)
def handle(self, request, organization, team):
+ old_slug = team.slug
form = self.get_form(request, team)
if form.is_valid():
- team = form.save()
-
- AuditLogEntry.objects.create(
- organization=organization,
- actor=request.user,
- ip_address=request.META['REMOTE_ADDR'],
- target_object=team.id,
- event=AuditLogEntryEvent.TEAM_EDIT,
- data=team.get_audit_log_data(),
- )
-
- messages.add_message(request, messages.SUCCESS,
- _('Changes to your team were saved.'))
+ try:
+ team = form.save()
+ except IntegrityError:
+ team.slug = old_slug
+ messages.add_message(request, messages.ERROR,
+ _('Changes to your team failed. Slug already exists.'))
+ else:
+ AuditLogEntry.objects.create(
+ organization=organization,
+ actor=request.user,
+ ip_address=request.META['REMOTE_ADDR'],
+ target_object=team.id,
+ event=AuditLogEntryEvent.TEAM_EDIT,
+ data=team.get_audit_log_data(),
+ )
+
+ messages.add_message(request, messages.SUCCESS,
+ _('Changes to your team were saved.'))
return HttpResponseRedirect(reverse('sentry-manage-team', args=[organization.slug, team.slug]))
| {"golden_diff": "diff --git a/src/sentry/web/frontend/team_settings.py b/src/sentry/web/frontend/team_settings.py\n--- a/src/sentry/web/frontend/team_settings.py\n+++ b/src/sentry/web/frontend/team_settings.py\n@@ -3,6 +3,7 @@\n from django import forms\n from django.contrib import messages\n from django.core.urlresolvers import reverse\n+from django.db import IntegrityError\n from django.http import HttpResponseRedirect\n from django.utils.translation import ugettext_lazy as _\n \n@@ -23,21 +24,27 @@\n return EditTeamForm(request.POST or None, instance=team)\n \n def handle(self, request, organization, team):\n+ old_slug = team.slug\n form = self.get_form(request, team)\n if form.is_valid():\n- team = form.save()\n-\n- AuditLogEntry.objects.create(\n- organization=organization,\n- actor=request.user,\n- ip_address=request.META['REMOTE_ADDR'],\n- target_object=team.id,\n- event=AuditLogEntryEvent.TEAM_EDIT,\n- data=team.get_audit_log_data(),\n- )\n-\n- messages.add_message(request, messages.SUCCESS,\n- _('Changes to your team were saved.'))\n+ try:\n+ team = form.save()\n+ except IntegrityError:\n+ team.slug = old_slug\n+ messages.add_message(request, messages.ERROR,\n+ _('Changes to your team failed. Slug already exists.'))\n+ else:\n+ AuditLogEntry.objects.create(\n+ organization=organization,\n+ actor=request.user,\n+ ip_address=request.META['REMOTE_ADDR'],\n+ target_object=team.id,\n+ event=AuditLogEntryEvent.TEAM_EDIT,\n+ data=team.get_audit_log_data(),\n+ )\n+\n+ messages.add_message(request, messages.SUCCESS,\n+ _('Changes to your team were saved.'))\n \n return HttpResponseRedirect(reverse('sentry-manage-team', args=[organization.slug, team.slug]))\n", "issue": "500 error when rename team slug\nError raised if same slug already exists\n\nTraceback:\n\n```\nFile \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/core/handlers/base.py\", line 112, in get_response\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/views/generic/base.py\", line 69, in view\n return self.dispatch(request, *args, **kwargs)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/utils/decorators.py\", line 29, in _wrapper\n return bound_func(*args, **kwargs)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/utils/decorators.py\", line 99, in _wrapped_view\n response = view_func(request, *args, **kwargs)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/utils/decorators.py\", line 25, in bound_func\n return func(self, *args2, **kwargs2)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/sentry/web/frontend/base.py\", line 144, in dispatch\n return self.handle(request, *args, **kwargs)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/sentry/web/frontend/team_settings.py\", line 37, in handle\n team = form.save()\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/forms/models.py\", line 446, in save\n construct=False)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/forms/models.py\", line 99, in save_instance\n instance.save()\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/sentry/models/team.py\", line 141, in save\n super(Team, self).save(*args, **kwargs)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/base.py\", line 545, in save\n force_update=force_update, 
update_fields=update_fields)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/base.py\", line 573, in save_base\n updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/base.py\", line 635, in _save_table\n forced_update)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/base.py\", line 679, in _do_update\n return filtered._update(values) > 0\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/query.py\", line 510, in _update\n return query.get_compiler(self.db).execute_sql(None)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/sql/compiler.py\", line 980, in execute_sql\n cursor = super(SQLUpdateCompiler, self).execute_sql(result_type)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/models/sql/compiler.py\", line 786, in execute_sql\n cursor.execute(sql, params)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/backends/util.py\", line 53, in execute\n return self.cursor.execute(sql, params)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/utils.py\", line 99, in __exit__\n six.reraise(dj_exc_type, dj_exc_value, traceback)\n File \"/home/www/pyenv/versions/sentry/lib/python2.7/site-packages/django/db/backends/util.py\", line 53, in execute\n return self.cursor.execute(sql, params)\n IntegrityError: duplicate key value violates unique constraint \"sentry_team_organization_id_1e0ece47434a2ed_uniq\"\n DETAIL: Key (organization_id, slug)=(1, adwile) already exists.\n```\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom django import forms\nfrom django.contrib import messages\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponseRedirect\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom sentry.models import AuditLogEntry, AuditLogEntryEvent, Team\nfrom sentry.web.frontend.base import TeamView\n\n\nclass EditTeamForm(forms.ModelForm):\n class Meta:\n fields = ('name', 'slug',)\n model = Team\n\n\nclass TeamSettingsView(TeamView):\n required_scope = 'team:write'\n\n def get_form(self, request, team):\n return EditTeamForm(request.POST or None, instance=team)\n\n def handle(self, request, organization, team):\n form = self.get_form(request, team)\n if form.is_valid():\n team = form.save()\n\n AuditLogEntry.objects.create(\n organization=organization,\n actor=request.user,\n ip_address=request.META['REMOTE_ADDR'],\n target_object=team.id,\n event=AuditLogEntryEvent.TEAM_EDIT,\n data=team.get_audit_log_data(),\n )\n\n messages.add_message(request, messages.SUCCESS,\n _('Changes to your team were saved.'))\n\n return HttpResponseRedirect(reverse('sentry-manage-team', args=[organization.slug, team.slug]))\n\n if request.user.is_active_superuser():\n can_remove_team = True\n else:\n can_remove_team = request.access.has_team_scope(team, 'team:delete')\n\n context = {\n 'form': form,\n 'can_remove_team': can_remove_team,\n }\n\n return self.respond('sentry/teams/manage.html', context)\n", "path": "src/sentry/web/frontend/team_settings.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nfrom django import forms\nfrom django.contrib import messages\nfrom django.core.urlresolvers import reverse\nfrom django.db import IntegrityError\nfrom django.http import 
HttpResponseRedirect\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom sentry.models import AuditLogEntry, AuditLogEntryEvent, Team\nfrom sentry.web.frontend.base import TeamView\n\n\nclass EditTeamForm(forms.ModelForm):\n class Meta:\n fields = ('name', 'slug',)\n model = Team\n\n\nclass TeamSettingsView(TeamView):\n required_scope = 'team:write'\n\n def get_form(self, request, team):\n return EditTeamForm(request.POST or None, instance=team)\n\n def handle(self, request, organization, team):\n old_slug = team.slug\n form = self.get_form(request, team)\n if form.is_valid():\n try:\n team = form.save()\n except IntegrityError:\n team.slug = old_slug\n messages.add_message(request, messages.ERROR,\n _('Changes to your team failed. Slug already exists.'))\n else:\n AuditLogEntry.objects.create(\n organization=organization,\n actor=request.user,\n ip_address=request.META['REMOTE_ADDR'],\n target_object=team.id,\n event=AuditLogEntryEvent.TEAM_EDIT,\n data=team.get_audit_log_data(),\n )\n\n messages.add_message(request, messages.SUCCESS,\n _('Changes to your team were saved.'))\n\n return HttpResponseRedirect(reverse('sentry-manage-team', args=[organization.slug, team.slug]))\n\n if request.user.is_active_superuser():\n can_remove_team = True\n else:\n can_remove_team = request.access.has_team_scope(team, 'team:delete')\n\n context = {\n 'form': form,\n 'can_remove_team': can_remove_team,\n }\n\n return self.respond('sentry/teams/manage.html', context)\n", "path": "src/sentry/web/frontend/team_settings.py"}]} | 1,701 | 418 |
gh_patches_debug_29925 | rasdani/github-patches | git_diff | kornia__kornia-2009 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update `make test-all`
```Makefile
# TODO: Add cuda-float16 when #649 is solved
test-all: FORCE
pytest -v --device all --dtype float32,float64 --cov=kornia test/ --flake8 --mypy
```
Maybe instead of just adding `float16`, we can use `--dtype all`? (https://github.com/kornia/kornia/blob/5d1d8cc8c4fb3e398b429edd2cc25ef40d6299ce/conftest.py#L77)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/enhance/core.py`
Content:
```
1 from kornia.core import Module, Tensor
2 from kornia.testing import KORNIA_CHECK, KORNIA_CHECK_IS_TENSOR
3
4
5 def add_weighted(src1: Tensor, alpha, src2: Tensor, beta, gamma) -> Tensor:
6 r"""Calculate the weighted sum of two Tensors.
7
8 .. image:: _static/img/add_weighted.png
9
10 The function calculates the weighted sum of two Tensors as follows:
11
12 .. math::
13 out = src1 * alpha + src2 * beta + gamma
14
15 Args:
16 src1: Tensor with an arbitrary shape, equal to shape of src2.
17 alpha: weight of the src1 elements as Union[float, Tensor].
18 src2: Tensor with an arbitrary shape, equal to shape of src1.
19 beta: weight of the src2 elements as Union[float, Tensor].
20 gamma: scalar added to each sum as Union[float, Tensor].
21
22 Returns:
23 Weighted Tensor with shape equal to src1 and src2 shapes.
24
25 Example:
26 >>> input1 = torch.rand(1, 1, 5, 5)
27 >>> input2 = torch.rand(1, 1, 5, 5)
28 >>> output = add_weighted(input1, 0.5, input2, 0.5, 1.0)
29 >>> output.shape
30 torch.Size([1, 1, 5, 5])
31
32 Notes:
33 Tensor alpha/beta/gamma have to be with shape broadcastable to src1 and src2 shapes.
34 """
35 KORNIA_CHECK_IS_TENSOR(src1)
36 KORNIA_CHECK_IS_TENSOR(src2)
37 KORNIA_CHECK(src1.shape == src2.shape, f"src1 and src2 have different shapes. Got {src1.shape} and {src2.shape}")
38 if isinstance(alpha, Tensor):
39 KORNIA_CHECK(src1.shape == alpha.shape, "alpha has a different shape than src.")
40 if isinstance(beta, Tensor):
41 KORNIA_CHECK(src1.shape == beta.shape, "beta has a different shape than src.")
42 if isinstance(gamma, Tensor):
43 KORNIA_CHECK(src1.shape == gamma.shape, "gamma has a different shape than src.")
44
45 return src1 * alpha + src2 * beta + gamma
46
47
48 class AddWeighted(Module):
49 r"""Calculate the weighted sum of two Tensors.
50
51 The function calculates the weighted sum of two Tensors as follows:
52
53 .. math::
54 out = src1 * alpha + src2 * beta + gamma
55
56 Args:
57 alpha: weight of the src1 elements as Union[float, Tensor].
58 beta: weight of the src2 elements as Union[float, Tensor].
59 gamma: scalar added to each sum as Union[float, Tensor].
60
61 Shape:
62 - Input1: Tensor with an arbitrary shape, equal to shape of Input2.
63 - Input2: Tensor with an arbitrary shape, equal to shape of Input1.
64 - Output: Weighted tensor with shape equal to src1 and src2 shapes.
65
66 Example:
67 >>> input1 = torch.rand(1, 1, 5, 5)
68 >>> input2 = torch.rand(1, 1, 5, 5)
69 >>> output = AddWeighted(0.5, 0.5, 1.0)(input1, input2)
70 >>> output.shape
71 torch.Size([1, 1, 5, 5])
72
73 Notes:
74 Tensor alpha/beta/gamma have to be with shape broadcastable to src1 and src2 shapes.
75 """
76
77 def __init__(self, alpha, beta, gamma) -> None:
78 super().__init__()
79 self.alpha = alpha
80 self.beta = beta
81 self.gamma = gamma
82
83 def forward(self, src1: Tensor, src2: Tensor) -> Tensor:
84 return add_weighted(src1, self.alpha, src2, self.beta, self.gamma)
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kornia/enhance/core.py b/kornia/enhance/core.py
--- a/kornia/enhance/core.py
+++ b/kornia/enhance/core.py
@@ -1,8 +1,12 @@
-from kornia.core import Module, Tensor
+from typing import Union
+
+from kornia.core import Module, Tensor, tensor
from kornia.testing import KORNIA_CHECK, KORNIA_CHECK_IS_TENSOR
-def add_weighted(src1: Tensor, alpha, src2: Tensor, beta, gamma) -> Tensor:
+def add_weighted(
+ src1: Tensor, alpha: Union[float, Tensor], src2: Tensor, beta: Union[float, Tensor], gamma: Union[float, Tensor]
+) -> Tensor:
r"""Calculate the weighted sum of two Tensors.
.. image:: _static/img/add_weighted.png
@@ -35,12 +39,21 @@
KORNIA_CHECK_IS_TENSOR(src1)
KORNIA_CHECK_IS_TENSOR(src2)
KORNIA_CHECK(src1.shape == src2.shape, f"src1 and src2 have different shapes. Got {src1.shape} and {src2.shape}")
+
if isinstance(alpha, Tensor):
KORNIA_CHECK(src1.shape == alpha.shape, "alpha has a different shape than src.")
+ else:
+ alpha = tensor(alpha, dtype=src1.dtype, device=src1.device)
+
if isinstance(beta, Tensor):
KORNIA_CHECK(src1.shape == beta.shape, "beta has a different shape than src.")
+ else:
+ beta = tensor(beta, dtype=src1.dtype, device=src1.device)
+
if isinstance(gamma, Tensor):
KORNIA_CHECK(src1.shape == gamma.shape, "gamma has a different shape than src.")
+ else:
+ gamma = tensor(gamma, dtype=src1.dtype, device=src1.device)
return src1 * alpha + src2 * beta + gamma
| {"golden_diff": "diff --git a/kornia/enhance/core.py b/kornia/enhance/core.py\n--- a/kornia/enhance/core.py\n+++ b/kornia/enhance/core.py\n@@ -1,8 +1,12 @@\n-from kornia.core import Module, Tensor\n+from typing import Union\n+\n+from kornia.core import Module, Tensor, tensor\n from kornia.testing import KORNIA_CHECK, KORNIA_CHECK_IS_TENSOR\n \n \n-def add_weighted(src1: Tensor, alpha, src2: Tensor, beta, gamma) -> Tensor:\n+def add_weighted(\n+ src1: Tensor, alpha: Union[float, Tensor], src2: Tensor, beta: Union[float, Tensor], gamma: Union[float, Tensor]\n+) -> Tensor:\n r\"\"\"Calculate the weighted sum of two Tensors.\n \n .. image:: _static/img/add_weighted.png\n@@ -35,12 +39,21 @@\n KORNIA_CHECK_IS_TENSOR(src1)\n KORNIA_CHECK_IS_TENSOR(src2)\n KORNIA_CHECK(src1.shape == src2.shape, f\"src1 and src2 have different shapes. Got {src1.shape} and {src2.shape}\")\n+\n if isinstance(alpha, Tensor):\n KORNIA_CHECK(src1.shape == alpha.shape, \"alpha has a different shape than src.\")\n+ else:\n+ alpha = tensor(alpha, dtype=src1.dtype, device=src1.device)\n+\n if isinstance(beta, Tensor):\n KORNIA_CHECK(src1.shape == beta.shape, \"beta has a different shape than src.\")\n+ else:\n+ beta = tensor(beta, dtype=src1.dtype, device=src1.device)\n+\n if isinstance(gamma, Tensor):\n KORNIA_CHECK(src1.shape == gamma.shape, \"gamma has a different shape than src.\")\n+ else:\n+ gamma = tensor(gamma, dtype=src1.dtype, device=src1.device)\n \n return src1 * alpha + src2 * beta + gamma\n", "issue": "Update `make test-all`\n```Makefile\r\n# TODO: Add cuda-float16 when #649 is solved\r\ntest-all: FORCE\r\n\tpytest -v --device all --dtype float32,float64 --cov=kornia test/ --flake8 --mypy\r\n```\r\nMaybe instead of just adding `float16`, we can use `--dtype all`? (https://github.com/kornia/kornia/blob/5d1d8cc8c4fb3e398b429edd2cc25ef40d6299ce/conftest.py#L77)\n", "before_files": [{"content": "from kornia.core import Module, Tensor\nfrom kornia.testing import KORNIA_CHECK, KORNIA_CHECK_IS_TENSOR\n\n\ndef add_weighted(src1: Tensor, alpha, src2: Tensor, beta, gamma) -> Tensor:\n r\"\"\"Calculate the weighted sum of two Tensors.\n\n .. image:: _static/img/add_weighted.png\n\n The function calculates the weighted sum of two Tensors as follows:\n\n .. math::\n out = src1 * alpha + src2 * beta + gamma\n\n Args:\n src1: Tensor with an arbitrary shape, equal to shape of src2.\n alpha: weight of the src1 elements as Union[float, Tensor].\n src2: Tensor with an arbitrary shape, equal to shape of src1.\n beta: weight of the src2 elements as Union[float, Tensor].\n gamma: scalar added to each sum as Union[float, Tensor].\n\n Returns:\n Weighted Tensor with shape equal to src1 and src2 shapes.\n\n Example:\n >>> input1 = torch.rand(1, 1, 5, 5)\n >>> input2 = torch.rand(1, 1, 5, 5)\n >>> output = add_weighted(input1, 0.5, input2, 0.5, 1.0)\n >>> output.shape\n torch.Size([1, 1, 5, 5])\n\n Notes:\n Tensor alpha/beta/gamma have to be with shape broadcastable to src1 and src2 shapes.\n \"\"\"\n KORNIA_CHECK_IS_TENSOR(src1)\n KORNIA_CHECK_IS_TENSOR(src2)\n KORNIA_CHECK(src1.shape == src2.shape, f\"src1 and src2 have different shapes. 
Got {src1.shape} and {src2.shape}\")\n if isinstance(alpha, Tensor):\n KORNIA_CHECK(src1.shape == alpha.shape, \"alpha has a different shape than src.\")\n if isinstance(beta, Tensor):\n KORNIA_CHECK(src1.shape == beta.shape, \"beta has a different shape than src.\")\n if isinstance(gamma, Tensor):\n KORNIA_CHECK(src1.shape == gamma.shape, \"gamma has a different shape than src.\")\n\n return src1 * alpha + src2 * beta + gamma\n\n\nclass AddWeighted(Module):\n r\"\"\"Calculate the weighted sum of two Tensors.\n\n The function calculates the weighted sum of two Tensors as follows:\n\n .. math::\n out = src1 * alpha + src2 * beta + gamma\n\n Args:\n alpha: weight of the src1 elements as Union[float, Tensor].\n beta: weight of the src2 elements as Union[float, Tensor].\n gamma: scalar added to each sum as Union[float, Tensor].\n\n Shape:\n - Input1: Tensor with an arbitrary shape, equal to shape of Input2.\n - Input2: Tensor with an arbitrary shape, equal to shape of Input1.\n - Output: Weighted tensor with shape equal to src1 and src2 shapes.\n\n Example:\n >>> input1 = torch.rand(1, 1, 5, 5)\n >>> input2 = torch.rand(1, 1, 5, 5)\n >>> output = AddWeighted(0.5, 0.5, 1.0)(input1, input2)\n >>> output.shape\n torch.Size([1, 1, 5, 5])\n\n Notes:\n Tensor alpha/beta/gamma have to be with shape broadcastable to src1 and src2 shapes.\n \"\"\"\n\n def __init__(self, alpha, beta, gamma) -> None:\n super().__init__()\n self.alpha = alpha\n self.beta = beta\n self.gamma = gamma\n\n def forward(self, src1: Tensor, src2: Tensor) -> Tensor:\n return add_weighted(src1, self.alpha, src2, self.beta, self.gamma)\n", "path": "kornia/enhance/core.py"}], "after_files": [{"content": "from typing import Union\n\nfrom kornia.core import Module, Tensor, tensor\nfrom kornia.testing import KORNIA_CHECK, KORNIA_CHECK_IS_TENSOR\n\n\ndef add_weighted(\n src1: Tensor, alpha: Union[float, Tensor], src2: Tensor, beta: Union[float, Tensor], gamma: Union[float, Tensor]\n) -> Tensor:\n r\"\"\"Calculate the weighted sum of two Tensors.\n\n .. image:: _static/img/add_weighted.png\n\n The function calculates the weighted sum of two Tensors as follows:\n\n .. math::\n out = src1 * alpha + src2 * beta + gamma\n\n Args:\n src1: Tensor with an arbitrary shape, equal to shape of src2.\n alpha: weight of the src1 elements as Union[float, Tensor].\n src2: Tensor with an arbitrary shape, equal to shape of src1.\n beta: weight of the src2 elements as Union[float, Tensor].\n gamma: scalar added to each sum as Union[float, Tensor].\n\n Returns:\n Weighted Tensor with shape equal to src1 and src2 shapes.\n\n Example:\n >>> input1 = torch.rand(1, 1, 5, 5)\n >>> input2 = torch.rand(1, 1, 5, 5)\n >>> output = add_weighted(input1, 0.5, input2, 0.5, 1.0)\n >>> output.shape\n torch.Size([1, 1, 5, 5])\n\n Notes:\n Tensor alpha/beta/gamma have to be with shape broadcastable to src1 and src2 shapes.\n \"\"\"\n KORNIA_CHECK_IS_TENSOR(src1)\n KORNIA_CHECK_IS_TENSOR(src2)\n KORNIA_CHECK(src1.shape == src2.shape, f\"src1 and src2 have different shapes. 
Got {src1.shape} and {src2.shape}\")\n\n if isinstance(alpha, Tensor):\n KORNIA_CHECK(src1.shape == alpha.shape, \"alpha has a different shape than src.\")\n else:\n alpha = tensor(alpha, dtype=src1.dtype, device=src1.device)\n\n if isinstance(beta, Tensor):\n KORNIA_CHECK(src1.shape == beta.shape, \"beta has a different shape than src.\")\n else:\n beta = tensor(beta, dtype=src1.dtype, device=src1.device)\n\n if isinstance(gamma, Tensor):\n KORNIA_CHECK(src1.shape == gamma.shape, \"gamma has a different shape than src.\")\n else:\n gamma = tensor(gamma, dtype=src1.dtype, device=src1.device)\n\n return src1 * alpha + src2 * beta + gamma\n\n\nclass AddWeighted(Module):\n r\"\"\"Calculate the weighted sum of two Tensors.\n\n The function calculates the weighted sum of two Tensors as follows:\n\n .. math::\n out = src1 * alpha + src2 * beta + gamma\n\n Args:\n alpha: weight of the src1 elements as Union[float, Tensor].\n beta: weight of the src2 elements as Union[float, Tensor].\n gamma: scalar added to each sum as Union[float, Tensor].\n\n Shape:\n - Input1: Tensor with an arbitrary shape, equal to shape of Input2.\n - Input2: Tensor with an arbitrary shape, equal to shape of Input1.\n - Output: Weighted tensor with shape equal to src1 and src2 shapes.\n\n Example:\n >>> input1 = torch.rand(1, 1, 5, 5)\n >>> input2 = torch.rand(1, 1, 5, 5)\n >>> output = AddWeighted(0.5, 0.5, 1.0)(input1, input2)\n >>> output.shape\n torch.Size([1, 1, 5, 5])\n\n Notes:\n Tensor alpha/beta/gamma have to be with shape broadcastable to src1 and src2 shapes.\n \"\"\"\n\n def __init__(self, alpha, beta, gamma) -> None:\n super().__init__()\n self.alpha = alpha\n self.beta = beta\n self.gamma = gamma\n\n def forward(self, src1: Tensor, src2: Tensor) -> Tensor:\n return add_weighted(src1, self.alpha, src2, self.beta, self.gamma)\n", "path": "kornia/enhance/core.py"}]} | 1,410 | 441 |
gh_patches_debug_18224 | rasdani/github-patches | git_diff | internetarchive__openlibrary-6910 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Data Dumps not auto-generating
Despite #5263 being resolved, it looks like the data dumps weren't uploaded on July 1st :/
### Relevant URL?
* https://github.com/internetarchive/openlibrary/wiki/Generating-Data-Dumps
* https://archive.org/details/ol_exports?sort=-publicdate
Related issues and pull requests:
* #3989
* #4621
* #4671
* #4723
* #5546
* #5673
* #5719
* #5892 - Worth reading!
* #6158
* #6163
Related files:
* [`docker-compose.production.yml`](../blob/master/docker-compose.production.yml#L90) defines `cron-jobs` Docker container.
* [`docker/ol-cron-start.sh`](../blob/master/docker/ol-cron-start.sh) sets up the cron tasks.
* [olsystem: `/etc/cron.d/openlibrary.ol_home0`](https://github.com/internetarchive/olsystem/blob/master/etc/cron.d/openlibrary.ol_home0#L11) defines the actual job
* modify and then to reactivate do: `crontab /etc/cron.d/openlibrary.ol_home0` Also: https://cron.help
* [x] internetarchive/olsystem#140
* [`scripts/oldump.sh`](../blob/master/scripts/oldump.sh) is the script that gets run.
* [x] #5860
### Proposal & Constraints
- Run manually for now
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
@mekarpeles @jimman2003
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/oldump.py`
Content:
```
1 #!/usr/bin/env python
2
3 import logging
4 import os
5 import sys
6 from datetime import datetime
7
8 import _init_path # noqa: F401 Imported for its side effect of setting PYTHONPATH
9
10 logger = logging.getLogger(__file__)
11 logger.setLevel(logging.DEBUG)
12
13
14 def log(*args) -> None:
15 args_str = " ".join(str(a) for a in args)
16 msg = f"{datetime.now():%Y-%m-%d %H:%M:%S} [openlibrary.dump] {args_str}"
17 logger.info(msg)
18 print(msg, file=sys.stderr)
19
20
21 if __name__ == "__main__":
22 from infogami import config
23 from openlibrary.config import load_config
24 from openlibrary.data import dump
25 from openlibrary.utils.sentry import Sentry
26
27 log("{} on Python {}.{}.{}".format(sys.argv, *sys.version_info)) # Python 3.10.4
28
29 ol_config = os.getenv("OL_CONFIG")
30 if ol_config:
31 logger.info(f"loading config from {ol_config}")
32 load_config(ol_config)
33 sentry = Sentry(getattr(config, "sentry_cron_jobs", {}))
34 if sentry.enabled:
35 sentry.init()
36 log(f"sentry.enabled = {bool(ol_config and sentry.enabled)}")
37
38 dump.main(sys.argv[1], sys.argv[2:])
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/oldump.py b/scripts/oldump.py
--- a/scripts/oldump.py
+++ b/scripts/oldump.py
@@ -19,6 +19,7 @@
if __name__ == "__main__":
+ from contextlib import redirect_stdout
from infogami import config
from openlibrary.config import load_config
from openlibrary.data import dump
@@ -29,7 +30,11 @@
ol_config = os.getenv("OL_CONFIG")
if ol_config:
logger.info(f"loading config from {ol_config}")
- load_config(ol_config)
+ # Squelch output from infobase (needed for sentry setup)
+ # So it doesn't end up in our data dumps body
+ with open(os.devnull, 'w') as devnull:
+ with redirect_stdout(devnull):
+ load_config(ol_config)
sentry = Sentry(getattr(config, "sentry_cron_jobs", {}))
if sentry.enabled:
sentry.init()
| {"golden_diff": "diff --git a/scripts/oldump.py b/scripts/oldump.py\n--- a/scripts/oldump.py\n+++ b/scripts/oldump.py\n@@ -19,6 +19,7 @@\n \n \n if __name__ == \"__main__\":\n+ from contextlib import redirect_stdout\n from infogami import config\n from openlibrary.config import load_config\n from openlibrary.data import dump\n@@ -29,7 +30,11 @@\n ol_config = os.getenv(\"OL_CONFIG\")\n if ol_config:\n logger.info(f\"loading config from {ol_config}\")\n- load_config(ol_config)\n+ # Squelch output from infobase (needed for sentry setup)\n+ # So it doesn't end up in our data dumps body\n+ with open(os.devnull, 'w') as devnull:\n+ with redirect_stdout(devnull):\n+ load_config(ol_config)\n sentry = Sentry(getattr(config, \"sentry_cron_jobs\", {}))\n if sentry.enabled:\n sentry.init()\n", "issue": "Data Dumps not auto-generating\nDespite #5263 being resolved, it looks like the data dumps weren't uploaded on July 1st :/ \r\n\r\n### Relevant URL?\r\n* https://github.com/internetarchive/openlibrary/wiki/Generating-Data-Dumps\r\n* https://archive.org/details/ol_exports?sort=-publicdate\r\n\r\nRelated issues and pull requests:\r\n* #3989 \r\n* #4621\r\n* #4671\r\n* #4723\r\n* #5546\r\n* #5673 \r\n* #5719 \r\n* #5892 - Worth reading!\r\n* #6158\r\n* #6163\r\n\r\nRelated files:\r\n* [`docker-compose.production.yml`](../blob/master/docker-compose.production.yml#L90) defines `cron-jobs` Docker container.\r\n* [`docker/ol-cron-start.sh`](../blob/master/docker/ol-cron-start.sh) sets up the cron tasks.\r\n* [olsystem: `/etc/cron.d/openlibrary.ol_home0`](https://github.com/internetarchive/olsystem/blob/master/etc/cron.d/openlibrary.ol_home0#L11) defines the actual job\r\n * modify and then to reactivate do: `crontab /etc/cron.d/openlibrary.ol_home0` Also: https://cron.help\r\n * [x] internetarchive/olsystem#140\r\n* [`scripts/oldump.sh`](../blob/master/scripts/oldump.sh) is the script that gets run.\r\n * [x] #5860\r\n\r\n### Proposal & Constraints\r\n- Run manually for now\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. 
-->\r\n\r\n### Stakeholders\r\n@mekarpeles @jimman2003 \r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport logging\nimport os\nimport sys\nfrom datetime import datetime\n\nimport _init_path # noqa: F401 Imported for its side effect of setting PYTHONPATH\n\nlogger = logging.getLogger(__file__)\nlogger.setLevel(logging.DEBUG)\n\n\ndef log(*args) -> None:\n args_str = \" \".join(str(a) for a in args)\n msg = f\"{datetime.now():%Y-%m-%d %H:%M:%S} [openlibrary.dump] {args_str}\"\n logger.info(msg)\n print(msg, file=sys.stderr)\n\n\nif __name__ == \"__main__\":\n from infogami import config\n from openlibrary.config import load_config\n from openlibrary.data import dump\n from openlibrary.utils.sentry import Sentry\n\n log(\"{} on Python {}.{}.{}\".format(sys.argv, *sys.version_info)) # Python 3.10.4\n\n ol_config = os.getenv(\"OL_CONFIG\")\n if ol_config:\n logger.info(f\"loading config from {ol_config}\")\n load_config(ol_config)\n sentry = Sentry(getattr(config, \"sentry_cron_jobs\", {}))\n if sentry.enabled:\n sentry.init()\n log(f\"sentry.enabled = {bool(ol_config and sentry.enabled)}\")\n\n dump.main(sys.argv[1], sys.argv[2:])\n", "path": "scripts/oldump.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport logging\nimport os\nimport sys\nfrom datetime import datetime\n\nimport _init_path # noqa: F401 Imported for its side effect of setting PYTHONPATH\n\nlogger = logging.getLogger(__file__)\nlogger.setLevel(logging.DEBUG)\n\n\ndef log(*args) -> None:\n args_str = \" \".join(str(a) for a in args)\n msg = f\"{datetime.now():%Y-%m-%d %H:%M:%S} [openlibrary.dump] {args_str}\"\n logger.info(msg)\n print(msg, file=sys.stderr)\n\n\nif __name__ == \"__main__\":\n from contextlib import redirect_stdout\n from infogami import config\n from openlibrary.config import load_config\n from openlibrary.data import dump\n from openlibrary.utils.sentry import Sentry\n\n log(\"{} on Python {}.{}.{}\".format(sys.argv, *sys.version_info)) # Python 3.10.4\n\n ol_config = os.getenv(\"OL_CONFIG\")\n if ol_config:\n logger.info(f\"loading config from {ol_config}\")\n # Squelch output from infobase (needed for sentry setup)\n # So it doesn't end up in our data dumps body\n with open(os.devnull, 'w') as devnull:\n with redirect_stdout(devnull):\n load_config(ol_config)\n sentry = Sentry(getattr(config, \"sentry_cron_jobs\", {}))\n if sentry.enabled:\n sentry.init()\n log(f\"sentry.enabled = {bool(ol_config and sentry.enabled)}\")\n\n dump.main(sys.argv[1], sys.argv[2:])\n", "path": "scripts/oldump.py"}]} | 1,036 | 228 |
gh_patches_debug_23651 | rasdani/github-patches | git_diff | netbox-community__netbox-1565 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Primary IP for a device to Search Results
<!--
Before opening a new issue, please search through the existing issues to
see if your topic has already been addressed. Note that you may need to
remove the "is:open" filter from the search bar to include closed issues.
Check the appropriate type for your issue below by placing an x between the
brackets. If none of the below apply, please raise your issue for
discussion on our mailing list:
https://groups.google.com/forum/#!forum/netbox-discuss
Please note that issues which do not fall under any of the below categories
will be closed.
--->
### Issue type
[X] Feature request <!-- Requesting the implementation of a new feature -->
<!--
Please describe the environment in which you are running NetBox. (Be sure
to verify that you are running the latest stable release of NetBox before
submitting a bug report.)
-->
### Environment
* Python version: 2.7.5
* NetBox version: 2.1.4
<!--
BUG REPORTS must include:
* A list of the steps needed to reproduce the bug
* A description of the expected behavior
* Any relevant error messages (screenshots may also help)
FEATURE REQUESTS must include:
* A detailed description of the proposed functionality
* A use case for the new feature
* A rough description of any necessary changes to the database schema
* Any relevant third-party libraries which would be needed
-->
### Description
When searching for a device name from the upper-right search box, please add the Primary IP (if assigned) to the results. Currently the Device results include these fields: Name, Status, Tenant, Site, Rack, Role, Type
This is an example of the URL for this search: https://netbox/search/?q=SW
Use case is, given a name, find the IP as quickly as possible.
No DB changes or additional libraries should be needed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/netbox/views.py`
Content:
```
1 from __future__ import unicode_literals
2 from collections import OrderedDict
3 import sys
4
5 from rest_framework.views import APIView
6 from rest_framework.response import Response
7 from rest_framework.reverse import reverse
8
9 from django.shortcuts import render
10 from django.views.generic import View
11
12 from circuits.filters import CircuitFilter, ProviderFilter
13 from circuits.models import Circuit, Provider
14 from circuits.tables import CircuitTable, ProviderTable
15 from dcim.filters import DeviceFilter, DeviceTypeFilter, RackFilter, SiteFilter
16 from dcim.models import ConsolePort, Device, DeviceType, InterfaceConnection, PowerPort, Rack, Site
17 from dcim.tables import DeviceTable, DeviceTypeTable, RackTable, SiteTable
18 from extras.models import TopologyMap, UserAction
19 from ipam.filters import AggregateFilter, IPAddressFilter, PrefixFilter, VLANFilter, VRFFilter
20 from ipam.models import Aggregate, IPAddress, Prefix, VLAN, VRF
21 from ipam.tables import AggregateTable, IPAddressTable, PrefixTable, VLANTable, VRFTable
22 from secrets.filters import SecretFilter
23 from secrets.models import Secret
24 from secrets.tables import SecretTable
25 from tenancy.filters import TenantFilter
26 from tenancy.models import Tenant
27 from tenancy.tables import TenantTable
28 from .forms import SearchForm
29
30
31 SEARCH_MAX_RESULTS = 15
32 SEARCH_TYPES = OrderedDict((
33 # Circuits
34 ('provider', {
35 'queryset': Provider.objects.all(),
36 'filter': ProviderFilter,
37 'table': ProviderTable,
38 'url': 'circuits:provider_list',
39 }),
40 ('circuit', {
41 'queryset': Circuit.objects.select_related('type', 'provider', 'tenant').prefetch_related('terminations__site'),
42 'filter': CircuitFilter,
43 'table': CircuitTable,
44 'url': 'circuits:circuit_list',
45 }),
46 # DCIM
47 ('site', {
48 'queryset': Site.objects.select_related('region', 'tenant'),
49 'filter': SiteFilter,
50 'table': SiteTable,
51 'url': 'dcim:site_list',
52 }),
53 ('rack', {
54 'queryset': Rack.objects.select_related('site', 'group', 'tenant', 'role'),
55 'filter': RackFilter,
56 'table': RackTable,
57 'url': 'dcim:rack_list',
58 }),
59 ('devicetype', {
60 'queryset': DeviceType.objects.select_related('manufacturer'),
61 'filter': DeviceTypeFilter,
62 'table': DeviceTypeTable,
63 'url': 'dcim:devicetype_list',
64 }),
65 ('device', {
66 'queryset': Device.objects.select_related(
67 'device_type__manufacturer', 'device_role', 'tenant', 'site', 'rack'
68 ),
69 'filter': DeviceFilter,
70 'table': DeviceTable,
71 'url': 'dcim:device_list',
72 }),
73 # IPAM
74 ('vrf', {
75 'queryset': VRF.objects.select_related('tenant'),
76 'filter': VRFFilter,
77 'table': VRFTable,
78 'url': 'ipam:vrf_list',
79 }),
80 ('aggregate', {
81 'queryset': Aggregate.objects.select_related('rir'),
82 'filter': AggregateFilter,
83 'table': AggregateTable,
84 'url': 'ipam:aggregate_list',
85 }),
86 ('prefix', {
87 'queryset': Prefix.objects.select_related('site', 'vrf__tenant', 'tenant', 'vlan', 'role'),
88 'filter': PrefixFilter,
89 'table': PrefixTable,
90 'url': 'ipam:prefix_list',
91 }),
92 ('ipaddress', {
93 'queryset': IPAddress.objects.select_related('vrf__tenant', 'tenant', 'interface__device'),
94 'filter': IPAddressFilter,
95 'table': IPAddressTable,
96 'url': 'ipam:ipaddress_list',
97 }),
98 ('vlan', {
99 'queryset': VLAN.objects.select_related('site', 'group', 'tenant', 'role'),
100 'filter': VLANFilter,
101 'table': VLANTable,
102 'url': 'ipam:vlan_list',
103 }),
104 # Secrets
105 ('secret', {
106 'queryset': Secret.objects.select_related('role', 'device'),
107 'filter': SecretFilter,
108 'table': SecretTable,
109 'url': 'secrets:secret_list',
110 }),
111 # Tenancy
112 ('tenant', {
113 'queryset': Tenant.objects.select_related('group'),
114 'filter': TenantFilter,
115 'table': TenantTable,
116 'url': 'tenancy:tenant_list',
117 }),
118 ))
119
120
121 class HomeView(View):
122 template_name = 'home.html'
123
124 def get(self, request):
125
126 stats = {
127
128 # Organization
129 'site_count': Site.objects.count(),
130 'tenant_count': Tenant.objects.count(),
131
132 # DCIM
133 'rack_count': Rack.objects.count(),
134 'device_count': Device.objects.count(),
135 'interface_connections_count': InterfaceConnection.objects.count(),
136 'console_connections_count': ConsolePort.objects.filter(cs_port__isnull=False).count(),
137 'power_connections_count': PowerPort.objects.filter(power_outlet__isnull=False).count(),
138
139 # IPAM
140 'vrf_count': VRF.objects.count(),
141 'aggregate_count': Aggregate.objects.count(),
142 'prefix_count': Prefix.objects.count(),
143 'ipaddress_count': IPAddress.objects.count(),
144 'vlan_count': VLAN.objects.count(),
145
146 # Circuits
147 'provider_count': Provider.objects.count(),
148 'circuit_count': Circuit.objects.count(),
149
150 # Secrets
151 'secret_count': Secret.objects.count(),
152
153 }
154
155 return render(request, self.template_name, {
156 'search_form': SearchForm(),
157 'stats': stats,
158 'topology_maps': TopologyMap.objects.filter(site__isnull=True),
159 'recent_activity': UserAction.objects.select_related('user')[:50]
160 })
161
162
163 class SearchView(View):
164
165 def get(self, request):
166
167 # No query
168 if 'q' not in request.GET:
169 return render(request, 'search.html', {
170 'form': SearchForm(),
171 })
172
173 form = SearchForm(request.GET)
174 results = []
175
176 if form.is_valid():
177
178 # Searching for a single type of object
179 if form.cleaned_data['obj_type']:
180 obj_types = [form.cleaned_data['obj_type']]
181 # Searching all object types
182 else:
183 obj_types = SEARCH_TYPES.keys()
184
185 for obj_type in obj_types:
186
187 queryset = SEARCH_TYPES[obj_type]['queryset']
188 filter_cls = SEARCH_TYPES[obj_type]['filter']
189 table = SEARCH_TYPES[obj_type]['table']
190 url = SEARCH_TYPES[obj_type]['url']
191
192 # Construct the results table for this object type
193 filtered_queryset = filter_cls({'q': form.cleaned_data['q']}, queryset=queryset).qs
194 table = table(filtered_queryset, orderable=False)
195 table.paginate(per_page=SEARCH_MAX_RESULTS)
196
197 if table.page:
198 results.append({
199 'name': queryset.model._meta.verbose_name_plural,
200 'table': table,
201 'url': '{}?q={}'.format(reverse(url), form.cleaned_data['q'])
202 })
203
204 return render(request, 'search.html', {
205 'form': form,
206 'results': results,
207 })
208
209
210 class APIRootView(APIView):
211 _ignore_model_permissions = True
212 exclude_from_schema = True
213
214 def get_view_name(self):
215 return "API Root"
216
217 def get(self, request, format=None):
218
219 return Response({
220 'circuits': reverse('circuits-api:api-root', request=request, format=format),
221 'dcim': reverse('dcim-api:api-root', request=request, format=format),
222 'extras': reverse('extras-api:api-root', request=request, format=format),
223 'ipam': reverse('ipam-api:api-root', request=request, format=format),
224 'secrets': reverse('secrets-api:api-root', request=request, format=format),
225 'tenancy': reverse('tenancy-api:api-root', request=request, format=format),
226 })
227
228
229 def handle_500(request):
230 """
231 Custom server error handler
232 """
233 type_, error, traceback = sys.exc_info()
234 return render(request, '500.html', {
235 'exception': str(type_),
236 'error': error,
237 }, status=500)
238
239
240 def trigger_500(request):
241 """
242 Hot-wired method of triggering a server error to test reporting
243 """
244 raise Exception(
245 "Congratulations, you've triggered an exception! Go tell all your friends what an exceptional person you are."
246 )
247
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/netbox/views.py b/netbox/netbox/views.py
--- a/netbox/netbox/views.py
+++ b/netbox/netbox/views.py
@@ -14,7 +14,7 @@
from circuits.tables import CircuitTable, ProviderTable
from dcim.filters import DeviceFilter, DeviceTypeFilter, RackFilter, SiteFilter
from dcim.models import ConsolePort, Device, DeviceType, InterfaceConnection, PowerPort, Rack, Site
-from dcim.tables import DeviceTable, DeviceTypeTable, RackTable, SiteTable
+from dcim.tables import DeviceDetailTable, DeviceTypeTable, RackTable, SiteTable
from extras.models import TopologyMap, UserAction
from ipam.filters import AggregateFilter, IPAddressFilter, PrefixFilter, VLANFilter, VRFFilter
from ipam.models import Aggregate, IPAddress, Prefix, VLAN, VRF
@@ -67,7 +67,7 @@
'device_type__manufacturer', 'device_role', 'tenant', 'site', 'rack'
),
'filter': DeviceFilter,
- 'table': DeviceTable,
+ 'table': DeviceDetailTable,
'url': 'dcim:device_list',
}),
# IPAM
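
The patch above only swaps which table class renders device search results: `DeviceDetailTable` presumably extends the plain `DeviceTable` with a primary-IP column, which is exactly what the feature request asks for. As a rough, hypothetical sketch of that idea (not NetBox's actual table definitions), a django-tables2 subclass along these lines would be enough:

```python
import django_tables2 as tables


class DeviceTable(tables.Table):
    # Simplified stand-in for the columns shown in the plain device list.
    name = tables.Column()
    status = tables.Column()
    site = tables.Column()


class DeviceDetailTable(DeviceTable):
    # Hypothetical extra column: surface the device's primary IP so that a
    # name search answers "what is this device's IP?" directly.
    primary_ip = tables.Column(verbose_name='IP Address', orderable=False)
```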
| {"golden_diff": "diff --git a/netbox/netbox/views.py b/netbox/netbox/views.py\n--- a/netbox/netbox/views.py\n+++ b/netbox/netbox/views.py\n@@ -14,7 +14,7 @@\n from circuits.tables import CircuitTable, ProviderTable\n from dcim.filters import DeviceFilter, DeviceTypeFilter, RackFilter, SiteFilter\n from dcim.models import ConsolePort, Device, DeviceType, InterfaceConnection, PowerPort, Rack, Site\n-from dcim.tables import DeviceTable, DeviceTypeTable, RackTable, SiteTable\n+from dcim.tables import DeviceDetailTable, DeviceTypeTable, RackTable, SiteTable\n from extras.models import TopologyMap, UserAction\n from ipam.filters import AggregateFilter, IPAddressFilter, PrefixFilter, VLANFilter, VRFFilter\n from ipam.models import Aggregate, IPAddress, Prefix, VLAN, VRF\n@@ -67,7 +67,7 @@\n 'device_type__manufacturer', 'device_role', 'tenant', 'site', 'rack'\n ),\n 'filter': DeviceFilter,\n- 'table': DeviceTable,\n+ 'table': DeviceDetailTable,\n 'url': 'dcim:device_list',\n }),\n # IPAM\n", "issue": "Add Primary IP for a device to Search Results\n<!--\r\n Before opening a new issue, please search through the existing issues to\r\n see if your topic has already been addressed. Note that you may need to\r\n remove the \"is:open\" filter from the search bar to include closed issues.\r\n\r\n Check the appropriate type for your issue below by placing an x between the\r\n brackets. If none of the below apply, please raise your issue for\r\n discussion on our mailing list:\r\n\r\n https://groups.google.com/forum/#!forum/netbox-discuss\r\n\r\n Please note that issues which do not fall under any of the below categories\r\n will be closed.\r\n--->\r\n### Issue type\r\n[X] Feature request <!-- Requesting the implementation of a new feature -->\r\n\r\n<!--\r\n Please describe the environment in which you are running NetBox. (Be sure\r\n to verify that you are running the latest stable release of NetBox before\r\n submitting a bug report.)\r\n-->\r\n### Environment\r\n* Python version: 2.7.5\r\n* NetBox version: 2.1.4\r\n\r\n<!--\r\n BUG REPORTS must include:\r\n * A list of the steps needed to reproduce the bug\r\n * A description of the expected behavior\r\n * Any relevant error messages (screenshots may also help)\r\n\r\n FEATURE REQUESTS must include:\r\n * A detailed description of the proposed functionality\r\n * A use case for the new feature\r\n * A rough description of any necessary changes to the database schema\r\n * Any relevant third-party libraries which would be needed\r\n-->\r\n### Description\r\nWhen searching for a device name from the upper-right search box please add the Primary IP (if assigned) to the results. Currently the Device results include these fields: Name, Status, Tenant, Site, Rack, Role, Type\r\n\r\nThis is an example of the URL for this search: https://netbox/search/?q=SW\r\n\r\nUse case is, given a name, find the IP as quickly as possible. 
\r\n\r\nNo DB changes or additional libraries should be needed.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom collections import OrderedDict\nimport sys\n\nfrom rest_framework.views import APIView\nfrom rest_framework.response import Response\nfrom rest_framework.reverse import reverse\n\nfrom django.shortcuts import render\nfrom django.views.generic import View\n\nfrom circuits.filters import CircuitFilter, ProviderFilter\nfrom circuits.models import Circuit, Provider\nfrom circuits.tables import CircuitTable, ProviderTable\nfrom dcim.filters import DeviceFilter, DeviceTypeFilter, RackFilter, SiteFilter\nfrom dcim.models import ConsolePort, Device, DeviceType, InterfaceConnection, PowerPort, Rack, Site\nfrom dcim.tables import DeviceTable, DeviceTypeTable, RackTable, SiteTable\nfrom extras.models import TopologyMap, UserAction\nfrom ipam.filters import AggregateFilter, IPAddressFilter, PrefixFilter, VLANFilter, VRFFilter\nfrom ipam.models import Aggregate, IPAddress, Prefix, VLAN, VRF\nfrom ipam.tables import AggregateTable, IPAddressTable, PrefixTable, VLANTable, VRFTable\nfrom secrets.filters import SecretFilter\nfrom secrets.models import Secret\nfrom secrets.tables import SecretTable\nfrom tenancy.filters import TenantFilter\nfrom tenancy.models import Tenant\nfrom tenancy.tables import TenantTable\nfrom .forms import SearchForm\n\n\nSEARCH_MAX_RESULTS = 15\nSEARCH_TYPES = OrderedDict((\n # Circuits\n ('provider', {\n 'queryset': Provider.objects.all(),\n 'filter': ProviderFilter,\n 'table': ProviderTable,\n 'url': 'circuits:provider_list',\n }),\n ('circuit', {\n 'queryset': Circuit.objects.select_related('type', 'provider', 'tenant').prefetch_related('terminations__site'),\n 'filter': CircuitFilter,\n 'table': CircuitTable,\n 'url': 'circuits:circuit_list',\n }),\n # DCIM\n ('site', {\n 'queryset': Site.objects.select_related('region', 'tenant'),\n 'filter': SiteFilter,\n 'table': SiteTable,\n 'url': 'dcim:site_list',\n }),\n ('rack', {\n 'queryset': Rack.objects.select_related('site', 'group', 'tenant', 'role'),\n 'filter': RackFilter,\n 'table': RackTable,\n 'url': 'dcim:rack_list',\n }),\n ('devicetype', {\n 'queryset': DeviceType.objects.select_related('manufacturer'),\n 'filter': DeviceTypeFilter,\n 'table': DeviceTypeTable,\n 'url': 'dcim:devicetype_list',\n }),\n ('device', {\n 'queryset': Device.objects.select_related(\n 'device_type__manufacturer', 'device_role', 'tenant', 'site', 'rack'\n ),\n 'filter': DeviceFilter,\n 'table': DeviceTable,\n 'url': 'dcim:device_list',\n }),\n # IPAM\n ('vrf', {\n 'queryset': VRF.objects.select_related('tenant'),\n 'filter': VRFFilter,\n 'table': VRFTable,\n 'url': 'ipam:vrf_list',\n }),\n ('aggregate', {\n 'queryset': Aggregate.objects.select_related('rir'),\n 'filter': AggregateFilter,\n 'table': AggregateTable,\n 'url': 'ipam:aggregate_list',\n }),\n ('prefix', {\n 'queryset': Prefix.objects.select_related('site', 'vrf__tenant', 'tenant', 'vlan', 'role'),\n 'filter': PrefixFilter,\n 'table': PrefixTable,\n 'url': 'ipam:prefix_list',\n }),\n ('ipaddress', {\n 'queryset': IPAddress.objects.select_related('vrf__tenant', 'tenant', 'interface__device'),\n 'filter': IPAddressFilter,\n 'table': IPAddressTable,\n 'url': 'ipam:ipaddress_list',\n }),\n ('vlan', {\n 'queryset': VLAN.objects.select_related('site', 'group', 'tenant', 'role'),\n 'filter': VLANFilter,\n 'table': VLANTable,\n 'url': 'ipam:vlan_list',\n }),\n # Secrets\n ('secret', {\n 'queryset': Secret.objects.select_related('role', 
'device'),\n 'filter': SecretFilter,\n 'table': SecretTable,\n 'url': 'secrets:secret_list',\n }),\n # Tenancy\n ('tenant', {\n 'queryset': Tenant.objects.select_related('group'),\n 'filter': TenantFilter,\n 'table': TenantTable,\n 'url': 'tenancy:tenant_list',\n }),\n))\n\n\nclass HomeView(View):\n template_name = 'home.html'\n\n def get(self, request):\n\n stats = {\n\n # Organization\n 'site_count': Site.objects.count(),\n 'tenant_count': Tenant.objects.count(),\n\n # DCIM\n 'rack_count': Rack.objects.count(),\n 'device_count': Device.objects.count(),\n 'interface_connections_count': InterfaceConnection.objects.count(),\n 'console_connections_count': ConsolePort.objects.filter(cs_port__isnull=False).count(),\n 'power_connections_count': PowerPort.objects.filter(power_outlet__isnull=False).count(),\n\n # IPAM\n 'vrf_count': VRF.objects.count(),\n 'aggregate_count': Aggregate.objects.count(),\n 'prefix_count': Prefix.objects.count(),\n 'ipaddress_count': IPAddress.objects.count(),\n 'vlan_count': VLAN.objects.count(),\n\n # Circuits\n 'provider_count': Provider.objects.count(),\n 'circuit_count': Circuit.objects.count(),\n\n # Secrets\n 'secret_count': Secret.objects.count(),\n\n }\n\n return render(request, self.template_name, {\n 'search_form': SearchForm(),\n 'stats': stats,\n 'topology_maps': TopologyMap.objects.filter(site__isnull=True),\n 'recent_activity': UserAction.objects.select_related('user')[:50]\n })\n\n\nclass SearchView(View):\n\n def get(self, request):\n\n # No query\n if 'q' not in request.GET:\n return render(request, 'search.html', {\n 'form': SearchForm(),\n })\n\n form = SearchForm(request.GET)\n results = []\n\n if form.is_valid():\n\n # Searching for a single type of object\n if form.cleaned_data['obj_type']:\n obj_types = [form.cleaned_data['obj_type']]\n # Searching all object types\n else:\n obj_types = SEARCH_TYPES.keys()\n\n for obj_type in obj_types:\n\n queryset = SEARCH_TYPES[obj_type]['queryset']\n filter_cls = SEARCH_TYPES[obj_type]['filter']\n table = SEARCH_TYPES[obj_type]['table']\n url = SEARCH_TYPES[obj_type]['url']\n\n # Construct the results table for this object type\n filtered_queryset = filter_cls({'q': form.cleaned_data['q']}, queryset=queryset).qs\n table = table(filtered_queryset, orderable=False)\n table.paginate(per_page=SEARCH_MAX_RESULTS)\n\n if table.page:\n results.append({\n 'name': queryset.model._meta.verbose_name_plural,\n 'table': table,\n 'url': '{}?q={}'.format(reverse(url), form.cleaned_data['q'])\n })\n\n return render(request, 'search.html', {\n 'form': form,\n 'results': results,\n })\n\n\nclass APIRootView(APIView):\n _ignore_model_permissions = True\n exclude_from_schema = True\n\n def get_view_name(self):\n return \"API Root\"\n\n def get(self, request, format=None):\n\n return Response({\n 'circuits': reverse('circuits-api:api-root', request=request, format=format),\n 'dcim': reverse('dcim-api:api-root', request=request, format=format),\n 'extras': reverse('extras-api:api-root', request=request, format=format),\n 'ipam': reverse('ipam-api:api-root', request=request, format=format),\n 'secrets': reverse('secrets-api:api-root', request=request, format=format),\n 'tenancy': reverse('tenancy-api:api-root', request=request, format=format),\n })\n\n\ndef handle_500(request):\n \"\"\"\n Custom server error handler\n \"\"\"\n type_, error, traceback = sys.exc_info()\n return render(request, '500.html', {\n 'exception': str(type_),\n 'error': error,\n }, status=500)\n\n\ndef trigger_500(request):\n \"\"\"\n Hot-wired method of 
triggering a server error to test reporting\n \"\"\"\n raise Exception(\n \"Congratulations, you've triggered an exception! Go tell all your friends what an exceptional person you are.\"\n )\n", "path": "netbox/netbox/views.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nfrom collections import OrderedDict\nimport sys\n\nfrom rest_framework.views import APIView\nfrom rest_framework.response import Response\nfrom rest_framework.reverse import reverse\n\nfrom django.shortcuts import render\nfrom django.views.generic import View\n\nfrom circuits.filters import CircuitFilter, ProviderFilter\nfrom circuits.models import Circuit, Provider\nfrom circuits.tables import CircuitTable, ProviderTable\nfrom dcim.filters import DeviceFilter, DeviceTypeFilter, RackFilter, SiteFilter\nfrom dcim.models import ConsolePort, Device, DeviceType, InterfaceConnection, PowerPort, Rack, Site\nfrom dcim.tables import DeviceDetailTable, DeviceTypeTable, RackTable, SiteTable\nfrom extras.models import TopologyMap, UserAction\nfrom ipam.filters import AggregateFilter, IPAddressFilter, PrefixFilter, VLANFilter, VRFFilter\nfrom ipam.models import Aggregate, IPAddress, Prefix, VLAN, VRF\nfrom ipam.tables import AggregateTable, IPAddressTable, PrefixTable, VLANTable, VRFTable\nfrom secrets.filters import SecretFilter\nfrom secrets.models import Secret\nfrom secrets.tables import SecretTable\nfrom tenancy.filters import TenantFilter\nfrom tenancy.models import Tenant\nfrom tenancy.tables import TenantTable\nfrom .forms import SearchForm\n\n\nSEARCH_MAX_RESULTS = 15\nSEARCH_TYPES = OrderedDict((\n # Circuits\n ('provider', {\n 'queryset': Provider.objects.all(),\n 'filter': ProviderFilter,\n 'table': ProviderTable,\n 'url': 'circuits:provider_list',\n }),\n ('circuit', {\n 'queryset': Circuit.objects.select_related('type', 'provider', 'tenant').prefetch_related('terminations__site'),\n 'filter': CircuitFilter,\n 'table': CircuitTable,\n 'url': 'circuits:circuit_list',\n }),\n # DCIM\n ('site', {\n 'queryset': Site.objects.select_related('region', 'tenant'),\n 'filter': SiteFilter,\n 'table': SiteTable,\n 'url': 'dcim:site_list',\n }),\n ('rack', {\n 'queryset': Rack.objects.select_related('site', 'group', 'tenant', 'role'),\n 'filter': RackFilter,\n 'table': RackTable,\n 'url': 'dcim:rack_list',\n }),\n ('devicetype', {\n 'queryset': DeviceType.objects.select_related('manufacturer'),\n 'filter': DeviceTypeFilter,\n 'table': DeviceTypeTable,\n 'url': 'dcim:devicetype_list',\n }),\n ('device', {\n 'queryset': Device.objects.select_related(\n 'device_type__manufacturer', 'device_role', 'tenant', 'site', 'rack'\n ),\n 'filter': DeviceFilter,\n 'table': DeviceDetailTable,\n 'url': 'dcim:device_list',\n }),\n # IPAM\n ('vrf', {\n 'queryset': VRF.objects.select_related('tenant'),\n 'filter': VRFFilter,\n 'table': VRFTable,\n 'url': 'ipam:vrf_list',\n }),\n ('aggregate', {\n 'queryset': Aggregate.objects.select_related('rir'),\n 'filter': AggregateFilter,\n 'table': AggregateTable,\n 'url': 'ipam:aggregate_list',\n }),\n ('prefix', {\n 'queryset': Prefix.objects.select_related('site', 'vrf__tenant', 'tenant', 'vlan', 'role'),\n 'filter': PrefixFilter,\n 'table': PrefixTable,\n 'url': 'ipam:prefix_list',\n }),\n ('ipaddress', {\n 'queryset': IPAddress.objects.select_related('vrf__tenant', 'tenant', 'interface__device'),\n 'filter': IPAddressFilter,\n 'table': IPAddressTable,\n 'url': 'ipam:ipaddress_list',\n }),\n ('vlan', {\n 'queryset': VLAN.objects.select_related('site', 'group', 'tenant', 'role'),\n 
'filter': VLANFilter,\n 'table': VLANTable,\n 'url': 'ipam:vlan_list',\n }),\n # Secrets\n ('secret', {\n 'queryset': Secret.objects.select_related('role', 'device'),\n 'filter': SecretFilter,\n 'table': SecretTable,\n 'url': 'secrets:secret_list',\n }),\n # Tenancy\n ('tenant', {\n 'queryset': Tenant.objects.select_related('group'),\n 'filter': TenantFilter,\n 'table': TenantTable,\n 'url': 'tenancy:tenant_list',\n }),\n))\n\n\nclass HomeView(View):\n template_name = 'home.html'\n\n def get(self, request):\n\n stats = {\n\n # Organization\n 'site_count': Site.objects.count(),\n 'tenant_count': Tenant.objects.count(),\n\n # DCIM\n 'rack_count': Rack.objects.count(),\n 'device_count': Device.objects.count(),\n 'interface_connections_count': InterfaceConnection.objects.count(),\n 'console_connections_count': ConsolePort.objects.filter(cs_port__isnull=False).count(),\n 'power_connections_count': PowerPort.objects.filter(power_outlet__isnull=False).count(),\n\n # IPAM\n 'vrf_count': VRF.objects.count(),\n 'aggregate_count': Aggregate.objects.count(),\n 'prefix_count': Prefix.objects.count(),\n 'ipaddress_count': IPAddress.objects.count(),\n 'vlan_count': VLAN.objects.count(),\n\n # Circuits\n 'provider_count': Provider.objects.count(),\n 'circuit_count': Circuit.objects.count(),\n\n # Secrets\n 'secret_count': Secret.objects.count(),\n\n }\n\n return render(request, self.template_name, {\n 'search_form': SearchForm(),\n 'stats': stats,\n 'topology_maps': TopologyMap.objects.filter(site__isnull=True),\n 'recent_activity': UserAction.objects.select_related('user')[:50]\n })\n\n\nclass SearchView(View):\n\n def get(self, request):\n\n # No query\n if 'q' not in request.GET:\n return render(request, 'search.html', {\n 'form': SearchForm(),\n })\n\n form = SearchForm(request.GET)\n results = []\n\n if form.is_valid():\n\n # Searching for a single type of object\n if form.cleaned_data['obj_type']:\n obj_types = [form.cleaned_data['obj_type']]\n # Searching all object types\n else:\n obj_types = SEARCH_TYPES.keys()\n\n for obj_type in obj_types:\n\n queryset = SEARCH_TYPES[obj_type]['queryset']\n filter_cls = SEARCH_TYPES[obj_type]['filter']\n table = SEARCH_TYPES[obj_type]['table']\n url = SEARCH_TYPES[obj_type]['url']\n\n # Construct the results table for this object type\n filtered_queryset = filter_cls({'q': form.cleaned_data['q']}, queryset=queryset).qs\n table = table(filtered_queryset, orderable=False)\n table.paginate(per_page=SEARCH_MAX_RESULTS)\n\n if table.page:\n results.append({\n 'name': queryset.model._meta.verbose_name_plural,\n 'table': table,\n 'url': '{}?q={}'.format(reverse(url), form.cleaned_data['q'])\n })\n\n return render(request, 'search.html', {\n 'form': form,\n 'results': results,\n })\n\n\nclass APIRootView(APIView):\n _ignore_model_permissions = True\n exclude_from_schema = True\n\n def get_view_name(self):\n return \"API Root\"\n\n def get(self, request, format=None):\n\n return Response({\n 'circuits': reverse('circuits-api:api-root', request=request, format=format),\n 'dcim': reverse('dcim-api:api-root', request=request, format=format),\n 'extras': reverse('extras-api:api-root', request=request, format=format),\n 'ipam': reverse('ipam-api:api-root', request=request, format=format),\n 'secrets': reverse('secrets-api:api-root', request=request, format=format),\n 'tenancy': reverse('tenancy-api:api-root', request=request, format=format),\n })\n\n\ndef handle_500(request):\n \"\"\"\n Custom server error handler\n \"\"\"\n type_, error, traceback = sys.exc_info()\n return 
render(request, '500.html', {\n 'exception': str(type_),\n 'error': error,\n }, status=500)\n\n\ndef trigger_500(request):\n \"\"\"\n Hot-wired method of triggering a server error to test reporting\n \"\"\"\n raise Exception(\n \"Congratulations, you've triggered an exception! Go tell all your friends what an exceptional person you are.\"\n )\n", "path": "netbox/netbox/views.py"}]} | 3,170 | 261 |
gh_patches_debug_11510 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-550 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
googleapiclient/channel.py has no license header
There is no license header at all in channel.py
--- END ISSUE ---
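
For reference, the header the issue is asking for is the boilerplate Apache License 2.0 notice that Google's open-source Python clients typically carry at the top of each module; one common form of it looks like the following (the year and copyright holder are illustrative):

```python
# Copyright 2014 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```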
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `googleapiclient/channel.py`
Content:
```
1 """Channel notifications support.
2
3 Classes and functions to support channel subscriptions and notifications
4 on those channels.
5
6 Notes:
7 - This code is based on experimental APIs and is subject to change.
8 - Notification does not do deduplication of notification ids, that's up to
9 the receiver.
10 - Storing the Channel between calls is up to the caller.
11
12
13 Example setting up a channel:
14
15 # Create a new channel that gets notifications via webhook.
16 channel = new_webhook_channel("https://example.com/my_web_hook")
17
18 # Store the channel, keyed by 'channel.id'. Store it before calling the
19 # watch method because notifications may start arriving before the watch
20 # method returns.
21 ...
22
23 resp = service.objects().watchAll(
24 bucket="some_bucket_id", body=channel.body()).execute()
25 channel.update(resp)
26
27 # Store the channel, keyed by 'channel.id'. Store it after being updated
28 # since the resource_id value will now be correct, and that's needed to
29 # stop a subscription.
30 ...
31
32
33 An example Webhook implementation using webapp2. Note that webapp2 puts
34 headers in a case insensitive dictionary, as headers aren't guaranteed to
35 always be upper case.
36
37 id = self.request.headers[X_GOOG_CHANNEL_ID]
38
39 # Retrieve the channel by id.
40 channel = ...
41
42 # Parse notification from the headers, including validating the id.
43 n = notification_from_headers(channel, self.request.headers)
44
45 # Do app specific stuff with the notification here.
46 if n.resource_state == 'sync':
47 # Code to handle sync state.
48 elif n.resource_state == 'exists':
49 # Code to handle the exists state.
50 elif n.resource_state == 'not_exists':
51 # Code to handle the not exists state.
52
53
54 Example of unsubscribing.
55
56 service.channels().stop(channel.body())
57 """
58 from __future__ import absolute_import
59
60 import datetime
61 import uuid
62
63 from googleapiclient import errors
64 from googleapiclient import _helpers as util
65 import six
66
67
68 # The unix time epoch starts at midnight 1970.
69 EPOCH = datetime.datetime.utcfromtimestamp(0)
70
71 # Map the names of the parameters in the JSON channel description to
72 # the parameter names we use in the Channel class.
73 CHANNEL_PARAMS = {
74 'address': 'address',
75 'id': 'id',
76 'expiration': 'expiration',
77 'params': 'params',
78 'resourceId': 'resource_id',
79 'resourceUri': 'resource_uri',
80 'type': 'type',
81 'token': 'token',
82 }
83
84 X_GOOG_CHANNEL_ID = 'X-GOOG-CHANNEL-ID'
85 X_GOOG_MESSAGE_NUMBER = 'X-GOOG-MESSAGE-NUMBER'
86 X_GOOG_RESOURCE_STATE = 'X-GOOG-RESOURCE-STATE'
87 X_GOOG_RESOURCE_URI = 'X-GOOG-RESOURCE-URI'
88 X_GOOG_RESOURCE_ID = 'X-GOOG-RESOURCE-ID'
89
90
91 def _upper_header_keys(headers):
92 new_headers = {}
93 for k, v in six.iteritems(headers):
94 new_headers[k.upper()] = v
95 return new_headers
96
97
98 class Notification(object):
99 """A Notification from a Channel.
100
101 Notifications are not usually constructed directly, but are returned
102 from functions like notification_from_headers().
103
104 Attributes:
105 message_number: int, The unique id number of this notification.
106 state: str, The state of the resource being monitored.
107 uri: str, The address of the resource being monitored.
108 resource_id: str, The unique identifier of the version of the resource at
109 this event.
110 """
111 @util.positional(5)
112 def __init__(self, message_number, state, resource_uri, resource_id):
113 """Notification constructor.
114
115 Args:
116 message_number: int, The unique id number of this notification.
117 state: str, The state of the resource being monitored. Can be one
118 of "exists", "not_exists", or "sync".
119 resource_uri: str, The address of the resource being monitored.
120 resource_id: str, The identifier of the watched resource.
121 """
122 self.message_number = message_number
123 self.state = state
124 self.resource_uri = resource_uri
125 self.resource_id = resource_id
126
127
128 class Channel(object):
129 """A Channel for notifications.
130
131 Usually not constructed directly, instead it is returned from helper
132 functions like new_webhook_channel().
133
134 Attributes:
135 type: str, The type of delivery mechanism used by this channel. For
136 example, 'web_hook'.
137 id: str, A UUID for the channel.
138 token: str, An arbitrary string associated with the channel that
139 is delivered to the target address with each event delivered
140 over this channel.
141 address: str, The address of the receiving entity where events are
142 delivered. Specific to the channel type.
143 expiration: int, The time, in milliseconds from the epoch, when this
144 channel will expire.
145 params: dict, A dictionary of string to string, with additional parameters
146 controlling delivery channel behavior.
147 resource_id: str, An opaque id that identifies the resource that is
148 being watched. Stable across different API versions.
149 resource_uri: str, The canonicalized ID of the watched resource.
150 """
151
152 @util.positional(5)
153 def __init__(self, type, id, token, address, expiration=None,
154 params=None, resource_id="", resource_uri=""):
155 """Create a new Channel.
156
157 In user code, this Channel constructor will not typically be called
158 manually since there are functions for creating channels for each specific
159 type with a more customized set of arguments to pass.
160
161 Args:
162 type: str, The type of delivery mechanism used by this channel. For
163 example, 'web_hook'.
164 id: str, A UUID for the channel.
165 token: str, An arbitrary string associated with the channel that
166 is delivered to the target address with each event delivered
167 over this channel.
168 address: str, The address of the receiving entity where events are
169 delivered. Specific to the channel type.
170 expiration: int, The time, in milliseconds from the epoch, when this
171 channel will expire.
172 params: dict, A dictionary of string to string, with additional parameters
173 controlling delivery channel behavior.
174 resource_id: str, An opaque id that identifies the resource that is
175 being watched. Stable across different API versions.
176 resource_uri: str, The canonicalized ID of the watched resource.
177 """
178 self.type = type
179 self.id = id
180 self.token = token
181 self.address = address
182 self.expiration = expiration
183 self.params = params
184 self.resource_id = resource_id
185 self.resource_uri = resource_uri
186
187 def body(self):
188 """Build a body from the Channel.
189
190 Constructs a dictionary that's appropriate for passing into watch()
191 methods as the value of body argument.
192
193 Returns:
194 A dictionary representation of the channel.
195 """
196 result = {
197 'id': self.id,
198 'token': self.token,
199 'type': self.type,
200 'address': self.address
201 }
202 if self.params:
203 result['params'] = self.params
204 if self.resource_id:
205 result['resourceId'] = self.resource_id
206 if self.resource_uri:
207 result['resourceUri'] = self.resource_uri
208 if self.expiration:
209 result['expiration'] = self.expiration
210
211 return result
212
213 def update(self, resp):
214 """Update a channel with information from the response of watch().
215
216 When a request is sent to watch() a resource, the response returned
217 from the watch() request is a dictionary with updated channel information,
218 such as the resource_id, which is needed when stopping a subscription.
219
220 Args:
221 resp: dict, The response from a watch() method.
222 """
223 for json_name, param_name in six.iteritems(CHANNEL_PARAMS):
224 value = resp.get(json_name)
225 if value is not None:
226 setattr(self, param_name, value)
227
228
229 def notification_from_headers(channel, headers):
230 """Parse a notification from the webhook request headers, validate
231 the notification, and return a Notification object.
232
233 Args:
234 channel: Channel, The channel that the notification is associated with.
235 headers: dict, A dictionary like object that contains the request headers
236 from the webhook HTTP request.
237
238 Returns:
239 A Notification object.
240
241 Raises:
242 errors.InvalidNotificationError if the notification is invalid.
243 ValueError if the X-GOOG-MESSAGE-NUMBER can't be converted to an int.
244 """
245 headers = _upper_header_keys(headers)
246 channel_id = headers[X_GOOG_CHANNEL_ID]
247 if channel.id != channel_id:
248 raise errors.InvalidNotificationError(
249 'Channel id mismatch: %s != %s' % (channel.id, channel_id))
250 else:
251 message_number = int(headers[X_GOOG_MESSAGE_NUMBER])
252 state = headers[X_GOOG_RESOURCE_STATE]
253 resource_uri = headers[X_GOOG_RESOURCE_URI]
254 resource_id = headers[X_GOOG_RESOURCE_ID]
255 return Notification(message_number, state, resource_uri, resource_id)
256
257
258 @util.positional(2)
259 def new_webhook_channel(url, token=None, expiration=None, params=None):
260 """Create a new webhook Channel.
261
262 Args:
263 url: str, URL to post notifications to.
264 token: str, An arbitrary string associated with the channel that
265 is delivered to the target address with each notification delivered
266 over this channel.
267 expiration: datetime.datetime, A time in the future when the channel
268 should expire. Can also be None if the subscription should use the
269 default expiration. Note that different services may have different
270 limits on how long a subscription lasts. Check the response from the
271 watch() method to see the value the service has set for an expiration
272 time.
273 params: dict, Extra parameters to pass on channel creation. Currently
274 not used for webhook channels.
275 """
276 expiration_ms = 0
277 if expiration:
278 delta = expiration - EPOCH
279 expiration_ms = delta.microseconds/1000 + (
280 delta.seconds + delta.days*24*3600)*1000
281 if expiration_ms < 0:
282 expiration_ms = 0
283
284 return Channel('web_hook', str(uuid.uuid4()),
285 token, url, expiration=expiration_ms,
286 params=params)
287
288
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/googleapiclient/channel.py b/googleapiclient/channel.py
--- a/googleapiclient/channel.py
+++ b/googleapiclient/channel.py
@@ -1,3 +1,17 @@
+# Copyright 2014 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
"""Channel notifications support.
Classes and functions to support channel subscriptions and notifications
| {"golden_diff": "diff --git a/googleapiclient/channel.py b/googleapiclient/channel.py\n--- a/googleapiclient/channel.py\n+++ b/googleapiclient/channel.py\n@@ -1,3 +1,17 @@\n+# Copyright 2014 Google Inc. All Rights Reserved.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n \"\"\"Channel notifications support.\n \n Classes and functions to support channel subscriptions and notifications\n", "issue": "googleapiclient/channel.py has no license header\nThere is no license header at all in channel.py\n\ngoogleapiclient/channel.py has no license header\nThere is no license header at all in channel.py\n\n", "before_files": [{"content": "\"\"\"Channel notifications support.\n\nClasses and functions to support channel subscriptions and notifications\non those channels.\n\nNotes:\n - This code is based on experimental APIs and is subject to change.\n - Notification does not do deduplication of notification ids, that's up to\n the receiver.\n - Storing the Channel between calls is up to the caller.\n\n\nExample setting up a channel:\n\n # Create a new channel that gets notifications via webhook.\n channel = new_webhook_channel(\"https://example.com/my_web_hook\")\n\n # Store the channel, keyed by 'channel.id'. Store it before calling the\n # watch method because notifications may start arriving before the watch\n # method returns.\n ...\n\n resp = service.objects().watchAll(\n bucket=\"some_bucket_id\", body=channel.body()).execute()\n channel.update(resp)\n\n # Store the channel, keyed by 'channel.id'. Store it after being updated\n # since the resource_id value will now be correct, and that's needed to\n # stop a subscription.\n ...\n\n\nAn example Webhook implementation using webapp2. 
Note that webapp2 puts\nheaders in a case insensitive dictionary, as headers aren't guaranteed to\nalways be upper case.\n\n id = self.request.headers[X_GOOG_CHANNEL_ID]\n\n # Retrieve the channel by id.\n channel = ...\n\n # Parse notification from the headers, including validating the id.\n n = notification_from_headers(channel, self.request.headers)\n\n # Do app specific stuff with the notification here.\n if n.resource_state == 'sync':\n # Code to handle sync state.\n elif n.resource_state == 'exists':\n # Code to handle the exists state.\n elif n.resource_state == 'not_exists':\n # Code to handle the not exists state.\n\n\nExample of unsubscribing.\n\n service.channels().stop(channel.body())\n\"\"\"\nfrom __future__ import absolute_import\n\nimport datetime\nimport uuid\n\nfrom googleapiclient import errors\nfrom googleapiclient import _helpers as util\nimport six\n\n\n# The unix time epoch starts at midnight 1970.\nEPOCH = datetime.datetime.utcfromtimestamp(0)\n\n# Map the names of the parameters in the JSON channel description to\n# the parameter names we use in the Channel class.\nCHANNEL_PARAMS = {\n 'address': 'address',\n 'id': 'id',\n 'expiration': 'expiration',\n 'params': 'params',\n 'resourceId': 'resource_id',\n 'resourceUri': 'resource_uri',\n 'type': 'type',\n 'token': 'token',\n }\n\nX_GOOG_CHANNEL_ID = 'X-GOOG-CHANNEL-ID'\nX_GOOG_MESSAGE_NUMBER = 'X-GOOG-MESSAGE-NUMBER'\nX_GOOG_RESOURCE_STATE = 'X-GOOG-RESOURCE-STATE'\nX_GOOG_RESOURCE_URI = 'X-GOOG-RESOURCE-URI'\nX_GOOG_RESOURCE_ID = 'X-GOOG-RESOURCE-ID'\n\n\ndef _upper_header_keys(headers):\n new_headers = {}\n for k, v in six.iteritems(headers):\n new_headers[k.upper()] = v\n return new_headers\n\n\nclass Notification(object):\n \"\"\"A Notification from a Channel.\n\n Notifications are not usually constructed directly, but are returned\n from functions like notification_from_headers().\n\n Attributes:\n message_number: int, The unique id number of this notification.\n state: str, The state of the resource being monitored.\n uri: str, The address of the resource being monitored.\n resource_id: str, The unique identifier of the version of the resource at\n this event.\n \"\"\"\n @util.positional(5)\n def __init__(self, message_number, state, resource_uri, resource_id):\n \"\"\"Notification constructor.\n\n Args:\n message_number: int, The unique id number of this notification.\n state: str, The state of the resource being monitored. Can be one\n of \"exists\", \"not_exists\", or \"sync\".\n resource_uri: str, The address of the resource being monitored.\n resource_id: str, The identifier of the watched resource.\n \"\"\"\n self.message_number = message_number\n self.state = state\n self.resource_uri = resource_uri\n self.resource_id = resource_id\n\n\nclass Channel(object):\n \"\"\"A Channel for notifications.\n\n Usually not constructed directly, instead it is returned from helper\n functions like new_webhook_channel().\n\n Attributes:\n type: str, The type of delivery mechanism used by this channel. For\n example, 'web_hook'.\n id: str, A UUID for the channel.\n token: str, An arbitrary string associated with the channel that\n is delivered to the target address with each event delivered\n over this channel.\n address: str, The address of the receiving entity where events are\n delivered. 
Specific to the channel type.\n expiration: int, The time, in milliseconds from the epoch, when this\n channel will expire.\n params: dict, A dictionary of string to string, with additional parameters\n controlling delivery channel behavior.\n resource_id: str, An opaque id that identifies the resource that is\n being watched. Stable across different API versions.\n resource_uri: str, The canonicalized ID of the watched resource.\n \"\"\"\n\n @util.positional(5)\n def __init__(self, type, id, token, address, expiration=None,\n params=None, resource_id=\"\", resource_uri=\"\"):\n \"\"\"Create a new Channel.\n\n In user code, this Channel constructor will not typically be called\n manually since there are functions for creating channels for each specific\n type with a more customized set of arguments to pass.\n\n Args:\n type: str, The type of delivery mechanism used by this channel. For\n example, 'web_hook'.\n id: str, A UUID for the channel.\n token: str, An arbitrary string associated with the channel that\n is delivered to the target address with each event delivered\n over this channel.\n address: str, The address of the receiving entity where events are\n delivered. Specific to the channel type.\n expiration: int, The time, in milliseconds from the epoch, when this\n channel will expire.\n params: dict, A dictionary of string to string, with additional parameters\n controlling delivery channel behavior.\n resource_id: str, An opaque id that identifies the resource that is\n being watched. Stable across different API versions.\n resource_uri: str, The canonicalized ID of the watched resource.\n \"\"\"\n self.type = type\n self.id = id\n self.token = token\n self.address = address\n self.expiration = expiration\n self.params = params\n self.resource_id = resource_id\n self.resource_uri = resource_uri\n\n def body(self):\n \"\"\"Build a body from the Channel.\n\n Constructs a dictionary that's appropriate for passing into watch()\n methods as the value of body argument.\n\n Returns:\n A dictionary representation of the channel.\n \"\"\"\n result = {\n 'id': self.id,\n 'token': self.token,\n 'type': self.type,\n 'address': self.address\n }\n if self.params:\n result['params'] = self.params\n if self.resource_id:\n result['resourceId'] = self.resource_id\n if self.resource_uri:\n result['resourceUri'] = self.resource_uri\n if self.expiration:\n result['expiration'] = self.expiration\n\n return result\n\n def update(self, resp):\n \"\"\"Update a channel with information from the response of watch().\n\n When a request is sent to watch() a resource, the response returned\n from the watch() request is a dictionary with updated channel information,\n such as the resource_id, which is needed when stopping a subscription.\n\n Args:\n resp: dict, The response from a watch() method.\n \"\"\"\n for json_name, param_name in six.iteritems(CHANNEL_PARAMS):\n value = resp.get(json_name)\n if value is not None:\n setattr(self, param_name, value)\n\n\ndef notification_from_headers(channel, headers):\n \"\"\"Parse a notification from the webhook request headers, validate\n the notification, and return a Notification object.\n\n Args:\n channel: Channel, The channel that the notification is associated with.\n headers: dict, A dictionary like object that contains the request headers\n from the webhook HTTP request.\n\n Returns:\n A Notification object.\n\n Raises:\n errors.InvalidNotificationError if the notification is invalid.\n ValueError if the X-GOOG-MESSAGE-NUMBER can't be converted to an int.\n 
\"\"\"\n headers = _upper_header_keys(headers)\n channel_id = headers[X_GOOG_CHANNEL_ID]\n if channel.id != channel_id:\n raise errors.InvalidNotificationError(\n 'Channel id mismatch: %s != %s' % (channel.id, channel_id))\n else:\n message_number = int(headers[X_GOOG_MESSAGE_NUMBER])\n state = headers[X_GOOG_RESOURCE_STATE]\n resource_uri = headers[X_GOOG_RESOURCE_URI]\n resource_id = headers[X_GOOG_RESOURCE_ID]\n return Notification(message_number, state, resource_uri, resource_id)\n\n\[email protected](2)\ndef new_webhook_channel(url, token=None, expiration=None, params=None):\n \"\"\"Create a new webhook Channel.\n\n Args:\n url: str, URL to post notifications to.\n token: str, An arbitrary string associated with the channel that\n is delivered to the target address with each notification delivered\n over this channel.\n expiration: datetime.datetime, A time in the future when the channel\n should expire. Can also be None if the subscription should use the\n default expiration. Note that different services may have different\n limits on how long a subscription lasts. Check the response from the\n watch() method to see the value the service has set for an expiration\n time.\n params: dict, Extra parameters to pass on channel creation. Currently\n not used for webhook channels.\n \"\"\"\n expiration_ms = 0\n if expiration:\n delta = expiration - EPOCH\n expiration_ms = delta.microseconds/1000 + (\n delta.seconds + delta.days*24*3600)*1000\n if expiration_ms < 0:\n expiration_ms = 0\n\n return Channel('web_hook', str(uuid.uuid4()),\n token, url, expiration=expiration_ms,\n params=params)\n\n", "path": "googleapiclient/channel.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Channel notifications support.\n\nClasses and functions to support channel subscriptions and notifications\non those channels.\n\nNotes:\n - This code is based on experimental APIs and is subject to change.\n - Notification does not do deduplication of notification ids, that's up to\n the receiver.\n - Storing the Channel between calls is up to the caller.\n\n\nExample setting up a channel:\n\n # Create a new channel that gets notifications via webhook.\n channel = new_webhook_channel(\"https://example.com/my_web_hook\")\n\n # Store the channel, keyed by 'channel.id'. Store it before calling the\n # watch method because notifications may start arriving before the watch\n # method returns.\n ...\n\n resp = service.objects().watchAll(\n bucket=\"some_bucket_id\", body=channel.body()).execute()\n channel.update(resp)\n\n # Store the channel, keyed by 'channel.id'. Store it after being updated\n # since the resource_id value will now be correct, and that's needed to\n # stop a subscription.\n ...\n\n\nAn example Webhook implementation using webapp2. 
Note that webapp2 puts\nheaders in a case insensitive dictionary, as headers aren't guaranteed to\nalways be upper case.\n\n id = self.request.headers[X_GOOG_CHANNEL_ID]\n\n # Retrieve the channel by id.\n channel = ...\n\n # Parse notification from the headers, including validating the id.\n n = notification_from_headers(channel, self.request.headers)\n\n # Do app specific stuff with the notification here.\n if n.resource_state == 'sync':\n # Code to handle sync state.\n elif n.resource_state == 'exists':\n # Code to handle the exists state.\n elif n.resource_state == 'not_exists':\n # Code to handle the not exists state.\n\n\nExample of unsubscribing.\n\n service.channels().stop(channel.body())\n\"\"\"\nfrom __future__ import absolute_import\n\nimport datetime\nimport uuid\n\nfrom googleapiclient import errors\nfrom googleapiclient import _helpers as util\nimport six\n\n\n# The unix time epoch starts at midnight 1970.\nEPOCH = datetime.datetime.utcfromtimestamp(0)\n\n# Map the names of the parameters in the JSON channel description to\n# the parameter names we use in the Channel class.\nCHANNEL_PARAMS = {\n 'address': 'address',\n 'id': 'id',\n 'expiration': 'expiration',\n 'params': 'params',\n 'resourceId': 'resource_id',\n 'resourceUri': 'resource_uri',\n 'type': 'type',\n 'token': 'token',\n }\n\nX_GOOG_CHANNEL_ID = 'X-GOOG-CHANNEL-ID'\nX_GOOG_MESSAGE_NUMBER = 'X-GOOG-MESSAGE-NUMBER'\nX_GOOG_RESOURCE_STATE = 'X-GOOG-RESOURCE-STATE'\nX_GOOG_RESOURCE_URI = 'X-GOOG-RESOURCE-URI'\nX_GOOG_RESOURCE_ID = 'X-GOOG-RESOURCE-ID'\n\n\ndef _upper_header_keys(headers):\n new_headers = {}\n for k, v in six.iteritems(headers):\n new_headers[k.upper()] = v\n return new_headers\n\n\nclass Notification(object):\n \"\"\"A Notification from a Channel.\n\n Notifications are not usually constructed directly, but are returned\n from functions like notification_from_headers().\n\n Attributes:\n message_number: int, The unique id number of this notification.\n state: str, The state of the resource being monitored.\n uri: str, The address of the resource being monitored.\n resource_id: str, The unique identifier of the version of the resource at\n this event.\n \"\"\"\n @util.positional(5)\n def __init__(self, message_number, state, resource_uri, resource_id):\n \"\"\"Notification constructor.\n\n Args:\n message_number: int, The unique id number of this notification.\n state: str, The state of the resource being monitored. Can be one\n of \"exists\", \"not_exists\", or \"sync\".\n resource_uri: str, The address of the resource being monitored.\n resource_id: str, The identifier of the watched resource.\n \"\"\"\n self.message_number = message_number\n self.state = state\n self.resource_uri = resource_uri\n self.resource_id = resource_id\n\n\nclass Channel(object):\n \"\"\"A Channel for notifications.\n\n Usually not constructed directly, instead it is returned from helper\n functions like new_webhook_channel().\n\n Attributes:\n type: str, The type of delivery mechanism used by this channel. For\n example, 'web_hook'.\n id: str, A UUID for the channel.\n token: str, An arbitrary string associated with the channel that\n is delivered to the target address with each event delivered\n over this channel.\n address: str, The address of the receiving entity where events are\n delivered. 
Specific to the channel type.\n expiration: int, The time, in milliseconds from the epoch, when this\n channel will expire.\n params: dict, A dictionary of string to string, with additional parameters\n controlling delivery channel behavior.\n resource_id: str, An opaque id that identifies the resource that is\n being watched. Stable across different API versions.\n resource_uri: str, The canonicalized ID of the watched resource.\n \"\"\"\n\n @util.positional(5)\n def __init__(self, type, id, token, address, expiration=None,\n params=None, resource_id=\"\", resource_uri=\"\"):\n \"\"\"Create a new Channel.\n\n In user code, this Channel constructor will not typically be called\n manually since there are functions for creating channels for each specific\n type with a more customized set of arguments to pass.\n\n Args:\n type: str, The type of delivery mechanism used by this channel. For\n example, 'web_hook'.\n id: str, A UUID for the channel.\n token: str, An arbitrary string associated with the channel that\n is delivered to the target address with each event delivered\n over this channel.\n address: str, The address of the receiving entity where events are\n delivered. Specific to the channel type.\n expiration: int, The time, in milliseconds from the epoch, when this\n channel will expire.\n params: dict, A dictionary of string to string, with additional parameters\n controlling delivery channel behavior.\n resource_id: str, An opaque id that identifies the resource that is\n being watched. Stable across different API versions.\n resource_uri: str, The canonicalized ID of the watched resource.\n \"\"\"\n self.type = type\n self.id = id\n self.token = token\n self.address = address\n self.expiration = expiration\n self.params = params\n self.resource_id = resource_id\n self.resource_uri = resource_uri\n\n def body(self):\n \"\"\"Build a body from the Channel.\n\n Constructs a dictionary that's appropriate for passing into watch()\n methods as the value of body argument.\n\n Returns:\n A dictionary representation of the channel.\n \"\"\"\n result = {\n 'id': self.id,\n 'token': self.token,\n 'type': self.type,\n 'address': self.address\n }\n if self.params:\n result['params'] = self.params\n if self.resource_id:\n result['resourceId'] = self.resource_id\n if self.resource_uri:\n result['resourceUri'] = self.resource_uri\n if self.expiration:\n result['expiration'] = self.expiration\n\n return result\n\n def update(self, resp):\n \"\"\"Update a channel with information from the response of watch().\n\n When a request is sent to watch() a resource, the response returned\n from the watch() request is a dictionary with updated channel information,\n such as the resource_id, which is needed when stopping a subscription.\n\n Args:\n resp: dict, The response from a watch() method.\n \"\"\"\n for json_name, param_name in six.iteritems(CHANNEL_PARAMS):\n value = resp.get(json_name)\n if value is not None:\n setattr(self, param_name, value)\n\n\ndef notification_from_headers(channel, headers):\n \"\"\"Parse a notification from the webhook request headers, validate\n the notification, and return a Notification object.\n\n Args:\n channel: Channel, The channel that the notification is associated with.\n headers: dict, A dictionary like object that contains the request headers\n from the webhook HTTP request.\n\n Returns:\n A Notification object.\n\n Raises:\n errors.InvalidNotificationError if the notification is invalid.\n ValueError if the X-GOOG-MESSAGE-NUMBER can't be converted to an int.\n 
\"\"\"\n headers = _upper_header_keys(headers)\n channel_id = headers[X_GOOG_CHANNEL_ID]\n if channel.id != channel_id:\n raise errors.InvalidNotificationError(\n 'Channel id mismatch: %s != %s' % (channel.id, channel_id))\n else:\n message_number = int(headers[X_GOOG_MESSAGE_NUMBER])\n state = headers[X_GOOG_RESOURCE_STATE]\n resource_uri = headers[X_GOOG_RESOURCE_URI]\n resource_id = headers[X_GOOG_RESOURCE_ID]\n return Notification(message_number, state, resource_uri, resource_id)\n\n\[email protected](2)\ndef new_webhook_channel(url, token=None, expiration=None, params=None):\n \"\"\"Create a new webhook Channel.\n\n Args:\n url: str, URL to post notifications to.\n token: str, An arbitrary string associated with the channel that\n is delivered to the target address with each notification delivered\n over this channel.\n expiration: datetime.datetime, A time in the future when the channel\n should expire. Can also be None if the subscription should use the\n default expiration. Note that different services may have different\n limits on how long a subscription lasts. Check the response from the\n watch() method to see the value the service has set for an expiration\n time.\n params: dict, Extra parameters to pass on channel creation. Currently\n not used for webhook channels.\n \"\"\"\n expiration_ms = 0\n if expiration:\n delta = expiration - EPOCH\n expiration_ms = delta.microseconds/1000 + (\n delta.seconds + delta.days*24*3600)*1000\n if expiration_ms < 0:\n expiration_ms = 0\n\n return Channel('web_hook', str(uuid.uuid4()),\n token, url, expiration=expiration_ms,\n params=params)\n\n", "path": "googleapiclient/channel.py"}]} | 3,302 | 201 |
gh_patches_debug_9976 | rasdani/github-patches | git_diff | mkdocs__mkdocs-2421 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
file-system race condition while testing with mkdocs serve
When mkdocs serve shuts down, it cleans up after itself by removing the temporary directory it created earlier.
Whether the directory still needs to be removed is not checked, so the cleanup is prone to a file-system race condition (the directory is a shared resource).
If the directory has already been removed on the file system by the time of shutdown, mkdocs serve throws an exception and exits in failure.
This error can be safely defined out of existence by removing the temporary directory only if it still exists; I will suggest a PR for this report. Edit: created #2421
--- END ISSUE ---
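
In other words, the cleanup should tolerate the directory already being gone. A minimal sketch of the guarded cleanup described above (a hypothetical helper, not MkDocs' actual code, assuming `site_dir` holds the temporary build directory):

```python
import shutil
from os.path import isdir


def cleanup(site_dir):
    # Another process (or an earlier cleanup) may already have removed the
    # temporary directory, so only delete it if it is still there.
    if isdir(site_dir):
        shutil.rmtree(site_dir)
```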
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/commands/serve.py`
Content:
```
1 import logging
2 import shutil
3 import tempfile
4
5 from os.path import isfile, join
6 from mkdocs.commands.build import build
7 from mkdocs.config import load_config
8 from mkdocs.exceptions import Abort
9 from mkdocs.livereload import LiveReloadServer
10
11 log = logging.getLogger(__name__)
12
13
14 def serve(config_file=None, dev_addr=None, strict=None, theme=None,
15 theme_dir=None, livereload='livereload', watch_theme=False, **kwargs):
16 """
17 Start the MkDocs development server
18
19 By default it will serve the documentation on http://localhost:8000/ and
20 it will rebuild the documentation and refresh the page automatically
21 whenever a file is edited.
22 """
23
24 # Create a temporary build directory, and set some options to serve it
25 # PY2 returns a byte string by default. The Unicode prefix ensures a Unicode
26 # string is returned. And it makes MkDocs temp dirs easier to identify.
27 site_dir = tempfile.mkdtemp(prefix='mkdocs_')
28
29 def builder():
30 log.info("Building documentation...")
31 config = load_config(
32 config_file=config_file,
33 dev_addr=dev_addr,
34 strict=strict,
35 theme=theme,
36 theme_dir=theme_dir,
37 site_dir=site_dir,
38 **kwargs
39 )
40 # Override a few config settings after validation
41 config['site_url'] = 'http://{}/'.format(config['dev_addr'])
42
43 live_server = livereload in ['dirty', 'livereload']
44 dirty = livereload == 'dirty'
45 build(config, live_server=live_server, dirty=dirty)
46 return config
47
48 try:
49 # Perform the initial build
50 config = builder()
51
52 host, port = config['dev_addr']
53
54 server = LiveReloadServer(builder=builder, host=host, port=port, root=site_dir)
55
56 def error_handler(code):
57 if code in (404, 500):
58 error_page = join(site_dir, f'{code}.html')
59 if isfile(error_page):
60 with open(error_page, 'rb') as f:
61 return f.read()
62
63 server.error_handler = error_handler
64
65 if livereload in ['livereload', 'dirty']:
66 # Watch the documentation files, the config file and the theme files.
67 server.watch(config['docs_dir'])
68 server.watch(config['config_file_path'])
69
70 if watch_theme:
71 for d in config['theme'].dirs:
72 server.watch(d)
73
74 # Run `serve` plugin events.
75 server = config['plugins'].run_event('serve', server, config=config, builder=builder)
76
77 try:
78 server.serve()
79 except KeyboardInterrupt:
80 log.info("Shutting down...")
81 finally:
82 server.shutdown()
83 except OSError as e: # pragma: no cover
84 # Avoid ugly, unhelpful traceback
85 raise Abort(str(e))
86 finally:
87 shutil.rmtree(site_dir)
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mkdocs/commands/serve.py b/mkdocs/commands/serve.py
--- a/mkdocs/commands/serve.py
+++ b/mkdocs/commands/serve.py
@@ -2,7 +2,7 @@
import shutil
import tempfile
-from os.path import isfile, join
+from os.path import isdir, isfile, join
from mkdocs.commands.build import build
from mkdocs.config import load_config
from mkdocs.exceptions import Abort
@@ -84,4 +84,5 @@
# Avoid ugly, unhelpful traceback
raise Abort(str(e))
finally:
- shutil.rmtree(site_dir)
+ if isdir(site_dir):
+ shutil.rmtree(site_dir)
| {"golden_diff": "diff --git a/mkdocs/commands/serve.py b/mkdocs/commands/serve.py\n--- a/mkdocs/commands/serve.py\n+++ b/mkdocs/commands/serve.py\n@@ -2,7 +2,7 @@\n import shutil\n import tempfile\n \n-from os.path import isfile, join\n+from os.path import isdir, isfile, join\n from mkdocs.commands.build import build\n from mkdocs.config import load_config\n from mkdocs.exceptions import Abort\n@@ -84,4 +84,5 @@\n # Avoid ugly, unhelpful traceback\n raise Abort(str(e))\n finally:\n- shutil.rmtree(site_dir)\n+ if isdir(site_dir):\n+ shutil.rmtree(site_dir)\n", "issue": "file-system race condition while testing with mkdocs serve\nwhen mkdocs serve is shutting down, it cleans up after itself removing the temporary directory created earlier.\r\n\r\nthe condition whether the directory needs to be removed is unchecked and prone to a file system race condition (shared resource).\r\n\r\ngiven the directory is already removed on the file-system while shutting down, mkdocs serve throws an exception and exits in failure.\r\n\r\nthis error can be safely defined out of existence by removing the temporary directory only if it exists, will suggest a PR for this report. /E: created #2421\n", "before_files": [{"content": "import logging\nimport shutil\nimport tempfile\n\nfrom os.path import isfile, join\nfrom mkdocs.commands.build import build\nfrom mkdocs.config import load_config\nfrom mkdocs.exceptions import Abort\nfrom mkdocs.livereload import LiveReloadServer\n\nlog = logging.getLogger(__name__)\n\n\ndef serve(config_file=None, dev_addr=None, strict=None, theme=None,\n theme_dir=None, livereload='livereload', watch_theme=False, **kwargs):\n \"\"\"\n Start the MkDocs development server\n\n By default it will serve the documentation on http://localhost:8000/ and\n it will rebuild the documentation and refresh the page automatically\n whenever a file is edited.\n \"\"\"\n\n # Create a temporary build directory, and set some options to serve it\n # PY2 returns a byte string by default. The Unicode prefix ensures a Unicode\n # string is returned. 
And it makes MkDocs temp dirs easier to identify.\n site_dir = tempfile.mkdtemp(prefix='mkdocs_')\n\n def builder():\n log.info(\"Building documentation...\")\n config = load_config(\n config_file=config_file,\n dev_addr=dev_addr,\n strict=strict,\n theme=theme,\n theme_dir=theme_dir,\n site_dir=site_dir,\n **kwargs\n )\n # Override a few config settings after validation\n config['site_url'] = 'http://{}/'.format(config['dev_addr'])\n\n live_server = livereload in ['dirty', 'livereload']\n dirty = livereload == 'dirty'\n build(config, live_server=live_server, dirty=dirty)\n return config\n\n try:\n # Perform the initial build\n config = builder()\n\n host, port = config['dev_addr']\n\n server = LiveReloadServer(builder=builder, host=host, port=port, root=site_dir)\n\n def error_handler(code):\n if code in (404, 500):\n error_page = join(site_dir, f'{code}.html')\n if isfile(error_page):\n with open(error_page, 'rb') as f:\n return f.read()\n\n server.error_handler = error_handler\n\n if livereload in ['livereload', 'dirty']:\n # Watch the documentation files, the config file and the theme files.\n server.watch(config['docs_dir'])\n server.watch(config['config_file_path'])\n\n if watch_theme:\n for d in config['theme'].dirs:\n server.watch(d)\n\n # Run `serve` plugin events.\n server = config['plugins'].run_event('serve', server, config=config, builder=builder)\n\n try:\n server.serve()\n except KeyboardInterrupt:\n log.info(\"Shutting down...\")\n finally:\n server.shutdown()\n except OSError as e: # pragma: no cover\n # Avoid ugly, unhelpful traceback\n raise Abort(str(e))\n finally:\n shutil.rmtree(site_dir)\n", "path": "mkdocs/commands/serve.py"}], "after_files": [{"content": "import logging\nimport shutil\nimport tempfile\n\nfrom os.path import isdir, isfile, join\nfrom mkdocs.commands.build import build\nfrom mkdocs.config import load_config\nfrom mkdocs.exceptions import Abort\nfrom mkdocs.livereload import LiveReloadServer\n\nlog = logging.getLogger(__name__)\n\n\ndef serve(config_file=None, dev_addr=None, strict=None, theme=None,\n theme_dir=None, livereload='livereload', watch_theme=False, **kwargs):\n \"\"\"\n Start the MkDocs development server\n\n By default it will serve the documentation on http://localhost:8000/ and\n it will rebuild the documentation and refresh the page automatically\n whenever a file is edited.\n \"\"\"\n\n # Create a temporary build directory, and set some options to serve it\n # PY2 returns a byte string by default. The Unicode prefix ensures a Unicode\n # string is returned. 
And it makes MkDocs temp dirs easier to identify.\n site_dir = tempfile.mkdtemp(prefix='mkdocs_')\n\n def builder():\n log.info(\"Building documentation...\")\n config = load_config(\n config_file=config_file,\n dev_addr=dev_addr,\n strict=strict,\n theme=theme,\n theme_dir=theme_dir,\n site_dir=site_dir,\n **kwargs\n )\n # Override a few config settings after validation\n config['site_url'] = 'http://{}/'.format(config['dev_addr'])\n\n live_server = livereload in ['dirty', 'livereload']\n dirty = livereload == 'dirty'\n build(config, live_server=live_server, dirty=dirty)\n return config\n\n try:\n # Perform the initial build\n config = builder()\n\n host, port = config['dev_addr']\n\n server = LiveReloadServer(builder=builder, host=host, port=port, root=site_dir)\n\n def error_handler(code):\n if code in (404, 500):\n error_page = join(site_dir, f'{code}.html')\n if isfile(error_page):\n with open(error_page, 'rb') as f:\n return f.read()\n\n server.error_handler = error_handler\n\n if livereload in ['livereload', 'dirty']:\n # Watch the documentation files, the config file and the theme files.\n server.watch(config['docs_dir'])\n server.watch(config['config_file_path'])\n\n if watch_theme:\n for d in config['theme'].dirs:\n server.watch(d)\n\n # Run `serve` plugin events.\n server = config['plugins'].run_event('serve', server, config=config, builder=builder)\n\n try:\n server.serve()\n except KeyboardInterrupt:\n log.info(\"Shutting down...\")\n finally:\n server.shutdown()\n except OSError as e: # pragma: no cover\n # Avoid ugly, unhelpful traceback\n raise Abort(str(e))\n finally:\n if isdir(site_dir):\n shutil.rmtree(site_dir)\n", "path": "mkdocs/commands/serve.py"}]} | 1,176 | 158 |
gh_patches_debug_27698 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-3299 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/Chat/inference/utils.py`
Content:
```
1 import re
2 from threading import Lock
3 from typing import Any, Callable, Generator, List, Optional
4 import json
5
6 import torch
7 import torch.distributed as dist
8 import torch.nn as nn
9 from pydantic import BaseModel, Field
10
11 try:
12 from transformers.generation_logits_process import (
13 LogitsProcessorList,
14 TemperatureLogitsWarper,
15 TopKLogitsWarper,
16 TopPLogitsWarper,
17 )
18 except ImportError:
19 from transformers.generation import LogitsProcessorList, TemperatureLogitsWarper, TopKLogitsWarper, TopPLogitsWarper
20
21
22 def prepare_logits_processor(top_k: Optional[int] = None,
23 top_p: Optional[float] = None,
24 temperature: Optional[float] = None) -> LogitsProcessorList:
25 processor_list = LogitsProcessorList()
26 if temperature is not None and temperature != 1.0:
27 processor_list.append(TemperatureLogitsWarper(temperature))
28 if top_k is not None and top_k != 0:
29 processor_list.append(TopKLogitsWarper(top_k))
30 if top_p is not None and top_p < 1.0:
31 processor_list.append(TopPLogitsWarper(top_p))
32 return processor_list
33
34
35 def _is_sequence_finished(unfinished_sequences: torch.Tensor) -> bool:
36 if dist.is_initialized() and dist.get_world_size() > 1:
37 # consider DP
38 unfinished_sequences = unfinished_sequences.clone()
39 dist.all_reduce(unfinished_sequences)
40 return unfinished_sequences.max() == 0
41
42
43 def sample_streamingly(model: nn.Module,
44 input_ids: torch.Tensor,
45 max_generate_tokens: int,
46 early_stopping: bool = False,
47 eos_token_id: Optional[int] = None,
48 pad_token_id: Optional[int] = None,
49 top_k: Optional[int] = None,
50 top_p: Optional[float] = None,
51 temperature: Optional[float] = None,
52 prepare_inputs_fn: Optional[Callable[[torch.Tensor, Any], dict]] = None,
53 update_model_kwargs_fn: Optional[Callable[[dict, Any], dict]] = None,
54 **model_kwargs) -> Generator:
55
56 logits_processor = prepare_logits_processor(top_k, top_p, temperature)
57 unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
58
59 for _ in range(max_generate_tokens):
60 model_inputs = prepare_inputs_fn(input_ids, **model_kwargs) if prepare_inputs_fn is not None else {
61 'input_ids': input_ids
62 }
63 outputs = model(**model_inputs)
64
65 next_token_logits = outputs['logits'][:, -1, :]
66 # pre-process distribution
67 next_token_logits = logits_processor(input_ids, next_token_logits)
68 # sample
69 probs = torch.softmax(next_token_logits, dim=-1, dtype=torch.float)
70 next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
71
72 # finished sentences should have their next token be a padding token
73 if eos_token_id is not None:
74 if pad_token_id is None:
75 raise ValueError("If `eos_token_id` is defined, make sure that `pad_token_id` is defined.")
76 next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - unfinished_sequences)
77
78 yield next_tokens
79
80 # update generated ids, model inputs for next step
81 input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
82 if update_model_kwargs_fn is not None:
83 model_kwargs = update_model_kwargs_fn(outputs, **model_kwargs)
84
85 # if eos_token was found in one sentence, set sentence to finished
86 if eos_token_id is not None:
87 unfinished_sequences = unfinished_sequences.mul((next_tokens != eos_token_id).long())
88
89 # stop when each sentence is finished if early_stopping=True
90 if early_stopping and _is_sequence_finished(unfinished_sequences):
91 break
92
93
94 def update_model_kwargs_fn(outputs: dict, **model_kwargs) -> dict:
95 if "past_key_values" in outputs:
96 model_kwargs["past"] = outputs["past_key_values"]
97 else:
98 model_kwargs["past"] = None
99
100 # update token_type_ids with last value
101 if "token_type_ids" in model_kwargs:
102 token_type_ids = model_kwargs["token_type_ids"]
103 model_kwargs["token_type_ids"] = torch.cat([token_type_ids, token_type_ids[:, -1].unsqueeze(-1)], dim=-1)
104
105 # update attention mask
106 if "attention_mask" in model_kwargs:
107 attention_mask = model_kwargs["attention_mask"]
108 model_kwargs["attention_mask"] = torch.cat(
109 [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1)
110
111 return model_kwargs
112
113
114 class Dialogue(BaseModel):
115 instruction: str = Field(min_length=1, example='Count up from 1 to 500.')
116 response: str = Field(example='')
117
118
119 def _format_dialogue(instruction: str, response: str = ''):
120 return f'\n\n### Instruction:\n{instruction}\n\n### Response:\n{response}'
121
122
123 STOP_PAT = re.compile(r'(###|instruction:).*', flags=(re.I | re.S))
124
125
126 class ChatPromptProcessor:
127 SAFE_RESPONSE = 'The input/response contains inappropriate content, please rephrase your prompt.'
128
129 def __init__(self, tokenizer, context: str, max_len: int = 2048, censored_words: List[str]=[]):
130 self.tokenizer = tokenizer
131 self.context = context
132 self.max_len = max_len
133 if len(censored_words) > 0:
134 self.censored_pat = re.compile(f'({"|".join(map(re.escape, censored_words))})', flags=re.I)
135 else:
136 self.censored_pat = None
137 # These will be initialized after the first call of preprocess_prompt()
138 self.context_len: Optional[int] = None
139 self.dialogue_placeholder_len: Optional[int] = None
140
141 def preprocess_prompt(self, history: List[Dialogue], max_new_tokens: int) -> str:
142 if self.context_len is None:
143 self.context_len = len(self.tokenizer(self.context)['input_ids'])
144 if self.dialogue_placeholder_len is None:
145 self.dialogue_placeholder_len = len(
146 self.tokenizer(_format_dialogue(''), add_special_tokens=False)['input_ids'])
147 prompt = self.context
148 # the last dialogue must be in the prompt
149 last_dialogue = history.pop()
150 # the response of the last dialogue is empty
151 assert last_dialogue.response == ''
152 if len(self.tokenizer(_format_dialogue(last_dialogue.instruction), add_special_tokens=False)
153 ['input_ids']) + max_new_tokens + self.context_len >= self.max_len:
154 # to avoid truncate placeholder, apply truncate to the original instruction
155 instruction_truncated = self.tokenizer(last_dialogue.instruction,
156 add_special_tokens=False,
157 truncation=True,
158 max_length=(self.max_len - max_new_tokens - self.context_len -
159 self.dialogue_placeholder_len))['input_ids']
160 instruction_truncated = self.tokenizer.decode(instruction_truncated).lstrip()
161 prompt += _format_dialogue(instruction_truncated)
162 return prompt
163
164 res_len = self.max_len - max_new_tokens - len(self.tokenizer(prompt)['input_ids'])
165
166 rows = []
167 for dialogue in history[::-1]:
168 text = _format_dialogue(dialogue.instruction, dialogue.response)
169 cur_len = len(self.tokenizer(text, add_special_tokens=False)['input_ids'])
170 if res_len - cur_len < 0:
171 break
172 res_len -= cur_len
173 rows.insert(0, text)
174 prompt += ''.join(rows) + _format_dialogue(last_dialogue.instruction)
175 return prompt
176
177 def postprocess_output(self, output: str) -> str:
178 output = STOP_PAT.sub('', output)
179 return output.strip()
180
181 def has_censored_words(self, text: str) -> bool:
182 if self.censored_pat is None:
183 return False
184 return self.censored_pat.search(text) is not None
185
186 class LockedIterator:
187
188 def __init__(self, it, lock: Lock) -> None:
189 self.lock = lock
190 self.it = iter(it)
191
192 def __iter__(self):
193 return self
194
195 def __next__(self):
196 with self.lock:
197 return next(self.it)
198
199 def load_json(path: str):
200 with open(path) as f:
201 return json.load(f)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/applications/Chat/inference/utils.py b/applications/Chat/inference/utils.py
--- a/applications/Chat/inference/utils.py
+++ b/applications/Chat/inference/utils.py
@@ -2,6 +2,7 @@
from threading import Lock
from typing import Any, Callable, Generator, List, Optional
import json
+import jieba
import torch
import torch.distributed as dist
@@ -130,10 +131,7 @@
self.tokenizer = tokenizer
self.context = context
self.max_len = max_len
- if len(censored_words) > 0:
- self.censored_pat = re.compile(f'({"|".join(map(re.escape, censored_words))})', flags=re.I)
- else:
- self.censored_pat = None
+ self.censored_words = set([word.lower() for word in censored_words])
# These will be initialized after the first call of preprocess_prompt()
self.context_len: Optional[int] = None
self.dialogue_placeholder_len: Optional[int] = None
@@ -179,9 +177,10 @@
return output.strip()
def has_censored_words(self, text: str) -> bool:
- if self.censored_pat is None:
+ if len(self.censored_words) == 0:
return False
- return self.censored_pat.search(text) is not None
+ intersection = set(jieba.cut(text.lower())) & self.censored_words
+ return len(intersection) > 0
class LockedIterator:
| {"golden_diff": "diff --git a/applications/Chat/inference/utils.py b/applications/Chat/inference/utils.py\n--- a/applications/Chat/inference/utils.py\n+++ b/applications/Chat/inference/utils.py\n@@ -2,6 +2,7 @@\n from threading import Lock\n from typing import Any, Callable, Generator, List, Optional\n import json\n+import jieba\n \n import torch\n import torch.distributed as dist\n@@ -130,10 +131,7 @@\n self.tokenizer = tokenizer\n self.context = context\n self.max_len = max_len\n- if len(censored_words) > 0:\n- self.censored_pat = re.compile(f'({\"|\".join(map(re.escape, censored_words))})', flags=re.I)\n- else:\n- self.censored_pat = None\n+ self.censored_words = set([word.lower() for word in censored_words])\n # These will be initialized after the first call of preprocess_prompt()\n self.context_len: Optional[int] = None\n self.dialogue_placeholder_len: Optional[int] = None\n@@ -179,9 +177,10 @@\n return output.strip()\n \n def has_censored_words(self, text: str) -> bool:\n- if self.censored_pat is None:\n+ if len(self.censored_words) == 0:\n return False\n- return self.censored_pat.search(text) is not None\n+ intersection = set(jieba.cut(text.lower())) & self.censored_words\n+ return len(intersection) > 0\n \n class LockedIterator:\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import re\nfrom threading import Lock\nfrom typing import Any, Callable, Generator, List, Optional\nimport json\n\nimport torch\nimport torch.distributed as dist\nimport torch.nn as nn\nfrom pydantic import BaseModel, Field\n\ntry:\n from transformers.generation_logits_process import (\n LogitsProcessorList,\n TemperatureLogitsWarper,\n TopKLogitsWarper,\n TopPLogitsWarper,\n )\nexcept ImportError:\n from transformers.generation import LogitsProcessorList, TemperatureLogitsWarper, TopKLogitsWarper, TopPLogitsWarper\n\n\ndef prepare_logits_processor(top_k: Optional[int] = None,\n top_p: Optional[float] = None,\n temperature: Optional[float] = None) -> LogitsProcessorList:\n processor_list = LogitsProcessorList()\n if temperature is not None and temperature != 1.0:\n processor_list.append(TemperatureLogitsWarper(temperature))\n if top_k is not None and top_k != 0:\n processor_list.append(TopKLogitsWarper(top_k))\n if top_p is not None and top_p < 1.0:\n processor_list.append(TopPLogitsWarper(top_p))\n return processor_list\n\n\ndef _is_sequence_finished(unfinished_sequences: torch.Tensor) -> bool:\n if dist.is_initialized() and dist.get_world_size() > 1:\n # consider DP\n unfinished_sequences = unfinished_sequences.clone()\n dist.all_reduce(unfinished_sequences)\n return unfinished_sequences.max() == 0\n\n\ndef sample_streamingly(model: nn.Module,\n input_ids: torch.Tensor,\n max_generate_tokens: int,\n early_stopping: bool = False,\n eos_token_id: Optional[int] = None,\n pad_token_id: Optional[int] = None,\n top_k: Optional[int] = None,\n top_p: Optional[float] = None,\n temperature: Optional[float] = None,\n prepare_inputs_fn: Optional[Callable[[torch.Tensor, Any], dict]] = None,\n update_model_kwargs_fn: Optional[Callable[[dict, Any], dict]] = None,\n **model_kwargs) -> Generator:\n\n logits_processor = prepare_logits_processor(top_k, top_p, temperature)\n unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)\n\n for _ in range(max_generate_tokens):\n model_inputs = prepare_inputs_fn(input_ids, **model_kwargs) if prepare_inputs_fn is not None else {\n 'input_ids': input_ids\n }\n outputs = 
model(**model_inputs)\n\n next_token_logits = outputs['logits'][:, -1, :]\n # pre-process distribution\n next_token_logits = logits_processor(input_ids, next_token_logits)\n # sample\n probs = torch.softmax(next_token_logits, dim=-1, dtype=torch.float)\n next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)\n\n # finished sentences should have their next token be a padding token\n if eos_token_id is not None:\n if pad_token_id is None:\n raise ValueError(\"If `eos_token_id` is defined, make sure that `pad_token_id` is defined.\")\n next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - unfinished_sequences)\n\n yield next_tokens\n\n # update generated ids, model inputs for next step\n input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)\n if update_model_kwargs_fn is not None:\n model_kwargs = update_model_kwargs_fn(outputs, **model_kwargs)\n\n # if eos_token was found in one sentence, set sentence to finished\n if eos_token_id is not None:\n unfinished_sequences = unfinished_sequences.mul((next_tokens != eos_token_id).long())\n\n # stop when each sentence is finished if early_stopping=True\n if early_stopping and _is_sequence_finished(unfinished_sequences):\n break\n\n\ndef update_model_kwargs_fn(outputs: dict, **model_kwargs) -> dict:\n if \"past_key_values\" in outputs:\n model_kwargs[\"past\"] = outputs[\"past_key_values\"]\n else:\n model_kwargs[\"past\"] = None\n\n # update token_type_ids with last value\n if \"token_type_ids\" in model_kwargs:\n token_type_ids = model_kwargs[\"token_type_ids\"]\n model_kwargs[\"token_type_ids\"] = torch.cat([token_type_ids, token_type_ids[:, -1].unsqueeze(-1)], dim=-1)\n\n # update attention mask\n if \"attention_mask\" in model_kwargs:\n attention_mask = model_kwargs[\"attention_mask\"]\n model_kwargs[\"attention_mask\"] = torch.cat(\n [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1)\n\n return model_kwargs\n\n\nclass Dialogue(BaseModel):\n instruction: str = Field(min_length=1, example='Count up from 1 to 500.')\n response: str = Field(example='')\n\n\ndef _format_dialogue(instruction: str, response: str = ''):\n return f'\\n\\n### Instruction:\\n{instruction}\\n\\n### Response:\\n{response}'\n\n\nSTOP_PAT = re.compile(r'(###|instruction:).*', flags=(re.I | re.S))\n\n\nclass ChatPromptProcessor:\n SAFE_RESPONSE = 'The input/response contains inappropriate content, please rephrase your prompt.'\n\n def __init__(self, tokenizer, context: str, max_len: int = 2048, censored_words: List[str]=[]):\n self.tokenizer = tokenizer\n self.context = context\n self.max_len = max_len\n if len(censored_words) > 0:\n self.censored_pat = re.compile(f'({\"|\".join(map(re.escape, censored_words))})', flags=re.I)\n else:\n self.censored_pat = None\n # These will be initialized after the first call of preprocess_prompt()\n self.context_len: Optional[int] = None\n self.dialogue_placeholder_len: Optional[int] = None\n\n def preprocess_prompt(self, history: List[Dialogue], max_new_tokens: int) -> str:\n if self.context_len is None:\n self.context_len = len(self.tokenizer(self.context)['input_ids'])\n if self.dialogue_placeholder_len is None:\n self.dialogue_placeholder_len = len(\n self.tokenizer(_format_dialogue(''), add_special_tokens=False)['input_ids'])\n prompt = self.context\n # the last dialogue must be in the prompt\n last_dialogue = history.pop()\n # the response of the last dialogue is empty\n assert last_dialogue.response == ''\n if 
len(self.tokenizer(_format_dialogue(last_dialogue.instruction), add_special_tokens=False)\n ['input_ids']) + max_new_tokens + self.context_len >= self.max_len:\n # to avoid truncate placeholder, apply truncate to the original instruction\n instruction_truncated = self.tokenizer(last_dialogue.instruction,\n add_special_tokens=False,\n truncation=True,\n max_length=(self.max_len - max_new_tokens - self.context_len -\n self.dialogue_placeholder_len))['input_ids']\n instruction_truncated = self.tokenizer.decode(instruction_truncated).lstrip()\n prompt += _format_dialogue(instruction_truncated)\n return prompt\n\n res_len = self.max_len - max_new_tokens - len(self.tokenizer(prompt)['input_ids'])\n\n rows = []\n for dialogue in history[::-1]:\n text = _format_dialogue(dialogue.instruction, dialogue.response)\n cur_len = len(self.tokenizer(text, add_special_tokens=False)['input_ids'])\n if res_len - cur_len < 0:\n break\n res_len -= cur_len\n rows.insert(0, text)\n prompt += ''.join(rows) + _format_dialogue(last_dialogue.instruction)\n return prompt\n\n def postprocess_output(self, output: str) -> str:\n output = STOP_PAT.sub('', output)\n return output.strip()\n\n def has_censored_words(self, text: str) -> bool:\n if self.censored_pat is None:\n return False\n return self.censored_pat.search(text) is not None\n\nclass LockedIterator:\n\n def __init__(self, it, lock: Lock) -> None:\n self.lock = lock\n self.it = iter(it)\n\n def __iter__(self):\n return self\n\n def __next__(self):\n with self.lock:\n return next(self.it)\n\ndef load_json(path: str):\n with open(path) as f:\n return json.load(f)", "path": "applications/Chat/inference/utils.py"}], "after_files": [{"content": "import re\nfrom threading import Lock\nfrom typing import Any, Callable, Generator, List, Optional\nimport json\nimport jieba\n\nimport torch\nimport torch.distributed as dist\nimport torch.nn as nn\nfrom pydantic import BaseModel, Field\n\ntry:\n from transformers.generation_logits_process import (\n LogitsProcessorList,\n TemperatureLogitsWarper,\n TopKLogitsWarper,\n TopPLogitsWarper,\n )\nexcept ImportError:\n from transformers.generation import LogitsProcessorList, TemperatureLogitsWarper, TopKLogitsWarper, TopPLogitsWarper\n\n\ndef prepare_logits_processor(top_k: Optional[int] = None,\n top_p: Optional[float] = None,\n temperature: Optional[float] = None) -> LogitsProcessorList:\n processor_list = LogitsProcessorList()\n if temperature is not None and temperature != 1.0:\n processor_list.append(TemperatureLogitsWarper(temperature))\n if top_k is not None and top_k != 0:\n processor_list.append(TopKLogitsWarper(top_k))\n if top_p is not None and top_p < 1.0:\n processor_list.append(TopPLogitsWarper(top_p))\n return processor_list\n\n\ndef _is_sequence_finished(unfinished_sequences: torch.Tensor) -> bool:\n if dist.is_initialized() and dist.get_world_size() > 1:\n # consider DP\n unfinished_sequences = unfinished_sequences.clone()\n dist.all_reduce(unfinished_sequences)\n return unfinished_sequences.max() == 0\n\n\ndef sample_streamingly(model: nn.Module,\n input_ids: torch.Tensor,\n max_generate_tokens: int,\n early_stopping: bool = False,\n eos_token_id: Optional[int] = None,\n pad_token_id: Optional[int] = None,\n top_k: Optional[int] = None,\n top_p: Optional[float] = None,\n temperature: Optional[float] = None,\n prepare_inputs_fn: Optional[Callable[[torch.Tensor, Any], dict]] = None,\n update_model_kwargs_fn: Optional[Callable[[dict, Any], dict]] = None,\n **model_kwargs) -> Generator:\n\n logits_processor = 
prepare_logits_processor(top_k, top_p, temperature)\n unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)\n\n for _ in range(max_generate_tokens):\n model_inputs = prepare_inputs_fn(input_ids, **model_kwargs) if prepare_inputs_fn is not None else {\n 'input_ids': input_ids\n }\n outputs = model(**model_inputs)\n\n next_token_logits = outputs['logits'][:, -1, :]\n # pre-process distribution\n next_token_logits = logits_processor(input_ids, next_token_logits)\n # sample\n probs = torch.softmax(next_token_logits, dim=-1, dtype=torch.float)\n next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)\n\n # finished sentences should have their next token be a padding token\n if eos_token_id is not None:\n if pad_token_id is None:\n raise ValueError(\"If `eos_token_id` is defined, make sure that `pad_token_id` is defined.\")\n next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - unfinished_sequences)\n\n yield next_tokens\n\n # update generated ids, model inputs for next step\n input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)\n if update_model_kwargs_fn is not None:\n model_kwargs = update_model_kwargs_fn(outputs, **model_kwargs)\n\n # if eos_token was found in one sentence, set sentence to finished\n if eos_token_id is not None:\n unfinished_sequences = unfinished_sequences.mul((next_tokens != eos_token_id).long())\n\n # stop when each sentence is finished if early_stopping=True\n if early_stopping and _is_sequence_finished(unfinished_sequences):\n break\n\n\ndef update_model_kwargs_fn(outputs: dict, **model_kwargs) -> dict:\n if \"past_key_values\" in outputs:\n model_kwargs[\"past\"] = outputs[\"past_key_values\"]\n else:\n model_kwargs[\"past\"] = None\n\n # update token_type_ids with last value\n if \"token_type_ids\" in model_kwargs:\n token_type_ids = model_kwargs[\"token_type_ids\"]\n model_kwargs[\"token_type_ids\"] = torch.cat([token_type_ids, token_type_ids[:, -1].unsqueeze(-1)], dim=-1)\n\n # update attention mask\n if \"attention_mask\" in model_kwargs:\n attention_mask = model_kwargs[\"attention_mask\"]\n model_kwargs[\"attention_mask\"] = torch.cat(\n [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1)\n\n return model_kwargs\n\n\nclass Dialogue(BaseModel):\n instruction: str = Field(min_length=1, example='Count up from 1 to 500.')\n response: str = Field(example='')\n\n\ndef _format_dialogue(instruction: str, response: str = ''):\n return f'\\n\\n### Instruction:\\n{instruction}\\n\\n### Response:\\n{response}'\n\n\nSTOP_PAT = re.compile(r'(###|instruction:).*', flags=(re.I | re.S))\n\n\nclass ChatPromptProcessor:\n SAFE_RESPONSE = 'The input/response contains inappropriate content, please rephrase your prompt.'\n\n def __init__(self, tokenizer, context: str, max_len: int = 2048, censored_words: List[str]=[]):\n self.tokenizer = tokenizer\n self.context = context\n self.max_len = max_len\n self.censored_words = set([word.lower() for word in censored_words])\n # These will be initialized after the first call of preprocess_prompt()\n self.context_len: Optional[int] = None\n self.dialogue_placeholder_len: Optional[int] = None\n\n def preprocess_prompt(self, history: List[Dialogue], max_new_tokens: int) -> str:\n if self.context_len is None:\n self.context_len = len(self.tokenizer(self.context)['input_ids'])\n if self.dialogue_placeholder_len is None:\n self.dialogue_placeholder_len = len(\n self.tokenizer(_format_dialogue(''), add_special_tokens=False)['input_ids'])\n prompt = self.context\n # the 
last dialogue must be in the prompt\n last_dialogue = history.pop()\n # the response of the last dialogue is empty\n assert last_dialogue.response == ''\n if len(self.tokenizer(_format_dialogue(last_dialogue.instruction), add_special_tokens=False)\n ['input_ids']) + max_new_tokens + self.context_len >= self.max_len:\n # to avoid truncate placeholder, apply truncate to the original instruction\n instruction_truncated = self.tokenizer(last_dialogue.instruction,\n add_special_tokens=False,\n truncation=True,\n max_length=(self.max_len - max_new_tokens - self.context_len -\n self.dialogue_placeholder_len))['input_ids']\n instruction_truncated = self.tokenizer.decode(instruction_truncated).lstrip()\n prompt += _format_dialogue(instruction_truncated)\n return prompt\n\n res_len = self.max_len - max_new_tokens - len(self.tokenizer(prompt)['input_ids'])\n\n rows = []\n for dialogue in history[::-1]:\n text = _format_dialogue(dialogue.instruction, dialogue.response)\n cur_len = len(self.tokenizer(text, add_special_tokens=False)['input_ids'])\n if res_len - cur_len < 0:\n break\n res_len -= cur_len\n rows.insert(0, text)\n prompt += ''.join(rows) + _format_dialogue(last_dialogue.instruction)\n return prompt\n\n def postprocess_output(self, output: str) -> str:\n output = STOP_PAT.sub('', output)\n return output.strip()\n\n def has_censored_words(self, text: str) -> bool:\n if len(self.censored_words) == 0:\n return False\n intersection = set(jieba.cut(text.lower())) & self.censored_words\n return len(intersection) > 0\n\nclass LockedIterator:\n\n def __init__(self, it, lock: Lock) -> None:\n self.lock = lock\n self.it = iter(it)\n\n def __iter__(self):\n return self\n\n def __next__(self):\n with self.lock:\n return next(self.it)\n\ndef load_json(path: str):\n with open(path) as f:\n return json.load(f)", "path": "applications/Chat/inference/utils.py"}]} | 2,612 | 348 |
gh_patches_debug_12053 | rasdani/github-patches | git_diff | lnbits__lnbits-215 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No success message and URL returned with LNURLp links
As the title says, neither the URL nor the message is returned to the wallet making the payment. This used to work when I was still on the [latest `raspiblitz` tag](https://github.com/lnbits/lnbits/releases/tag/raspiblitz). Now on [this commit](https://github.com/lnbits/lnbits/commit/7ab4553ef5083f6746bd6fe747775ba57e2e54a4) (the issue could have appeared during any of the commits in between), it has stopped working, even on newly created LNURLp links.
Edit: Reproducible on lnbits.com
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lnbits/extensions/lnurlp/lnurl.py`
Content:
```
1 import hashlib
2 import math
3 from http import HTTPStatus
4 from quart import jsonify, url_for, request
5 from lnurl import LnurlPayResponse, LnurlPayActionResponse, LnurlErrorResponse # type: ignore
6
7 from lnbits.core.services import create_invoice
8 from lnbits.utils.exchange_rates import get_fiat_rate_satoshis
9
10 from . import lnurlp_ext
11 from .crud import increment_pay_link
12
13
14 @lnurlp_ext.route("/api/v1/lnurl/<link_id>", methods=["GET"])
15 async def api_lnurl_response(link_id):
16 link = await increment_pay_link(link_id, served_meta=1)
17 if not link:
18 return (
19 jsonify({"status": "ERROR", "reason": "LNURL-pay not found."}),
20 HTTPStatus.OK,
21 )
22
23 rate = await get_fiat_rate_satoshis(link.currency) if link.currency else 1
24 resp = LnurlPayResponse(
25 callback=url_for("lnurlp.api_lnurl_callback", link_id=link.id, _external=True),
26 min_sendable=math.ceil(link.min * rate) * 1000,
27 max_sendable=round(link.max * rate) * 1000,
28 metadata=link.lnurlpay_metadata,
29 )
30 params = resp.dict()
31
32 if link.comment_chars > 0:
33 params["commentAllowed"] = link.comment_chars
34
35 return jsonify(params), HTTPStatus.OK
36
37
38 @lnurlp_ext.route("/api/v1/lnurl/cb/<link_id>", methods=["GET"])
39 async def api_lnurl_callback(link_id):
40 link = await increment_pay_link(link_id, served_pr=1)
41 if not link:
42 return (
43 jsonify({"status": "ERROR", "reason": "LNURL-pay not found."}),
44 HTTPStatus.OK,
45 )
46
47 min, max = link.min, link.max
48 rate = await get_fiat_rate_satoshis(link.currency) if link.currency else 1
49 if link.currency:
50 # allow some fluctuation (as the fiat price may have changed between the calls)
51 min = rate * 995 * link.min
52 max = rate * 1010 * link.max
53 else:
54 min = link.min * 1000
55 max = link.max * 1000
56
57 amount_received = int(request.args.get("amount") or 0)
58 if amount_received < min:
59 return (
60 jsonify(
61 LnurlErrorResponse(
62 reason=f"Amount {amount_received} is smaller than minimum {min}."
63 ).dict()
64 ),
65 HTTPStatus.OK,
66 )
67 elif amount_received > max:
68 return (
69 jsonify(
70 LnurlErrorResponse(
71 reason=f"Amount {amount_received} is greater than maximum {max}."
72 ).dict()
73 ),
74 HTTPStatus.OK,
75 )
76
77 comment = request.args.get("comment")
78 if len(comment or "") > link.comment_chars:
79 return (
80 jsonify(
81 LnurlErrorResponse(
82 reason=f"Got a comment with {len(comment)} characters, but can only accept {link.comment_chars}"
83 ).dict()
84 ),
85 HTTPStatus.OK,
86 )
87
88 payment_hash, payment_request = await create_invoice(
89 wallet_id=link.wallet,
90 amount=int(amount_received / 1000),
91 memo=link.description,
92 description_hash=hashlib.sha256(
93 link.lnurlpay_metadata.encode("utf-8")
94 ).digest(),
95 extra={"tag": "lnurlp", "link": link.id, "comment": comment},
96 )
97
98 resp = {
99 "routes": [],
100 "pr": payment_request,
101 }
102
103 success_action = link.success_action(payment_hash)
104 if success_action:
105 resp["success_action"] = success_action
106
107 return jsonify(resp), HTTPStatus.OK
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lnbits/extensions/lnurlp/lnurl.py b/lnbits/extensions/lnurlp/lnurl.py
--- a/lnbits/extensions/lnurlp/lnurl.py
+++ b/lnbits/extensions/lnurlp/lnurl.py
@@ -95,13 +95,17 @@
extra={"tag": "lnurlp", "link": link.id, "comment": comment},
)
- resp = {
- "routes": [],
- "pr": payment_request,
- }
-
success_action = link.success_action(payment_hash)
if success_action:
- resp["success_action"] = success_action
+ resp = LnurlPayActionResponse(
+ pr=payment_request,
+ success_action=success_action,
+ routes=[],
+ )
+ else:
+ resp = LnurlPayActionResponse(
+ pr=payment_request,
+ routes=[],
+ )
- return jsonify(resp), HTTPStatus.OK
+ return jsonify(resp.dict()), HTTPStatus.OK
| {"golden_diff": "diff --git a/lnbits/extensions/lnurlp/lnurl.py b/lnbits/extensions/lnurlp/lnurl.py\n--- a/lnbits/extensions/lnurlp/lnurl.py\n+++ b/lnbits/extensions/lnurlp/lnurl.py\n@@ -95,13 +95,17 @@\n extra={\"tag\": \"lnurlp\", \"link\": link.id, \"comment\": comment},\n )\n \n- resp = {\n- \"routes\": [],\n- \"pr\": payment_request,\n- }\n-\n success_action = link.success_action(payment_hash)\n if success_action:\n- resp[\"success_action\"] = success_action\n+ resp = LnurlPayActionResponse(\n+ pr=payment_request,\n+ success_action=success_action,\n+ routes=[],\n+ )\n+ else:\n+ resp = LnurlPayActionResponse(\n+ pr=payment_request,\n+ routes=[],\n+ )\n \n- return jsonify(resp), HTTPStatus.OK\n+ return jsonify(resp.dict()), HTTPStatus.OK\n", "issue": "No success message and URL returned with LNURLp links\nAs the title says, the neither the URL, nor the message are returned to the wallet making the payment. This used to work when I was still on the [latest `raspiblitz` tag](https://github.com/lnbits/lnbits/releases/tag/raspiblitz). Now on [this commit](https://github.com/lnbits/lnbits/commit/7ab4553ef5083f6746bd6fe747775ba57e2e54a4) (the issue could have appeared during any of the commits in between), it has stopped working, even on newly created LNURLp links.\r\nEdit: Reproducible on lnbits.com\n", "before_files": [{"content": "import hashlib\nimport math\nfrom http import HTTPStatus\nfrom quart import jsonify, url_for, request\nfrom lnurl import LnurlPayResponse, LnurlPayActionResponse, LnurlErrorResponse # type: ignore\n\nfrom lnbits.core.services import create_invoice\nfrom lnbits.utils.exchange_rates import get_fiat_rate_satoshis\n\nfrom . import lnurlp_ext\nfrom .crud import increment_pay_link\n\n\n@lnurlp_ext.route(\"/api/v1/lnurl/<link_id>\", methods=[\"GET\"])\nasync def api_lnurl_response(link_id):\n link = await increment_pay_link(link_id, served_meta=1)\n if not link:\n return (\n jsonify({\"status\": \"ERROR\", \"reason\": \"LNURL-pay not found.\"}),\n HTTPStatus.OK,\n )\n\n rate = await get_fiat_rate_satoshis(link.currency) if link.currency else 1\n resp = LnurlPayResponse(\n callback=url_for(\"lnurlp.api_lnurl_callback\", link_id=link.id, _external=True),\n min_sendable=math.ceil(link.min * rate) * 1000,\n max_sendable=round(link.max * rate) * 1000,\n metadata=link.lnurlpay_metadata,\n )\n params = resp.dict()\n\n if link.comment_chars > 0:\n params[\"commentAllowed\"] = link.comment_chars\n\n return jsonify(params), HTTPStatus.OK\n\n\n@lnurlp_ext.route(\"/api/v1/lnurl/cb/<link_id>\", methods=[\"GET\"])\nasync def api_lnurl_callback(link_id):\n link = await increment_pay_link(link_id, served_pr=1)\n if not link:\n return (\n jsonify({\"status\": \"ERROR\", \"reason\": \"LNURL-pay not found.\"}),\n HTTPStatus.OK,\n )\n\n min, max = link.min, link.max\n rate = await get_fiat_rate_satoshis(link.currency) if link.currency else 1\n if link.currency:\n # allow some fluctuation (as the fiat price may have changed between the calls)\n min = rate * 995 * link.min\n max = rate * 1010 * link.max\n else:\n min = link.min * 1000\n max = link.max * 1000\n\n amount_received = int(request.args.get(\"amount\") or 0)\n if amount_received < min:\n return (\n jsonify(\n LnurlErrorResponse(\n reason=f\"Amount {amount_received} is smaller than minimum {min}.\"\n ).dict()\n ),\n HTTPStatus.OK,\n )\n elif amount_received > max:\n return (\n jsonify(\n LnurlErrorResponse(\n reason=f\"Amount {amount_received} is greater than maximum {max}.\"\n ).dict()\n ),\n HTTPStatus.OK,\n )\n\n comment = 
request.args.get(\"comment\")\n if len(comment or \"\") > link.comment_chars:\n return (\n jsonify(\n LnurlErrorResponse(\n reason=f\"Got a comment with {len(comment)} characters, but can only accept {link.comment_chars}\"\n ).dict()\n ),\n HTTPStatus.OK,\n )\n\n payment_hash, payment_request = await create_invoice(\n wallet_id=link.wallet,\n amount=int(amount_received / 1000),\n memo=link.description,\n description_hash=hashlib.sha256(\n link.lnurlpay_metadata.encode(\"utf-8\")\n ).digest(),\n extra={\"tag\": \"lnurlp\", \"link\": link.id, \"comment\": comment},\n )\n\n resp = {\n \"routes\": [],\n \"pr\": payment_request,\n }\n\n success_action = link.success_action(payment_hash)\n if success_action:\n resp[\"success_action\"] = success_action\n\n return jsonify(resp), HTTPStatus.OK\n", "path": "lnbits/extensions/lnurlp/lnurl.py"}], "after_files": [{"content": "import hashlib\nimport math\nfrom http import HTTPStatus\nfrom quart import jsonify, url_for, request\nfrom lnurl import LnurlPayResponse, LnurlPayActionResponse, LnurlErrorResponse # type: ignore\n\nfrom lnbits.core.services import create_invoice\nfrom lnbits.utils.exchange_rates import get_fiat_rate_satoshis\n\nfrom . import lnurlp_ext\nfrom .crud import increment_pay_link\n\n\n@lnurlp_ext.route(\"/api/v1/lnurl/<link_id>\", methods=[\"GET\"])\nasync def api_lnurl_response(link_id):\n link = await increment_pay_link(link_id, served_meta=1)\n if not link:\n return (\n jsonify({\"status\": \"ERROR\", \"reason\": \"LNURL-pay not found.\"}),\n HTTPStatus.OK,\n )\n\n rate = await get_fiat_rate_satoshis(link.currency) if link.currency else 1\n resp = LnurlPayResponse(\n callback=url_for(\"lnurlp.api_lnurl_callback\", link_id=link.id, _external=True),\n min_sendable=math.ceil(link.min * rate) * 1000,\n max_sendable=round(link.max * rate) * 1000,\n metadata=link.lnurlpay_metadata,\n )\n params = resp.dict()\n\n if link.comment_chars > 0:\n params[\"commentAllowed\"] = link.comment_chars\n\n return jsonify(params), HTTPStatus.OK\n\n\n@lnurlp_ext.route(\"/api/v1/lnurl/cb/<link_id>\", methods=[\"GET\"])\nasync def api_lnurl_callback(link_id):\n link = await increment_pay_link(link_id, served_pr=1)\n if not link:\n return (\n jsonify({\"status\": \"ERROR\", \"reason\": \"LNURL-pay not found.\"}),\n HTTPStatus.OK,\n )\n\n min, max = link.min, link.max\n rate = await get_fiat_rate_satoshis(link.currency) if link.currency else 1\n if link.currency:\n # allow some fluctuation (as the fiat price may have changed between the calls)\n min = rate * 995 * link.min\n max = rate * 1010 * link.max\n else:\n min = link.min * 1000\n max = link.max * 1000\n\n amount_received = int(request.args.get(\"amount\") or 0)\n if amount_received < min:\n return (\n jsonify(\n LnurlErrorResponse(\n reason=f\"Amount {amount_received} is smaller than minimum {min}.\"\n ).dict()\n ),\n HTTPStatus.OK,\n )\n elif amount_received > max:\n return (\n jsonify(\n LnurlErrorResponse(\n reason=f\"Amount {amount_received} is greater than maximum {max}.\"\n ).dict()\n ),\n HTTPStatus.OK,\n )\n\n comment = request.args.get(\"comment\")\n if len(comment or \"\") > link.comment_chars:\n return (\n jsonify(\n LnurlErrorResponse(\n reason=f\"Got a comment with {len(comment)} characters, but can only accept {link.comment_chars}\"\n ).dict()\n ),\n HTTPStatus.OK,\n )\n\n payment_hash, payment_request = await create_invoice(\n wallet_id=link.wallet,\n amount=int(amount_received / 1000),\n memo=link.description,\n description_hash=hashlib.sha256(\n 
link.lnurlpay_metadata.encode(\"utf-8\")\n ).digest(),\n extra={\"tag\": \"lnurlp\", \"link\": link.id, \"comment\": comment},\n )\n\n success_action = link.success_action(payment_hash)\n if success_action:\n resp = LnurlPayActionResponse(\n pr=payment_request,\n success_action=success_action,\n routes=[],\n )\n else:\n resp = LnurlPayActionResponse(\n pr=payment_request,\n routes=[],\n )\n\n return jsonify(resp.dict()), HTTPStatus.OK\n", "path": "lnbits/extensions/lnurlp/lnurl.py"}]} | 1,472 | 231 |
gh_patches_debug_2997 | rasdani/github-patches | git_diff | ivy-llc__ivy-20554 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
rfftn
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/scipy/fft/fft.py`
Content:
```
1 # global
2 import ivy
3 from ivy.functional.frontends.scipy.func_wrapper import (
4 to_ivy_arrays_and_back,
5 )
6
7
8 # fft
9 @to_ivy_arrays_and_back
10 def fft(x, n=None, axis=-1, norm="backward", overwrite_x=False):
11 return ivy.fft(x, axis, norm=norm, n=n)
12
13
14 # ifft
15 @to_ivy_arrays_and_back
16 def ifft(x, n=None, axis=-1, norm="backward", overwrite_x=False):
17 return ivy.ifft(x, axis, norm=norm, n=n)
18
19
20 # dct
21 @to_ivy_arrays_and_back
22 def dct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=False, orthogonalize=None):
23 return ivy.dct(x, type=type, n=n, axis=axis, norm=norm)
24
25
26 # idct
27 @to_ivy_arrays_and_back
28 def idct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=False, orthogonalize=None):
29 inverse_type = {1: 1, 2: 3, 3: 2, 4: 4}[type]
30 return ivy.dct(x, type=inverse_type, n=n, axis=axis, norm=norm)
31
32
33 @to_ivy_arrays_and_back
34 def fft2(x, s=None, axes=(-2, -1), norm=None, overwrite_x=False):
35 return ivy.fft2(x, s=s, dim=axes, norm=norm)
36
37
38 @to_ivy_arrays_and_back
39 def ifftn(
40 x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None
41 ):
42 return ivy.ifftn(x, s=s, dim=axes, norm=norm)
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/scipy/fft/fft.py b/ivy/functional/frontends/scipy/fft/fft.py
--- a/ivy/functional/frontends/scipy/fft/fft.py
+++ b/ivy/functional/frontends/scipy/fft/fft.py
@@ -40,3 +40,10 @@
x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None
):
return ivy.ifftn(x, s=s, dim=axes, norm=norm)
+
+
+@to_ivy_arrays_and_back
+def rfftn(
+ x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None
+):
+ return ivy.rfftn(x, s=s, dim=axes, norm=norm)
| {"golden_diff": "diff --git a/ivy/functional/frontends/scipy/fft/fft.py b/ivy/functional/frontends/scipy/fft/fft.py\n--- a/ivy/functional/frontends/scipy/fft/fft.py\n+++ b/ivy/functional/frontends/scipy/fft/fft.py\n@@ -40,3 +40,10 @@\n x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None\n ):\n return ivy.ifftn(x, s=s, dim=axes, norm=norm)\n+\n+\n+@to_ivy_arrays_and_back\n+def rfftn(\n+ x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None\n+):\n+ return ivy.rfftn(x, s=s, dim=axes, norm=norm)\n", "issue": "rfftn\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.functional.frontends.scipy.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n# fft\n@to_ivy_arrays_and_back\ndef fft(x, n=None, axis=-1, norm=\"backward\", overwrite_x=False):\n return ivy.fft(x, axis, norm=norm, n=n)\n\n\n# ifft\n@to_ivy_arrays_and_back\ndef ifft(x, n=None, axis=-1, norm=\"backward\", overwrite_x=False):\n return ivy.ifft(x, axis, norm=norm, n=n)\n\n\n# dct\n@to_ivy_arrays_and_back\ndef dct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=False, orthogonalize=None):\n return ivy.dct(x, type=type, n=n, axis=axis, norm=norm)\n\n\n# idct\n@to_ivy_arrays_and_back\ndef idct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=False, orthogonalize=None):\n inverse_type = {1: 1, 2: 3, 3: 2, 4: 4}[type]\n return ivy.dct(x, type=inverse_type, n=n, axis=axis, norm=norm)\n\n\n@to_ivy_arrays_and_back\ndef fft2(x, s=None, axes=(-2, -1), norm=None, overwrite_x=False):\n return ivy.fft2(x, s=s, dim=axes, norm=norm)\n\n\n@to_ivy_arrays_and_back\ndef ifftn(\n x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None\n):\n return ivy.ifftn(x, s=s, dim=axes, norm=norm)\n", "path": "ivy/functional/frontends/scipy/fft/fft.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.functional.frontends.scipy.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n# fft\n@to_ivy_arrays_and_back\ndef fft(x, n=None, axis=-1, norm=\"backward\", overwrite_x=False):\n return ivy.fft(x, axis, norm=norm, n=n)\n\n\n# ifft\n@to_ivy_arrays_and_back\ndef ifft(x, n=None, axis=-1, norm=\"backward\", overwrite_x=False):\n return ivy.ifft(x, axis, norm=norm, n=n)\n\n\n# dct\n@to_ivy_arrays_and_back\ndef dct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=False, orthogonalize=None):\n return ivy.dct(x, type=type, n=n, axis=axis, norm=norm)\n\n\n# idct\n@to_ivy_arrays_and_back\ndef idct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=False, orthogonalize=None):\n inverse_type = {1: 1, 2: 3, 3: 2, 4: 4}[type]\n return ivy.dct(x, type=inverse_type, n=n, axis=axis, norm=norm)\n\n\n@to_ivy_arrays_and_back\ndef fft2(x, s=None, axes=(-2, -1), norm=None, overwrite_x=False):\n return ivy.fft2(x, s=s, dim=axes, norm=norm)\n\n\n@to_ivy_arrays_and_back\ndef ifftn(\n x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None\n):\n return ivy.ifftn(x, s=s, dim=axes, norm=norm)\n\n\n@to_ivy_arrays_and_back\ndef rfftn(\n x, s=None, axes=None, norm=None, overwrite_x=False, workers=None, *, plan=None\n):\n return ivy.rfftn(x, s=s, dim=axes, norm=norm)\n", "path": "ivy/functional/frontends/scipy/fft/fft.py"}]} | 741 | 185 |
gh_patches_debug_35161 | rasdani/github-patches | git_diff | streamlit__streamlit-4869 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add tooltip to st.metric() to provide more information on a KPI
### Problem
It would help our business users to have a bit more description available on KPIs.
### Solution
**MVP:** Could we add a tooltip to the _st.metric()_ function? That would provide a lot of drill-down flexibility, background description, and the ability to use f-strings to include more detailed information.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/streamlit/elements/metric.py`
Content:
```
1 # Copyright 2018-2022 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from textwrap import dedent
16 from typing import cast, TYPE_CHECKING, Union
17 from typing_extensions import TypeAlias, Literal
18
19 import attr
20 from streamlit.errors import StreamlitAPIException
21 from streamlit.proto.Metric_pb2 import Metric as MetricProto
22
23 from .utils import clean_text
24
25 if TYPE_CHECKING:
26 import numpy as np
27
28 from streamlit.delta_generator import DeltaGenerator
29
30
31 Value: TypeAlias = Union["np.integer", "np.floating", float, str, None]
32 Delta: TypeAlias = Union[float, str, None]
33 DeltaColor: TypeAlias = Literal["normal", "inverse", "off"]
34
35
36 @attr.s(auto_attribs=True, slots=True, frozen=True)
37 class MetricColorAndDirection:
38 color: "MetricProto.MetricColor.ValueType"
39 direction: "MetricProto.MetricDirection.ValueType"
40
41
42 class MetricMixin:
43 def metric(
44 self,
45 label: str,
46 value: Value,
47 delta: Delta = None,
48 delta_color: DeltaColor = "normal",
49 ) -> "DeltaGenerator":
50 """Display a metric in big bold font, with an optional indicator of how the metric changed.
51
52 Tip: If you want to display a large number, it may be a good idea to
53 shorten it using packages like `millify <https://github.com/azaitsev/millify>`_
54 or `numerize <https://github.com/davidsa03/numerize>`_. E.g. ``1234`` can be
55 displayed as ``1.2k`` using ``st.metric("Short number", millify(1234))``.
56
57 Parameters
58 ----------
59 label : str
60 The header or Title for the metric
61 value : int, float, str, or None
62 Value of the metric. None is rendered as a long dash.
63 delta : int, float, str, or None
64 Indicator of how the metric changed, rendered with an arrow below
65 the metric. If delta is negative (int/float) or starts with a minus
66 sign (str), the arrow points down and the text is red; else the
67 arrow points up and the text is green. If None (default), no delta
68 indicator is shown.
69 delta_color : str
70 If "normal" (default), the delta indicator is shown as described
71 above. If "inverse", it is red when positive and green when
72 negative. This is useful when a negative change is considered
73 good, e.g. if cost decreased. If "off", delta is shown in gray
74 regardless of its value.
75
76 Example
77 -------
78 >>> st.metric(label="Temperature", value="70 °F", delta="1.2 °F")
79
80 .. output::
81 https://share.streamlit.io/streamlit/docs/main/python/api-examples-source/metric.example1.py
82 height: 210px
83
84 ``st.metric`` looks especially nice in combination with ``st.columns``:
85
86 >>> col1, col2, col3 = st.columns(3)
87 >>> col1.metric("Temperature", "70 °F", "1.2 °F")
88 >>> col2.metric("Wind", "9 mph", "-8%")
89 >>> col3.metric("Humidity", "86%", "4%")
90
91 .. output::
92 https://share.streamlit.io/streamlit/docs/main/python/api-examples-source/metric.example2.py
93 height: 210px
94
95 The delta indicator color can also be inverted or turned off:
96
97 >>> st.metric(label="Gas price", value=4, delta=-0.5,
98 ... delta_color="inverse")
99 >>>
100 >>> st.metric(label="Active developers", value=123, delta=123,
101 ... delta_color="off")
102
103 .. output::
104 https://share.streamlit.io/streamlit/docs/main/python/api-examples-source/metric.example3.py
105 height: 320px
106
107 """
108 metric_proto = MetricProto()
109 metric_proto.body = self.parse_value(value)
110 metric_proto.label = self.parse_label(label)
111 metric_proto.delta = self.parse_delta(delta)
112
113 color_and_direction = self.determine_delta_color_and_direction(
114 cast(DeltaColor, clean_text(delta_color)), delta
115 )
116 metric_proto.color = color_and_direction.color
117 metric_proto.direction = color_and_direction.direction
118
119 return self.dg._enqueue("metric", metric_proto)
120
121 @staticmethod
122 def parse_label(label: str) -> str:
123 if not isinstance(label, str):
124 raise TypeError(
125 f"'{str(label)}' is of type {str(type(label))}, which is not an accepted type."
126 " label only accepts: str. Please convert the label to an accepted type."
127 )
128 return label
129
130 @staticmethod
131 def parse_value(value: Value) -> str:
132 if value is None:
133 return "—"
134 if isinstance(value, int) or isinstance(value, float) or isinstance(value, str):
135 return str(value)
136 elif hasattr(value, "item"):
137 # Add support for numpy values (e.g. int16, float64, etc.)
138 try:
139 # Item could also be just a variable, so we use try, except
140 if isinstance(value.item(), float) or isinstance(value.item(), int):
141 return str(value.item())
142 except Exception:
143 pass
144
145 raise TypeError(
146 f"'{str(value)}' is of type {str(type(value))}, which is not an accepted type."
147 " value only accepts: int, float, str, or None."
148 " Please convert the value to an accepted type."
149 )
150
151 @staticmethod
152 def parse_delta(delta: Delta) -> str:
153 if delta is None or delta == "":
154 return ""
155 if isinstance(delta, str):
156 return dedent(delta)
157 elif isinstance(delta, int) or isinstance(delta, float):
158 return str(delta)
159 else:
160 raise TypeError(
161 f"'{str(delta)}' is of type {str(type(delta))}, which is not an accepted type."
162 " delta only accepts: int, float, str, or None."
163 " Please convert the value to an accepted type."
164 )
165
166 def determine_delta_color_and_direction(
167 self,
168 delta_color: DeltaColor,
169 delta: Delta,
170 ) -> MetricColorAndDirection:
171 if delta_color not in {"normal", "inverse", "off"}:
172 raise StreamlitAPIException(
173 f"'{str(delta_color)}' is not an accepted value. delta_color only accepts: "
174 "'normal', 'inverse', or 'off'"
175 )
176
177 if delta is None or delta == "":
178 return MetricColorAndDirection(
179 color=MetricProto.MetricColor.GRAY,
180 direction=MetricProto.MetricDirection.NONE,
181 )
182
183 if self.is_negative(delta):
184 if delta_color == "normal":
185 cd_color = MetricProto.MetricColor.RED
186 elif delta_color == "inverse":
187 cd_color = MetricProto.MetricColor.GREEN
188 else:
189 cd_color = MetricProto.MetricColor.GRAY
190 cd_direction = MetricProto.MetricDirection.DOWN
191 else:
192 if delta_color == "normal":
193 cd_color = MetricProto.MetricColor.GREEN
194 elif delta_color == "inverse":
195 cd_color = MetricProto.MetricColor.RED
196 else:
197 cd_color = MetricProto.MetricColor.GRAY
198 cd_direction = MetricProto.MetricDirection.UP
199
200 return MetricColorAndDirection(
201 color=cd_color,
202 direction=cd_direction,
203 )
204
205 @staticmethod
206 def is_negative(delta: Delta) -> bool:
207 return dedent(str(delta)).startswith("-")
208
209 @property
210 def dg(self) -> "DeltaGenerator":
211 return cast("DeltaGenerator", self)
212
```
Path: `e2e/scripts/st_tooltips.py`
Content:
```
1 # Copyright 2018-2022 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16 from datetime import datetime
17
18 default_tooltip = """
19 This is a really long tooltip.
20
21 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut ut turpis vitae
22 justo ornare venenatis a vitae leo. Donec mollis ornare ante, eu ultricies
23 tellus ornare eu. Donec eros risus, ultrices ut eleifend vel, auctor eu turpis.
24 In consectetur erat vel ante accumsan, a egestas urna aliquet. Nullam eget
25 sapien eget diam euismod eleifend. Nulla purus enim, finibus ut velit eu,
26 malesuada dictum nulla. In non arcu et risus maximus fermentum eget nec ante.
27 """.strip()
28
29 leading_indent_code_tooltip = """
30 Code:
31
32 This
33 is
34 a
35 code
36 block!"""
37
38 leading_indent_regular_text_tooltip = """
39 This is a regular text block!
40 Test1
41 Test2
42
43 """
44
45 indented_code_tooltip = """
46 Code:
47
48 for i in range(10):
49 x = i * 10
50 print(x)
51 """
52
53 no_indent_tooltip = "thisisatooltipwithnoindents. It has some spaces but no idents."
54
55 st.text_input("some input text", "default text", help=default_tooltip)
56 st.number_input("number input", value=1, help=leading_indent_code_tooltip)
57 st.checkbox("some checkbox", help=leading_indent_regular_text_tooltip)
58 st.radio("best animal", ("tiger", "giraffe", "bear"), 0, help=indented_code_tooltip)
59 st.selectbox("selectbox", ("a", "b", "c"), 0, help=default_tooltip)
60 st.time_input("time", datetime(2019, 7, 6, 21, 15), help=leading_indent_code_tooltip)
61 st.date_input(
62 "date", datetime(2019, 7, 6, 21, 15), help=leading_indent_regular_text_tooltip
63 )
64 st.slider("slider", 0, 100, 50, help=indented_code_tooltip)
65 st.color_picker("color picker", help=no_indent_tooltip)
66 st.file_uploader("file uploader", help=default_tooltip)
67 st.multiselect(
68 "multiselect", ["a", "b", "c"], ["a", "b"], help=leading_indent_code_tooltip
69 )
70 st.text_area("textarea", help=leading_indent_regular_text_tooltip)
71 st.select_slider("selectslider", options=["a", "b", "c"], help=indented_code_tooltip)
72 st.button("some button", help=no_indent_tooltip)
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/e2e/scripts/st_tooltips.py b/e2e/scripts/st_tooltips.py
--- a/e2e/scripts/st_tooltips.py
+++ b/e2e/scripts/st_tooltips.py
@@ -70,3 +70,4 @@
st.text_area("textarea", help=leading_indent_regular_text_tooltip)
st.select_slider("selectslider", options=["a", "b", "c"], help=indented_code_tooltip)
st.button("some button", help=no_indent_tooltip)
+st.metric("some metric", value=500, help=no_indent_tooltip)
diff --git a/lib/streamlit/elements/metric.py b/lib/streamlit/elements/metric.py
--- a/lib/streamlit/elements/metric.py
+++ b/lib/streamlit/elements/metric.py
@@ -13,7 +13,7 @@
# limitations under the License.
from textwrap import dedent
-from typing import cast, TYPE_CHECKING, Union
+from typing import cast, TYPE_CHECKING, Union, Optional
from typing_extensions import TypeAlias, Literal
import attr
@@ -46,6 +46,7 @@
value: Value,
delta: Delta = None,
delta_color: DeltaColor = "normal",
+ help: Optional[str] = None,
) -> "DeltaGenerator":
"""Display a metric in big bold font, with an optional indicator of how the metric changed.
@@ -72,6 +73,8 @@
negative. This is useful when a negative change is considered
good, e.g. if cost decreased. If "off", delta is shown in gray
regardless of its value.
+ help : str
+ An optional tooltip that gets displayed next to the metric label.
Example
-------
@@ -109,6 +112,8 @@
metric_proto.body = self.parse_value(value)
metric_proto.label = self.parse_label(label)
metric_proto.delta = self.parse_delta(delta)
+ if help is not None:
+ metric_proto.help = dedent(help)
color_and_direction = self.determine_delta_color_and_direction(
cast(DeltaColor, clean_text(delta_color)), delta
| {"golden_diff": "diff --git a/e2e/scripts/st_tooltips.py b/e2e/scripts/st_tooltips.py\n--- a/e2e/scripts/st_tooltips.py\n+++ b/e2e/scripts/st_tooltips.py\n@@ -70,3 +70,4 @@\n st.text_area(\"textarea\", help=leading_indent_regular_text_tooltip)\n st.select_slider(\"selectslider\", options=[\"a\", \"b\", \"c\"], help=indented_code_tooltip)\n st.button(\"some button\", help=no_indent_tooltip)\n+st.metric(\"some metric\", value=500, help=no_indent_tooltip)\ndiff --git a/lib/streamlit/elements/metric.py b/lib/streamlit/elements/metric.py\n--- a/lib/streamlit/elements/metric.py\n+++ b/lib/streamlit/elements/metric.py\n@@ -13,7 +13,7 @@\n # limitations under the License.\n \n from textwrap import dedent\n-from typing import cast, TYPE_CHECKING, Union\n+from typing import cast, TYPE_CHECKING, Union, Optional\n from typing_extensions import TypeAlias, Literal\n \n import attr\n@@ -46,6 +46,7 @@\n value: Value,\n delta: Delta = None,\n delta_color: DeltaColor = \"normal\",\n+ help: Optional[str] = None,\n ) -> \"DeltaGenerator\":\n \"\"\"Display a metric in big bold font, with an optional indicator of how the metric changed.\n \n@@ -72,6 +73,8 @@\n negative. This is useful when a negative change is considered\n good, e.g. if cost decreased. If \"off\", delta is shown in gray\n regardless of its value.\n+ help : str\n+ An optional tooltip that gets displayed next to the metric label.\n \n Example\n -------\n@@ -109,6 +112,8 @@\n metric_proto.body = self.parse_value(value)\n metric_proto.label = self.parse_label(label)\n metric_proto.delta = self.parse_delta(delta)\n+ if help is not None:\n+ metric_proto.help = dedent(help)\n \n color_and_direction = self.determine_delta_color_and_direction(\n cast(DeltaColor, clean_text(delta_color)), delta\n", "issue": "Add tooltip to st.metric(0 to provide more information on a KPI\n\r\n### Problem\r\n\r\nIt would help our business users to have a bit more description available on KPIs.\r\n\r\n### Solution\r\n\r\n**MVP:** Could we add a tooltip to the _st.metric()_ function? 
That would provide a lot of drill down flexibility, background description and the ability to use f-strings to include more detailed information\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018-2022 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom textwrap import dedent\nfrom typing import cast, TYPE_CHECKING, Union\nfrom typing_extensions import TypeAlias, Literal\n\nimport attr\nfrom streamlit.errors import StreamlitAPIException\nfrom streamlit.proto.Metric_pb2 import Metric as MetricProto\n\nfrom .utils import clean_text\n\nif TYPE_CHECKING:\n import numpy as np\n\n from streamlit.delta_generator import DeltaGenerator\n\n\nValue: TypeAlias = Union[\"np.integer\", \"np.floating\", float, str, None]\nDelta: TypeAlias = Union[float, str, None]\nDeltaColor: TypeAlias = Literal[\"normal\", \"inverse\", \"off\"]\n\n\[email protected](auto_attribs=True, slots=True, frozen=True)\nclass MetricColorAndDirection:\n color: \"MetricProto.MetricColor.ValueType\"\n direction: \"MetricProto.MetricDirection.ValueType\"\n\n\nclass MetricMixin:\n def metric(\n self,\n label: str,\n value: Value,\n delta: Delta = None,\n delta_color: DeltaColor = \"normal\",\n ) -> \"DeltaGenerator\":\n \"\"\"Display a metric in big bold font, with an optional indicator of how the metric changed.\n\n Tip: If you want to display a large number, it may be a good idea to\n shorten it using packages like `millify <https://github.com/azaitsev/millify>`_\n or `numerize <https://github.com/davidsa03/numerize>`_. E.g. ``1234`` can be\n displayed as ``1.2k`` using ``st.metric(\"Short number\", millify(1234))``.\n\n Parameters\n ----------\n label : str\n The header or Title for the metric\n value : int, float, str, or None\n Value of the metric. None is rendered as a long dash.\n delta : int, float, str, or None\n Indicator of how the metric changed, rendered with an arrow below\n the metric. If delta is negative (int/float) or starts with a minus\n sign (str), the arrow points down and the text is red; else the\n arrow points up and the text is green. If None (default), no delta\n indicator is shown.\n delta_color : str\n If \"normal\" (default), the delta indicator is shown as described\n above. If \"inverse\", it is red when positive and green when\n negative. This is useful when a negative change is considered\n good, e.g. if cost decreased. If \"off\", delta is shown in gray\n regardless of its value.\n\n Example\n -------\n >>> st.metric(label=\"Temperature\", value=\"70 \u00b0F\", delta=\"1.2 \u00b0F\")\n\n .. output::\n https://share.streamlit.io/streamlit/docs/main/python/api-examples-source/metric.example1.py\n height: 210px\n\n ``st.metric`` looks especially nice in combination with ``st.columns``:\n\n >>> col1, col2, col3 = st.columns(3)\n >>> col1.metric(\"Temperature\", \"70 \u00b0F\", \"1.2 \u00b0F\")\n >>> col2.metric(\"Wind\", \"9 mph\", \"-8%\")\n >>> col3.metric(\"Humidity\", \"86%\", \"4%\")\n\n .. 
output::\n https://share.streamlit.io/streamlit/docs/main/python/api-examples-source/metric.example2.py\n height: 210px\n\n The delta indicator color can also be inverted or turned off:\n\n >>> st.metric(label=\"Gas price\", value=4, delta=-0.5,\n ... delta_color=\"inverse\")\n >>>\n >>> st.metric(label=\"Active developers\", value=123, delta=123,\n ... delta_color=\"off\")\n\n .. output::\n https://share.streamlit.io/streamlit/docs/main/python/api-examples-source/metric.example3.py\n height: 320px\n\n \"\"\"\n metric_proto = MetricProto()\n metric_proto.body = self.parse_value(value)\n metric_proto.label = self.parse_label(label)\n metric_proto.delta = self.parse_delta(delta)\n\n color_and_direction = self.determine_delta_color_and_direction(\n cast(DeltaColor, clean_text(delta_color)), delta\n )\n metric_proto.color = color_and_direction.color\n metric_proto.direction = color_and_direction.direction\n\n return self.dg._enqueue(\"metric\", metric_proto)\n\n @staticmethod\n def parse_label(label: str) -> str:\n if not isinstance(label, str):\n raise TypeError(\n f\"'{str(label)}' is of type {str(type(label))}, which is not an accepted type.\"\n \" label only accepts: str. Please convert the label to an accepted type.\"\n )\n return label\n\n @staticmethod\n def parse_value(value: Value) -> str:\n if value is None:\n return \"\u2014\"\n if isinstance(value, int) or isinstance(value, float) or isinstance(value, str):\n return str(value)\n elif hasattr(value, \"item\"):\n # Add support for numpy values (e.g. int16, float64, etc.)\n try:\n # Item could also be just a variable, so we use try, except\n if isinstance(value.item(), float) or isinstance(value.item(), int):\n return str(value.item())\n except Exception:\n pass\n\n raise TypeError(\n f\"'{str(value)}' is of type {str(type(value))}, which is not an accepted type.\"\n \" value only accepts: int, float, str, or None.\"\n \" Please convert the value to an accepted type.\"\n )\n\n @staticmethod\n def parse_delta(delta: Delta) -> str:\n if delta is None or delta == \"\":\n return \"\"\n if isinstance(delta, str):\n return dedent(delta)\n elif isinstance(delta, int) or isinstance(delta, float):\n return str(delta)\n else:\n raise TypeError(\n f\"'{str(delta)}' is of type {str(type(delta))}, which is not an accepted type.\"\n \" delta only accepts: int, float, str, or None.\"\n \" Please convert the value to an accepted type.\"\n )\n\n def determine_delta_color_and_direction(\n self,\n delta_color: DeltaColor,\n delta: Delta,\n ) -> MetricColorAndDirection:\n if delta_color not in {\"normal\", \"inverse\", \"off\"}:\n raise StreamlitAPIException(\n f\"'{str(delta_color)}' is not an accepted value. 
delta_color only accepts: \"\n \"'normal', 'inverse', or 'off'\"\n )\n\n if delta is None or delta == \"\":\n return MetricColorAndDirection(\n color=MetricProto.MetricColor.GRAY,\n direction=MetricProto.MetricDirection.NONE,\n )\n\n if self.is_negative(delta):\n if delta_color == \"normal\":\n cd_color = MetricProto.MetricColor.RED\n elif delta_color == \"inverse\":\n cd_color = MetricProto.MetricColor.GREEN\n else:\n cd_color = MetricProto.MetricColor.GRAY\n cd_direction = MetricProto.MetricDirection.DOWN\n else:\n if delta_color == \"normal\":\n cd_color = MetricProto.MetricColor.GREEN\n elif delta_color == \"inverse\":\n cd_color = MetricProto.MetricColor.RED\n else:\n cd_color = MetricProto.MetricColor.GRAY\n cd_direction = MetricProto.MetricDirection.UP\n\n return MetricColorAndDirection(\n color=cd_color,\n direction=cd_direction,\n )\n\n @staticmethod\n def is_negative(delta: Delta) -> bool:\n return dedent(str(delta)).startswith(\"-\")\n\n @property\n def dg(self) -> \"DeltaGenerator\":\n return cast(\"DeltaGenerator\", self)\n", "path": "lib/streamlit/elements/metric.py"}, {"content": "# Copyright 2018-2022 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\nfrom datetime import datetime\n\ndefault_tooltip = \"\"\"\nThis is a really long tooltip.\n\nLorem ipsum dolor sit amet, consectetur adipiscing elit. Ut ut turpis vitae\njusto ornare venenatis a vitae leo. Donec mollis ornare ante, eu ultricies\ntellus ornare eu. Donec eros risus, ultrices ut eleifend vel, auctor eu turpis.\nIn consectetur erat vel ante accumsan, a egestas urna aliquet. Nullam eget\nsapien eget diam euismod eleifend. Nulla purus enim, finibus ut velit eu,\nmalesuada dictum nulla. In non arcu et risus maximus fermentum eget nec ante.\n\"\"\".strip()\n\nleading_indent_code_tooltip = \"\"\"\nCode:\n\n This\n is\n a\n code\n block!\"\"\"\n\nleading_indent_regular_text_tooltip = \"\"\"\nThis is a regular text block!\nTest1\nTest2\n\n\"\"\"\n\nindented_code_tooltip = \"\"\"\nCode:\n\n for i in range(10):\n x = i * 10\n print(x)\n \"\"\"\n\nno_indent_tooltip = \"thisisatooltipwithnoindents. 
It has some spaces but no idents.\"\n\nst.text_input(\"some input text\", \"default text\", help=default_tooltip)\nst.number_input(\"number input\", value=1, help=leading_indent_code_tooltip)\nst.checkbox(\"some checkbox\", help=leading_indent_regular_text_tooltip)\nst.radio(\"best animal\", (\"tiger\", \"giraffe\", \"bear\"), 0, help=indented_code_tooltip)\nst.selectbox(\"selectbox\", (\"a\", \"b\", \"c\"), 0, help=default_tooltip)\nst.time_input(\"time\", datetime(2019, 7, 6, 21, 15), help=leading_indent_code_tooltip)\nst.date_input(\n \"date\", datetime(2019, 7, 6, 21, 15), help=leading_indent_regular_text_tooltip\n)\nst.slider(\"slider\", 0, 100, 50, help=indented_code_tooltip)\nst.color_picker(\"color picker\", help=no_indent_tooltip)\nst.file_uploader(\"file uploader\", help=default_tooltip)\nst.multiselect(\n \"multiselect\", [\"a\", \"b\", \"c\"], [\"a\", \"b\"], help=leading_indent_code_tooltip\n)\nst.text_area(\"textarea\", help=leading_indent_regular_text_tooltip)\nst.select_slider(\"selectslider\", options=[\"a\", \"b\", \"c\"], help=indented_code_tooltip)\nst.button(\"some button\", help=no_indent_tooltip)\n", "path": "e2e/scripts/st_tooltips.py"}], "after_files": [{"content": "# Copyright 2018-2022 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom textwrap import dedent\nfrom typing import cast, TYPE_CHECKING, Union, Optional\nfrom typing_extensions import TypeAlias, Literal\n\nimport attr\nfrom streamlit.errors import StreamlitAPIException\nfrom streamlit.proto.Metric_pb2 import Metric as MetricProto\n\nfrom .utils import clean_text\n\nif TYPE_CHECKING:\n import numpy as np\n\n from streamlit.delta_generator import DeltaGenerator\n\n\nValue: TypeAlias = Union[\"np.integer\", \"np.floating\", float, str, None]\nDelta: TypeAlias = Union[float, str, None]\nDeltaColor: TypeAlias = Literal[\"normal\", \"inverse\", \"off\"]\n\n\[email protected](auto_attribs=True, slots=True, frozen=True)\nclass MetricColorAndDirection:\n color: \"MetricProto.MetricColor.ValueType\"\n direction: \"MetricProto.MetricDirection.ValueType\"\n\n\nclass MetricMixin:\n def metric(\n self,\n label: str,\n value: Value,\n delta: Delta = None,\n delta_color: DeltaColor = \"normal\",\n help: Optional[str] = None,\n ) -> \"DeltaGenerator\":\n \"\"\"Display a metric in big bold font, with an optional indicator of how the metric changed.\n\n Tip: If you want to display a large number, it may be a good idea to\n shorten it using packages like `millify <https://github.com/azaitsev/millify>`_\n or `numerize <https://github.com/davidsa03/numerize>`_. E.g. ``1234`` can be\n displayed as ``1.2k`` using ``st.metric(\"Short number\", millify(1234))``.\n\n Parameters\n ----------\n label : str\n The header or Title for the metric\n value : int, float, str, or None\n Value of the metric. None is rendered as a long dash.\n delta : int, float, str, or None\n Indicator of how the metric changed, rendered with an arrow below\n the metric. 
If delta is negative (int/float) or starts with a minus\n sign (str), the arrow points down and the text is red; else the\n arrow points up and the text is green. If None (default), no delta\n indicator is shown.\n delta_color : str\n If \"normal\" (default), the delta indicator is shown as described\n above. If \"inverse\", it is red when positive and green when\n negative. This is useful when a negative change is considered\n good, e.g. if cost decreased. If \"off\", delta is shown in gray\n regardless of its value.\n help : str\n An optional tooltip that gets displayed next to the metric label.\n\n Example\n -------\n >>> st.metric(label=\"Temperature\", value=\"70 \u00b0F\", delta=\"1.2 \u00b0F\")\n\n .. output::\n https://share.streamlit.io/streamlit/docs/main/python/api-examples-source/metric.example1.py\n height: 210px\n\n ``st.metric`` looks especially nice in combination with ``st.columns``:\n\n >>> col1, col2, col3 = st.columns(3)\n >>> col1.metric(\"Temperature\", \"70 \u00b0F\", \"1.2 \u00b0F\")\n >>> col2.metric(\"Wind\", \"9 mph\", \"-8%\")\n >>> col3.metric(\"Humidity\", \"86%\", \"4%\")\n\n .. output::\n https://share.streamlit.io/streamlit/docs/main/python/api-examples-source/metric.example2.py\n height: 210px\n\n The delta indicator color can also be inverted or turned off:\n\n >>> st.metric(label=\"Gas price\", value=4, delta=-0.5,\n ... delta_color=\"inverse\")\n >>>\n >>> st.metric(label=\"Active developers\", value=123, delta=123,\n ... delta_color=\"off\")\n\n .. output::\n https://share.streamlit.io/streamlit/docs/main/python/api-examples-source/metric.example3.py\n height: 320px\n\n \"\"\"\n metric_proto = MetricProto()\n metric_proto.body = self.parse_value(value)\n metric_proto.label = self.parse_label(label)\n metric_proto.delta = self.parse_delta(delta)\n if help is not None:\n metric_proto.help = dedent(help)\n\n color_and_direction = self.determine_delta_color_and_direction(\n cast(DeltaColor, clean_text(delta_color)), delta\n )\n metric_proto.color = color_and_direction.color\n metric_proto.direction = color_and_direction.direction\n\n return self.dg._enqueue(\"metric\", metric_proto)\n\n @staticmethod\n def parse_label(label: str) -> str:\n if not isinstance(label, str):\n raise TypeError(\n f\"'{str(label)}' is of type {str(type(label))}, which is not an accepted type.\"\n \" label only accepts: str. Please convert the label to an accepted type.\"\n )\n return label\n\n @staticmethod\n def parse_value(value: Value) -> str:\n if value is None:\n return \"\u2014\"\n if isinstance(value, int) or isinstance(value, float) or isinstance(value, str):\n return str(value)\n elif hasattr(value, \"item\"):\n # Add support for numpy values (e.g. 
int16, float64, etc.)\n try:\n # Item could also be just a variable, so we use try, except\n if isinstance(value.item(), float) or isinstance(value.item(), int):\n return str(value.item())\n except Exception:\n pass\n\n raise TypeError(\n f\"'{str(value)}' is of type {str(type(value))}, which is not an accepted type.\"\n \" value only accepts: int, float, str, or None.\"\n \" Please convert the value to an accepted type.\"\n )\n\n @staticmethod\n def parse_delta(delta: Delta) -> str:\n if delta is None or delta == \"\":\n return \"\"\n if isinstance(delta, str):\n return dedent(delta)\n elif isinstance(delta, int) or isinstance(delta, float):\n return str(delta)\n else:\n raise TypeError(\n f\"'{str(delta)}' is of type {str(type(delta))}, which is not an accepted type.\"\n \" delta only accepts: int, float, str, or None.\"\n \" Please convert the value to an accepted type.\"\n )\n\n def determine_delta_color_and_direction(\n self,\n delta_color: DeltaColor,\n delta: Delta,\n ) -> MetricColorAndDirection:\n if delta_color not in {\"normal\", \"inverse\", \"off\"}:\n raise StreamlitAPIException(\n f\"'{str(delta_color)}' is not an accepted value. delta_color only accepts: \"\n \"'normal', 'inverse', or 'off'\"\n )\n\n if delta is None or delta == \"\":\n return MetricColorAndDirection(\n color=MetricProto.MetricColor.GRAY,\n direction=MetricProto.MetricDirection.NONE,\n )\n\n if self.is_negative(delta):\n if delta_color == \"normal\":\n cd_color = MetricProto.MetricColor.RED\n elif delta_color == \"inverse\":\n cd_color = MetricProto.MetricColor.GREEN\n else:\n cd_color = MetricProto.MetricColor.GRAY\n cd_direction = MetricProto.MetricDirection.DOWN\n else:\n if delta_color == \"normal\":\n cd_color = MetricProto.MetricColor.GREEN\n elif delta_color == \"inverse\":\n cd_color = MetricProto.MetricColor.RED\n else:\n cd_color = MetricProto.MetricColor.GRAY\n cd_direction = MetricProto.MetricDirection.UP\n\n return MetricColorAndDirection(\n color=cd_color,\n direction=cd_direction,\n )\n\n @staticmethod\n def is_negative(delta: Delta) -> bool:\n return dedent(str(delta)).startswith(\"-\")\n\n @property\n def dg(self) -> \"DeltaGenerator\":\n return cast(\"DeltaGenerator\", self)\n", "path": "lib/streamlit/elements/metric.py"}, {"content": "# Copyright 2018-2022 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\nfrom datetime import datetime\n\ndefault_tooltip = \"\"\"\nThis is a really long tooltip.\n\nLorem ipsum dolor sit amet, consectetur adipiscing elit. Ut ut turpis vitae\njusto ornare venenatis a vitae leo. Donec mollis ornare ante, eu ultricies\ntellus ornare eu. Donec eros risus, ultrices ut eleifend vel, auctor eu turpis.\nIn consectetur erat vel ante accumsan, a egestas urna aliquet. Nullam eget\nsapien eget diam euismod eleifend. Nulla purus enim, finibus ut velit eu,\nmalesuada dictum nulla. 
In non arcu et risus maximus fermentum eget nec ante.\n\"\"\".strip()\n\nleading_indent_code_tooltip = \"\"\"\nCode:\n\n This\n is\n a\n code\n block!\"\"\"\n\nleading_indent_regular_text_tooltip = \"\"\"\nThis is a regular text block!\nTest1\nTest2\n\n\"\"\"\n\nindented_code_tooltip = \"\"\"\nCode:\n\n for i in range(10):\n x = i * 10\n print(x)\n \"\"\"\n\nno_indent_tooltip = \"thisisatooltipwithnoindents. It has some spaces but no idents.\"\n\nst.text_input(\"some input text\", \"default text\", help=default_tooltip)\nst.number_input(\"number input\", value=1, help=leading_indent_code_tooltip)\nst.checkbox(\"some checkbox\", help=leading_indent_regular_text_tooltip)\nst.radio(\"best animal\", (\"tiger\", \"giraffe\", \"bear\"), 0, help=indented_code_tooltip)\nst.selectbox(\"selectbox\", (\"a\", \"b\", \"c\"), 0, help=default_tooltip)\nst.time_input(\"time\", datetime(2019, 7, 6, 21, 15), help=leading_indent_code_tooltip)\nst.date_input(\n \"date\", datetime(2019, 7, 6, 21, 15), help=leading_indent_regular_text_tooltip\n)\nst.slider(\"slider\", 0, 100, 50, help=indented_code_tooltip)\nst.color_picker(\"color picker\", help=no_indent_tooltip)\nst.file_uploader(\"file uploader\", help=default_tooltip)\nst.multiselect(\n \"multiselect\", [\"a\", \"b\", \"c\"], [\"a\", \"b\"], help=leading_indent_code_tooltip\n)\nst.text_area(\"textarea\", help=leading_indent_regular_text_tooltip)\nst.select_slider(\"selectslider\", options=[\"a\", \"b\", \"c\"], help=indented_code_tooltip)\nst.button(\"some button\", help=no_indent_tooltip)\nst.metric(\"some metric\", value=500, help=no_indent_tooltip)\n", "path": "e2e/scripts/st_tooltips.py"}]} | 3,560 | 469 |
gh_patches_debug_28253 | rasdani/github-patches | git_diff | goauthentik__authentik-6841 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SCIM exhausts all workers
**Describe the bug**
We do not have a SCIM provider configured but there are 8 workers constantly doing SCIM tasks and the number of entries in Redis continues to grow indefinitely (more than 200K right now).
**To Reproduce**
Steps to reproduce the behavior:
1. Have an LDAP source with at least 900 users and another 900 groups, with a total of 147k memberships.
2. Just wait for LDAP sync.
3. See how SCIM tasks take up all the workers despite not having a SCIM provider configured.
**Expected behavior**
Without a SCIM provider, there are no SCIM tasks to do.
**Logs**
All worker logs are similar to the following one:
```
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62722, "task_id": "d8c04738-ba7f-4645-93fd-c5e634c0119c", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:28.028322"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62722, "state": "SUCCESS", "task_id": "d8c04738-ba7f-4645-93fd-c5e634c0119c", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:28.065772"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62723, "task_id": "9fcf5591-1e26-4572-8f55-1dd50bc789fb", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:28.472488"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62723, "state": "SUCCESS", "task_id": "9fcf5591-1e26-4572-8f55-1dd50bc789fb", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:28.518844"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62724, "task_id": "7aa43f29-11ba-44e4-9687-e3b002580e6b", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:28.906288"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62724, "state": "SUCCESS", "task_id": "7aa43f29-11ba-44e4-9687-e3b002580e6b", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:28.947772"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62725, "task_id": "074a50e4-8454-4684-acff-eb46c5a25eff", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:29.384486"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62725, "state": "SUCCESS", "task_id": "074a50e4-8454-4684-acff-eb46c5a25eff", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:29.422690"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62726, "task_id": "c155e461-1185-40a9-8018-685f399eb750", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:29.841684"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62726, "state": "SUCCESS", "task_id": "c155e461-1185-40a9-8018-685f399eb750", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:29.881108"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62727, "task_id": "2ebd8a0e-83ca-4224-b14c-fdb4ad1e7c72", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:30.313036"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62727, "state": "SUCCESS", "task_id": "2ebd8a0e-83ca-4224-b14c-fdb4ad1e7c72", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:30.355698"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62728, "task_id": "8c9a7cec-657d-49f5-a6a6-1e72a5f2d1c6", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:30.793799"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62728, "state": "SUCCESS", "task_id": "8c9a7cec-657d-49f5-a6a6-1e72a5f2d1c6", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:30.836449"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62729, "task_id": "70fbfbfb-a63e-45df-bd99-fa6ece27181e", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:31.210426"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62729, "state": "SUCCESS", "task_id": "70fbfbfb-a63e-45df-bd99-fa6ece27181e", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:31.251003"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62730, "task_id": "197ba0ec-424f-440d-9b2d-436cb34cb031", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:31.684274"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62730, "state": "SUCCESS", "task_id": "197ba0ec-424f-440d-9b2d-436cb34cb031", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:31.724073"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62699, "task_id": "34e4f35a-8d6c-4a3a-96db-58fd30dc1878", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:32.330238"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62699, "state": "SUCCESS", "task_id": "34e4f35a-8d6c-4a3a-96db-58fd30dc1878", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:34:32.368228"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62732, "task_id": "fd2ffb15-bf94-4684-b49d-011b61d1d459", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:35:03.788787"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62732, "state": "SUCCESS", "task_id": "fd2ffb15-bf94-4684-b49d-011b61d1d459", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:35:03.828492"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62733, "task_id": "6b536317-decd-4fa9-a39c-083bdd1cf2ac", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:35:08.961920"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62733, "state": "SUCCESS", "task_id": "6b536317-decd-4fa9-a39c-083bdd1cf2ac", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:35:09.002660"}
{"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 62734, "task_id": "94fde81c-11cd-4811-8bb0-279f7b1b4fa8", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:35:19.022284"}
{"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 62734, "state": "SUCCESS", "task_id": "94fde81c-11cd-4811-8bb0-279f7b1b4fa8", "task_name": "scim_signal_direct", "timestamp": "2023-09-11T10:35:19.061598"}
```
**Version and Deployment (please complete the following information):**
- authentik version: 2023.6.1
- Deployment: helm
**Additional context**
Ask whatever you need.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/providers/scim/signals.py`
Content:
```
1 """SCIM provider signals"""
2 from django.db.models import Model
3 from django.db.models.signals import m2m_changed, post_save, pre_delete
4 from django.dispatch import receiver
5 from pydanticscim.responses import PatchOp
6 from structlog.stdlib import get_logger
7
8 from authentik.core.models import Group, User
9 from authentik.lib.utils.reflection import class_to_path
10 from authentik.providers.scim.models import SCIMProvider
11 from authentik.providers.scim.tasks import scim_signal_direct, scim_signal_m2m, scim_sync
12
13 LOGGER = get_logger()
14
15
16 @receiver(post_save, sender=SCIMProvider)
17 def post_save_provider(sender: type[Model], instance, created: bool, **_):
18 """Trigger sync when SCIM provider is saved"""
19 scim_sync.delay(instance.pk)
20
21
22 @receiver(post_save, sender=User)
23 @receiver(post_save, sender=Group)
24 def post_save_scim(sender: type[Model], instance: User | Group, created: bool, **_):
25 """Post save handler"""
26 scim_signal_direct.delay(class_to_path(instance.__class__), instance.pk, PatchOp.add.value)
27
28
29 @receiver(pre_delete, sender=User)
30 @receiver(pre_delete, sender=Group)
31 def pre_delete_scim(sender: type[Model], instance: User | Group, **_):
32 """Pre-delete handler"""
33 scim_signal_direct.delay(class_to_path(instance.__class__), instance.pk, PatchOp.remove.value)
34
35
36 @receiver(m2m_changed, sender=User.ak_groups.through)
37 def m2m_changed_scim(
38 sender: type[Model], instance, action: str, pk_set: set, reverse: bool, **kwargs
39 ):
40 """Sync group membership"""
41 if action not in ["post_add", "post_remove"]:
42 return
43 # reverse: instance is a Group, pk_set is a list of user pks
44 # non-reverse: instance is a User, pk_set is a list of groups
45 if reverse:
46 scim_signal_m2m.delay(str(instance.pk), action, list(pk_set))
47 else:
48 for group_pk in pk_set:
49 scim_signal_m2m.delay(group_pk, action, [instance.pk])
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/providers/scim/signals.py b/authentik/providers/scim/signals.py
--- a/authentik/providers/scim/signals.py
+++ b/authentik/providers/scim/signals.py
@@ -23,6 +23,8 @@
@receiver(post_save, sender=Group)
def post_save_scim(sender: type[Model], instance: User | Group, created: bool, **_):
"""Post save handler"""
+ if not SCIMProvider.objects.filter(backchannel_application__isnull=False).exists():
+ return
scim_signal_direct.delay(class_to_path(instance.__class__), instance.pk, PatchOp.add.value)
@@ -30,6 +32,8 @@
@receiver(pre_delete, sender=Group)
def pre_delete_scim(sender: type[Model], instance: User | Group, **_):
"""Pre-delete handler"""
+ if not SCIMProvider.objects.filter(backchannel_application__isnull=False).exists():
+ return
scim_signal_direct.delay(class_to_path(instance.__class__), instance.pk, PatchOp.remove.value)
@@ -40,6 +44,8 @@
"""Sync group membership"""
if action not in ["post_add", "post_remove"]:
return
+ if not SCIMProvider.objects.filter(backchannel_application__isnull=False).exists():
+ return
# reverse: instance is a Group, pk_set is a list of user pks
# non-reverse: instance is a User, pk_set is a list of groups
if reverse:
| {"golden_diff": "diff --git a/authentik/providers/scim/signals.py b/authentik/providers/scim/signals.py\n--- a/authentik/providers/scim/signals.py\n+++ b/authentik/providers/scim/signals.py\n@@ -23,6 +23,8 @@\n @receiver(post_save, sender=Group)\n def post_save_scim(sender: type[Model], instance: User | Group, created: bool, **_):\n \"\"\"Post save handler\"\"\"\n+ if not SCIMProvider.objects.filter(backchannel_application__isnull=False).exists():\n+ return\n scim_signal_direct.delay(class_to_path(instance.__class__), instance.pk, PatchOp.add.value)\n \n \n@@ -30,6 +32,8 @@\n @receiver(pre_delete, sender=Group)\n def pre_delete_scim(sender: type[Model], instance: User | Group, **_):\n \"\"\"Pre-delete handler\"\"\"\n+ if not SCIMProvider.objects.filter(backchannel_application__isnull=False).exists():\n+ return\n scim_signal_direct.delay(class_to_path(instance.__class__), instance.pk, PatchOp.remove.value)\n \n \n@@ -40,6 +44,8 @@\n \"\"\"Sync group membership\"\"\"\n if action not in [\"post_add\", \"post_remove\"]:\n return\n+ if not SCIMProvider.objects.filter(backchannel_application__isnull=False).exists():\n+ return\n # reverse: instance is a Group, pk_set is a list of user pks\n # non-reverse: instance is a User, pk_set is a list of groups\n if reverse:\n", "issue": "SCIM exhausts all workers\n**Describe the bug**\r\n\r\nWe do not have a SCIM provider configured but there are 8 workers constantly doing SCIM tasks and the number of entries in Redis continues to grow indefinitely (more than 200K right now).\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. Have an LDAP source with at least 900 users and another 900 groups, with a total of 147k memberships.\r\n2. Just wait for LDAP sync.\r\n3. See how SCIM tasks take up all the workers despite not having a SCIM provider configured.\r\n\r\n**Expected behavior**\r\n\r\nWithout a SCIM provider, there is no SCIM tasks to do.\r\n\r\n**Logs**\r\n\r\nAll worker logs are similar to the following one:\r\n\r\n```\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62722, \"task_id\": \"d8c04738-ba7f-4645-93fd-c5e634c0119c\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:28.028322\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62722, \"state\": \"SUCCESS\", \"task_id\": \"d8c04738-ba7f-4645-93fd-c5e634c0119c\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:28.065772\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62723, \"task_id\": \"9fcf5591-1e26-4572-8f55-1dd50bc789fb\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:28.472488\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62723, \"state\": \"SUCCESS\", \"task_id\": \"9fcf5591-1e26-4572-8f55-1dd50bc789fb\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:28.518844\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62724, \"task_id\": \"7aa43f29-11ba-44e4-9687-e3b002580e6b\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:28.906288\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62724, \"state\": \"SUCCESS\", \"task_id\": \"7aa43f29-11ba-44e4-9687-e3b002580e6b\", \"task_name\": \"scim_signal_direct\", \"timestamp\": 
\"2023-09-11T10:34:28.947772\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62725, \"task_id\": \"074a50e4-8454-4684-acff-eb46c5a25eff\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:29.384486\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62725, \"state\": \"SUCCESS\", \"task_id\": \"074a50e4-8454-4684-acff-eb46c5a25eff\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:29.422690\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62726, \"task_id\": \"c155e461-1185-40a9-8018-685f399eb750\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:29.841684\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62726, \"state\": \"SUCCESS\", \"task_id\": \"c155e461-1185-40a9-8018-685f399eb750\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:29.881108\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62727, \"task_id\": \"2ebd8a0e-83ca-4224-b14c-fdb4ad1e7c72\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:30.313036\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62727, \"state\": \"SUCCESS\", \"task_id\": \"2ebd8a0e-83ca-4224-b14c-fdb4ad1e7c72\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:30.355698\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62728, \"task_id\": \"8c9a7cec-657d-49f5-a6a6-1e72a5f2d1c6\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:30.793799\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62728, \"state\": \"SUCCESS\", \"task_id\": \"8c9a7cec-657d-49f5-a6a6-1e72a5f2d1c6\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:30.836449\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62729, \"task_id\": \"70fbfbfb-a63e-45df-bd99-fa6ece27181e\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:31.210426\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62729, \"state\": \"SUCCESS\", \"task_id\": \"70fbfbfb-a63e-45df-bd99-fa6ece27181e\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:31.251003\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62730, \"task_id\": \"197ba0ec-424f-440d-9b2d-436cb34cb031\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:31.684274\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62730, \"state\": \"SUCCESS\", \"task_id\": \"197ba0ec-424f-440d-9b2d-436cb34cb031\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:31.724073\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62699, \"task_id\": \"34e4f35a-8d6c-4a3a-96db-58fd30dc1878\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:32.330238\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62699, \"state\": 
\"SUCCESS\", \"task_id\": \"34e4f35a-8d6c-4a3a-96db-58fd30dc1878\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:34:32.368228\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62732, \"task_id\": \"fd2ffb15-bf94-4684-b49d-011b61d1d459\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:35:03.788787\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62732, \"state\": \"SUCCESS\", \"task_id\": \"fd2ffb15-bf94-4684-b49d-011b61d1d459\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:35:03.828492\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62733, \"task_id\": \"6b536317-decd-4fa9-a39c-083bdd1cf2ac\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:35:08.961920\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62733, \"state\": \"SUCCESS\", \"task_id\": \"6b536317-decd-4fa9-a39c-083bdd1cf2ac\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:35:09.002660\"}\r\n{\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62734, \"task_id\": \"94fde81c-11cd-4811-8bb0-279f7b1b4fa8\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:35:19.022284\"}\r\n{\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 62734, \"state\": \"SUCCESS\", \"task_id\": \"94fde81c-11cd-4811-8bb0-279f7b1b4fa8\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-09-11T10:35:19.061598\"}\r\n```\r\n\r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version: 2023.6.1\r\n- Deployment: helm\r\n\r\n**Additional context**\r\n\r\nAsk whatever you need.\r\n\n", "before_files": [{"content": "\"\"\"SCIM provider signals\"\"\"\nfrom django.db.models import Model\nfrom django.db.models.signals import m2m_changed, post_save, pre_delete\nfrom django.dispatch import receiver\nfrom pydanticscim.responses import PatchOp\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.models import Group, User\nfrom authentik.lib.utils.reflection import class_to_path\nfrom authentik.providers.scim.models import SCIMProvider\nfrom authentik.providers.scim.tasks import scim_signal_direct, scim_signal_m2m, scim_sync\n\nLOGGER = get_logger()\n\n\n@receiver(post_save, sender=SCIMProvider)\ndef post_save_provider(sender: type[Model], instance, created: bool, **_):\n \"\"\"Trigger sync when SCIM provider is saved\"\"\"\n scim_sync.delay(instance.pk)\n\n\n@receiver(post_save, sender=User)\n@receiver(post_save, sender=Group)\ndef post_save_scim(sender: type[Model], instance: User | Group, created: bool, **_):\n \"\"\"Post save handler\"\"\"\n scim_signal_direct.delay(class_to_path(instance.__class__), instance.pk, PatchOp.add.value)\n\n\n@receiver(pre_delete, sender=User)\n@receiver(pre_delete, sender=Group)\ndef pre_delete_scim(sender: type[Model], instance: User | Group, **_):\n \"\"\"Pre-delete handler\"\"\"\n scim_signal_direct.delay(class_to_path(instance.__class__), instance.pk, PatchOp.remove.value)\n\n\n@receiver(m2m_changed, sender=User.ak_groups.through)\ndef m2m_changed_scim(\n sender: type[Model], instance, action: str, pk_set: set, reverse: bool, **kwargs\n):\n \"\"\"Sync group membership\"\"\"\n if action not in [\"post_add\", \"post_remove\"]:\n 
return\n # reverse: instance is a Group, pk_set is a list of user pks\n # non-reverse: instance is a User, pk_set is a list of groups\n if reverse:\n scim_signal_m2m.delay(str(instance.pk), action, list(pk_set))\n else:\n for group_pk in pk_set:\n scim_signal_m2m.delay(group_pk, action, [instance.pk])\n", "path": "authentik/providers/scim/signals.py"}], "after_files": [{"content": "\"\"\"SCIM provider signals\"\"\"\nfrom django.db.models import Model\nfrom django.db.models.signals import m2m_changed, post_save, pre_delete\nfrom django.dispatch import receiver\nfrom pydanticscim.responses import PatchOp\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.models import Group, User\nfrom authentik.lib.utils.reflection import class_to_path\nfrom authentik.providers.scim.models import SCIMProvider\nfrom authentik.providers.scim.tasks import scim_signal_direct, scim_signal_m2m, scim_sync\n\nLOGGER = get_logger()\n\n\n@receiver(post_save, sender=SCIMProvider)\ndef post_save_provider(sender: type[Model], instance, created: bool, **_):\n \"\"\"Trigger sync when SCIM provider is saved\"\"\"\n scim_sync.delay(instance.pk)\n\n\n@receiver(post_save, sender=User)\n@receiver(post_save, sender=Group)\ndef post_save_scim(sender: type[Model], instance: User | Group, created: bool, **_):\n \"\"\"Post save handler\"\"\"\n if not SCIMProvider.objects.filter(backchannel_application__isnull=False).exists():\n return\n scim_signal_direct.delay(class_to_path(instance.__class__), instance.pk, PatchOp.add.value)\n\n\n@receiver(pre_delete, sender=User)\n@receiver(pre_delete, sender=Group)\ndef pre_delete_scim(sender: type[Model], instance: User | Group, **_):\n \"\"\"Pre-delete handler\"\"\"\n if not SCIMProvider.objects.filter(backchannel_application__isnull=False).exists():\n return\n scim_signal_direct.delay(class_to_path(instance.__class__), instance.pk, PatchOp.remove.value)\n\n\n@receiver(m2m_changed, sender=User.ak_groups.through)\ndef m2m_changed_scim(\n sender: type[Model], instance, action: str, pk_set: set, reverse: bool, **kwargs\n):\n \"\"\"Sync group membership\"\"\"\n if action not in [\"post_add\", \"post_remove\"]:\n return\n if not SCIMProvider.objects.filter(backchannel_application__isnull=False).exists():\n return\n # reverse: instance is a Group, pk_set is a list of user pks\n # non-reverse: instance is a User, pk_set is a list of groups\n if reverse:\n scim_signal_m2m.delay(str(instance.pk), action, list(pk_set))\n else:\n for group_pk in pk_set:\n scim_signal_m2m.delay(group_pk, action, [instance.pk])\n", "path": "authentik/providers/scim/signals.py"}]} | 4,093 | 335 |