Dataset Viewer

Columns: problem_id (string, length 18-22) | source (string, 1 distinct value) | task_type (string, 1 distinct value) | in_source_id (string, length 13-58) | prompt (string, length 1.1k-25.4k) | golden_diff (string, length 145-5.13k) | verification_info (string, length 582-39.1k) | num_tokens (int64, 271-4.1k) | num_tokens_diff (int64, 47-1.02k)

problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens | num_tokens_diff |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_32737 | rasdani/github-patches | git_diff | dask__dask-586 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Additional improvement for da.image.imread
I think the following 2 improvements would make the function better.
#### 1. Allow a custom `imread` function.
Some prefer `opencv`, which reads color in BGR order, whereas `skimage` reads in RGB order. Adding a `dialect` option (like `dialect='skimage'` or `dialect='cv'`), or accepting a different file-read function, may be better.
#### 2. Allow a custom preprocessing function.
In image processing, input images may have different sizes. In these cases, preprocessing is needed between reading the image and creating the array. It is the user's responsibility to ensure this preprocessing function returns arrays of the same size.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dask/array/image.py`
Content:
```
1 from glob import glob
2 import os
3
4 try:
5 from skimage.io import imread as sk_imread
6 except ImportError:
7 pass
8
9 from .core import Array
10 from ..base import tokenize
11
12 def add_leading_dimension(x):
13 return x[None, ...]
14
15
16 def imread(filename):
17 """ Read a stack of images into a dask array
18
19 Parameters
20 ----------
21
22 filename: string
23 A globstring like 'myfile.*.png'
24
25 Example
26 -------
27
28 >>> from dask.array.image import imread
29 >>> im = imread('2015-*-*.png') # doctest: +SKIP
30 >>> im.shape # doctest: +SKIP
31 (365, 1000, 1000, 3)
32
33 Returns
34 -------
35
36 Dask array of all images stacked along the first dimension. All images
37 will be treated as individual chunks
38 """
39 filenames = sorted(glob(filename))
40 if not filenames:
41 raise ValueError("No files found under name %s" % filename)
42
43 name = 'imread-%s' % tokenize(filenames, map(os.path.getmtime, filenames))
44
45 sample = sk_imread(filenames[0])
46
47 dsk = dict(((name, i) + (0,) * len(sample.shape),
48 (add_leading_dimension, (sk_imread, filename)))
49 for i, filename in enumerate(filenames))
50
51 chunks = ((1,) * len(filenames),) + tuple((d,) for d in sample.shape)
52
53 return Array(dsk, name, chunks, sample.dtype)
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dask/array/image.py b/dask/array/image.py
--- a/dask/array/image.py
+++ b/dask/array/image.py
@@ -13,7 +13,7 @@
return x[None, ...]
-def imread(filename):
+def imread(filename, imread=None, preprocess=None):
""" Read a stack of images into a dask array
Parameters
@@ -21,6 +21,13 @@
filename: string
A globstring like 'myfile.*.png'
+ imread: function (optional)
+ Optionally provide custom imread function.
+ Function should expect a filename and produce a numpy array.
+ Defaults to ``skimage.io.imread``.
+ preprocess: function (optional)
+ Optionally provide custom function to preprocess the image.
+ Function should expect a numpy array for a single image.
Example
-------
@@ -36,17 +43,25 @@
Dask array of all images stacked along the first dimension. All images
will be treated as individual chunks
"""
+ imread = imread or sk_imread
filenames = sorted(glob(filename))
if not filenames:
raise ValueError("No files found under name %s" % filename)
name = 'imread-%s' % tokenize(filenames, map(os.path.getmtime, filenames))
- sample = sk_imread(filenames[0])
-
- dsk = dict(((name, i) + (0,) * len(sample.shape),
- (add_leading_dimension, (sk_imread, filename)))
- for i, filename in enumerate(filenames))
+ sample = imread(filenames[0])
+ if preprocess:
+ sample = preprocess(sample)
+
+ keys = [(name, i) + (0,) * len(sample.shape) for i in range(len(filenames))]
+ if preprocess:
+ values = [(add_leading_dimension, (preprocess, (imread, filename)))
+ for filename in filenames]
+ else:
+ values = [(add_leading_dimension, (imread, filename))
+ for filename in filenames]
+ dsk = dict(zip(keys, values))
chunks = ((1,) * len(filenames),) + tuple((d,) for d in sample.shape)
| {"golden_diff": "diff --git a/dask/array/image.py b/dask/array/image.py\n--- a/dask/array/image.py\n+++ b/dask/array/image.py\n@@ -13,7 +13,7 @@\n return x[None, ...]\n \n \n-def imread(filename):\n+def imread(filename, imread=None, preprocess=None):\n \"\"\" Read a stack of images into a dask array\n \n Parameters\n@@ -21,6 +21,13 @@\n \n filename: string\n A globstring like 'myfile.*.png'\n+ imread: function (optional)\n+ Optionally provide custom imread function.\n+ Function should expect a filename and produce a numpy array.\n+ Defaults to ``skimage.io.imread``.\n+ preprocess: function (optional)\n+ Optionally provide custom function to preprocess the image.\n+ Function should expect a numpy array for a single image.\n \n Example\n -------\n@@ -36,17 +43,25 @@\n Dask array of all images stacked along the first dimension. All images\n will be treated as individual chunks\n \"\"\"\n+ imread = imread or sk_imread\n filenames = sorted(glob(filename))\n if not filenames:\n raise ValueError(\"No files found under name %s\" % filename)\n \n name = 'imread-%s' % tokenize(filenames, map(os.path.getmtime, filenames))\n \n- sample = sk_imread(filenames[0])\n-\n- dsk = dict(((name, i) + (0,) * len(sample.shape),\n- (add_leading_dimension, (sk_imread, filename)))\n- for i, filename in enumerate(filenames))\n+ sample = imread(filenames[0])\n+ if preprocess:\n+ sample = preprocess(sample)\n+\n+ keys = [(name, i) + (0,) * len(sample.shape) for i in range(len(filenames))]\n+ if preprocess:\n+ values = [(add_leading_dimension, (preprocess, (imread, filename)))\n+ for filename in filenames]\n+ else:\n+ values = [(add_leading_dimension, (imread, filename))\n+ for filename in filenames]\n+ dsk = dict(zip(keys, values))\n \n chunks = ((1,) * len(filenames),) + tuple((d,) for d in sample.shape)\n", "issue": "Additional improvement for da.image.imread\nI think following 2 improvements make the function better.\n#### 1. Allow custom `imread` function.\n\nSome prefer `opencv` which reads color in BGR order, otherwise `skimage` reads in RGB order. Adding `dialect` option (like `dialect='skimage'` or `dialect='cv'`) or accept different file read function may better.\n#### 2. Allow custom preprocessing function.\n\nIn image processing, input images may have different sizes. In these case, preprocessing is needed between image read and array creation. This preprocessing function must return the same size of array on user's responsibility.\n\n", "before_files": [{"content": "from glob import glob\nimport os\n\ntry:\n from skimage.io import imread as sk_imread\nexcept ImportError:\n pass\n\nfrom .core import Array\nfrom ..base import tokenize\n\ndef add_leading_dimension(x):\n return x[None, ...]\n\n\ndef imread(filename):\n \"\"\" Read a stack of images into a dask array\n\n Parameters\n ----------\n\n filename: string\n A globstring like 'myfile.*.png'\n\n Example\n -------\n\n >>> from dask.array.image import imread\n >>> im = imread('2015-*-*.png') # doctest: +SKIP\n >>> im.shape # doctest: +SKIP\n (365, 1000, 1000, 3)\n\n Returns\n -------\n\n Dask array of all images stacked along the first dimension. 
All images\n will be treated as individual chunks\n \"\"\"\n filenames = sorted(glob(filename))\n if not filenames:\n raise ValueError(\"No files found under name %s\" % filename)\n\n name = 'imread-%s' % tokenize(filenames, map(os.path.getmtime, filenames))\n\n sample = sk_imread(filenames[0])\n\n dsk = dict(((name, i) + (0,) * len(sample.shape),\n (add_leading_dimension, (sk_imread, filename)))\n for i, filename in enumerate(filenames))\n\n chunks = ((1,) * len(filenames),) + tuple((d,) for d in sample.shape)\n\n return Array(dsk, name, chunks, sample.dtype)\n", "path": "dask/array/image.py"}], "after_files": [{"content": "from glob import glob\nimport os\n\ntry:\n from skimage.io import imread as sk_imread\nexcept ImportError:\n pass\n\nfrom .core import Array\nfrom ..base import tokenize\n\ndef add_leading_dimension(x):\n return x[None, ...]\n\n\ndef imread(filename, imread=None, preprocess=None):\n \"\"\" Read a stack of images into a dask array\n\n Parameters\n ----------\n\n filename: string\n A globstring like 'myfile.*.png'\n imread: function (optional)\n Optionally provide custom imread function.\n Function should expect a filename and produce a numpy array.\n Defaults to ``skimage.io.imread``.\n preprocess: function (optional)\n Optionally provide custom function to preprocess the image.\n Function should expect a numpy array for a single image.\n\n Example\n -------\n\n >>> from dask.array.image import imread\n >>> im = imread('2015-*-*.png') # doctest: +SKIP\n >>> im.shape # doctest: +SKIP\n (365, 1000, 1000, 3)\n\n Returns\n -------\n\n Dask array of all images stacked along the first dimension. All images\n will be treated as individual chunks\n \"\"\"\n imread = imread or sk_imread\n filenames = sorted(glob(filename))\n if not filenames:\n raise ValueError(\"No files found under name %s\" % filename)\n\n name = 'imread-%s' % tokenize(filenames, map(os.path.getmtime, filenames))\n\n sample = imread(filenames[0])\n if preprocess:\n sample = preprocess(sample)\n\n keys = [(name, i) + (0,) * len(sample.shape) for i in range(len(filenames))]\n if preprocess:\n values = [(add_leading_dimension, (preprocess, (imread, filename)))\n for filename in filenames]\n else:\n values = [(add_leading_dimension, (imread, filename))\n for filename in filenames]\n dsk = dict(zip(keys, values))\n\n chunks = ((1,) * len(filenames),) + tuple((d,) for d in sample.shape)\n\n return Array(dsk, name, chunks, sample.dtype)\n", "path": "dask/array/image.py"}]} | 843 | 497 |
gh_patches_debug_3876 | rasdani/github-patches | git_diff | xorbitsai__inference-299 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FEAT: Disable Gradio Telemetry
Pull requests are disabled but see here:
https://github.com/arch-btw/inference/pull/1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/gradio_chatinterface.py`
Content:
```
1 from typing import Dict, List
2
3 import gradio as gr
4
5 from xinference.client import Client
6
7 if __name__ == "__main__":
8 import argparse
9 import textwrap
10
11 parser = argparse.ArgumentParser(
12 formatter_class=argparse.RawDescriptionHelpFormatter,
13 epilog=textwrap.dedent(
14 """\
15 instructions to run:
16 1. Install Xinference and Llama-cpp-python
17 2. Run 'xinference --host "localhost" --port 9997' in terminal
18 3. Run this python file in new terminal window
19
20 e.g. (feel free to copy)
21 python gradio_chatinterface.py \\
22 --endpoint http://localhost:9997 \\
23 --model_name vicuna-v1.3 \\
24 --model_size_in_billions 7 \\
25 --model_format ggmlv3 \\
26 --quantization q2_K
27
28 If you decide to change the port number in step 2,
29 please also change the endpoint in the arguments
30 """
31 ),
32 )
33
34 parser.add_argument(
35 "--endpoint", type=str, required=True, help="Xinference endpoint, required"
36 )
37 parser.add_argument(
38 "--model_name", type=str, required=True, help="Name of the model, required"
39 )
40 parser.add_argument(
41 "--model_size_in_billions",
42 type=int,
43 required=False,
44 help="Size of the model in billions",
45 )
46 parser.add_argument(
47 "--model_format",
48 type=str,
49 required=False,
50 help="Format of the model",
51 )
52 parser.add_argument(
53 "--quantization", type=str, required=False, help="Quantization of the model"
54 )
55
56 args = parser.parse_args()
57
58 endpoint = args.endpoint
59 model_name = args.model_name
60 model_size_in_billions = args.model_size_in_billions
61 model_format = args.model_format
62 quantization = args.quantization
63
64 print(f"Xinference endpoint: {endpoint}")
65 print(f"Model Name: {model_name}")
66 print(f"Model Size (in billions): {model_size_in_billions}")
67 print(f"Model Format: {model_format}")
68 print(f"Quantization: {quantization}")
69
70 client = Client(endpoint)
71 model_uid = client.launch_model(
72 model_name,
73 model_size_in_billions=model_size_in_billions,
74 model_format=model_format,
75 quantization=quantization,
76 n_ctx=2048,
77 )
78 model = client.get_model(model_uid)
79
80 def flatten(matrix: List[List[str]]) -> List[str]:
81 flat_list = []
82 for row in matrix:
83 flat_list += row
84 return flat_list
85
86 def to_chat(lst: List[str]) -> List[Dict[str, str]]:
87 res = []
88 for i in range(len(lst)):
89 role = "assistant" if i % 2 == 1 else "user"
90 res.append(
91 {
92 "role": role,
93 "content": lst[i],
94 }
95 )
96 return res
97
98 def generate_wrapper(message: str, history: List[List[str]]) -> str:
99 output = model.chat(
100 prompt=message,
101 chat_history=to_chat(flatten(history)),
102 generate_config={"max_tokens": 512, "stream": False},
103 )
104 return output["choices"][0]["message"]["content"]
105
106 demo = gr.ChatInterface(
107 fn=generate_wrapper,
108 examples=[
109 "Show me a two sentence horror story with a plot twist",
110 "Generate a Haiku poem using trignometry as the central theme",
111 "Write three sentences of scholarly description regarding a supernatural beast",
112 "Prove there does not exist a largest integer",
113 ],
114 title="Xinference Chat Bot",
115 )
116 demo.launch()
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/gradio_chatinterface.py b/examples/gradio_chatinterface.py
--- a/examples/gradio_chatinterface.py
+++ b/examples/gradio_chatinterface.py
@@ -105,6 +105,7 @@
demo = gr.ChatInterface(
fn=generate_wrapper,
+ analytics_enabled=False,
examples=[
"Show me a two sentence horror story with a plot twist",
"Generate a Haiku poem using trignometry as the central theme",
| {"golden_diff": "diff --git a/examples/gradio_chatinterface.py b/examples/gradio_chatinterface.py\n--- a/examples/gradio_chatinterface.py\n+++ b/examples/gradio_chatinterface.py\n@@ -105,6 +105,7 @@\n \n demo = gr.ChatInterface(\n fn=generate_wrapper,\n+ analytics_enabled=False,\n examples=[\n \"Show me a two sentence horror story with a plot twist\",\n \"Generate a Haiku poem using trignometry as the central theme\",\n", "issue": "FEAT: Disable Gradio Telemetry\nPull requests are disabled but see here:\r\n\r\nhttps://github.com/arch-btw/inference/pull/1\n", "before_files": [{"content": "from typing import Dict, List\n\nimport gradio as gr\n\nfrom xinference.client import Client\n\nif __name__ == \"__main__\":\n import argparse\n import textwrap\n\n parser = argparse.ArgumentParser(\n formatter_class=argparse.RawDescriptionHelpFormatter,\n epilog=textwrap.dedent(\n \"\"\"\\\n instructions to run:\n 1. Install Xinference and Llama-cpp-python\n 2. Run 'xinference --host \"localhost\" --port 9997' in terminal\n 3. Run this python file in new terminal window\n\n e.g. (feel free to copy)\n python gradio_chatinterface.py \\\\\n --endpoint http://localhost:9997 \\\\\n --model_name vicuna-v1.3 \\\\\n --model_size_in_billions 7 \\\\\n --model_format ggmlv3 \\\\\n --quantization q2_K\n\n If you decide to change the port number in step 2,\n please also change the endpoint in the arguments\n \"\"\"\n ),\n )\n\n parser.add_argument(\n \"--endpoint\", type=str, required=True, help=\"Xinference endpoint, required\"\n )\n parser.add_argument(\n \"--model_name\", type=str, required=True, help=\"Name of the model, required\"\n )\n parser.add_argument(\n \"--model_size_in_billions\",\n type=int,\n required=False,\n help=\"Size of the model in billions\",\n )\n parser.add_argument(\n \"--model_format\",\n type=str,\n required=False,\n help=\"Format of the model\",\n )\n parser.add_argument(\n \"--quantization\", type=str, required=False, help=\"Quantization of the model\"\n )\n\n args = parser.parse_args()\n\n endpoint = args.endpoint\n model_name = args.model_name\n model_size_in_billions = args.model_size_in_billions\n model_format = args.model_format\n quantization = args.quantization\n\n print(f\"Xinference endpoint: {endpoint}\")\n print(f\"Model Name: {model_name}\")\n print(f\"Model Size (in billions): {model_size_in_billions}\")\n print(f\"Model Format: {model_format}\")\n print(f\"Quantization: {quantization}\")\n\n client = Client(endpoint)\n model_uid = client.launch_model(\n model_name,\n model_size_in_billions=model_size_in_billions,\n model_format=model_format,\n quantization=quantization,\n n_ctx=2048,\n )\n model = client.get_model(model_uid)\n\n def flatten(matrix: List[List[str]]) -> List[str]:\n flat_list = []\n for row in matrix:\n flat_list += row\n return flat_list\n\n def to_chat(lst: List[str]) -> List[Dict[str, str]]:\n res = []\n for i in range(len(lst)):\n role = \"assistant\" if i % 2 == 1 else \"user\"\n res.append(\n {\n \"role\": role,\n \"content\": lst[i],\n }\n )\n return res\n\n def generate_wrapper(message: str, history: List[List[str]]) -> str:\n output = model.chat(\n prompt=message,\n chat_history=to_chat(flatten(history)),\n generate_config={\"max_tokens\": 512, \"stream\": False},\n )\n return output[\"choices\"][0][\"message\"][\"content\"]\n\n demo = gr.ChatInterface(\n fn=generate_wrapper,\n examples=[\n \"Show me a two sentence horror story with a plot twist\",\n \"Generate a Haiku poem using trignometry as the central theme\",\n \"Write three sentences of 
scholarly description regarding a supernatural beast\",\n \"Prove there does not exist a largest integer\",\n ],\n title=\"Xinference Chat Bot\",\n )\n demo.launch()\n", "path": "examples/gradio_chatinterface.py"}], "after_files": [{"content": "from typing import Dict, List\n\nimport gradio as gr\n\nfrom xinference.client import Client\n\nif __name__ == \"__main__\":\n import argparse\n import textwrap\n\n parser = argparse.ArgumentParser(\n formatter_class=argparse.RawDescriptionHelpFormatter,\n epilog=textwrap.dedent(\n \"\"\"\\\n instructions to run:\n 1. Install Xinference and Llama-cpp-python\n 2. Run 'xinference --host \"localhost\" --port 9997' in terminal\n 3. Run this python file in new terminal window\n\n e.g. (feel free to copy)\n python gradio_chatinterface.py \\\\\n --endpoint http://localhost:9997 \\\\\n --model_name vicuna-v1.3 \\\\\n --model_size_in_billions 7 \\\\\n --model_format ggmlv3 \\\\\n --quantization q2_K\n\n If you decide to change the port number in step 2,\n please also change the endpoint in the arguments\n \"\"\"\n ),\n )\n\n parser.add_argument(\n \"--endpoint\", type=str, required=True, help=\"Xinference endpoint, required\"\n )\n parser.add_argument(\n \"--model_name\", type=str, required=True, help=\"Name of the model, required\"\n )\n parser.add_argument(\n \"--model_size_in_billions\",\n type=int,\n required=False,\n help=\"Size of the model in billions\",\n )\n parser.add_argument(\n \"--model_format\",\n type=str,\n required=False,\n help=\"Format of the model\",\n )\n parser.add_argument(\n \"--quantization\", type=str, required=False, help=\"Quantization of the model\"\n )\n\n args = parser.parse_args()\n\n endpoint = args.endpoint\n model_name = args.model_name\n model_size_in_billions = args.model_size_in_billions\n model_format = args.model_format\n quantization = args.quantization\n\n print(f\"Xinference endpoint: {endpoint}\")\n print(f\"Model Name: {model_name}\")\n print(f\"Model Size (in billions): {model_size_in_billions}\")\n print(f\"Model Format: {model_format}\")\n print(f\"Quantization: {quantization}\")\n\n client = Client(endpoint)\n model_uid = client.launch_model(\n model_name,\n model_size_in_billions=model_size_in_billions,\n model_format=model_format,\n quantization=quantization,\n n_ctx=2048,\n )\n model = client.get_model(model_uid)\n\n def flatten(matrix: List[List[str]]) -> List[str]:\n flat_list = []\n for row in matrix:\n flat_list += row\n return flat_list\n\n def to_chat(lst: List[str]) -> List[Dict[str, str]]:\n res = []\n for i in range(len(lst)):\n role = \"assistant\" if i % 2 == 1 else \"user\"\n res.append(\n {\n \"role\": role,\n \"content\": lst[i],\n }\n )\n return res\n\n def generate_wrapper(message: str, history: List[List[str]]) -> str:\n output = model.chat(\n prompt=message,\n chat_history=to_chat(flatten(history)),\n generate_config={\"max_tokens\": 512, \"stream\": False},\n )\n return output[\"choices\"][0][\"message\"][\"content\"]\n\n demo = gr.ChatInterface(\n fn=generate_wrapper,\n analytics_enabled=False,\n examples=[\n \"Show me a two sentence horror story with a plot twist\",\n \"Generate a Haiku poem using trignometry as the central theme\",\n \"Write three sentences of scholarly description regarding a supernatural beast\",\n \"Prove there does not exist a largest integer\",\n ],\n title=\"Xinference Chat Bot\",\n )\n demo.launch()\n", "path": "examples/gradio_chatinterface.py"}]} | 1,351 | 103 |
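The one-line patch above relies on Gradio's `analytics_enabled` constructor flag. Below is a minimal hedged sketch of both opt-out routes; the `GRADIO_ANALYTICS_ENABLED` environment variable is an assumption about the installed Gradio version, and the echo handler is a placeholder.

```python
import os

# Assumed process-wide switch; set before gradio is imported.
os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"

import gradio as gr

def echo(message, history):
    # Placeholder handler so the sketch is runnable without a model server.
    return message

demo = gr.ChatInterface(
    fn=echo,
    analytics_enabled=False,  # per-app opt-out, exactly as in the golden diff
)

if __name__ == "__main__":
    demo.launch()
```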
gh_patches_debug_16504 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1693 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: portenf_sa_gov_au reporting incorrect dates
### I Have A Problem With:
A specific source
### What's Your Problem
The portenf_sa_gov_au sensor has been reporting incorrectly since it updated itself on 24 December 2023 (I can see this from HA logs). It appears that when there is 1 week or less left in the month, "main-month" switches to the coming month and "other-month" becomes the current month.
Because of this, the integration reports the current collection as falling next month, and the next collections as dates in the past (which it then hides).
The fix in #1110 by @5ila5 partly addresses the problem but it was not foreseeable to him that EOM would be treated this way. @5ila5 also noted that this might be an issue in that closed issue.
### Source (if relevant)
portenf_sa_gov_au
### Logs
```Shell
Output of test_sources.py:
Testing source portenf_sa_gov_au ...
found 8 entries for Broadview, Regency Road, 565
2024-01-26 : general-waste bin [mdi:trash-can]
2024-01-26 : recycling bin [mdi:recycle]
2023-12-02 : general-waste bin [mdi:trash-can]
2023-12-02 : organics bin [mdi:leaf]
2023-12-09 : general-waste bin [mdi:trash-can]
2023-12-09 : recycling bin [mdi:recycle]
2023-12-16 : general-waste bin [mdi:trash-can]
2023-12-16 : organics bin [mdi:leaf]
found 8 entries for 48 Floriedale Rd
2024-01-26 : general-waste bin [mdi:trash-can]
2024-01-26 : recycling bin [mdi:recycle]
2023-12-02 : general-waste bin [mdi:trash-can]
2023-12-02 : organics bin [mdi:leaf]
2023-12-09 : general-waste bin [mdi:trash-can]
2023-12-09 : recycling bin [mdi:recycle]
2023-12-16 : general-waste bin [mdi:trash-can]
2023-12-16 : organics bin [mdi:leaf]
found 8 entries for 24 Margaret Terrace
2024-01-28 : general-waste bin [mdi:trash-can]
2024-01-28 : organics bin [mdi:leaf]
2023-12-04 : general-waste bin [mdi:trash-can]
2023-12-04 : recycling bin [mdi:recycle]
2023-12-11 : general-waste bin [mdi:trash-can]
2023-12-11 : organics bin [mdi:leaf]
2023-12-18 : general-waste bin [mdi:trash-can]
2023-12-18 : recycling bin [mdi:recycle]
found 8 entries for Addison Road 91 with unit
2024-01-28 : general-waste bin [mdi:trash-can]
2024-01-28 : organics bin [mdi:leaf]
2023-12-04 : general-waste bin [mdi:trash-can]
2023-12-04 : recycling bin [mdi:recycle]
2023-12-11 : general-waste bin [mdi:trash-can]
2023-12-11 : organics bin [mdi:leaf]
2023-12-18 : general-waste bin [mdi:trash-can]
2023-12-18 : recycling bin [mdi:recycle]
```
### Relevant Configuration
_No response_
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py`
Content:
```
1 import logging
2 import re
3 from datetime import datetime
4
5 import requests
6 import urllib3
7 from bs4 import BeautifulSoup
8 from waste_collection_schedule import Collection # type: ignore[attr-defined]
9
10 # With verify=True the POST fails due to a SSLCertVerificationError.
11 # Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:
12 # https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
13 # https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl
14 # These two lines areused to suppress the InsecureRequestWarning when using verify=False
15 urllib3.disable_warnings()
16
17 TITLE = "Port Adelaide Enfield, South Australia"
18 DESCRIPTION = "Source for City of Port Adelaide Enfield, South Australia."
19 URL = "https://ecouncil.portenf.sa.gov.au/"
20 TEST_CASES = {
21 "Broadview, Regency Road, 565 ": {
22 "suburb": "Broadview",
23 "street": "Regency Road",
24 "house_number": 565,
25 "unit_number": "",
26 },
27 "48 Floriedale Rd ": {
28 "suburb": "Greenacres",
29 "street": "Floriedale Rd",
30 "house_number": "48",
31 },
32 "24 Margaret Terrace": {
33 "suburb": "Rosewater",
34 "street": "Margaret Terrace",
35 "house_number": "24",
36 },
37 "Addison Road 91 with unit": {
38 "suburb": "Rosewater",
39 "street": "Addison Road",
40 "house_number": 91,
41 "unit_number": 2,
42 },
43 }
44
45 ICON_MAP = {
46 "general-waste bin": "mdi:trash-can",
47 "organics bin": "mdi:leaf",
48 "recycling bin": "mdi:recycle",
49 }
50
51 LOGGER = logging.getLogger(__name__)
52
53 API_URL = "https://ecouncil.portenf.sa.gov.au/public/propertywastedates/public.aspx"
54
55
56 class Source:
57 def __init__(
58 self,
59 suburb: str,
60 street: str,
61 house_number: str | int,
62 unit_number: str | int = "",
63 ):
64 self._suburb: str = suburb
65 self._street: str = street
66 self._house_number: str = str(house_number)
67 self._unit_number: str = str(unit_number)
68
69 def __set_args(
70 self, soup: BeautifulSoup, event_taget=None, additional: dict = {}
71 ) -> dict:
72 args = {
73 "ctl00$MainContent$txtSuburb": self._suburb,
74 "ctl00$MainContent$txtStreetName": self._street,
75 "ctl00$MainContent$txtHouseNumber": self._house_number,
76 "ctl00$MainContent$txtUnitNumber": self._unit_number,
77 }
78 if event_taget is not None:
79 args["__EVENTTARGET"] = event_taget
80
81 for hidden_val in soup.find_all("input", {"type": "hidden"}):
82 args[hidden_val["name"]] = hidden_val["value"]
83
84 for key, value in additional.items():
85 args[key] = value
86 return args
87
88 def fetch(self):
89 session = requests.Session()
90
91 # get First page
92 r = session.get(API_URL, verify=False)
93 r.raise_for_status()
94
95 # extractt arguments
96 args = self.__set_args(
97 BeautifulSoup(r.text, "html.parser"),
98 event_taget="ctl00$MainContent$btnSearch",
99 )
100
101 r = session.post(API_URL, data=args)
102 r.raise_for_status()
103
104 # get page to select an address
105 soup = BeautifulSoup(r.text, "html.parser")
106
107 selectable = soup.find_all("a", {"class": "anchor-button small"}, text="Select")
108
109 if len(selectable) == 0:
110 raise ValueError("No address found")
111 selected = selectable[0]
112
113 # If multiple addresses are found, try to find the one that matches the input and warn if there are multiple or none matches
114 if len(selectable) > 1:
115 found = [
116 " ".join(
117 [y.text for y in x.parent.parent.find_all("td")[1].find_all("span")]
118 )
119 for x in selectable
120 ]
121 using_index = 0
122
123 match = False
124
125 for index, entry in enumerate(found):
126 entry = entry.lower().strip().replace(" ", "")
127 if (
128 self._house_number.lower().strip().replace(" ", "") in entry
129 and self._street.lower().strip().replace(" ", "") in entry
130 and self._suburb.lower().strip().replace(" ", "") in entry
131 and self._unit_number.lower().strip().replace(" ", "") in entry
132 ):
133 if match:
134 LOGGER.warning(
135 f"Multiple addresses found, using first one \nfound:{', '.join(found[:10])}{'...' if len(found) >= 10 else ''} \nusing:{found[using_index]}"
136 )
137 break
138 using_index = index
139 match = True
140 if not match:
141 LOGGER.warning(
142 f"no perfect address match found, using:{found[using_index]}"
143 )
144
145 # request first address
146 args = self.__set_args(
147 soup,
148 event_taget="ctl00$MainContent$gvPropertyResults$ctl02$btnSelect",
149 additional={selected["href"].split("'")[1]: ""},
150 )
151 r = session.post(API_URL, data=args)
152 r.raise_for_status()
153
154 soup = BeautifulSoup(r.text, "html.parser")
155 cal_header = soup.find("th", {"class": "header-month"}).find("span").text
156
157 from_month = cal_header.split("-")[0].strip()
158 to_month = cal_header.split("-")[1].strip().split(" ")[0]
159 to_year = from_year = cal_header.split("-")[1].strip().split(" ")[1]
160 # if main month contains a year, set it (maybe happens in december???)
161 if len(from_month.split(" ")) > 1:
162 from_year = from_month.split(" ")[1]
163 from_month = from_month.split(" ")[0]
164
165 today_div = soup.find("table", id="cal").find("td", class_="today")
166 print(today_div)
167
168 # if other-month is to_month
169 if (
170 "other-month" in today_div.attrs
171 and datetime.now().strftime("%B") != to_month
172 ):
173 main_month, other_month = from_month, to_month
174 main_year, other_year = from_year, to_year
175 else: # if other-month is from_month
176 main_month, other_month = to_month, from_month
177 main_year, other_year = to_year, from_year
178
179 entries = []
180
181 calendar = soup.find("table", {"class": "collection-day-calendar"})
182 # Iterate over all days with pickups
183 for pickup in calendar.find_all(
184 "div", {"class": re.compile(r"pickup|next-pickup")}
185 ):
186 parent_td = pickup.parent
187 month = (
188 main_month if "main-month" in parent_td.attrs["class"] else other_month
189 )
190 year = main_year if "main-month" in parent_td.attrs["class"] else other_year
191 day = parent_td.find("div", {"class": "daynumber"}).text
192
193 # Iterate over all pickup container types for this day
194 for container in pickup.find_all("div", {"class": "bin-container"}):
195 container_type = " ".join(container.find("div").attrs["class"])
196 container_icon = ICON_MAP.get(container_type)
197
198 date = datetime.strptime(f"{year}-{month}-{day}", "%Y-%B-%d").date()
199 entries.append(
200 Collection(date=date, t=container_type, icon=container_icon)
201 )
202
203 return entries
204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py
@@ -163,12 +163,14 @@
from_month = from_month.split(" ")[0]
today_div = soup.find("table", id="cal").find("td", class_="today")
- print(today_div)
# if other-month is to_month
if (
- "other-month" in today_div.attrs
- and datetime.now().strftime("%B") != to_month
+ "other-month" in today_div.attrs["class"]
+ and datetime.now().strftime("%B") == to_month
+ ) or (
+ "main-month" in today_div.attrs["class"]
+ and datetime.now().strftime("%B") == from_month
):
main_month, other_month = from_month, to_month
main_year, other_year = from_year, to_year
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py\n@@ -163,12 +163,14 @@\n from_month = from_month.split(\" \")[0]\n \n today_div = soup.find(\"table\", id=\"cal\").find(\"td\", class_=\"today\")\n- print(today_div)\n \n # if other-month is to_month\n if (\n- \"other-month\" in today_div.attrs\n- and datetime.now().strftime(\"%B\") != to_month\n+ \"other-month\" in today_div.attrs[\"class\"]\n+ and datetime.now().strftime(\"%B\") == to_month\n+ ) or (\n+ \"main-month\" in today_div.attrs[\"class\"]\n+ and datetime.now().strftime(\"%B\") == from_month\n ):\n main_month, other_month = from_month, to_month\n main_year, other_year = from_year, to_year\n", "issue": "[Bug]: portenf_sa_gov_au reporting incorrect dates\n### I Have A Problem With:\r\n\r\nA specific source\r\n\r\n### What's Your Problem\r\n\r\nThe portenf_sa_gov_au sensor has been reporting incorrectly since it updated itself on 24 December 2023 (I can see this from HA logs). It appears that when there is 1 week or less left in the month \"main-month\" switches to the coming month and \"other-month\" becomes the current month.\r\n\r\nBecause of this, the integration reports the current collection next month and the next collections as in the past (and hides them).\r\n\r\nThe fix in #1110 by @5ila5 partly addresses the problem but it was not foreseeable to him that EOM would be treated this way. @5ila5 also noted that this might be an issue in that closed issue.\r\n\r\n### Source (if relevant)\r\n\r\nportenf_sa_gov_au\r\n\r\n### Logs\r\n\r\n```Shell\r\nOutput of test_sources.py:\r\n\r\nTesting source portenf_sa_gov_au ...\r\n found 8 entries for Broadview, Regency Road, 565\r\n 2024-01-26 : general-waste bin [mdi:trash-can]\r\n 2024-01-26 : recycling bin [mdi:recycle]\r\n 2023-12-02 : general-waste bin [mdi:trash-can]\r\n 2023-12-02 : organics bin [mdi:leaf]\r\n 2023-12-09 : general-waste bin [mdi:trash-can]\r\n 2023-12-09 : recycling bin [mdi:recycle]\r\n 2023-12-16 : general-waste bin [mdi:trash-can]\r\n 2023-12-16 : organics bin [mdi:leaf]\r\n found 8 entries for 48 Floriedale Rd\r\n 2024-01-26 : general-waste bin [mdi:trash-can]\r\n 2024-01-26 : recycling bin [mdi:recycle]\r\n 2023-12-02 : general-waste bin [mdi:trash-can]\r\n 2023-12-02 : organics bin [mdi:leaf]\r\n 2023-12-09 : general-waste bin [mdi:trash-can]\r\n 2023-12-09 : recycling bin [mdi:recycle]\r\n 2023-12-16 : general-waste bin [mdi:trash-can]\r\n 2023-12-16 : organics bin [mdi:leaf]\r\n found 8 entries for 24 Margaret Terrace\r\n 2024-01-28 : general-waste bin [mdi:trash-can]\r\n 2024-01-28 : organics bin [mdi:leaf]\r\n 2023-12-04 : general-waste bin [mdi:trash-can]\r\n 2023-12-04 : recycling bin [mdi:recycle]\r\n 2023-12-11 : general-waste bin [mdi:trash-can]\r\n 2023-12-11 : organics bin [mdi:leaf]\r\n 2023-12-18 : general-waste bin [mdi:trash-can]\r\n 2023-12-18 : recycling bin [mdi:recycle]\r\n found 8 entries for Addison Road 91 with unit\r\n 2024-01-28 : general-waste bin [mdi:trash-can]\r\n 2024-01-28 : organics bin [mdi:leaf]\r\n 2023-12-04 : general-waste bin [mdi:trash-can]\r\n 2023-12-04 : recycling bin [mdi:recycle]\r\n 2023-12-11 : general-waste bin [mdi:trash-can]\r\n 
2023-12-11 : organics bin [mdi:leaf]\r\n 2023-12-18 : general-waste bin [mdi:trash-can]\r\n 2023-12-18 : recycling bin [mdi:recycle]\r\n```\r\n\r\n\r\n### Relevant Configuration\r\n\r\n_No response_\r\n\r\n### Checklist Source Error\r\n\r\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\r\n- [X] Checked that the website of your service provider is still working\r\n- [X] Tested my attributes on the service provider website (if possible)\r\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\r\n\r\n### Checklist Sensor Error\r\n\r\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\r\n\r\n### Required\r\n\r\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\r\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "import logging\nimport re\nfrom datetime import datetime\n\nimport requests\nimport urllib3\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\n# With verify=True the POST fails due to a SSLCertVerificationError.\n# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:\n# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings\n# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl\n# These two lines areused to suppress the InsecureRequestWarning when using verify=False\nurllib3.disable_warnings()\n\nTITLE = \"Port Adelaide Enfield, South Australia\"\nDESCRIPTION = \"Source for City of Port Adelaide Enfield, South Australia.\"\nURL = \"https://ecouncil.portenf.sa.gov.au/\"\nTEST_CASES = {\n \"Broadview, Regency Road, 565 \": {\n \"suburb\": \"Broadview\",\n \"street\": \"Regency Road\",\n \"house_number\": 565,\n \"unit_number\": \"\",\n },\n \"48 Floriedale Rd \": {\n \"suburb\": \"Greenacres\",\n \"street\": \"Floriedale Rd\",\n \"house_number\": \"48\",\n },\n \"24 Margaret Terrace\": {\n \"suburb\": \"Rosewater\",\n \"street\": \"Margaret Terrace\",\n \"house_number\": \"24\",\n },\n \"Addison Road 91 with unit\": {\n \"suburb\": \"Rosewater\",\n \"street\": \"Addison Road\",\n \"house_number\": 91,\n \"unit_number\": 2,\n },\n}\n\nICON_MAP = {\n \"general-waste bin\": \"mdi:trash-can\",\n \"organics bin\": \"mdi:leaf\",\n \"recycling bin\": \"mdi:recycle\",\n}\n\nLOGGER = logging.getLogger(__name__)\n\nAPI_URL = \"https://ecouncil.portenf.sa.gov.au/public/propertywastedates/public.aspx\"\n\n\nclass Source:\n def __init__(\n self,\n suburb: str,\n street: str,\n house_number: str | int,\n unit_number: str | int = \"\",\n ):\n self._suburb: str = suburb\n self._street: str = street\n self._house_number: str = str(house_number)\n self._unit_number: str = str(unit_number)\n\n def __set_args(\n self, soup: BeautifulSoup, event_taget=None, additional: dict = {}\n ) -> dict:\n args = {\n \"ctl00$MainContent$txtSuburb\": self._suburb,\n \"ctl00$MainContent$txtStreetName\": self._street,\n \"ctl00$MainContent$txtHouseNumber\": self._house_number,\n \"ctl00$MainContent$txtUnitNumber\": self._unit_number,\n }\n if event_taget is not None:\n 
args[\"__EVENTTARGET\"] = event_taget\n\n for hidden_val in soup.find_all(\"input\", {\"type\": \"hidden\"}):\n args[hidden_val[\"name\"]] = hidden_val[\"value\"]\n\n for key, value in additional.items():\n args[key] = value\n return args\n\n def fetch(self):\n session = requests.Session()\n\n # get First page\n r = session.get(API_URL, verify=False)\n r.raise_for_status()\n\n # extractt arguments\n args = self.__set_args(\n BeautifulSoup(r.text, \"html.parser\"),\n event_taget=\"ctl00$MainContent$btnSearch\",\n )\n\n r = session.post(API_URL, data=args)\n r.raise_for_status()\n\n # get page to select an address\n soup = BeautifulSoup(r.text, \"html.parser\")\n\n selectable = soup.find_all(\"a\", {\"class\": \"anchor-button small\"}, text=\"Select\")\n\n if len(selectable) == 0:\n raise ValueError(\"No address found\")\n selected = selectable[0]\n\n # If multiple addresses are found, try to find the one that matches the input and warn if there are multiple or none matches\n if len(selectable) > 1:\n found = [\n \" \".join(\n [y.text for y in x.parent.parent.find_all(\"td\")[1].find_all(\"span\")]\n )\n for x in selectable\n ]\n using_index = 0\n\n match = False\n\n for index, entry in enumerate(found):\n entry = entry.lower().strip().replace(\" \", \"\")\n if (\n self._house_number.lower().strip().replace(\" \", \"\") in entry\n and self._street.lower().strip().replace(\" \", \"\") in entry\n and self._suburb.lower().strip().replace(\" \", \"\") in entry\n and self._unit_number.lower().strip().replace(\" \", \"\") in entry\n ):\n if match:\n LOGGER.warning(\n f\"Multiple addresses found, using first one \\nfound:{', '.join(found[:10])}{'...' if len(found) >= 10 else ''} \\nusing:{found[using_index]}\"\n )\n break\n using_index = index\n match = True\n if not match:\n LOGGER.warning(\n f\"no perfect address match found, using:{found[using_index]}\"\n )\n\n # request first address\n args = self.__set_args(\n soup,\n event_taget=\"ctl00$MainContent$gvPropertyResults$ctl02$btnSelect\",\n additional={selected[\"href\"].split(\"'\")[1]: \"\"},\n )\n r = session.post(API_URL, data=args)\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n cal_header = soup.find(\"th\", {\"class\": \"header-month\"}).find(\"span\").text\n\n from_month = cal_header.split(\"-\")[0].strip()\n to_month = cal_header.split(\"-\")[1].strip().split(\" \")[0]\n to_year = from_year = cal_header.split(\"-\")[1].strip().split(\" \")[1]\n # if main month contains a year, set it (maybe happens in december???)\n if len(from_month.split(\" \")) > 1:\n from_year = from_month.split(\" \")[1]\n from_month = from_month.split(\" \")[0]\n\n today_div = soup.find(\"table\", id=\"cal\").find(\"td\", class_=\"today\")\n print(today_div)\n\n # if other-month is to_month\n if (\n \"other-month\" in today_div.attrs\n and datetime.now().strftime(\"%B\") != to_month\n ):\n main_month, other_month = from_month, to_month\n main_year, other_year = from_year, to_year\n else: # if other-month is from_month\n main_month, other_month = to_month, from_month\n main_year, other_year = to_year, from_year\n\n entries = []\n\n calendar = soup.find(\"table\", {\"class\": \"collection-day-calendar\"})\n # Iterate over all days with pickups\n for pickup in calendar.find_all(\n \"div\", {\"class\": re.compile(r\"pickup|next-pickup\")}\n ):\n parent_td = pickup.parent\n month = (\n main_month if \"main-month\" in parent_td.attrs[\"class\"] else other_month\n )\n year = main_year if \"main-month\" in parent_td.attrs[\"class\"] else 
other_year\n day = parent_td.find(\"div\", {\"class\": \"daynumber\"}).text\n\n # Iterate over all pickup container types for this day\n for container in pickup.find_all(\"div\", {\"class\": \"bin-container\"}):\n container_type = \" \".join(container.find(\"div\").attrs[\"class\"])\n container_icon = ICON_MAP.get(container_type)\n\n date = datetime.strptime(f\"{year}-{month}-{day}\", \"%Y-%B-%d\").date()\n entries.append(\n Collection(date=date, t=container_type, icon=container_icon)\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py"}], "after_files": [{"content": "import logging\nimport re\nfrom datetime import datetime\n\nimport requests\nimport urllib3\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\n# With verify=True the POST fails due to a SSLCertVerificationError.\n# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:\n# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings\n# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl\n# These two lines areused to suppress the InsecureRequestWarning when using verify=False\nurllib3.disable_warnings()\n\nTITLE = \"Port Adelaide Enfield, South Australia\"\nDESCRIPTION = \"Source for City of Port Adelaide Enfield, South Australia.\"\nURL = \"https://ecouncil.portenf.sa.gov.au/\"\nTEST_CASES = {\n \"Broadview, Regency Road, 565 \": {\n \"suburb\": \"Broadview\",\n \"street\": \"Regency Road\",\n \"house_number\": 565,\n \"unit_number\": \"\",\n },\n \"48 Floriedale Rd \": {\n \"suburb\": \"Greenacres\",\n \"street\": \"Floriedale Rd\",\n \"house_number\": \"48\",\n },\n \"24 Margaret Terrace\": {\n \"suburb\": \"Rosewater\",\n \"street\": \"Margaret Terrace\",\n \"house_number\": \"24\",\n },\n \"Addison Road 91 with unit\": {\n \"suburb\": \"Rosewater\",\n \"street\": \"Addison Road\",\n \"house_number\": 91,\n \"unit_number\": 2,\n },\n}\n\nICON_MAP = {\n \"general-waste bin\": \"mdi:trash-can\",\n \"organics bin\": \"mdi:leaf\",\n \"recycling bin\": \"mdi:recycle\",\n}\n\nLOGGER = logging.getLogger(__name__)\n\nAPI_URL = \"https://ecouncil.portenf.sa.gov.au/public/propertywastedates/public.aspx\"\n\n\nclass Source:\n def __init__(\n self,\n suburb: str,\n street: str,\n house_number: str | int,\n unit_number: str | int = \"\",\n ):\n self._suburb: str = suburb\n self._street: str = street\n self._house_number: str = str(house_number)\n self._unit_number: str = str(unit_number)\n\n def __set_args(\n self, soup: BeautifulSoup, event_taget=None, additional: dict = {}\n ) -> dict:\n args = {\n \"ctl00$MainContent$txtSuburb\": self._suburb,\n \"ctl00$MainContent$txtStreetName\": self._street,\n \"ctl00$MainContent$txtHouseNumber\": self._house_number,\n \"ctl00$MainContent$txtUnitNumber\": self._unit_number,\n }\n if event_taget is not None:\n args[\"__EVENTTARGET\"] = event_taget\n\n for hidden_val in soup.find_all(\"input\", {\"type\": \"hidden\"}):\n args[hidden_val[\"name\"]] = hidden_val[\"value\"]\n\n for key, value in additional.items():\n args[key] = value\n return args\n\n def fetch(self):\n session = requests.Session()\n\n # get First page\n r = session.get(API_URL, verify=False)\n r.raise_for_status()\n\n # extractt arguments\n args = self.__set_args(\n BeautifulSoup(r.text, \"html.parser\"),\n event_taget=\"ctl00$MainContent$btnSearch\",\n )\n\n r = session.post(API_URL, data=args)\n 
r.raise_for_status()\n\n # get page to select an address\n soup = BeautifulSoup(r.text, \"html.parser\")\n\n selectable = soup.find_all(\"a\", {\"class\": \"anchor-button small\"}, text=\"Select\")\n\n if len(selectable) == 0:\n raise ValueError(\"No address found\")\n selected = selectable[0]\n\n # If multiple addresses are found, try to find the one that matches the input and warn if there are multiple or none matches\n if len(selectable) > 1:\n found = [\n \" \".join(\n [y.text for y in x.parent.parent.find_all(\"td\")[1].find_all(\"span\")]\n )\n for x in selectable\n ]\n using_index = 0\n\n match = False\n\n for index, entry in enumerate(found):\n entry = entry.lower().strip().replace(\" \", \"\")\n if (\n self._house_number.lower().strip().replace(\" \", \"\") in entry\n and self._street.lower().strip().replace(\" \", \"\") in entry\n and self._suburb.lower().strip().replace(\" \", \"\") in entry\n and self._unit_number.lower().strip().replace(\" \", \"\") in entry\n ):\n if match:\n LOGGER.warning(\n f\"Multiple addresses found, using first one \\nfound:{', '.join(found[:10])}{'...' if len(found) >= 10 else ''} \\nusing:{found[using_index]}\"\n )\n break\n using_index = index\n match = True\n if not match:\n LOGGER.warning(\n f\"no perfect address match found, using:{found[using_index]}\"\n )\n\n # request first address\n args = self.__set_args(\n soup,\n event_taget=\"ctl00$MainContent$gvPropertyResults$ctl02$btnSelect\",\n additional={selected[\"href\"].split(\"'\")[1]: \"\"},\n )\n r = session.post(API_URL, data=args)\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n cal_header = soup.find(\"th\", {\"class\": \"header-month\"}).find(\"span\").text\n\n from_month = cal_header.split(\"-\")[0].strip()\n to_month = cal_header.split(\"-\")[1].strip().split(\" \")[0]\n to_year = from_year = cal_header.split(\"-\")[1].strip().split(\" \")[1]\n # if main month contains a year, set it (maybe happens in december???)\n if len(from_month.split(\" \")) > 1:\n from_year = from_month.split(\" \")[1]\n from_month = from_month.split(\" \")[0]\n\n today_div = soup.find(\"table\", id=\"cal\").find(\"td\", class_=\"today\")\n\n # if other-month is to_month\n if (\n \"other-month\" in today_div.attrs[\"class\"]\n and datetime.now().strftime(\"%B\") == to_month\n ) or (\n \"main-month\" in today_div.attrs[\"class\"]\n and datetime.now().strftime(\"%B\") == from_month\n ):\n main_month, other_month = from_month, to_month\n main_year, other_year = from_year, to_year\n else: # if other-month is from_month\n main_month, other_month = to_month, from_month\n main_year, other_year = to_year, from_year\n\n entries = []\n\n calendar = soup.find(\"table\", {\"class\": \"collection-day-calendar\"})\n # Iterate over all days with pickups\n for pickup in calendar.find_all(\n \"div\", {\"class\": re.compile(r\"pickup|next-pickup\")}\n ):\n parent_td = pickup.parent\n month = (\n main_month if \"main-month\" in parent_td.attrs[\"class\"] else other_month\n )\n year = main_year if \"main-month\" in parent_td.attrs[\"class\"] else other_year\n day = parent_td.find(\"div\", {\"class\": \"daynumber\"}).text\n\n # Iterate over all pickup container types for this day\n for container in pickup.find_all(\"div\", {\"class\": \"bin-container\"}):\n container_type = \" \".join(container.find(\"div\").attrs[\"class\"])\n container_icon = ICON_MAP.get(container_type)\n\n date = datetime.strptime(f\"{year}-{month}-{day}\", \"%Y-%B-%d\").date()\n entries.append(\n Collection(date=date, t=container_type, 
icon=container_icon)\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/portenf_sa_gov_au.py"}]} | 3,719 | 273 |
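The corrected branch above keys the main/other month decision off both the CSS class of the "today" cell and the current month name. Here is a small hedged distillation of that logic, with the current month passed in explicitly so it can be exercised deterministically (the real source reads it from `datetime.now()`).

```python
def resolve_months(today_classes, from_month, to_month, now):
    # Mirrors the patched condition: the calendar's left ("from") month is
    # the main month only when today's cell agrees with it.
    if ("other-month" in today_classes and now == to_month) or (
        "main-month" in today_classes and now == from_month
    ):
        return from_month, to_month  # (main, other)
    return to_month, from_month

# Late December with a "December - January" header: today's December cell is
# rendered as other-month, so January correctly becomes the main month.
assert resolve_months(["other-month"], "December", "January", "December") == (
    "January",
    "December",
)
```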
gh_patches_debug_4863 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-1210 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PDF Export URL pattern
### Describe the Bug
The web app calls `/REGION/LANG/wp-json/ig-mpdf/v1/pdf` to export a PDF which returns a 404. Our API currently uses `REGION/LANG/pdf`.
The normal mapping does not work, as we
### Steps to Reproduce
```shell
curl 'https://malte-test.tuerantuer.org/joerdenstorf/de/wp-json/ig-mpdf/v1/pdf'
```
### Expected Behavior
Map old URL pattern to new endpoint.
### Actual Behavior
404
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `integreat_cms/api/urls.py`
Content:
```
1 """
2 Expansion of API-Endpoints for the CMS
3 """
4 from django.urls import include, path, re_path
5
6 from .v3.events import events
7 from .v3.feedback import (
8 page_feedback,
9 search_result_feedback,
10 region_feedback,
11 offer_feedback,
12 offer_list_feedback,
13 event_list_feedback,
14 event_feedback,
15 poi_feedback,
16 map_feedback,
17 imprint_page_feedback,
18 legacy_feedback_endpoint,
19 )
20 from .v3.imprint import imprint
21 from .v3.languages import languages
22 from .v3.locations import locations
23 from .v3.pages import pages, children, parents, single_page
24 from .v3.pdf_export import pdf_export
25 from .v3.push_notifications import sent_push_notifications
26 from .v3.regions import regions, liveregions, hiddenregions
27 from .v3.offers import offers
28
29
30 #: The namespace for this URL config (see :attr:`django.urls.ResolverMatch.app_name`)
31 app_name = "api"
32
33 content_api_urlpatterns = [
34 path("pages/", pages, name="pages"),
35 path("locations/", locations, name="locations"),
36 path("events/", events, name="events"),
37 path("page/", single_page, name="single_page"),
38 path("post/", single_page, name="single_page"),
39 path("children/", children, name="children"),
40 path("parents/", parents, name="parents"),
41 path("pdf/", pdf_export, name="pdf_export"),
42 path(
43 "sent_push_notifications/",
44 sent_push_notifications,
45 name="sent_push_notifications",
46 ),
47 path("imprint/", imprint, name="imprint"),
48 path("disclaimer/", imprint, name="imprint"),
49 path("offers/", offers, name="offers"),
50 path("extras/", offers, name="offers"),
51 re_path(
52 r"^feedback/?$",
53 legacy_feedback_endpoint.legacy_feedback_endpoint,
54 name="legacy_feedback_endpoint",
55 ),
56 path(
57 "feedback/",
58 include(
59 [
60 re_path(
61 r"^categories/?$",
62 region_feedback.region_feedback,
63 name="region_feedback",
64 ),
65 re_path(r"^page/?$", page_feedback.page_feedback, name="page_feedback"),
66 re_path(r"^poi/?$", poi_feedback.poi_feedback, name="poi_feedback"),
67 re_path(
68 r"^event/?$", event_feedback.event_feedback, name="event_feedback"
69 ),
70 re_path(
71 r"^events/?$",
72 event_list_feedback.event_list_feedback,
73 name="event_list_feedback",
74 ),
75 re_path(
76 r"^imprint-page/?$",
77 imprint_page_feedback.imprint_page_feedback,
78 name="imprint_page_feedbacks",
79 ),
80 re_path(r"^map/?$", map_feedback.map_feedback, name="map_feedback"),
81 re_path(
82 r"^search/?$",
83 search_result_feedback.search_result_feedback,
84 name="search_result_feedback",
85 ),
86 re_path(
87 r"^offers/?$",
88 offer_list_feedback.offer_list_feedback,
89 name="offer_list_feedback",
90 ),
91 re_path(
92 r"^extras/?$",
93 offer_list_feedback.offer_list_feedback,
94 name="offer_list_feedback",
95 ),
96 re_path(
97 r"^offer/?$", offer_feedback.offer_feedback, name="offer_feedback"
98 ),
99 re_path(
100 r"^extra/?$", offer_feedback.offer_feedback, name="offer_feedback"
101 ),
102 ]
103 ),
104 ),
105 ]
106
107 region_api_urlpatterns = [
108 path("", regions, name="regions"),
109 path("live/", liveregions, name="regions_live"),
110 path("hidden/", hiddenregions, name="regions_hidden"),
111 ]
112
113 #: The url patterns of this module (see :doc:`topics/http/urls`)
114 urlpatterns = [
115 path("api/regions/", include(region_api_urlpatterns)),
116 path("wp-json/extensions/v3/sites/", include(region_api_urlpatterns)),
117 path(
118 "api/<slug:region_slug>/",
119 include(
120 [
121 path("languages/", languages, name="languages"),
122 path("offers/", offers, name="offers"),
123 path("extras/", offers, name="offers"),
124 path("<slug:language_slug>/", include(content_api_urlpatterns)),
125 ]
126 ),
127 ),
128 path(
129 "<slug:region_slug>/",
130 include(
131 [
132 path(
133 "de/wp-json/extensions/v3/languages/", languages, name="languages"
134 ),
135 path(
136 "<slug:language_slug>/wp-json/extensions/v3/",
137 include(content_api_urlpatterns),
138 ),
139 ]
140 ),
141 ),
142 ]
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/integreat_cms/api/urls.py b/integreat_cms/api/urls.py
--- a/integreat_cms/api/urls.py
+++ b/integreat_cms/api/urls.py
@@ -136,6 +136,11 @@
"<slug:language_slug>/wp-json/extensions/v3/",
include(content_api_urlpatterns),
),
+ path(
+ "<slug:language_slug>/wp-json/ig-mpdf/v1/pdf/",
+ pdf_export,
+ name="pdf_export",
+ ),
]
),
),
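For illustration, the effect of the added pattern can be checked with Django's URL resolver — a hedged sketch, assuming the API urlconf is mounted at the project root as the existing patterns suggest ("augsburg" and "de" are example slugs, not taken from the dataset):

```python
# Hypothetical check that the legacy WordPress-style PDF URL now resolves.
from django.urls import resolve

match = resolve("/augsburg/de/wp-json/ig-mpdf/v1/pdf/")
assert match.url_name == "pdf_export"  # name given in the patch above
```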
| {"golden_diff": "diff --git a/integreat_cms/api/urls.py b/integreat_cms/api/urls.py\n--- a/integreat_cms/api/urls.py\n+++ b/integreat_cms/api/urls.py\n@@ -136,6 +136,11 @@\n \"<slug:language_slug>/wp-json/extensions/v3/\",\n include(content_api_urlpatterns),\n ),\n+ path(\n+ \"<slug:language_slug>/wp-json/ig-mpdf/v1/pdf/\",\n+ pdf_export,\n+ name=\"pdf_export\",\n+ ),\n ]\n ),\n ),\n", "issue": "PDF Export URL pattern\n### Describe the Bug\r\nThe web app calls `/REGION/LANG/wp-json/ig-mpdf/v1/pdf` to export a PDF which returns a 404. Our API currently uses `REGION/LANG/pdf`.\r\n\r\nThe normal mapping does not work, as we\r\n\r\n### Steps to Reproduce\r\n\r\n```shell\r\ncurl 'https://malte-test.tuerantuer.org/joerdenstorf/de/wp-json/ig-mpdf/v1/pdf'\r\n```\r\n\r\n### Expected Behavior\r\nMap old URL pattern to new endpoint.\r\n\r\n\r\n### Actual Behavior\r\n404\n", "before_files": [{"content": "\"\"\"\nExpansion of API-Endpoints for the CMS\n\"\"\"\nfrom django.urls import include, path, re_path\n\nfrom .v3.events import events\nfrom .v3.feedback import (\n page_feedback,\n search_result_feedback,\n region_feedback,\n offer_feedback,\n offer_list_feedback,\n event_list_feedback,\n event_feedback,\n poi_feedback,\n map_feedback,\n imprint_page_feedback,\n legacy_feedback_endpoint,\n)\nfrom .v3.imprint import imprint\nfrom .v3.languages import languages\nfrom .v3.locations import locations\nfrom .v3.pages import pages, children, parents, single_page\nfrom .v3.pdf_export import pdf_export\nfrom .v3.push_notifications import sent_push_notifications\nfrom .v3.regions import regions, liveregions, hiddenregions\nfrom .v3.offers import offers\n\n\n#: The namespace for this URL config (see :attr:`django.urls.ResolverMatch.app_name`)\napp_name = \"api\"\n\ncontent_api_urlpatterns = [\n path(\"pages/\", pages, name=\"pages\"),\n path(\"locations/\", locations, name=\"locations\"),\n path(\"events/\", events, name=\"events\"),\n path(\"page/\", single_page, name=\"single_page\"),\n path(\"post/\", single_page, name=\"single_page\"),\n path(\"children/\", children, name=\"children\"),\n path(\"parents/\", parents, name=\"parents\"),\n path(\"pdf/\", pdf_export, name=\"pdf_export\"),\n path(\n \"sent_push_notifications/\",\n sent_push_notifications,\n name=\"sent_push_notifications\",\n ),\n path(\"imprint/\", imprint, name=\"imprint\"),\n path(\"disclaimer/\", imprint, name=\"imprint\"),\n path(\"offers/\", offers, name=\"offers\"),\n path(\"extras/\", offers, name=\"offers\"),\n re_path(\n r\"^feedback/?$\",\n legacy_feedback_endpoint.legacy_feedback_endpoint,\n name=\"legacy_feedback_endpoint\",\n ),\n path(\n \"feedback/\",\n include(\n [\n re_path(\n r\"^categories/?$\",\n region_feedback.region_feedback,\n name=\"region_feedback\",\n ),\n re_path(r\"^page/?$\", page_feedback.page_feedback, name=\"page_feedback\"),\n re_path(r\"^poi/?$\", poi_feedback.poi_feedback, name=\"poi_feedback\"),\n re_path(\n r\"^event/?$\", event_feedback.event_feedback, name=\"event_feedback\"\n ),\n re_path(\n r\"^events/?$\",\n event_list_feedback.event_list_feedback,\n name=\"event_list_feedback\",\n ),\n re_path(\n r\"^imprint-page/?$\",\n imprint_page_feedback.imprint_page_feedback,\n name=\"imprint_page_feedbacks\",\n ),\n re_path(r\"^map/?$\", map_feedback.map_feedback, name=\"map_feedback\"),\n re_path(\n r\"^search/?$\",\n search_result_feedback.search_result_feedback,\n name=\"search_result_feedback\",\n ),\n re_path(\n r\"^offers/?$\",\n offer_list_feedback.offer_list_feedback,\n 
name=\"offer_list_feedback\",\n ),\n re_path(\n r\"^extras/?$\",\n offer_list_feedback.offer_list_feedback,\n name=\"offer_list_feedback\",\n ),\n re_path(\n r\"^offer/?$\", offer_feedback.offer_feedback, name=\"offer_feedback\"\n ),\n re_path(\n r\"^extra/?$\", offer_feedback.offer_feedback, name=\"offer_feedback\"\n ),\n ]\n ),\n ),\n]\n\nregion_api_urlpatterns = [\n path(\"\", regions, name=\"regions\"),\n path(\"live/\", liveregions, name=\"regions_live\"),\n path(\"hidden/\", hiddenregions, name=\"regions_hidden\"),\n]\n\n#: The url patterns of this module (see :doc:`topics/http/urls`)\nurlpatterns = [\n path(\"api/regions/\", include(region_api_urlpatterns)),\n path(\"wp-json/extensions/v3/sites/\", include(region_api_urlpatterns)),\n path(\n \"api/<slug:region_slug>/\",\n include(\n [\n path(\"languages/\", languages, name=\"languages\"),\n path(\"offers/\", offers, name=\"offers\"),\n path(\"extras/\", offers, name=\"offers\"),\n path(\"<slug:language_slug>/\", include(content_api_urlpatterns)),\n ]\n ),\n ),\n path(\n \"<slug:region_slug>/\",\n include(\n [\n path(\n \"de/wp-json/extensions/v3/languages/\", languages, name=\"languages\"\n ),\n path(\n \"<slug:language_slug>/wp-json/extensions/v3/\",\n include(content_api_urlpatterns),\n ),\n ]\n ),\n ),\n]\n", "path": "integreat_cms/api/urls.py"}], "after_files": [{"content": "\"\"\"\nExpansion of API-Endpoints for the CMS\n\"\"\"\nfrom django.urls import include, path, re_path\n\nfrom .v3.events import events\nfrom .v3.feedback import (\n page_feedback,\n search_result_feedback,\n region_feedback,\n offer_feedback,\n offer_list_feedback,\n event_list_feedback,\n event_feedback,\n poi_feedback,\n map_feedback,\n imprint_page_feedback,\n legacy_feedback_endpoint,\n)\nfrom .v3.imprint import imprint\nfrom .v3.languages import languages\nfrom .v3.locations import locations\nfrom .v3.pages import pages, children, parents, single_page\nfrom .v3.pdf_export import pdf_export\nfrom .v3.push_notifications import sent_push_notifications\nfrom .v3.regions import regions, liveregions, hiddenregions\nfrom .v3.offers import offers\n\n\n#: The namespace for this URL config (see :attr:`django.urls.ResolverMatch.app_name`)\napp_name = \"api\"\n\ncontent_api_urlpatterns = [\n path(\"pages/\", pages, name=\"pages\"),\n path(\"locations/\", locations, name=\"locations\"),\n path(\"events/\", events, name=\"events\"),\n path(\"page/\", single_page, name=\"single_page\"),\n path(\"post/\", single_page, name=\"single_page\"),\n path(\"children/\", children, name=\"children\"),\n path(\"parents/\", parents, name=\"parents\"),\n path(\"pdf/\", pdf_export, name=\"pdf_export\"),\n path(\n \"sent_push_notifications/\",\n sent_push_notifications,\n name=\"sent_push_notifications\",\n ),\n path(\"imprint/\", imprint, name=\"imprint\"),\n path(\"disclaimer/\", imprint, name=\"imprint\"),\n path(\"offers/\", offers, name=\"offers\"),\n path(\"extras/\", offers, name=\"offers\"),\n re_path(\n r\"^feedback/?$\",\n legacy_feedback_endpoint.legacy_feedback_endpoint,\n name=\"legacy_feedback_endpoint\",\n ),\n path(\n \"feedback/\",\n include(\n [\n re_path(\n r\"^categories/?$\",\n region_feedback.region_feedback,\n name=\"region_feedback\",\n ),\n re_path(r\"^page/?$\", page_feedback.page_feedback, name=\"page_feedback\"),\n re_path(r\"^poi/?$\", poi_feedback.poi_feedback, name=\"poi_feedback\"),\n re_path(\n r\"^event/?$\", event_feedback.event_feedback, name=\"event_feedback\"\n ),\n re_path(\n r\"^events/?$\",\n event_list_feedback.event_list_feedback,\n 
name=\"event_list_feedback\",\n ),\n re_path(\n r\"^imprint-page/?$\",\n imprint_page_feedback.imprint_page_feedback,\n name=\"imprint_page_feedbacks\",\n ),\n re_path(r\"^map/?$\", map_feedback.map_feedback, name=\"map_feedback\"),\n re_path(\n r\"^search/?$\",\n search_result_feedback.search_result_feedback,\n name=\"search_result_feedback\",\n ),\n re_path(\n r\"^offers/?$\",\n offer_list_feedback.offer_list_feedback,\n name=\"offer_list_feedback\",\n ),\n re_path(\n r\"^extras/?$\",\n offer_list_feedback.offer_list_feedback,\n name=\"offer_list_feedback\",\n ),\n re_path(\n r\"^offer/?$\", offer_feedback.offer_feedback, name=\"offer_feedback\"\n ),\n re_path(\n r\"^extra/?$\", offer_feedback.offer_feedback, name=\"offer_feedback\"\n ),\n ]\n ),\n ),\n]\n\nregion_api_urlpatterns = [\n path(\"\", regions, name=\"regions\"),\n path(\"live/\", liveregions, name=\"regions_live\"),\n path(\"hidden/\", hiddenregions, name=\"regions_hidden\"),\n]\n\n#: The url patterns of this module (see :doc:`topics/http/urls`)\nurlpatterns = [\n path(\"api/regions/\", include(region_api_urlpatterns)),\n path(\"wp-json/extensions/v3/sites/\", include(region_api_urlpatterns)),\n path(\n \"api/<slug:region_slug>/\",\n include(\n [\n path(\"languages/\", languages, name=\"languages\"),\n path(\"offers/\", offers, name=\"offers\"),\n path(\"extras/\", offers, name=\"offers\"),\n path(\"<slug:language_slug>/\", include(content_api_urlpatterns)),\n ]\n ),\n ),\n path(\n \"<slug:region_slug>/\",\n include(\n [\n path(\n \"de/wp-json/extensions/v3/languages/\", languages, name=\"languages\"\n ),\n path(\n \"<slug:language_slug>/wp-json/extensions/v3/\",\n include(content_api_urlpatterns),\n ),\n path(\n \"<slug:language_slug>/wp-json/ig-mpdf/v1/pdf/\",\n pdf_export,\n name=\"pdf_export\",\n ),\n ]\n ),\n ),\n]\n", "path": "integreat_cms/api/urls.py"}]} | 1,656 | 129 |
gh_patches_debug_29434 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-1515 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Resources from third party add-ons are not being included in compiled plone-legacy bundle
It seems JS resources registered in Plone 5 using the old approach (`jsregistry.xml`) are not included in the final compilation: I installed an add-on and, even though I can see the JS resources listed in `default.js`, the source code is not present.
If I enable development mode, then I can see the source code included in `plone-legacy-compiled.js` and it's executed normally.
--- END ISSUE ---
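For orientation before the files: the fix that eventually landed (see the golden diff further down) teaches `get_resource` to consult the persistent override directory for customized `++plone++` resources — a condensed sketch, with `_traverse_resource` as a hypothetical stand-in for the original lookup:

```python
# Condensed sketch of the eventual fix, not the shipped implementation.
def get_resource(context, path):
    if path.startswith('++plone++'):
        overrides = get_override_directory(context)  # helper introduced by the patch
        filepath = path[9:]  # strip the '++plone++' prefix
        if overrides.isFile(filepath):
            return overrides.readFile(filepath)
    return _traverse_resource(context, path)  # fall back to normal traversal
```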
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/resources/browser/combine.py`
Content:
```
1 from zExceptions import NotFound
2 from Acquisition import aq_base
3 from datetime import datetime
4 from plone.registry.interfaces import IRegistry
5 from plone.resource.file import FilesystemFile
6 from plone.resource.interfaces import IResourceDirectory
7 from Products.CMFPlone.interfaces import IBundleRegistry
8 from Products.CMFPlone.interfaces.resources import (
9 OVERRIDE_RESOURCE_DIRECTORY_NAME,
10 )
11 from StringIO import StringIO
12 from zope.component import getUtility
13 from zope.component import queryUtility
14
15 PRODUCTION_RESOURCE_DIRECTORY = "production"
16
17
18 def get_production_resource_directory():
19 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
20 if persistent_directory is None:
21 return ''
22 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
23 try:
24 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
25 except NotFound:
26 return "%s/++unique++1" % PRODUCTION_RESOURCE_DIRECTORY
27 timestamp = production_folder.readFile('timestamp.txt')
28 return "%s/++unique++%s" % (
29 PRODUCTION_RESOURCE_DIRECTORY, timestamp)
30
31
32 def get_resource(context, path):
33 resource = context.unrestrictedTraverse(path)
34 if isinstance(resource, FilesystemFile):
35 (directory, sep, filename) = path.rpartition('/')
36 return context.unrestrictedTraverse(directory).readFile(filename)
37 else:
38 if hasattr(aq_base(resource), 'GET'):
39 # for FileResource
40 return resource.GET()
41 else:
42 # any BrowserView
43 return resource()
44
45
46 def write_js(context, folder, meta_bundle):
47 registry = getUtility(IRegistry)
48 resources = []
49
50 # default resources
51 if meta_bundle == 'default' and registry.records.get(
52 'plone.resources/jquery.js'
53 ):
54 resources.append(get_resource(context,
55 registry.records['plone.resources/jquery.js'].value))
56 resources.append(get_resource(context,
57 registry.records['plone.resources.requirejs'].value))
58 resources.append(get_resource(context,
59 registry.records['plone.resources.configjs'].value))
60
61 # bundles
62 bundles = registry.collectionOfInterface(
63 IBundleRegistry, prefix="plone.bundles", check=False)
64 for bundle in bundles.values():
65 if bundle.merge_with == meta_bundle:
66 resources.append(get_resource(context, bundle.jscompilation))
67
68 fi = StringIO()
69 for script in resources:
70 fi.write(script + '\n')
71 folder.writeFile(meta_bundle + ".js", fi)
72
73
74 def write_css(context, folder, meta_bundle):
75 registry = getUtility(IRegistry)
76 resources = []
77
78 bundles = registry.collectionOfInterface(
79 IBundleRegistry, prefix="plone.bundles", check=False)
80 for bundle in bundles.values():
81 if bundle.merge_with == meta_bundle:
82 resources.append(get_resource(context, bundle.csscompilation))
83
84 fi = StringIO()
85 for script in resources:
86 fi.write(script + '\n')
87 folder.writeFile(meta_bundle + ".css", fi)
88
89
90 def combine_bundles(context):
91 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
92 if persistent_directory is None:
93 return
94 if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
95 persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
96 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
97 if PRODUCTION_RESOURCE_DIRECTORY not in container:
98 container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
99 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
100
101 # store timestamp
102 fi = StringIO()
103 fi.write(datetime.now().isoformat())
104 production_folder.writeFile("timestamp.txt", fi)
105
106 # generate new combined bundles
107 write_js(context, production_folder, 'default')
108 write_js(context, production_folder, 'logged-in')
109 write_css(context, production_folder, 'default')
110 write_css(context, production_folder, 'logged-in')
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py
--- a/Products/CMFPlone/resources/browser/combine.py
+++ b/Products/CMFPlone/resources/browser/combine.py
@@ -30,6 +30,14 @@
def get_resource(context, path):
+ if path.startswith('++plone++'):
+ # ++plone++ resources can be customized, we return their override
+ # value if any
+ overrides = get_override_directory(context)
+ filepath = path[9:]
+ if overrides.isFile(filepath):
+ return overrides.readFile(filepath)
+
resource = context.unrestrictedTraverse(path)
if isinstance(resource, FilesystemFile):
(directory, sep, filename) = path.rpartition('/')
@@ -87,13 +95,17 @@
folder.writeFile(meta_bundle + ".css", fi)
-def combine_bundles(context):
+def get_override_directory(context):
persistent_directory = queryUtility(IResourceDirectory, name="persistent")
if persistent_directory is None:
return
if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
- container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
+ return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
+
+
+def combine_bundles(context):
+ container = get_override_directory(context)
if PRODUCTION_RESOURCE_DIRECTORY not in container:
container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
| {"golden_diff": "diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py\n--- a/Products/CMFPlone/resources/browser/combine.py\n+++ b/Products/CMFPlone/resources/browser/combine.py\n@@ -30,6 +30,14 @@\n \n \n def get_resource(context, path):\n+ if path.startswith('++plone++'):\n+ # ++plone++ resources can be customized, we return their override\n+ # value if any\n+ overrides = get_override_directory(context)\n+ filepath = path[9:]\n+ if overrides.isFile(filepath):\n+ return overrides.readFile(filepath)\n+\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n@@ -87,13 +95,17 @@\n folder.writeFile(meta_bundle + \".css\", fi)\n \n \n-def combine_bundles(context):\n+def get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n- container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n+ return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n+\n+\n+def combine_bundles(context):\n+ container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n", "issue": "Resources from third party add-ons are not being included in compiled plone-legacy bundle\nSeems JS resources registered in Plone 5 using old approach (`jsregistry.xml`) are not included in the final compilation: I installed an add-on and, even as I can see the JS resources listed in `default.js`, the source code is not present.\n\nIf I enable development mode, then I can see the source code included in `plone-legacy-compiled.js` and it's executed normally.\n\n", "before_files": [{"content": "from zExceptions import NotFound\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import (\n OVERRIDE_RESOURCE_DIRECTORY_NAME,\n)\nfrom StringIO import StringIO\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nPRODUCTION_RESOURCE_DIRECTORY = \"production\"\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return \"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n else:\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n return resource.GET()\n else:\n # any BrowserView\n return resource()\n\n\ndef write_js(context, folder, meta_bundle):\n 
registry = getUtility(IRegistry)\n resources = []\n\n # default resources\n if meta_bundle == 'default' and registry.records.get(\n 'plone.resources/jquery.js'\n ):\n resources.append(get_resource(context,\n registry.records['plone.resources/jquery.js'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.requirejs'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.configjs'].value))\n\n # bundles\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n resources.append(get_resource(context, bundle.jscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".js\", fi)\n\n\ndef write_css(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n resources.append(get_resource(context, bundle.csscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".css\", fi)\n\n\ndef combine_bundles(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile(\"timestamp.txt\", fi)\n\n # generate new combined bundles\n write_js(context, production_folder, 'default')\n write_js(context, production_folder, 'logged-in')\n write_css(context, production_folder, 'default')\n write_css(context, production_folder, 'logged-in')\n", "path": "Products/CMFPlone/resources/browser/combine.py"}], "after_files": [{"content": "from zExceptions import NotFound\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import (\n OVERRIDE_RESOURCE_DIRECTORY_NAME,\n)\nfrom StringIO import StringIO\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nPRODUCTION_RESOURCE_DIRECTORY = \"production\"\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return \"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n if path.startswith('++plone++'):\n # ++plone++ resources can be customized, we return their override\n # value if any\n 
overrides = get_override_directory(context)\n filepath = path[9:]\n if overrides.isFile(filepath):\n return overrides.readFile(filepath)\n\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n else:\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n return resource.GET()\n else:\n # any BrowserView\n return resource()\n\n\ndef write_js(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n # default resources\n if meta_bundle == 'default' and registry.records.get(\n 'plone.resources/jquery.js'\n ):\n resources.append(get_resource(context,\n registry.records['plone.resources/jquery.js'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.requirejs'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.configjs'].value))\n\n # bundles\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n resources.append(get_resource(context, bundle.jscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".js\", fi)\n\n\ndef write_css(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n resources.append(get_resource(context, bundle.csscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".css\", fi)\n\n\ndef get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n\n\ndef combine_bundles(context):\n container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile(\"timestamp.txt\", fi)\n\n # generate new combined bundles\n write_js(context, production_folder, 'default')\n write_js(context, production_folder, 'logged-in')\n write_css(context, production_folder, 'default')\n write_css(context, production_folder, 'logged-in')\n", "path": "Products/CMFPlone/resources/browser/combine.py"}]} | 1,383 | 338 |
gh_patches_debug_22011 | rasdani/github-patches | git_diff | docker__docker-py-1330 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add docker network IPAM options parameter
The IPAM driver configuration is missing support for options.
The Engine API supports an `options` field in the IPAM config.
It was introduced in API v1.22:
```
POST /networks/create Now supports an options field in the IPAM config that provides options for custom IPAM plugins.
```
--- END ISSUE ---
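For illustration, the requested parameter might be used as follows — a sketch of the desired API, where the `options` keyword is the proposed addition rather than an existing argument:

```python
# Hedged sketch mirroring the Engine's IPAM `Options` field.
import docker

ipam_pool = docker.types.IPAMPool(subnet="10.11.0.0/24")
ipam_config = docker.types.IPAMConfig(
    driver="default",
    pool_configs=[ipam_pool],
    options={"foo": "bar"},  # forwarded to the custom IPAM plugin
)
client = docker.from_env()
client.networks.create("network1", ipam=ipam_config)
```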
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/types/networks.py`
Content:
```
1 from .. import errors
2 from ..utils import normalize_links, version_lt
3
4
5 class EndpointConfig(dict):
6 def __init__(self, version, aliases=None, links=None, ipv4_address=None,
7 ipv6_address=None, link_local_ips=None):
8 if version_lt(version, '1.22'):
9 raise errors.InvalidVersion(
10 'Endpoint config is not supported for API version < 1.22'
11 )
12
13 if aliases:
14 self["Aliases"] = aliases
15
16 if links:
17 self["Links"] = normalize_links(links)
18
19 ipam_config = {}
20 if ipv4_address:
21 ipam_config['IPv4Address'] = ipv4_address
22
23 if ipv6_address:
24 ipam_config['IPv6Address'] = ipv6_address
25
26 if link_local_ips is not None:
27 if version_lt(version, '1.24'):
28 raise errors.InvalidVersion(
29 'link_local_ips is not supported for API version < 1.24'
30 )
31 ipam_config['LinkLocalIPs'] = link_local_ips
32
33 if ipam_config:
34 self['IPAMConfig'] = ipam_config
35
36
37 class NetworkingConfig(dict):
38 def __init__(self, endpoints_config=None):
39 if endpoints_config:
40 self["EndpointsConfig"] = endpoints_config
41
42
43 class IPAMConfig(dict):
44 """
45 Create an IPAM (IP Address Management) config dictionary to be used with
46 :py:meth:`~docker.api.network.NetworkApiMixin.create_network`.
47
48 Args:
49
50 driver (str): The IPAM driver to use. Defaults to ``default``.
51 pool_configs (list): A list of pool configurations
52 (:py:class:`~docker.types.IPAMPool`). Defaults to empty list.
53
54 Example:
55
56 >>> ipam_config = docker.types.IPAMConfig(driver='default')
57 >>> network = client.create_network('network1', ipam=ipam_config)
58
59 """
60 def __init__(self, driver='default', pool_configs=None):
61 self.update({
62 'Driver': driver,
63 'Config': pool_configs or []
64 })
65
66
67 class IPAMPool(dict):
68 """
69 Create an IPAM pool config dictionary to be added to the
70 ``pool_configs`` parameter of
71 :py:class:`~docker.types.IPAMConfig`.
72
73 Args:
74
75 subnet (str): Custom subnet for this IPAM pool using the CIDR
76 notation. Defaults to ``None``.
77 iprange (str): Custom IP range for endpoints in this IPAM pool using
78 the CIDR notation. Defaults to ``None``.
79 gateway (str): Custom IP address for the pool's gateway.
80 aux_addresses (dict): A dictionary of ``key -> ip_address``
81 relationships specifying auxiliary addresses that need to be
82 allocated by the IPAM driver.
83
84 Example:
85
86 >>> ipam_pool = docker.types.IPAMPool(
87 subnet='124.42.0.0/16',
88 iprange='124.42.0.0/24',
89 gateway='124.42.0.254',
90 aux_addresses={
91 'reserved1': '124.42.1.1'
92 }
93 )
94 >>> ipam_config = docker.types.IPAMConfig(
95 pool_configs=[ipam_pool])
96 """
97 def __init__(self, subnet=None, iprange=None, gateway=None,
98 aux_addresses=None):
99 self.update({
100 'Subnet': subnet,
101 'IPRange': iprange,
102 'Gateway': gateway,
103 'AuxiliaryAddresses': aux_addresses
104 })
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/types/networks.py b/docker/types/networks.py
--- a/docker/types/networks.py
+++ b/docker/types/networks.py
@@ -50,6 +50,8 @@
driver (str): The IPAM driver to use. Defaults to ``default``.
pool_configs (list): A list of pool configurations
(:py:class:`~docker.types.IPAMPool`). Defaults to empty list.
+ options (dict): Driver options as a key-value dictionary.
+ Defaults to `None`.
Example:
@@ -57,12 +59,17 @@
>>> network = client.create_network('network1', ipam=ipam_config)
"""
- def __init__(self, driver='default', pool_configs=None):
+ def __init__(self, driver='default', pool_configs=None, options=None):
self.update({
'Driver': driver,
'Config': pool_configs or []
})
+ if options:
+ if not isinstance(options, dict):
+ raise TypeError('IPAMConfig options must be a dictionary')
+ self['Options'] = options
+
class IPAMPool(dict):
"""
| {"golden_diff": "diff --git a/docker/types/networks.py b/docker/types/networks.py\n--- a/docker/types/networks.py\n+++ b/docker/types/networks.py\n@@ -50,6 +50,8 @@\n driver (str): The IPAM driver to use. Defaults to ``default``.\n pool_configs (list): A list of pool configurations\n (:py:class:`~docker.types.IPAMPool`). Defaults to empty list.\n+ options (dict): Driver options as a key-value dictionary.\n+ Defaults to `None`.\n \n Example:\n \n@@ -57,12 +59,17 @@\n >>> network = client.create_network('network1', ipam=ipam_config)\n \n \"\"\"\n- def __init__(self, driver='default', pool_configs=None):\n+ def __init__(self, driver='default', pool_configs=None, options=None):\n self.update({\n 'Driver': driver,\n 'Config': pool_configs or []\n })\n \n+ if options:\n+ if not isinstance(options, dict):\n+ raise TypeError('IPAMConfig options must be a dictionary')\n+ self['Options'] = options\n+\n \n class IPAMPool(dict):\n \"\"\"\n", "issue": "Add docker network IPAM options parameter\nIPAM driver missing options\n\nsupports an options field in the IPAM config \nIt introduced in API v1.22.\n\n```\nPOST /networks/create Now supports an options field in the IPAM config that provides options for custom IPAM plugins.\n```\n\n", "before_files": [{"content": "from .. import errors\nfrom ..utils import normalize_links, version_lt\n\n\nclass EndpointConfig(dict):\n def __init__(self, version, aliases=None, links=None, ipv4_address=None,\n ipv6_address=None, link_local_ips=None):\n if version_lt(version, '1.22'):\n raise errors.InvalidVersion(\n 'Endpoint config is not supported for API version < 1.22'\n )\n\n if aliases:\n self[\"Aliases\"] = aliases\n\n if links:\n self[\"Links\"] = normalize_links(links)\n\n ipam_config = {}\n if ipv4_address:\n ipam_config['IPv4Address'] = ipv4_address\n\n if ipv6_address:\n ipam_config['IPv6Address'] = ipv6_address\n\n if link_local_ips is not None:\n if version_lt(version, '1.24'):\n raise errors.InvalidVersion(\n 'link_local_ips is not supported for API version < 1.24'\n )\n ipam_config['LinkLocalIPs'] = link_local_ips\n\n if ipam_config:\n self['IPAMConfig'] = ipam_config\n\n\nclass NetworkingConfig(dict):\n def __init__(self, endpoints_config=None):\n if endpoints_config:\n self[\"EndpointsConfig\"] = endpoints_config\n\n\nclass IPAMConfig(dict):\n \"\"\"\n Create an IPAM (IP Address Management) config dictionary to be used with\n :py:meth:`~docker.api.network.NetworkApiMixin.create_network`.\n\n Args:\n\n driver (str): The IPAM driver to use. Defaults to ``default``.\n pool_configs (list): A list of pool configurations\n (:py:class:`~docker.types.IPAMPool`). Defaults to empty list.\n\n Example:\n\n >>> ipam_config = docker.types.IPAMConfig(driver='default')\n >>> network = client.create_network('network1', ipam=ipam_config)\n\n \"\"\"\n def __init__(self, driver='default', pool_configs=None):\n self.update({\n 'Driver': driver,\n 'Config': pool_configs or []\n })\n\n\nclass IPAMPool(dict):\n \"\"\"\n Create an IPAM pool config dictionary to be added to the\n ``pool_configs`` parameter of\n :py:class:`~docker.types.IPAMConfig`.\n\n Args:\n\n subnet (str): Custom subnet for this IPAM pool using the CIDR\n notation. Defaults to ``None``.\n iprange (str): Custom IP range for endpoints in this IPAM pool using\n the CIDR notation. 
Defaults to ``None``.\n gateway (str): Custom IP address for the pool's gateway.\n aux_addresses (dict): A dictionary of ``key -> ip_address``\n relationships specifying auxiliary addresses that need to be\n allocated by the IPAM driver.\n\n Example:\n\n >>> ipam_pool = docker.types.IPAMPool(\n subnet='124.42.0.0/16',\n iprange='124.42.0.0/24',\n gateway='124.42.0.254',\n aux_addresses={\n 'reserved1': '124.42.1.1'\n }\n )\n >>> ipam_config = docker.types.IPAMConfig(\n pool_configs=[ipam_pool])\n \"\"\"\n def __init__(self, subnet=None, iprange=None, gateway=None,\n aux_addresses=None):\n self.update({\n 'Subnet': subnet,\n 'IPRange': iprange,\n 'Gateway': gateway,\n 'AuxiliaryAddresses': aux_addresses\n })\n", "path": "docker/types/networks.py"}], "after_files": [{"content": "from .. import errors\nfrom ..utils import normalize_links, version_lt\n\n\nclass EndpointConfig(dict):\n def __init__(self, version, aliases=None, links=None, ipv4_address=None,\n ipv6_address=None, link_local_ips=None):\n if version_lt(version, '1.22'):\n raise errors.InvalidVersion(\n 'Endpoint config is not supported for API version < 1.22'\n )\n\n if aliases:\n self[\"Aliases\"] = aliases\n\n if links:\n self[\"Links\"] = normalize_links(links)\n\n ipam_config = {}\n if ipv4_address:\n ipam_config['IPv4Address'] = ipv4_address\n\n if ipv6_address:\n ipam_config['IPv6Address'] = ipv6_address\n\n if link_local_ips is not None:\n if version_lt(version, '1.24'):\n raise errors.InvalidVersion(\n 'link_local_ips is not supported for API version < 1.24'\n )\n ipam_config['LinkLocalIPs'] = link_local_ips\n\n if ipam_config:\n self['IPAMConfig'] = ipam_config\n\n\nclass NetworkingConfig(dict):\n def __init__(self, endpoints_config=None):\n if endpoints_config:\n self[\"EndpointsConfig\"] = endpoints_config\n\n\nclass IPAMConfig(dict):\n \"\"\"\n Create an IPAM (IP Address Management) config dictionary to be used with\n :py:meth:`~docker.api.network.NetworkApiMixin.create_network`.\n\n Args:\n\n driver (str): The IPAM driver to use. Defaults to ``default``.\n pool_configs (list): A list of pool configurations\n (:py:class:`~docker.types.IPAMPool`). Defaults to empty list.\n options (dict): Driver options as a key-value dictionary.\n Defaults to `None`.\n\n Example:\n\n >>> ipam_config = docker.types.IPAMConfig(driver='default')\n >>> network = client.create_network('network1', ipam=ipam_config)\n\n \"\"\"\n def __init__(self, driver='default', pool_configs=None, options=None):\n self.update({\n 'Driver': driver,\n 'Config': pool_configs or []\n })\n\n if options:\n if not isinstance(options, dict):\n raise TypeError('IPAMConfig options must be a dictionary')\n self['Options'] = options\n\n\nclass IPAMPool(dict):\n \"\"\"\n Create an IPAM pool config dictionary to be added to the\n ``pool_configs`` parameter of\n :py:class:`~docker.types.IPAMConfig`.\n\n Args:\n\n subnet (str): Custom subnet for this IPAM pool using the CIDR\n notation. Defaults to ``None``.\n iprange (str): Custom IP range for endpoints in this IPAM pool using\n the CIDR notation. 
Defaults to ``None``.\n gateway (str): Custom IP address for the pool's gateway.\n aux_addresses (dict): A dictionary of ``key -> ip_address``\n relationships specifying auxiliary addresses that need to be\n allocated by the IPAM driver.\n\n Example:\n\n >>> ipam_pool = docker.types.IPAMPool(\n subnet='124.42.0.0/16',\n iprange='124.42.0.0/24',\n gateway='124.42.0.254',\n aux_addresses={\n 'reserved1': '124.42.1.1'\n }\n )\n >>> ipam_config = docker.types.IPAMConfig(\n pool_configs=[ipam_pool])\n \"\"\"\n def __init__(self, subnet=None, iprange=None, gateway=None,\n aux_addresses=None):\n self.update({\n 'Subnet': subnet,\n 'IPRange': iprange,\n 'Gateway': gateway,\n 'AuxiliaryAddresses': aux_addresses\n })\n", "path": "docker/types/networks.py"}]} | 1,302 | 254 |
gh_patches_debug_2452 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-2225 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
missing hidden import for skimage
When packaging an application that imports skimage.feature (and nothing else), the app would not run due to an ImportError on the "transform" module. This can be fixed by adding one item to the `hiddenimports` list in the `hook-skimage.transform.py` file (bolded below):
> hiddenimports = ['skimage.draw.draw',
> 'skimage._shared.geometry',
> 'skimage.filters.rank.core_cy',
> **'skimage._shared.transform'**]
>
> datas = collect_data_files('skimage')
PyInstaller 3.2, Windows 7 64 bit, Python 2.7.12, Anaconda 4.1.1 distribution.
--- END ISSUE ---
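Before touching the hook, the dependency chain can be confirmed in a plain interpreter — a minimal sketch, assuming a scikit-image build where `skimage.feature` pulls in the shared transform helper:

```python
# If these imports succeed here but the frozen app raises ImportError, the
# hook is missing 'skimage._shared.transform' from its hiddenimports.
import skimage._shared.transform  # noqa: F401
import skimage.feature  # noqa: F401
```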
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-skimage.transform.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2014-2016, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License with exception
5 # for distributing bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #-----------------------------------------------------------------------------
9 from PyInstaller.utils.hooks import collect_data_files
10
11 # Hook tested with scikit-image (skimage) 0.9.3 on Mac OS 10.9 and Windows 7
12 # 64-bit
13 hiddenimports = ['skimage.draw.draw',
14 'skimage._shared.geometry',
15 'skimage.filters.rank.core_cy']
16
17 datas = collect_data_files('skimage')
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PyInstaller/hooks/hook-skimage.transform.py b/PyInstaller/hooks/hook-skimage.transform.py
--- a/PyInstaller/hooks/hook-skimage.transform.py
+++ b/PyInstaller/hooks/hook-skimage.transform.py
@@ -12,6 +12,7 @@
# 64-bit
hiddenimports = ['skimage.draw.draw',
'skimage._shared.geometry',
+ 'skimage._shared.transform',
'skimage.filters.rank.core_cy']
datas = collect_data_files('skimage')
| {"golden_diff": "diff --git a/PyInstaller/hooks/hook-skimage.transform.py b/PyInstaller/hooks/hook-skimage.transform.py\n--- a/PyInstaller/hooks/hook-skimage.transform.py\n+++ b/PyInstaller/hooks/hook-skimage.transform.py\n@@ -12,6 +12,7 @@\n # 64-bit\n hiddenimports = ['skimage.draw.draw',\n 'skimage._shared.geometry',\n+ 'skimage._shared.transform',\n 'skimage.filters.rank.core_cy']\n \n datas = collect_data_files('skimage')\n", "issue": "missing hidden import for skimage\nWhen packaging an application that imports skimage.feature (and nothing else), the app would not run due to an ImportError on the \"transform\" module. This can be fixed by adding one item to the hiddenimports in hook-skimage.transform.py file (bolded below):\n\n> hiddenimports = ['skimage.draw.draw',\n> 'skimage._shared.geometry',\n> 'skimage.filters.rank.core_cy',\n> **'skimage._shared.transform'**] \n> \n> datas = collect_data_files('skimage')\n\nPyInstaller 3.2, Windows 7 64 bit, Python 2.7.12, Anaconda 4.1.1 distribution.\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2014-2016, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\nfrom PyInstaller.utils.hooks import collect_data_files\n\n# Hook tested with scikit-image (skimage) 0.9.3 on Mac OS 10.9 and Windows 7\n# 64-bit\nhiddenimports = ['skimage.draw.draw',\n 'skimage._shared.geometry',\n 'skimage.filters.rank.core_cy']\n\ndatas = collect_data_files('skimage')\n", "path": "PyInstaller/hooks/hook-skimage.transform.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2014-2016, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\nfrom PyInstaller.utils.hooks import collect_data_files\n\n# Hook tested with scikit-image (skimage) 0.9.3 on Mac OS 10.9 and Windows 7\n# 64-bit\nhiddenimports = ['skimage.draw.draw',\n 'skimage._shared.geometry',\n 'skimage._shared.transform',\n 'skimage.filters.rank.core_cy']\n\ndatas = collect_data_files('skimage')\n", "path": "PyInstaller/hooks/hook-skimage.transform.py"}]} | 588 | 116 |
gh_patches_debug_35099 | rasdani/github-patches | git_diff | sanic-org__sanic-2774 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Headers from Exceptions
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
Headers set on Exception objects are not carried through by all renderers.
### Code snippet
```py
raise Unauthorized(
"Auth required.",
headers={"foo": "bar"},
)
```
### Expected Behavior
Response should have:
```
Foo: bar
```
### How do you run Sanic?
Sanic CLI
### Operating System
all
### Sanic Version
23.3
### Additional context
_No response_
--- END ISSUE ---
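A regression test one might add for this report — a sketch assuming Sanic's built-in `test_client`; it is not part of the original issue:

```python
# Hedged sketch: headers attached to a SanicException should reach the response.
from sanic import Sanic
from sanic.exceptions import Unauthorized

app = Sanic("HeaderRepro")

@app.get("/secret")
async def secret(request):
    raise Unauthorized("Auth required.", headers={"foo": "bar"})

_, response = app.test_client.get("/secret")
assert response.headers.get("foo") == "bar"  # fails on 23.3 for some renderers
```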
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sanic/errorpages.py`
Content:
```
1 """
2 Sanic `provides a pattern
3 <https://sanicframework.org/guide/best-practices/exceptions.html#using-sanic-exceptions>`_
4 for providing a response when an exception occurs. However, if you do no handle
5 an exception, it will provide a fallback. There are three fallback types:
6
7 - HTML - *default*
8 - Text
9 - JSON
10
11 Setting ``app.config.FALLBACK_ERROR_FORMAT = "auto"`` will enable a switch that
12 will attempt to provide an appropriate response format based upon the
13 request type.
14 """
15 from __future__ import annotations
16
17 import sys
18 import typing as t
19
20 from functools import partial
21 from traceback import extract_tb
22
23 from sanic.exceptions import BadRequest, SanicException
24 from sanic.helpers import STATUS_CODES
25 from sanic.log import deprecation, logger
26 from sanic.pages.error import ErrorPage
27 from sanic.response import html, json, text
28
29
30 dumps: t.Callable[..., str]
31 try:
32 from ujson import dumps
33
34 dumps = partial(dumps, escape_forward_slashes=False)
35 except ImportError: # noqa
36 from json import dumps
37
38 if t.TYPE_CHECKING:
39 from sanic import HTTPResponse, Request
40
41 DEFAULT_FORMAT = "auto"
42 FALLBACK_TEXT = """\
43 The application encountered an unexpected error and could not continue.\
44 """
45 FALLBACK_STATUS = 500
46 JSON = "application/json"
47
48
49 class BaseRenderer:
50 """
51 Base class that all renderers must inherit from.
52 """
53
54 dumps = staticmethod(dumps)
55
56 def __init__(self, request, exception, debug):
57 self.request = request
58 self.exception = exception
59 self.debug = debug
60
61 @property
62 def headers(self):
63 if isinstance(self.exception, SanicException):
64 return getattr(self.exception, "headers", {})
65 return {}
66
67 @property
68 def status(self):
69 if isinstance(self.exception, SanicException):
70 return getattr(self.exception, "status_code", FALLBACK_STATUS)
71 return FALLBACK_STATUS
72
73 @property
74 def text(self):
75 if self.debug or isinstance(self.exception, SanicException):
76 return str(self.exception)
77 return FALLBACK_TEXT
78
79 @property
80 def title(self):
81 status_text = STATUS_CODES.get(self.status, b"Error Occurred").decode()
82 return f"{self.status} — {status_text}"
83
84 def render(self) -> HTTPResponse:
85 """
86 Outputs the exception as a :class:`HTTPResponse`.
87
88 :return: The formatted exception
89 :rtype: str
90 """
91 output = (
92 self.full
93 if self.debug and not getattr(self.exception, "quiet", False)
94 else self.minimal
95 )
96 return output()
97
98 def minimal(self) -> HTTPResponse: # noqa
99 """
100 Provide a formatted message that is meant to not show any sensitive
101 data or details.
102 """
103 raise NotImplementedError
104
105 def full(self) -> HTTPResponse: # noqa
106 """
107 Provide a formatted message that has all details and is mean to be used
108 primarily for debugging and non-production environments.
109 """
110 raise NotImplementedError
111
112
113 class HTMLRenderer(BaseRenderer):
114 """
115 Render an exception as HTML.
116
117 The default fallback type.
118 """
119
120 def full(self) -> HTTPResponse:
121 page = ErrorPage(
122 debug=self.debug,
123 title=super().title,
124 text=super().text,
125 request=self.request,
126 exc=self.exception,
127 )
128 return html(page.render(), status=self.status, headers=self.headers)
129
130 def minimal(self) -> HTTPResponse:
131 return self.full()
132
133
134 class TextRenderer(BaseRenderer):
135 """
136 Render an exception as plain text.
137 """
138
139 OUTPUT_TEXT = "{title}\n{bar}\n{text}\n\n{body}"
140 SPACER = " "
141
142 def full(self) -> HTTPResponse:
143 return text(
144 self.OUTPUT_TEXT.format(
145 title=self.title,
146 text=self.text,
147 bar=("=" * len(self.title)),
148 body=self._generate_body(full=True),
149 ),
150 status=self.status,
151 )
152
153 def minimal(self) -> HTTPResponse:
154 return text(
155 self.OUTPUT_TEXT.format(
156 title=self.title,
157 text=self.text,
158 bar=("=" * len(self.title)),
159 body=self._generate_body(full=False),
160 ),
161 status=self.status,
162 headers=self.headers,
163 )
164
165 @property
166 def title(self):
167 return f"⚠️ {super().title}"
168
169 def _generate_body(self, *, full):
170 lines = []
171 if full:
172 _, exc_value, __ = sys.exc_info()
173 exceptions = []
174
175 lines += [
176 f"{self.exception.__class__.__name__}: {self.exception} while "
177 f"handling path {self.request.path}",
178 f"Traceback of {self.request.app.name} "
179 "(most recent call last):\n",
180 ]
181
182 while exc_value:
183 exceptions.append(self._format_exc(exc_value))
184 exc_value = exc_value.__cause__
185
186 lines += exceptions[::-1]
187
188 for attr, display in (("context", True), ("extra", bool(full))):
189 info = getattr(self.exception, attr, None)
190 if info and display:
191 lines += self._generate_object_display_list(info, attr)
192
193 return "\n".join(lines)
194
195 def _format_exc(self, exc):
196 frames = "\n\n".join(
197 [
198 f"{self.SPACER * 2}File {frame.filename}, "
199 f"line {frame.lineno}, in "
200 f"{frame.name}\n{self.SPACER * 2}{frame.line}"
201 for frame in extract_tb(exc.__traceback__)
202 ]
203 )
204 return f"{self.SPACER}{exc.__class__.__name__}: {exc}\n{frames}"
205
206 def _generate_object_display_list(self, obj, descriptor):
207 lines = [f"\n{descriptor.title()}"]
208 for key, value in obj.items():
209 display = self.dumps(value)
210 lines.append(f"{self.SPACER * 2}{key}: {display}")
211 return lines
212
213
214 class JSONRenderer(BaseRenderer):
215 """
216 Render an exception as JSON.
217 """
218
219 def full(self) -> HTTPResponse:
220 output = self._generate_output(full=True)
221 return json(output, status=self.status, dumps=self.dumps)
222
223 def minimal(self) -> HTTPResponse:
224 output = self._generate_output(full=False)
225 return json(output, status=self.status, dumps=self.dumps)
226
227 def _generate_output(self, *, full):
228 output = {
229 "description": self.title,
230 "status": self.status,
231 "message": self.text,
232 }
233
234 for attr, display in (("context", True), ("extra", bool(full))):
235 info = getattr(self.exception, attr, None)
236 if info and display:
237 output[attr] = info
238
239 if full:
240 _, exc_value, __ = sys.exc_info()
241 exceptions = []
242
243 while exc_value:
244 exceptions.append(
245 {
246 "type": exc_value.__class__.__name__,
247 "exception": str(exc_value),
248 "frames": [
249 {
250 "file": frame.filename,
251 "line": frame.lineno,
252 "name": frame.name,
253 "src": frame.line,
254 }
255 for frame in extract_tb(exc_value.__traceback__)
256 ],
257 }
258 )
259 exc_value = exc_value.__cause__
260
261 output["path"] = self.request.path
262 output["args"] = self.request.args
263 output["exceptions"] = exceptions[::-1]
264
265 return output
266
267 @property
268 def title(self):
269 return STATUS_CODES.get(self.status, b"Error Occurred").decode()
270
271
272 def escape(text):
273 """
274 Minimal HTML escaping, not for attribute values (unlike html.escape).
275 """
276 return f"{text}".replace("&", "&").replace("<", "<")
277
278
279 MIME_BY_CONFIG = {
280 "text": "text/plain",
281 "json": "application/json",
282 "html": "text/html",
283 }
284 CONFIG_BY_MIME = {v: k for k, v in MIME_BY_CONFIG.items()}
285 RENDERERS_BY_CONTENT_TYPE = {
286 "text/plain": TextRenderer,
287 "application/json": JSONRenderer,
288 "multipart/form-data": HTMLRenderer,
289 "text/html": HTMLRenderer,
290 }
291
292 # Handler source code is checked for which response types it returns with the
293 # route error_format="auto" (default) to determine which format to use.
294 RESPONSE_MAPPING = {
295 "json": "json",
296 "text": "text",
297 "html": "html",
298 "JSONResponse": "json",
299 "text/plain": "text",
300 "text/html": "html",
301 "application/json": "json",
302 }
303
304
305 def check_error_format(format):
306 if format not in MIME_BY_CONFIG and format != "auto":
307 raise SanicException(f"Unknown format: {format}")
308
309
310 def exception_response(
311 request: Request,
312 exception: Exception,
313 debug: bool,
314 fallback: str,
315 base: t.Type[BaseRenderer],
316 renderer: t.Type[t.Optional[BaseRenderer]] = None,
317 ) -> HTTPResponse:
318 """
319 Render a response for the default FALLBACK exception handler.
320 """
321 if not renderer:
322 mt = guess_mime(request, fallback)
323 renderer = RENDERERS_BY_CONTENT_TYPE.get(mt, base)
324
325 renderer = t.cast(t.Type[BaseRenderer], renderer)
326 return renderer(request, exception, debug).render()
327
328
329 def guess_mime(req: Request, fallback: str) -> str:
330 # Attempt to find a suitable MIME format for the response.
331 # Insertion-ordered map of formats["html"] = "source of that suggestion"
332 formats = {}
333 name = ""
334 # Route error_format (by magic from handler code if auto, the default)
335 if req.route:
336 name = req.route.name
337 f = req.route.extra.error_format
338 if f in MIME_BY_CONFIG:
339 formats[f] = name
340
341 if not formats and fallback in MIME_BY_CONFIG:
342 formats[fallback] = "FALLBACK_ERROR_FORMAT"
343
344 # If still not known, check for the request for clues of JSON
345 if not formats and fallback == "auto" and req.accept.match(JSON):
346 if JSON in req.accept: # Literally, not wildcard
347 formats["json"] = "request.accept"
348 elif JSON in req.headers.getone("content-type", ""):
349 formats["json"] = "content-type"
350 # DEPRECATION: Remove this block in 24.3
351 else:
352 c = None
353 try:
354 c = req.json
355 except BadRequest:
356 pass
357 if c:
358 formats["json"] = "request.json"
359 deprecation(
360 "Response type was determined by the JSON content of "
361 "the request. This behavior is deprecated and will be "
362 "removed in v24.3. Please specify the format either by\n"
363 f' error_format="json" on route {name}, by\n'
364 ' FALLBACK_ERROR_FORMAT = "json", or by adding header\n'
365 " accept: application/json to your requests.",
366 24.3,
367 )
368
369 # Any other supported formats
370 if fallback == "auto":
371 for k in MIME_BY_CONFIG:
372 if k not in formats:
373 formats[k] = "any"
374
375 mimes = [MIME_BY_CONFIG[k] for k in formats]
376 m = req.accept.match(*mimes)
377 if m:
378 format = CONFIG_BY_MIME[m.mime]
379 source = formats[format]
380 logger.debug(
381 f"The client accepts {m.header}, using '{format}' from {source}"
382 )
383 else:
384 logger.debug(f"No format found, the client accepts {req.accept!r}")
385 return m.mime
386
```
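The module docstring above names the fallback switch; for reference, a minimal sketch of selecting a format (the app name is hypothetical):

```python
from sanic import Sanic

app = Sanic("Demo")
app.config.FALLBACK_ERROR_FORMAT = "json"  # one of "html", "text", "json", or "auto"
```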
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sanic/errorpages.py b/sanic/errorpages.py
--- a/sanic/errorpages.py
+++ b/sanic/errorpages.py
@@ -92,8 +92,10 @@
self.full
if self.debug and not getattr(self.exception, "quiet", False)
else self.minimal
- )
- return output()
+ )()
+ output.status = self.status
+ output.headers.update(self.headers)
+ return output
def minimal(self) -> HTTPResponse: # noqa
"""
@@ -125,7 +127,7 @@
request=self.request,
exc=self.exception,
)
- return html(page.render(), status=self.status, headers=self.headers)
+ return html(page.render())
def minimal(self) -> HTTPResponse:
return self.full()
@@ -146,8 +148,7 @@
text=self.text,
bar=("=" * len(self.title)),
body=self._generate_body(full=True),
- ),
- status=self.status,
+ )
)
def minimal(self) -> HTTPResponse:
@@ -157,9 +158,7 @@
text=self.text,
bar=("=" * len(self.title)),
body=self._generate_body(full=False),
- ),
- status=self.status,
- headers=self.headers,
+ )
)
@property
@@ -218,11 +217,11 @@
def full(self) -> HTTPResponse:
output = self._generate_output(full=True)
- return json(output, status=self.status, dumps=self.dumps)
+ return json(output, dumps=self.dumps)
def minimal(self) -> HTTPResponse:
output = self._generate_output(full=False)
- return json(output, status=self.status, dumps=self.dumps)
+ return json(output, dumps=self.dumps)
def _generate_output(self, *, full):
output = {
| {"golden_diff": "diff --git a/sanic/errorpages.py b/sanic/errorpages.py\n--- a/sanic/errorpages.py\n+++ b/sanic/errorpages.py\n@@ -92,8 +92,10 @@\n self.full\n if self.debug and not getattr(self.exception, \"quiet\", False)\n else self.minimal\n- )\n- return output()\n+ )()\n+ output.status = self.status\n+ output.headers.update(self.headers)\n+ return output\n \n def minimal(self) -> HTTPResponse: # noqa\n \"\"\"\n@@ -125,7 +127,7 @@\n request=self.request,\n exc=self.exception,\n )\n- return html(page.render(), status=self.status, headers=self.headers)\n+ return html(page.render())\n \n def minimal(self) -> HTTPResponse:\n return self.full()\n@@ -146,8 +148,7 @@\n text=self.text,\n bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=True),\n- ),\n- status=self.status,\n+ )\n )\n \n def minimal(self) -> HTTPResponse:\n@@ -157,9 +158,7 @@\n text=self.text,\n bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=False),\n- ),\n- status=self.status,\n- headers=self.headers,\n+ )\n )\n \n @property\n@@ -218,11 +217,11 @@\n \n def full(self) -> HTTPResponse:\n output = self._generate_output(full=True)\n- return json(output, status=self.status, dumps=self.dumps)\n+ return json(output, dumps=self.dumps)\n \n def minimal(self) -> HTTPResponse:\n output = self._generate_output(full=False)\n- return json(output, status=self.status, dumps=self.dumps)\n+ return json(output, dumps=self.dumps)\n \n def _generate_output(self, *, full):\n output = {\n", "issue": "Headers from Exceptions\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Describe the bug\n\nHeaders set on Exception objects not carried through on all renderers\n\n### Code snippet\n\n```py\r\nraise Unauthorized(\r\n \"Auth required.\",\r\n headers={\"foo\": \"bar\"},\r\n)\r\n```\n\n### Expected Behavior\n\nResponse should have:\r\n\r\n```\r\nFoo: bar\r\n```\n\n### How do you run Sanic?\n\nSanic CLI\n\n### Operating System\n\nall\n\n### Sanic Version\n\n23.3\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "\"\"\"\nSanic `provides a pattern\n<https://sanicframework.org/guide/best-practices/exceptions.html#using-sanic-exceptions>`_\nfor providing a response when an exception occurs. However, if you do no handle\nan exception, it will provide a fallback. 
There are three fallback types:\n\n- HTML - *default*\n- Text\n- JSON\n\nSetting ``app.config.FALLBACK_ERROR_FORMAT = \"auto\"`` will enable a switch that\nwill attempt to provide an appropriate response format based upon the\nrequest type.\n\"\"\"\nfrom __future__ import annotations\n\nimport sys\nimport typing as t\n\nfrom functools import partial\nfrom traceback import extract_tb\n\nfrom sanic.exceptions import BadRequest, SanicException\nfrom sanic.helpers import STATUS_CODES\nfrom sanic.log import deprecation, logger\nfrom sanic.pages.error import ErrorPage\nfrom sanic.response import html, json, text\n\n\ndumps: t.Callable[..., str]\ntry:\n from ujson import dumps\n\n dumps = partial(dumps, escape_forward_slashes=False)\nexcept ImportError: # noqa\n from json import dumps\n\nif t.TYPE_CHECKING:\n from sanic import HTTPResponse, Request\n\nDEFAULT_FORMAT = \"auto\"\nFALLBACK_TEXT = \"\"\"\\\nThe application encountered an unexpected error and could not continue.\\\n\"\"\"\nFALLBACK_STATUS = 500\nJSON = \"application/json\"\n\n\nclass BaseRenderer:\n \"\"\"\n Base class that all renderers must inherit from.\n \"\"\"\n\n dumps = staticmethod(dumps)\n\n def __init__(self, request, exception, debug):\n self.request = request\n self.exception = exception\n self.debug = debug\n\n @property\n def headers(self):\n if isinstance(self.exception, SanicException):\n return getattr(self.exception, \"headers\", {})\n return {}\n\n @property\n def status(self):\n if isinstance(self.exception, SanicException):\n return getattr(self.exception, \"status_code\", FALLBACK_STATUS)\n return FALLBACK_STATUS\n\n @property\n def text(self):\n if self.debug or isinstance(self.exception, SanicException):\n return str(self.exception)\n return FALLBACK_TEXT\n\n @property\n def title(self):\n status_text = STATUS_CODES.get(self.status, b\"Error Occurred\").decode()\n return f\"{self.status} \u2014 {status_text}\"\n\n def render(self) -> HTTPResponse:\n \"\"\"\n Outputs the exception as a :class:`HTTPResponse`.\n\n :return: The formatted exception\n :rtype: str\n \"\"\"\n output = (\n self.full\n if self.debug and not getattr(self.exception, \"quiet\", False)\n else self.minimal\n )\n return output()\n\n def minimal(self) -> HTTPResponse: # noqa\n \"\"\"\n Provide a formatted message that is meant to not show any sensitive\n data or details.\n \"\"\"\n raise NotImplementedError\n\n def full(self) -> HTTPResponse: # noqa\n \"\"\"\n Provide a formatted message that has all details and is mean to be used\n primarily for debugging and non-production environments.\n \"\"\"\n raise NotImplementedError\n\n\nclass HTMLRenderer(BaseRenderer):\n \"\"\"\n Render an exception as HTML.\n\n The default fallback type.\n \"\"\"\n\n def full(self) -> HTTPResponse:\n page = ErrorPage(\n debug=self.debug,\n title=super().title,\n text=super().text,\n request=self.request,\n exc=self.exception,\n )\n return html(page.render(), status=self.status, headers=self.headers)\n\n def minimal(self) -> HTTPResponse:\n return self.full()\n\n\nclass TextRenderer(BaseRenderer):\n \"\"\"\n Render an exception as plain text.\n \"\"\"\n\n OUTPUT_TEXT = \"{title}\\n{bar}\\n{text}\\n\\n{body}\"\n SPACER = \" \"\n\n def full(self) -> HTTPResponse:\n return text(\n self.OUTPUT_TEXT.format(\n title=self.title,\n text=self.text,\n bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=True),\n ),\n status=self.status,\n )\n\n def minimal(self) -> HTTPResponse:\n return text(\n self.OUTPUT_TEXT.format(\n title=self.title,\n text=self.text,\n 
bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=False),\n ),\n status=self.status,\n headers=self.headers,\n )\n\n @property\n def title(self):\n return f\"\u26a0\ufe0f {super().title}\"\n\n def _generate_body(self, *, full):\n lines = []\n if full:\n _, exc_value, __ = sys.exc_info()\n exceptions = []\n\n lines += [\n f\"{self.exception.__class__.__name__}: {self.exception} while \"\n f\"handling path {self.request.path}\",\n f\"Traceback of {self.request.app.name} \"\n \"(most recent call last):\\n\",\n ]\n\n while exc_value:\n exceptions.append(self._format_exc(exc_value))\n exc_value = exc_value.__cause__\n\n lines += exceptions[::-1]\n\n for attr, display in ((\"context\", True), (\"extra\", bool(full))):\n info = getattr(self.exception, attr, None)\n if info and display:\n lines += self._generate_object_display_list(info, attr)\n\n return \"\\n\".join(lines)\n\n def _format_exc(self, exc):\n frames = \"\\n\\n\".join(\n [\n f\"{self.SPACER * 2}File {frame.filename}, \"\n f\"line {frame.lineno}, in \"\n f\"{frame.name}\\n{self.SPACER * 2}{frame.line}\"\n for frame in extract_tb(exc.__traceback__)\n ]\n )\n return f\"{self.SPACER}{exc.__class__.__name__}: {exc}\\n{frames}\"\n\n def _generate_object_display_list(self, obj, descriptor):\n lines = [f\"\\n{descriptor.title()}\"]\n for key, value in obj.items():\n display = self.dumps(value)\n lines.append(f\"{self.SPACER * 2}{key}: {display}\")\n return lines\n\n\nclass JSONRenderer(BaseRenderer):\n \"\"\"\n Render an exception as JSON.\n \"\"\"\n\n def full(self) -> HTTPResponse:\n output = self._generate_output(full=True)\n return json(output, status=self.status, dumps=self.dumps)\n\n def minimal(self) -> HTTPResponse:\n output = self._generate_output(full=False)\n return json(output, status=self.status, dumps=self.dumps)\n\n def _generate_output(self, *, full):\n output = {\n \"description\": self.title,\n \"status\": self.status,\n \"message\": self.text,\n }\n\n for attr, display in ((\"context\", True), (\"extra\", bool(full))):\n info = getattr(self.exception, attr, None)\n if info and display:\n output[attr] = info\n\n if full:\n _, exc_value, __ = sys.exc_info()\n exceptions = []\n\n while exc_value:\n exceptions.append(\n {\n \"type\": exc_value.__class__.__name__,\n \"exception\": str(exc_value),\n \"frames\": [\n {\n \"file\": frame.filename,\n \"line\": frame.lineno,\n \"name\": frame.name,\n \"src\": frame.line,\n }\n for frame in extract_tb(exc_value.__traceback__)\n ],\n }\n )\n exc_value = exc_value.__cause__\n\n output[\"path\"] = self.request.path\n output[\"args\"] = self.request.args\n output[\"exceptions\"] = exceptions[::-1]\n\n return output\n\n @property\n def title(self):\n return STATUS_CODES.get(self.status, b\"Error Occurred\").decode()\n\n\ndef escape(text):\n \"\"\"\n Minimal HTML escaping, not for attribute values (unlike html.escape).\n \"\"\"\n return f\"{text}\".replace(\"&\", \"&\").replace(\"<\", \"<\")\n\n\nMIME_BY_CONFIG = {\n \"text\": \"text/plain\",\n \"json\": \"application/json\",\n \"html\": \"text/html\",\n}\nCONFIG_BY_MIME = {v: k for k, v in MIME_BY_CONFIG.items()}\nRENDERERS_BY_CONTENT_TYPE = {\n \"text/plain\": TextRenderer,\n \"application/json\": JSONRenderer,\n \"multipart/form-data\": HTMLRenderer,\n \"text/html\": HTMLRenderer,\n}\n\n# Handler source code is checked for which response types it returns with the\n# route error_format=\"auto\" (default) to determine which format to use.\nRESPONSE_MAPPING = {\n \"json\": \"json\",\n \"text\": \"text\",\n \"html\": \"html\",\n 
\"JSONResponse\": \"json\",\n \"text/plain\": \"text\",\n \"text/html\": \"html\",\n \"application/json\": \"json\",\n}\n\n\ndef check_error_format(format):\n if format not in MIME_BY_CONFIG and format != \"auto\":\n raise SanicException(f\"Unknown format: {format}\")\n\n\ndef exception_response(\n request: Request,\n exception: Exception,\n debug: bool,\n fallback: str,\n base: t.Type[BaseRenderer],\n renderer: t.Type[t.Optional[BaseRenderer]] = None,\n) -> HTTPResponse:\n \"\"\"\n Render a response for the default FALLBACK exception handler.\n \"\"\"\n if not renderer:\n mt = guess_mime(request, fallback)\n renderer = RENDERERS_BY_CONTENT_TYPE.get(mt, base)\n\n renderer = t.cast(t.Type[BaseRenderer], renderer)\n return renderer(request, exception, debug).render()\n\n\ndef guess_mime(req: Request, fallback: str) -> str:\n # Attempt to find a suitable MIME format for the response.\n # Insertion-ordered map of formats[\"html\"] = \"source of that suggestion\"\n formats = {}\n name = \"\"\n # Route error_format (by magic from handler code if auto, the default)\n if req.route:\n name = req.route.name\n f = req.route.extra.error_format\n if f in MIME_BY_CONFIG:\n formats[f] = name\n\n if not formats and fallback in MIME_BY_CONFIG:\n formats[fallback] = \"FALLBACK_ERROR_FORMAT\"\n\n # If still not known, check for the request for clues of JSON\n if not formats and fallback == \"auto\" and req.accept.match(JSON):\n if JSON in req.accept: # Literally, not wildcard\n formats[\"json\"] = \"request.accept\"\n elif JSON in req.headers.getone(\"content-type\", \"\"):\n formats[\"json\"] = \"content-type\"\n # DEPRECATION: Remove this block in 24.3\n else:\n c = None\n try:\n c = req.json\n except BadRequest:\n pass\n if c:\n formats[\"json\"] = \"request.json\"\n deprecation(\n \"Response type was determined by the JSON content of \"\n \"the request. This behavior is deprecated and will be \"\n \"removed in v24.3. Please specify the format either by\\n\"\n f' error_format=\"json\" on route {name}, by\\n'\n ' FALLBACK_ERROR_FORMAT = \"json\", or by adding header\\n'\n \" accept: application/json to your requests.\",\n 24.3,\n )\n\n # Any other supported formats\n if fallback == \"auto\":\n for k in MIME_BY_CONFIG:\n if k not in formats:\n formats[k] = \"any\"\n\n mimes = [MIME_BY_CONFIG[k] for k in formats]\n m = req.accept.match(*mimes)\n if m:\n format = CONFIG_BY_MIME[m.mime]\n source = formats[format]\n logger.debug(\n f\"The client accepts {m.header}, using '{format}' from {source}\"\n )\n else:\n logger.debug(f\"No format found, the client accepts {req.accept!r}\")\n return m.mime\n", "path": "sanic/errorpages.py"}], "after_files": [{"content": "\"\"\"\nSanic `provides a pattern\n<https://sanicframework.org/guide/best-practices/exceptions.html#using-sanic-exceptions>`_\nfor providing a response when an exception occurs. However, if you do no handle\nan exception, it will provide a fallback. 
There are three fallback types:\n\n- HTML - *default*\n- Text\n- JSON\n\nSetting ``app.config.FALLBACK_ERROR_FORMAT = \"auto\"`` will enable a switch that\nwill attempt to provide an appropriate response format based upon the\nrequest type.\n\"\"\"\nfrom __future__ import annotations\n\nimport sys\nimport typing as t\n\nfrom functools import partial\nfrom traceback import extract_tb\n\nfrom sanic.exceptions import BadRequest, SanicException\nfrom sanic.helpers import STATUS_CODES\nfrom sanic.log import deprecation, logger\nfrom sanic.pages.error import ErrorPage\nfrom sanic.response import html, json, text\n\n\ndumps: t.Callable[..., str]\ntry:\n from ujson import dumps\n\n dumps = partial(dumps, escape_forward_slashes=False)\nexcept ImportError: # noqa\n from json import dumps\n\nif t.TYPE_CHECKING:\n from sanic import HTTPResponse, Request\n\nDEFAULT_FORMAT = \"auto\"\nFALLBACK_TEXT = \"\"\"\\\nThe application encountered an unexpected error and could not continue.\\\n\"\"\"\nFALLBACK_STATUS = 500\nJSON = \"application/json\"\n\n\nclass BaseRenderer:\n \"\"\"\n Base class that all renderers must inherit from.\n \"\"\"\n\n dumps = staticmethod(dumps)\n\n def __init__(self, request, exception, debug):\n self.request = request\n self.exception = exception\n self.debug = debug\n\n @property\n def headers(self):\n if isinstance(self.exception, SanicException):\n return getattr(self.exception, \"headers\", {})\n return {}\n\n @property\n def status(self):\n if isinstance(self.exception, SanicException):\n return getattr(self.exception, \"status_code\", FALLBACK_STATUS)\n return FALLBACK_STATUS\n\n @property\n def text(self):\n if self.debug or isinstance(self.exception, SanicException):\n return str(self.exception)\n return FALLBACK_TEXT\n\n @property\n def title(self):\n status_text = STATUS_CODES.get(self.status, b\"Error Occurred\").decode()\n return f\"{self.status} \u2014 {status_text}\"\n\n def render(self) -> HTTPResponse:\n \"\"\"\n Outputs the exception as a :class:`HTTPResponse`.\n\n :return: The formatted exception\n :rtype: str\n \"\"\"\n output = (\n self.full\n if self.debug and not getattr(self.exception, \"quiet\", False)\n else self.minimal\n )()\n output.status = self.status\n output.headers.update(self.headers)\n return output\n\n def minimal(self) -> HTTPResponse: # noqa\n \"\"\"\n Provide a formatted message that is meant to not show any sensitive\n data or details.\n \"\"\"\n raise NotImplementedError\n\n def full(self) -> HTTPResponse: # noqa\n \"\"\"\n Provide a formatted message that has all details and is mean to be used\n primarily for debugging and non-production environments.\n \"\"\"\n raise NotImplementedError\n\n\nclass HTMLRenderer(BaseRenderer):\n \"\"\"\n Render an exception as HTML.\n\n The default fallback type.\n \"\"\"\n\n def full(self) -> HTTPResponse:\n page = ErrorPage(\n debug=self.debug,\n title=super().title,\n text=super().text,\n request=self.request,\n exc=self.exception,\n )\n return html(page.render())\n\n def minimal(self) -> HTTPResponse:\n return self.full()\n\n\nclass TextRenderer(BaseRenderer):\n \"\"\"\n Render an exception as plain text.\n \"\"\"\n\n OUTPUT_TEXT = \"{title}\\n{bar}\\n{text}\\n\\n{body}\"\n SPACER = \" \"\n\n def full(self) -> HTTPResponse:\n return text(\n self.OUTPUT_TEXT.format(\n title=self.title,\n text=self.text,\n bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=True),\n )\n )\n\n def minimal(self) -> HTTPResponse:\n return text(\n self.OUTPUT_TEXT.format(\n title=self.title,\n text=self.text,\n 
bar=(\"=\" * len(self.title)),\n body=self._generate_body(full=False),\n )\n )\n\n @property\n def title(self):\n return f\"\u26a0\ufe0f {super().title}\"\n\n def _generate_body(self, *, full):\n lines = []\n if full:\n _, exc_value, __ = sys.exc_info()\n exceptions = []\n\n lines += [\n f\"{self.exception.__class__.__name__}: {self.exception} while \"\n f\"handling path {self.request.path}\",\n f\"Traceback of {self.request.app.name} \"\n \"(most recent call last):\\n\",\n ]\n\n while exc_value:\n exceptions.append(self._format_exc(exc_value))\n exc_value = exc_value.__cause__\n\n lines += exceptions[::-1]\n\n for attr, display in ((\"context\", True), (\"extra\", bool(full))):\n info = getattr(self.exception, attr, None)\n if info and display:\n lines += self._generate_object_display_list(info, attr)\n\n return \"\\n\".join(lines)\n\n def _format_exc(self, exc):\n frames = \"\\n\\n\".join(\n [\n f\"{self.SPACER * 2}File {frame.filename}, \"\n f\"line {frame.lineno}, in \"\n f\"{frame.name}\\n{self.SPACER * 2}{frame.line}\"\n for frame in extract_tb(exc.__traceback__)\n ]\n )\n return f\"{self.SPACER}{exc.__class__.__name__}: {exc}\\n{frames}\"\n\n def _generate_object_display_list(self, obj, descriptor):\n lines = [f\"\\n{descriptor.title()}\"]\n for key, value in obj.items():\n display = self.dumps(value)\n lines.append(f\"{self.SPACER * 2}{key}: {display}\")\n return lines\n\n\nclass JSONRenderer(BaseRenderer):\n \"\"\"\n Render an exception as JSON.\n \"\"\"\n\n def full(self) -> HTTPResponse:\n output = self._generate_output(full=True)\n return json(output, dumps=self.dumps)\n\n def minimal(self) -> HTTPResponse:\n output = self._generate_output(full=False)\n return json(output, dumps=self.dumps)\n\n def _generate_output(self, *, full):\n output = {\n \"description\": self.title,\n \"status\": self.status,\n \"message\": self.text,\n }\n\n for attr, display in ((\"context\", True), (\"extra\", bool(full))):\n info = getattr(self.exception, attr, None)\n if info and display:\n output[attr] = info\n\n if full:\n _, exc_value, __ = sys.exc_info()\n exceptions = []\n\n while exc_value:\n exceptions.append(\n {\n \"type\": exc_value.__class__.__name__,\n \"exception\": str(exc_value),\n \"frames\": [\n {\n \"file\": frame.filename,\n \"line\": frame.lineno,\n \"name\": frame.name,\n \"src\": frame.line,\n }\n for frame in extract_tb(exc_value.__traceback__)\n ],\n }\n )\n exc_value = exc_value.__cause__\n\n output[\"path\"] = self.request.path\n output[\"args\"] = self.request.args\n output[\"exceptions\"] = exceptions[::-1]\n\n return output\n\n @property\n def title(self):\n return STATUS_CODES.get(self.status, b\"Error Occurred\").decode()\n\n\ndef escape(text):\n \"\"\"\n Minimal HTML escaping, not for attribute values (unlike html.escape).\n \"\"\"\n return f\"{text}\".replace(\"&\", \"&\").replace(\"<\", \"<\")\n\n\nMIME_BY_CONFIG = {\n \"text\": \"text/plain\",\n \"json\": \"application/json\",\n \"html\": \"text/html\",\n}\nCONFIG_BY_MIME = {v: k for k, v in MIME_BY_CONFIG.items()}\nRENDERERS_BY_CONTENT_TYPE = {\n \"text/plain\": TextRenderer,\n \"application/json\": JSONRenderer,\n \"multipart/form-data\": HTMLRenderer,\n \"text/html\": HTMLRenderer,\n}\n\n# Handler source code is checked for which response types it returns with the\n# route error_format=\"auto\" (default) to determine which format to use.\nRESPONSE_MAPPING = {\n \"json\": \"json\",\n \"text\": \"text\",\n \"html\": \"html\",\n \"JSONResponse\": \"json\",\n \"text/plain\": \"text\",\n \"text/html\": \"html\",\n 
\"application/json\": \"json\",\n}\n\n\ndef check_error_format(format):\n if format not in MIME_BY_CONFIG and format != \"auto\":\n raise SanicException(f\"Unknown format: {format}\")\n\n\ndef exception_response(\n request: Request,\n exception: Exception,\n debug: bool,\n fallback: str,\n base: t.Type[BaseRenderer],\n renderer: t.Type[t.Optional[BaseRenderer]] = None,\n) -> HTTPResponse:\n \"\"\"\n Render a response for the default FALLBACK exception handler.\n \"\"\"\n if not renderer:\n mt = guess_mime(request, fallback)\n renderer = RENDERERS_BY_CONTENT_TYPE.get(mt, base)\n\n renderer = t.cast(t.Type[BaseRenderer], renderer)\n return renderer(request, exception, debug).render()\n\n\ndef guess_mime(req: Request, fallback: str) -> str:\n # Attempt to find a suitable MIME format for the response.\n # Insertion-ordered map of formats[\"html\"] = \"source of that suggestion\"\n formats = {}\n name = \"\"\n # Route error_format (by magic from handler code if auto, the default)\n if req.route:\n name = req.route.name\n f = req.route.extra.error_format\n if f in MIME_BY_CONFIG:\n formats[f] = name\n\n if not formats and fallback in MIME_BY_CONFIG:\n formats[fallback] = \"FALLBACK_ERROR_FORMAT\"\n\n # If still not known, check for the request for clues of JSON\n if not formats and fallback == \"auto\" and req.accept.match(JSON):\n if JSON in req.accept: # Literally, not wildcard\n formats[\"json\"] = \"request.accept\"\n elif JSON in req.headers.getone(\"content-type\", \"\"):\n formats[\"json\"] = \"content-type\"\n # DEPRECATION: Remove this block in 24.3\n else:\n c = None\n try:\n c = req.json\n except BadRequest:\n pass\n if c:\n formats[\"json\"] = \"request.json\"\n deprecation(\n \"Response type was determined by the JSON content of \"\n \"the request. This behavior is deprecated and will be \"\n \"removed in v24.3. Please specify the format either by\\n\"\n f' error_format=\"json\" on route {name}, by\\n'\n ' FALLBACK_ERROR_FORMAT = \"json\", or by adding header\\n'\n \" accept: application/json to your requests.\",\n 24.3,\n )\n\n # Any other supported formats\n if fallback == \"auto\":\n for k in MIME_BY_CONFIG:\n if k not in formats:\n formats[k] = \"any\"\n\n mimes = [MIME_BY_CONFIG[k] for k in formats]\n m = req.accept.match(*mimes)\n if m:\n format = CONFIG_BY_MIME[m.mime]\n source = formats[format]\n logger.debug(\n f\"The client accepts {m.header}, using '{format}' from {source}\"\n )\n else:\n logger.debug(f\"No format found, the client accepts {req.accept!r}\")\n return m.mime\n", "path": "sanic/errorpages.py"}]} | 3,973 | 423 |
gh_patches_debug_7034 | rasdani/github-patches | git_diff | aws__aws-cli-5019 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for PyYAML 5.3
Closes: https://github.com/aws/aws-cli/issues/4828
Signed-off-by: Igor Raits <[email protected]>
*Issue #, if available:*
*Description of changes:*
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import codecs
3 import os.path
4 import re
5 import sys
6
7 from setuptools import setup, find_packages
8
9
10 here = os.path.abspath(os.path.dirname(__file__))
11
12
13 def read(*parts):
14 return codecs.open(os.path.join(here, *parts), 'r').read()
15
16
17 def find_version(*file_paths):
18 version_file = read(*file_paths)
19 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
20 version_file, re.M)
21 if version_match:
22 return version_match.group(1)
23 raise RuntimeError("Unable to find version string.")
24
25
26 install_requires = [
27 'botocore==1.15.10',
28 'docutils>=0.10,<0.16',
29 'rsa>=3.1.2,<=3.5.0',
30 's3transfer>=0.3.0,<0.4.0',
31 'PyYAML>=3.10,<5.3',
32 ]
33
34
35 if sys.version_info[:2] == (3, 4):
36 install_requires.append('colorama>=0.2.5,<0.4.2')
37 else:
38 install_requires.append('colorama>=0.2.5,<0.4.4')
39
40
41 setup_options = dict(
42 name='awscli',
43 version=find_version("awscli", "__init__.py"),
44 description='Universal Command Line Environment for AWS.',
45 long_description=read('README.rst'),
46 author='Amazon Web Services',
47 url='http://aws.amazon.com/cli/',
48 scripts=['bin/aws', 'bin/aws.cmd',
49 'bin/aws_completer', 'bin/aws_zsh_completer.sh',
50 'bin/aws_bash_completer'],
51 packages=find_packages(exclude=['tests*']),
52 package_data={'awscli': ['data/*.json', 'examples/*/*.rst',
53 'examples/*/*.txt', 'examples/*/*/*.txt',
54 'examples/*/*/*.rst', 'topics/*.rst',
55 'topics/*.json']},
56 install_requires=install_requires,
57 extras_require={},
58 license="Apache License 2.0",
59 classifiers=[
60 'Development Status :: 5 - Production/Stable',
61 'Intended Audience :: Developers',
62 'Intended Audience :: System Administrators',
63 'Natural Language :: English',
64 'License :: OSI Approved :: Apache Software License',
65 'Programming Language :: Python',
66 'Programming Language :: Python :: 2',
67 'Programming Language :: Python :: 2.7',
68 'Programming Language :: Python :: 3',
69 'Programming Language :: Python :: 3.4',
70 'Programming Language :: Python :: 3.5',
71 'Programming Language :: Python :: 3.6',
72 'Programming Language :: Python :: 3.7',
73 'Programming Language :: Python :: 3.8',
74 ],
75 )
76
77
78 if 'py2exe' in sys.argv:
79 # This will actually give us a py2exe command.
80 import py2exe
81 # And we have some py2exe specific options.
82 setup_options['options'] = {
83 'py2exe': {
84 'optimize': 0,
85 'skip_archive': True,
86 'dll_excludes': ['crypt32.dll'],
87 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser',
88 'awscli', 'ConfigParser', 'xml.etree', 'pipes'],
89 }
90 }
91 setup_options['console'] = ['bin/aws']
92
93
94 setup(**setup_options)
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -28,13 +28,14 @@
'docutils>=0.10,<0.16',
'rsa>=3.1.2,<=3.5.0',
's3transfer>=0.3.0,<0.4.0',
- 'PyYAML>=3.10,<5.3',
]
if sys.version_info[:2] == (3, 4):
+ install_requires.append('PyYAML>=3.10,<5.3')
install_requires.append('colorama>=0.2.5,<0.4.2')
else:
+ install_requires.append('PyYAML>=3.10,<5.4')
install_requires.append('colorama>=0.2.5,<0.4.4')
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -28,13 +28,14 @@\n 'docutils>=0.10,<0.16',\n 'rsa>=3.1.2,<=3.5.0',\n 's3transfer>=0.3.0,<0.4.0',\n- 'PyYAML>=3.10,<5.3',\n ]\n \n \n if sys.version_info[:2] == (3, 4):\n+ install_requires.append('PyYAML>=3.10,<5.3')\n install_requires.append('colorama>=0.2.5,<0.4.2')\n else:\n+ install_requires.append('PyYAML>=3.10,<5.4')\n install_requires.append('colorama>=0.2.5,<0.4.4')\n", "issue": "Add support for PyYAML 5.3\nCloses: https://github.com/aws/aws-cli/issues/4828\r\nSigned-off-by: Igor Raits <[email protected]>\r\n\r\n*Issue #, if available:*\r\n\r\n*Description of changes:*\r\n\r\n\r\nBy submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport codecs\nimport os.path\nimport re\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n return codecs.open(os.path.join(here, *parts), 'r').read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\ninstall_requires = [\n 'botocore==1.15.10',\n 'docutils>=0.10,<0.16',\n 'rsa>=3.1.2,<=3.5.0',\n 's3transfer>=0.3.0,<0.4.0',\n 'PyYAML>=3.10,<5.3',\n]\n\n\nif sys.version_info[:2] == (3, 4):\n install_requires.append('colorama>=0.2.5,<0.4.2')\nelse:\n install_requires.append('colorama>=0.2.5,<0.4.4')\n\n\nsetup_options = dict(\n name='awscli',\n version=find_version(\"awscli\", \"__init__.py\"),\n description='Universal Command Line Environment for AWS.',\n long_description=read('README.rst'),\n author='Amazon Web Services',\n url='http://aws.amazon.com/cli/',\n scripts=['bin/aws', 'bin/aws.cmd',\n 'bin/aws_completer', 'bin/aws_zsh_completer.sh',\n 'bin/aws_bash_completer'],\n packages=find_packages(exclude=['tests*']),\n package_data={'awscli': ['data/*.json', 'examples/*/*.rst',\n 'examples/*/*.txt', 'examples/*/*/*.txt',\n 'examples/*/*/*.rst', 'topics/*.rst',\n 'topics/*.json']},\n install_requires=install_requires,\n extras_require={},\n license=\"Apache License 2.0\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n)\n\n\nif 'py2exe' in sys.argv:\n # This will actually give us a py2exe command.\n import py2exe\n # And we have some py2exe specific options.\n setup_options['options'] = {\n 'py2exe': {\n 'optimize': 0,\n 'skip_archive': True,\n 'dll_excludes': ['crypt32.dll'],\n 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser',\n 'awscli', 'ConfigParser', 'xml.etree', 'pipes'],\n }\n }\n setup_options['console'] = ['bin/aws']\n\n\nsetup(**setup_options)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport 
codecs\nimport os.path\nimport re\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n return codecs.open(os.path.join(here, *parts), 'r').read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\ninstall_requires = [\n 'botocore==1.15.10',\n 'docutils>=0.10,<0.16',\n 'rsa>=3.1.2,<=3.5.0',\n 's3transfer>=0.3.0,<0.4.0',\n]\n\n\nif sys.version_info[:2] == (3, 4):\n install_requires.append('PyYAML>=3.10,<5.3')\n install_requires.append('colorama>=0.2.5,<0.4.2')\nelse:\n install_requires.append('PyYAML>=3.10,<5.4')\n install_requires.append('colorama>=0.2.5,<0.4.4')\n\n\nsetup_options = dict(\n name='awscli',\n version=find_version(\"awscli\", \"__init__.py\"),\n description='Universal Command Line Environment for AWS.',\n long_description=read('README.rst'),\n author='Amazon Web Services',\n url='http://aws.amazon.com/cli/',\n scripts=['bin/aws', 'bin/aws.cmd',\n 'bin/aws_completer', 'bin/aws_zsh_completer.sh',\n 'bin/aws_bash_completer'],\n packages=find_packages(exclude=['tests*']),\n package_data={'awscli': ['data/*.json', 'examples/*/*.rst',\n 'examples/*/*.txt', 'examples/*/*/*.txt',\n 'examples/*/*/*.rst', 'topics/*.rst',\n 'topics/*.json']},\n install_requires=install_requires,\n extras_require={},\n license=\"Apache License 2.0\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n)\n\n\nif 'py2exe' in sys.argv:\n # This will actually give us a py2exe command.\n import py2exe\n # And we have some py2exe specific options.\n setup_options['options'] = {\n 'py2exe': {\n 'optimize': 0,\n 'skip_archive': True,\n 'dll_excludes': ['crypt32.dll'],\n 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser',\n 'awscli', 'ConfigParser', 'xml.etree', 'pipes'],\n }\n }\n setup_options['console'] = ['bin/aws']\n\n\nsetup(**setup_options)\n", "path": "setup.py"}]} | 1,288 | 196 |
gh_patches_debug_18897 | rasdani/github-patches | git_diff | quantumlib__Cirq-1897 | "We are currently solving the following issue within our repository. Here is the issue text:\n--- BE(...TRUNCATED) | "diff --git a/cirq/ops/fsim_gate.py b/cirq/ops/fsim_gate.py\n--- a/cirq/ops/fsim_gate.py\n+++ b/cirq(...TRUNCATED) | "{\"golden_diff\": \"diff --git a/cirq/ops/fsim_gate.py b/cirq/ops/fsim_gate.py\\n--- a/cirq/ops/fsi(...TRUNCATED) | 2,120 | 319 |
Rows are filtered so that each prompt is under 4096 Qwen3 tokens and each `golden_diff` is under 1024 Qwen3 tokens.
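
For readers who want to poke at the rows programmatically, a minimal loading sketch follows. The repository ID and split name are placeholders (substitute the dataset path shown at the top of this page), and the parsing step assumes `verification_info` is a JSON-encoded string, as the preview suggests.

```python
# Minimal sketch: load one split and inspect a row.
# "<org>/<dataset>" and split="train" are assumptions -- use the actual
# repository ID and split name from this page.
import json

from datasets import load_dataset  # pip install datasets

ds = load_dataset("<org>/<dataset>", split="train")

row = ds[0]
print(row["problem_id"])                      # e.g. "gh_patches_debug_32737"
print(row["num_tokens"], row["num_tokens_diff"])

# The prompt bundles the issue text with the suspect source files.
print(row["prompt"][:500])

# verification_info appears to carry the golden diff, the issue text,
# and the before/after file contents as one JSON string.
info = json.loads(row["verification_info"])
print(info["golden_diff"].splitlines()[0])    # "diff --git a/... b/..."
```

The token-count columns also make further filtering cheap, e.g. `ds.filter(lambda r: r["num_tokens_diff"] < 256)` to keep only problems with very small reference patches.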
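
To sanity-check a row end to end, one could replay the reference patch against the buggy files and compare the result with `after_files`. The sketch below is not an official harness for this dataset: it assumes a POSIX `patch` binary is on the PATH and relies only on the `path`/`content` keys visible in the preview above.

```python
# Sketch: materialize before_files, apply the golden diff, and check
# the result against after_files. Assumes the `patch` CLI is installed.
import json
import pathlib
import subprocess
import tempfile


def replay_golden_diff(row: dict) -> bool:
    info = json.loads(row["verification_info"])
    with tempfile.TemporaryDirectory() as tmp:
        root = pathlib.Path(tmp)
        # Write the buggy files under their original relative paths.
        for f in info["before_files"]:
            dest = root / f["path"]
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_text(f["content"])
        # -p1 strips the a/ and b/ prefixes used by git-style diffs.
        result = subprocess.run(
            ["patch", "-p1"],
            input=info["golden_diff"],
            text=True,
            cwd=root,
            capture_output=True,
        )
        if result.returncode != 0:
            return False
        # The patched tree should now match the fixed file contents.
        return all(
            (root / f["path"]).read_text() == f["content"]
            for f in info["after_files"]
        )
```

`patch -p1` is used here because the golden diffs follow the `a/...`/`b/...` convention shown in the rows; `git apply` would be an equally reasonable choice.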